Ornstein-Uhlenbeck Processes Simulation
Kuzmina, A.
2012-01-01
In this paper we give a brief introduction to Ornstein-Uhlenbeck processes and their simulation methods. Ornstein-Uhlenbeck-type processes driven by Lévy processes were introduced by Barndorff-Nielsen and Shephard (2001) as a model for volatility in finance; these processes are built on Lévy processes, and simulation of Lévy processes may be found in [1, 2].
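As background for the simulation methods discussed throughout these entries, the basic Gaussian Ornstein-Uhlenbeck process dX = θ(μ − X) dt + σ dW can be sampled without discretization bias via its exact Gaussian transition. A minimal sketch (the function and parameter names here are ours, not from the paper):

```python
import numpy as np

def simulate_ou(theta, mu, sigma, x0, dt, n_steps, n_paths, rng):
    """Simulate dX = theta*(mu - X) dt + sigma dW using the exact
    Gaussian transition density, so there is no discretization bias."""
    x = np.full(n_paths, x0, dtype=float)
    decay = np.exp(-theta * dt)
    # exact conditional standard deviation over one step of length dt
    noise_sd = sigma * np.sqrt((1.0 - decay**2) / (2.0 * theta))
    paths = np.empty((n_steps + 1, n_paths))
    paths[0] = x
    for k in range(1, n_steps + 1):
        x = mu + (x - mu) * decay + noise_sd * rng.standard_normal(n_paths)
        paths[k] = x
    return paths
```

The stationary law is N(μ, σ²/(2θ)), which gives a quick sanity check on long simulated paths.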
Generalized Ornstein-Uhlenbeck processes and associated self-similar processes
Lim, S C; Muniandy, S V
2003-01-01
We consider three types of generalized Ornstein-Uhlenbeck processes: the stationary process obtained from the Lamperti transformation of fractional Brownian motion, the process with stretched exponential covariance, and the process obtained from the solution of the fractional Langevin equation. These stationary Gaussian processes have many common properties: their local covariances share a similar structure, and they exhibit identical spectral densities in the large-frequency limit. In addition, the generalized Ornstein-Uhlenbeck processes can be shown to be locally stationary representations of fractional Brownian motion. Two new self-similar Gaussian processes, in addition to fractional Brownian motion, are obtained by applying the (inverse) Lamperti transformation to the generalized Ornstein-Uhlenbeck processes. We study some properties of these self-similar processes, such as long-range dependence. We give a simulation of their sample paths based on a numerical Karhunen-Loève expansion.
Weyl and Riemann-Liouville multifractional Ornstein-Uhlenbeck processes
Lim, S C; Teo, L P
2007-01-01
This paper considers two new multifractional stochastic processes, namely the Weyl multifractional Ornstein-Uhlenbeck process and the Riemann-Liouville multifractional Ornstein-Uhlenbeck process. Basic properties of these processes, such as the locally self-similar property and the Hausdorff dimension, are studied. The relationship between the multifractional Ornstein-Uhlenbeck processes and the corresponding multifractional Brownian motions is established.
Quasi Ornstein-Uhlenbeck processes
Barndorff-Nielsen, Ole Eiler; Basse-O'Connor, Andreas
2011-01-01
The question of existence and properties of stationary solutions to Langevin equations driven by noise processes with stationary increments is discussed, with particular focus on noise processes of pseudo-moving-average type. On account of the Wold–Karhunen decomposition theorem, such solutions are […] of the associated autocorrelation functions, both for small and large lags. Applications to Gaussian- and Lévy-driven fractional Ornstein–Uhlenbeck processes are presented. A Fubini theorem for Lévy bases is established as an element in the derivations.
Representations of Urbanik's classes and multiparameter Ornstein-Uhlenbeck processes
Graversen, Svend-Erik; Pedersen, Jan
2011-01-01
A class of integrals with respect to homogeneous Lévy bases on R^k is considered. In the one-dimensional case k = 1 this class corresponds to the selfdecomposable distributions. Necessary and sufficient conditions for existence, as well as some representations of the integrals, are given. Generalizing […] the one-dimensional case, it is shown that the class of integrals corresponds to Urbanik's class L_{k-1}(R). Finally, multiparameter Ornstein-Uhlenbeck processes are defined and studied.
Spectral properties of superpositions of Ornstein-Uhlenbeck type processes
Barndorff-Nielsen, Ole Eiler; Leonenko, N.N.
2005-01-01
Stationary processes with prescribed one-dimensional marginal laws and long-range dependence are constructed. The asymptotic properties of the spectral densities are studied. The possibility of Mittag-Leffler decay in the autocorrelation function of superpositions of Ornstein-Uhlenbeck-type […] processes is proved.
Some properties of the fractional Ornstein-Uhlenbeck process
Yan Litan; Lu Yunsheng; Xu Zhiqiang
2008-01-01
We consider the fractional analogue of the Ornstein-Uhlenbeck process, i.e. the solution of the Langevin equation driven by a fractional Brownian motion in place of the usual Brownian motion. We establish some properties of these processes. We show that the process is locally nondeterministic. For a two-dimensional process we show that its renormalized self-intersection local time exists in L^2 if and only if 0 < H < 3/4.
A note on a representation and calculation of the long-memory Ornstein-Uhlenbeck process
Høg, Esben
1999-01-01
In this paper we analyze the covariance function for a long-memory generalization of Ornstein-Uhlenbeck-type processes, which are the continuous-time analogues of long-memory autoregressions of order 1. A fractional Brownian motion with drift is a special case. We find the exact expression […]
Integrated stationary Ornstein-Uhlenbeck process, and double integral processes
Abundo, Mario; Pirozzi, Enrica
2018-03-01
We find a representation of the integral of the stationary Ornstein-Uhlenbeck (ISOU) process in terms of Brownian motion B_t; moreover, we show that, under certain conditions on the functions f and g, the double integral process (DIP) D(t) = ∫_β^t g(s) (∫_α^s f(u) dB_u) ds can be thought of as the integral of a suitable Gauss-Markov process. Some theoretical and application details are given; among them, we provide a simulation formula based on that representation, by which sample paths, probability densities and first-passage times of the ISOU process are obtained; the first-passage times of the DIP are also studied.
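For illustration only (this is not the authors' representation formula): the integral of a stationary zero-mean OU process can be approximated by stepping the OU exactly and accumulating a trapezoidal sum. Its variance at time t should match the closed form (σ²/θ²)(t − (1 − e^{−θt})/θ), which follows from the stationary covariance (σ²/2θ) e^{−θ|s−u|}.

```python
import numpy as np

def integrated_ou(theta, sigma, dt, n_steps, n_paths, rng):
    """Sample Y(t) = int_0^t X(s) ds for a stationary zero-mean OU
    process X, by exact OU stepping plus a trapezoidal Riemann sum."""
    stat_sd = sigma / np.sqrt(2.0 * theta)
    x = stat_sd * rng.standard_normal(n_paths)   # stationary initial draw
    decay = np.exp(-theta * dt)
    step_sd = stat_sd * np.sqrt(1.0 - decay**2)  # exact one-step noise sd
    y = np.zeros(n_paths)
    for _ in range(n_steps):
        x_new = x * decay + step_sd * rng.standard_normal(n_paths)
        y += 0.5 * (x + x_new) * dt              # trapezoidal rule
        x = x_new
    return y
```

With θ = σ = 1 and t = 1 the target variance is e^{−1} ≈ 0.368, an easy Monte Carlo check.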
Ergodicity and Parameter Estimates for Infinite-Dimensional Fractional Ornstein-Uhlenbeck Process
Maslowski, Bohdan; Pospisil, Jan
2008-01-01
Existence and ergodicity of a strictly stationary solution for linear stochastic evolution equations driven by cylindrical fractional Brownian motion are proved. Ergodic behavior of non-stationary infinite-dimensional fractional Ornstein-Uhlenbeck processes is also studied. Based on these results, strong consistency of suitably defined families of parameter estimators is shown. The general results are applied to linear parabolic and hyperbolic equations perturbed by a fractional noise.
On the time-homogeneous Ornstein-Uhlenbeck process in the foreign exchange rates
da Fonseca, Regina C. B.; Matsushita, Raul Y.; de Castro, Márcio T.; Figueiredo, Annibal
2015-10-01
Since Gaussianity and stationarity assumptions cannot be fulfilled by financial data, the time-homogeneous Ornstein-Uhlenbeck (THOU) process was introduced as a candidate model to describe time series of financial returns [1]. It is an Ornstein-Uhlenbeck (OU) process in which these assumptions are replaced by linearity and time-homogeneity. We employ the OU and THOU processes to analyze daily foreign exchange rates against the US dollar. We confirm that the OU process does not fit the data, while in most cases the patterns of the first four cumulants of the data can be described by the THOU process. However, there are some exceptions in which the data do not follow the linearity or time-homogeneity assumptions.
Fractional Ornstein-Uhlenbeck for index prices of FTSE Bursa Malaysia KLCI
Chen, Kho Chia; Bahar, Arifah; Ting, Chee-Ming
2014-07-01
This paper studies the Ornstein-Uhlenbeck model that incorporates long-memory stochastic volatility, known as the fractional Ornstein-Uhlenbeck model. The existence of long-range dependence in the index prices of FTSE Bursa Malaysia KLCI is measured by the Hurst exponent. The empirical distribution of unobserved volatility is estimated using the particle filtering method. The performance of the fractional Ornstein-Uhlenbeck and standard Ornstein-Uhlenbeck processes was compared. The mean square errors of the fractional Ornstein-Uhlenbeck model indicated that it describes index prices better than the standard Ornstein-Uhlenbeck process.
Biyajima, M.
1984-01-01
Stochastic backgrounds of the KNO scaling functions given by Buras and Koba and by Barshay and Yamaguchi are investigated. It is found that they are connected with the stochastic Rayleigh process and with the (1+2)- and (1+4)-dimensional Ornstein-Uhlenbeck processes. Moreover, those KNO scaling functions are transformed into the KNO scaling functions given by the Perina-McGill formula by means of a nonlinear transformation. Data are analyzed by means of these functions. Probability distributions of the former KNO scaling functions are also calculated by the Poisson transformation.
Construction of a Family of Quantum Ornstein-Uhlenbeck Semigroups
Ki Ko, C
2003-01-01
For a given quasi-free state on the CCR algebra over a one-dimensional Hilbert space, a family of Markovian semigroups which leave the quasi-free state invariant is constructed by means of noncommutative elliptic operators and Dirichlet forms on von Neumann algebras. The generators (Dirichlet operators) of the semigroups are analyzed, and their spectra together with the eigenspaces are found. When restricted to a maximal abelian subalgebra, the semigroups reduce to the unique Markovian semigroup of the classical Ornstein-Uhlenbeck process.
Ditlevsen, Susanne Dalager; Ditlevsen, Ove Dalager
2008-01-01
[…] a subjective graphical test of the applicability of the OU process or the Feller process when applied to a reasonably large sample of observed first-passage data. These non-stationary processes have several applications in biomedical research, for example as idealized models of the neuron membrane potential […] random time break through to the material surface and become observable. However, the OU process has, as a model of physical phenomena, the defect of not being bounded to the negative side. This defect is not present for the Feller process, which therefore may provide a useful modeling alternative.
Pérez-Fructuoso, María José
2017-12-01
This paper develops a continuous-time random model of loss-index triggers for cat bonds, on the basis of the loss amount incurred until their maturity. Assuming that the total loss amount due to a catastrophe is defined as the sum of the incurred loss amount and the incurred-but-not-yet-reported loss amount, we model the decreasing linear dynamics of the latter by means of an additive Brownian process (Ornstein-Uhlenbeck process), and obtain the former as the difference between the total loss amount and the incurred-but-not-yet-reported loss amount. Finally, we test the validity of the model by estimating its core parameters and by checking the goodness of fit on a data series of six floods that occurred in several Spanish cities prone to this kind of catastrophe.
Conducting properties of classical transmission lines with Ornstein-Uhlenbeck type disorder
Lazo, E.; Diez, E.
2011-01-01
In this work we study the behavior of bands of extended states and localized states which appear in classical disordered electrical transmission lines, when we use a ternary map and the Ornstein-Uhlenbeck process to generate the long-range correlated disorder, instead of using the Fourier filtering method. By performing finite-size scaling we obtain the asymptotic value of the map parameter b in the thermodynamic limit in a selected range of values of the parameters γ and C of the Ornstein-Uhlenbeck process. With these data we obtain the phase diagrams which separate the localized states from the extended states. These are the fundamental results of this article.
Memory effects on a resonate-and-fire neuron model subjected to Ornstein-Uhlenbeck noise
Paekivi, S.; Mankin, R.; Rekker, A.
2017-10-01
We consider a generalized Langevin equation with an exponentially decaying memory kernel as a model for the firing process of a resonate-and-fire neuron. The effect of temporally correlated random neuronal input is modeled as Ornstein-Uhlenbeck noise. In the noise-induced spiking regime of the neuron, we derive exact analytical formulas for the dependence of some statistical characteristics of the output spike train, such as the probability distribution of the interspike intervals (ISIs) and the survival probability, on the parameters of the input stimulus. Particularly, on the basis of these exact expressions, we have established sufficient conditions for the occurrence of memory-time-induced transitions between unimodal and multimodal structures of the ISI density and a critical damping coefficient which marks a dynamical transition in the behavior of the system.
The Fractional Ornstein-Uhlenbeck Process
Høg, Esben; Frederiksen, Per H.
The paper revisits dynamic term structure models (DTSMs) and proposes a new way of dealing with the limitations of the classical affine models. In particular, the paper expands the flexibility of the DTSMs by applying a fractional Brownian motion as the governing force of the state variable instead […] of the bond is recovered by solving a fractional version of the fundamental bond pricing equation. Besides this theoretical contribution, the paper proposes an estimation methodology based on the Kalman filter approach, which is applied to the US term structure of interest rates.
Stochastic Resonance in Neuronal Network Motifs with Ornstein-Uhlenbeck Colored Noise
Xuyang Lou
2014-01-01
We consider here the effect of Ornstein-Uhlenbeck colored noise on the stochastic resonance of the feed-forward-loop (FFL) network motif. The FFL motif is modeled with the FitzHugh-Nagumo neuron model and chemical coupling. Our results show that the noise intensity and the correlation time of the noise process serve as control parameters, which have great impact on the stochastic dynamics of the FFL motif. We find that, with a proper choice of noise intensity and correlation time of the noise process, the signal-to-noise ratio (SNR) can display more than one peak.
On the stochastic pendulum with Ornstein-Uhlenbeck noise
Mallick, Kirone; Marcq, Philippe
2004-01-01
We study a frictionless pendulum subject to multiplicative random noise. Because of destructive interference between the angular displacement of the system and the noise term, the energy fluctuations are reduced when the noise has a non-zero correlation time. We derive the long-time behaviour of the pendulum in the case of Ornstein-Uhlenbeck noise by a recursive adiabatic elimination procedure. An analytical expression for the asymptotic probability distribution function of the energy is obtained, and the results agree with numerical simulations. Lastly, we compare our method with other approximation schemes.
Bai, Zhan-Wu; Zhang, Wei
2018-01-01
The diffusion behaviors of Brownian particles in a tilted periodic potential under the influence of an internal white noise and an external Ornstein-Uhlenbeck noise are investigated through numerical simulation. In contrast to the case when the bias force is smaller or absent, the diffusion coefficient exhibits a nonmonotonic dependence on the correlation time of the external noise when the bias force is large. A mechanism different from locked-to-running transition theory is presented for the diffusion enhancement by a bias force in intermediate to large damping. In the underdamped regime and in the presence of external noise, the diffusion coefficient is a monotonically decreasing function of low temperature, rather than a nonmonotonic function as when external noise is absent. The diffusive process undergoes four regimes when the bias force approaches but is less than its critical value and the noise intensities are small. These behaviors can be attributed to the locked-to-running transition of particles.
Breed, Greg A; Golson, Emily A; Tinker, M Tim
2017-01-01
The home-range concept is central in animal ecology and behavior, and numerous mechanistic models have been developed to understand home range formation and maintenance. These mechanistic models usually assume a single, contiguous home range. Here we describe and implement a simple home-range model that can accommodate multiple home-range centers, form complex shapes, allow discontinuities in use patterns, and infer how external and internal variables affect movement and use patterns. The model assumes individuals associate with two or more home-range centers and move among them with some estimable probability. Movement in and around home-range centers is governed by a two-dimensional Ornstein-Uhlenbeck process, while transitions between centers are modeled as a stochastic state-switching process. We augmented this base model by introducing environmental and demographic covariates that modify transition probabilities between home-range centers and can be estimated to provide insight into the movement process. We demonstrate the model using telemetry data from sea otters (Enhydra lutris) in California. The model was fit using a Bayesian Markov Chain Monte Carlo method, which estimated transition probabilities, as well as unique Ornstein-Uhlenbeck diffusion and centralizing tendency parameters. Estimated parameters could then be used to simulate movement and space use that was virtually indistinguishable from real data. We used Deviance Information Criterion (DIC) scores to assess model fit and determined that both wind and reproductive status were predictive of transitions between home-range centers. Females were less likely to move between home-range centers on windy days, less likely to move between centers when tending pups, and much more likely to move between centers just after weaning a pup. These tendencies are predicted by theoretical movement rules but were not previously known and show that our model can extract meaningful behavioral insight from complex […]
Estimation of stochastic volatility by using Ornstein-Uhlenbeck type models
Mariani, Maria C.; Bhuiyan, Md Al Masum; Tweneboah, Osei K.
2018-02-01
In this study, we develop a technique for estimating the stochastic volatility (SV) of a financial time series by using Ornstein-Uhlenbeck-type models. Using the daily closing prices from developed and emergent stock markets, we conclude that the incorporation of stochastic volatility into the time-varying parameter estimation significantly improves the forecasting performance via maximum likelihood estimation. Furthermore, our estimation algorithm is feasible with large data sets and has good convergence properties.
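Because an evenly sampled OU path has an exact Gaussian AR(1) transition, maximum-likelihood estimation of (θ, μ, σ) for dX = θ(μ − X) dt + σ dW reduces to a linear regression of X_{k+1} on X_k. A sketch under that assumption (the paper's algorithm may differ; the function and variable names are ours):

```python
import numpy as np

def fit_ou(x, dt):
    """Estimate (theta, mu, sigma) of dX = theta*(mu - X) dt + sigma dW
    from an evenly sampled path x. The exact transition is AR(1):
    X_{k+1} = a*X_k + b + eps, so least squares coincides with MLE."""
    x0, x1 = x[:-1], x[1:]
    a, b = np.polyfit(x0, x1, 1)                 # x1 ~ a*x0 + b
    theta = -np.log(a) / dt                      # a = exp(-theta*dt)
    mu = b / (1.0 - a)                           # b = mu*(1 - a)
    resid = x1 - (a * x0 + b)
    # residual variance equals sigma^2*(1 - a^2)/(2*theta)
    sigma = np.sqrt(resid.var() * 2.0 * theta / (1.0 - a**2))
    return theta, mu, sigma
```

On a long exactly simulated path the three estimates recover the true parameters to within Monte Carlo error.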
Randomness and variability of the neuronal activity described by the Ornstein-Uhlenbeck model
Košťál, Lubomír; Lánský, Petr; Zucca, Ch.
2007-01-01
Vol. 18, No. 1 (2007), pp. 63–75. ISSN 0954-898X. Keywords: Ornstein-Uhlenbeck; entropy; randomness.
A cautionary note on the use of Ornstein Uhlenbeck models in macroevolutionary studies.
Cooper, Natalie; Thomas, Gavin H; Venditti, Chris; Meade, Andrew; Freckleton, Rob P
2016-05-01
Phylogenetic comparative methods are increasingly used to give new insights into the dynamics of trait evolution in deep time. For continuous traits the core of these methods is a suite of models that attempt to capture evolutionary patterns by extending the Brownian constant-variance model. However, the properties of these models are often poorly understood, which can lead to the misinterpretation of results. Here we focus on one of these models: the Ornstein-Uhlenbeck (OU) model. We show that the OU model is frequently incorrectly favoured over simpler models when using likelihood ratio tests, and that many studies fitting this model use datasets that are small and prone to this problem. We also show that very small amounts of error in datasets can have profound effects on the inferences derived from OU models. Our results suggest that simulating fitted models and comparing with empirical results is critical when fitting OU and other extensions of the Brownian model. We conclude by making recommendations for best practice in fitting OU models in phylogenetic comparative analyses, and for interpreting the parameters of the OU model.
Keanini, R.G.; Srivastava, N.; Tkacik, P.T. [Department of Mechanical Engineering, University of North Carolina at Charlotte, 9201 University City Blvd., Charlotte, NC 28078 (United States); Weggel, D.C. [Department of Civil and Environmental Engineering, University of North Carolina at Charlotte, 9201 University City Blvd., Charlotte, NC 28078 (United States); Knight, P.D. [Mitchell Aerospace and Engineering, Statesville, North Carolina 28677 (United States)
2011-06-15
A long-standing, though ill-understood problem in rocket dynamics, rocket response to random, altitude-dependent nozzle side-loads, is investigated. Side loads arise during low altitude flight due to random, asymmetric, shock-induced separation of in-nozzle boundary layers. In this paper, stochastic evolution of the in-nozzle boundary layer separation line, an essential feature underlying side load generation, is connected to random, altitude-dependent rotational and translational rocket response via a set of simple analytical models. Separation line motion, extant on a fast boundary layer time scale, is modeled as an Ornstein-Uhlenbeck process. Pitch and yaw responses, taking place on a long, rocket dynamics time scale, are shown to likewise evolve as OU processes. Stochastic, altitude-dependent rocket translational motion follows from linear, asymptotic versions of the full nonlinear equations of motion; the model is valid in the practical limit where random pitch, yaw, and roll rates all remain small. Computed altitude-dependent rotational and translational velocity and displacement statistics are compared against those obtained using recently reported high fidelity simulations [Srivastava, Tkacik, and Keanini, J. Appl. Phys. 108, 044911 (2010)]; in every case, reasonable agreement is observed. As an important prelude, evidence indicating the physical consistency of the model introduced in the above article is first presented: it is shown that the study's separation line model allows direct derivation of experimentally observed side load amplitude and direction densities. Finally, it is found that the analytical models proposed in this paper allow straightforward identification of practical approaches for: (i) reducing pitch/yaw response to side loads, and (ii) enhancing pitch/yaw damping once side loads cease.
A non-Gaussian Ornstein-Uhlenbeck model for pricing wind power futures
Benth, Fred Espen; Pircalabu, Anca
2018-01-01
[…] generated assuming a recent level of installed capacity. Also, based on one year of observed prices for wind power futures with different delivery periods, we study the market price of risk. Generally, we find a negative risk premium whose magnitude decreases as the length of the delivery period increases.
Simple simulation schemes for CIR and Wishart processes
Pisani, Camilla
2013-01-01
We develop some simple simulation algorithms for CIR and Wishart processes. The main idea is the splitting of their generator into the sum of the square of an Ornstein-Uhlenbeck matrix process and a deterministic process. Joint work with Paolo Baldi, Tor Vergata University, Rome.
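The squared-OU connection behind such splittings can be illustrated directly in the scalar case: when d = 4κθ/σ² is a positive integer, the CIR process dX = κ(θ − X) dt + σ√X dW equals in law the sum of d squared OU factors dY_i = −(κ/2) Y_i dt + (σ/2) dW_i. A hedged sketch of this classical construction (not the paper's matrix scheme; names and parameters are ours):

```python
import numpy as np

def cir_via_squared_ou(kappa, theta, sigma, dt, n_steps, n_paths, rng):
    """Simulate CIR dX = kappa*(theta - X) dt + sigma*sqrt(X) dW as a sum
    of d = 4*kappa*theta/sigma**2 squared OU factors (d must be an integer).
    Each factor dY = -(kappa/2) Y dt + (sigma/2) dW is stepped exactly,
    so X stays nonnegative by construction."""
    d = 4.0 * kappa * theta / sigma**2
    assert abs(d - round(d)) < 1e-9, "4*kappa*theta/sigma^2 must be an integer"
    d = int(round(d))
    stat_sd = (sigma / 2.0) / np.sqrt(kappa)   # stationary sd of each factor
    y = stat_sd * rng.standard_normal((d, n_paths))  # stationary start
    decay = np.exp(-0.5 * kappa * dt)
    step_sd = stat_sd * np.sqrt(1.0 - decay**2)
    for _ in range(n_steps):
        y = y * decay + step_sd * rng.standard_normal((d, n_paths))
    return (y**2).sum(axis=0)
```

Started from stationarity, the simulated X has mean θ at every time, and nonnegativity holds pathwise since X is a sum of squares.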
Mixtures in nonstable Lévy processes
Petroni, N Cufaro
2007-01-01
We analyse the Lévy processes produced by means of two interconnected classes of nonstable, infinitely divisible distributions: the variance gamma and the Student laws. While the variance gamma family is closed under convolution, the Student one is not: this makes its time evolution more complicated. We prove that, at least for one particular type of Student process suggested by recent empirical results, and for integer times, the distribution of the process is a mixture of other types of Student distributions, randomized by means of a new probability distribution. The mixture is such that over time the asymptotic behaviour of the probability density functions always coincides with that of the generating Student law. We put forward the conjecture that this can be a general feature of Student processes. We finally analyse the Ornstein-Uhlenbeck process driven by our Lévy noises and show a few simulations of it.
Uma, B.; Swaminathan, T. N.; Ayyaswamy, P. S.; Eckmann, D. M.; Radhakrishnan, R.
2011-09-01
A direct numerical simulation (DNS) procedure is employed to study the thermal motion of a nanoparticle in an incompressible Newtonian stationary fluid medium with the generalized Langevin approach. We consider both the Markovian (white noise) and non-Markovian (Ornstein-Uhlenbeck noise and Mittag-Leffler noise) processes. Initial locations of the particle are at various distances from the bounding wall to delineate wall effects. At thermal equilibrium, the numerical results are validated by comparing the calculated translational and rotational temperatures of the particle with those obtained from the equipartition theorem. The nature of the hydrodynamic interactions is verified by comparing the velocity autocorrelation functions and mean square displacements with analytical results. Numerical predictions of wall interactions with the particle in terms of mean square displacements are compared with analytical results. In the non-Markovian Langevin approach, an appropriate choice of colored noise is required to satisfy the power-law decay in the velocity autocorrelation function at long times. The results obtained by using non-Markovian Mittag-Leffler noise simultaneously satisfy the equipartition theorem and the long-time behavior of the hydrodynamic correlations for a range of memory correlation times. The Ornstein-Uhlenbeck process does not provide the appropriate hydrodynamic correlations. Comparing our DNS results to the solution of a one-dimensional generalized Langevin equation, it is observed that, where the thermostat adheres to the equipartition theorem, the characteristic memory time in the noise is consistent with the inherent time scale of the memory kernel. The performance of the thermostat with respect to equilibrium and dynamic properties for various noise schemes is discussed.
Reduced equations of motion for quantum systems driven by diffusive Markov processes.
Sarovar, Mohan; Grace, Matthew D
2012-09-28
The expansion of a stochastic Liouville equation for the coupled evolution of a quantum system and an Ornstein-Uhlenbeck process into a hierarchy of coupled differential equations is a useful technique that simplifies the simulation of stochastically driven quantum systems. We expand the applicability of this technique by completely characterizing the class of diffusive Markov processes for which a useful hierarchy of equations can be derived. The expansion of this technique enables the examination of quantum systems driven by non-Gaussian stochastic processes with bounded range. We present an application of this extended technique by simulating Stark-tuned Förster resonance transfer in Rydberg atoms with nonperturbative position fluctuations.
Chen, Yong; Ge, Hao; Xiong, Jie; Xu, Lihu
2016-01-01
Fluctuation theorem is one of the major achievements in the field of nonequilibrium statistical mechanics during the past two decades. There exist very few results for steady-state fluctuation theorem of sample entropy production rate in terms of large deviation principle for diffusion processes due to the technical difficulties. Here we give a proof for the steady-state fluctuation theorem of a diffusion process in magnetic fields, with explicit expressions of the free energy function and rate function. The proof is based on the Karhunen-Loève expansion of complex-valued Ornstein-Uhlenbeck process.
A General Model for Estimating Macroevolutionary Landscapes.
Boucher, Florian C; Démery, Vincent; Conti, Elena; Harmon, Luke J; Uyeda, Josef
2018-03-01
The evolution of quantitative characters over long timescales is often studied using stochastic diffusion models. The current toolbox available to students of macroevolution is however limited to two main models: Brownian motion and the Ornstein-Uhlenbeck process, plus some of their extensions. Here, we present a very general model for inferring the dynamics of quantitative characters evolving under both random diffusion and deterministic forces of any possible shape and strength, which can accommodate interesting evolutionary scenarios like directional trends, disruptive selection, or macroevolutionary landscapes with multiple peaks. This model is based on a general partial differential equation widely used in statistical mechanics: the Fokker-Planck equation, also known in population genetics as the Kolmogorov forward equation. We thus call the model FPK, for Fokker-Planck-Kolmogorov. We first explain how this model can be used to describe macroevolutionary landscapes over which quantitative traits evolve and, more importantly, we detail how it can be fitted to empirical data. Using simulations, we show that the model has good behavior both in terms of discrimination from alternative models and in terms of parameter inference. We provide R code to fit the model to empirical data using either maximum-likelihood or Bayesian estimation, and illustrate the use of this code with two empirical examples of body mass evolution in mammals. FPK should greatly expand the set of macroevolutionary scenarios that can be studied since it opens the way to estimating macroevolutionary landscapes of any conceivable shape. [Adaptation; bounds; diffusion; FPK model; macroevolution; maximum-likelihood estimation; MCMC methods; phylogenetic comparative data; selection.].
Barndorff-Nielsen, Ole Eiler; Stelzer, Robert
Univariate superpositions of Ornstein-Uhlenbeck (OU) type processes, called supOU processes, provide a class of continuous time processes capable of exhibiting long memory behaviour. This paper introduces multivariate supOU processes and gives conditions for their existence and finiteness...... of moments. Moreover, the second order moment structure is explicitly calculated, and examples exhibit the possibility of long range dependence. Our supOU processes are defined via homogeneous and factorisable Lévy bases. We show that the behaviour of supOU processes is particularly nice when the mean...... reversion parameter is restricted to normal matrices and especially to strictly negative definite ones.For finite variation Lévy bases we are able to give conditions for supOU processes to have locally bounded càdlàg paths of finite variation and to show an analogue of the stochastic differential equation...
Fractional Poincaré inequalities for general measures
Mouhot, Clément
2011-01-01
We prove a fractional version of Poincaré inequalities in the context of Rn endowed with a fairly general measure. Namely we prove a control of an L2 norm by a non-local quantity, which plays the role of the gradient in the standard Poincaré inequality. The assumption on the measure is the fact that it satisfies the classical Poincaré inequality, so that our result is an improvement of the latter inequality. Moreover we also quantify the tightness at infinity provided by the control on the fractional derivative in terms of a weight growing at infinity. The proof goes through the introduction of the generator of the Ornstein-Uhlenbeck semigroup and some careful estimates of its powers. To our knowledge this is the first proof of fractional Poincaré inequality for measures more general than Lévy measures. © 2010 Elsevier Masson SAS.
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated...... EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...... by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated...
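For the simpler case of a directly observed Ornstein-Uhlenbeck path (not the integrated, error-contaminated observations treated in the paper), the exact Gaussian transition makes maximum-likelihood estimation a linear AR(1) regression. A hedged sketch under that simplification:

```python
import numpy as np

def fit_ou(x, dt):
    """ML fit of dX = -theta (X - mu) dt + sigma dW from a discretely
    observed path x with spacing dt. The exact transition is Gaussian
    AR(1), so the MLE reduces to regressing x[1:] on x[:-1]."""
    x0, x1 = x[:-1], x[1:]
    n = len(x0)
    sx, sy = x0.sum(), x1.sum()
    a = (n * (x0 * x1).sum() - sx * sy) / (n * (x0 * x0).sum() - sx**2)
    mu = (sy - a * sx) / (n * (1.0 - a))
    theta = -np.log(a) / dt
    resid = x1 - mu - a * (x0 - mu)
    sigma = np.sqrt(resid.var() * 2.0 * theta / (1.0 - a**2))
    return theta, mu, sigma

# simulate a path with known parameters, then recover them
rng = np.random.default_rng(0)
theta, mu, sigma, dt, n = 2.0, 1.0, 0.5, 0.01, 200_000
a = np.exp(-theta * dt)
sd = sigma * np.sqrt((1.0 - a**2) / (2.0 * theta))  # exact transition st.dev.
x = np.empty(n)
x[0] = mu
for i in range(1, n):
    x[i] = mu + a * (x[i - 1] - mu) + sd * rng.normal()
est = fit_ou(x, dt)  # estimates should be close to (2.0, 1.0, 0.5)
```

The EM machinery of the paper is needed precisely because this closed-form likelihood is lost once only integrals of the process, observed with measurement error, are available.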
Neural network connectivity and response latency modelled by stochastic processes
Tamborrino, Massimiliano
is connected to thousands of other neurons. The first question is: how to model neural networks through stochastic processes? A multivariate Ornstein-Uhlenbeck process, obtained as a diffusion approximation of a jump process, is the proposed answer. Obviously, dependencies between neurons imply dependencies......Stochastic processes and their first passage times have been widely used to describe the membrane potential dynamics of single neurons and to reproduce neuronal spikes, respectively. However, the cerebral cortex in human brains is estimated to contain 10-20 billion neurons and each of them...... between their spike times. Therefore, the second question is: how to detect neural network connectivity from simultaneously recorded spike trains? Answering this question corresponds to investigating the joint distribution of sequences of first passage times. A non-parametric method based on copulas...
On diffusion processes with variable drift rates as models for decision making during learning
Eckhoff, P; Holmes, P; Law, C; Connolly, P M; Gold, J I
2008-01-01
We investigate Ornstein-Uhlenbeck and diffusion processes with variable drift rates as models of evidence accumulation in a visual discrimination task. We derive power-law and exponential drift-rate models and characterize how parameters of these models affect the psychometric function describing performance accuracy as a function of stimulus strength and viewing time. We fit the models to psychophysical data from monkeys learning the task to identify parameters that best capture performance as it improves with training. The most informative parameter was the overall drift rate describing the signal-to-noise ratio of the sensory evidence used to form the decision, which increased steadily with training. In contrast, secondary parameters describing the time course of the drift during motion viewing did not exhibit steady trends. The results indicate that relatively simple versions of the diffusion model can fit behavior over the course of training, thereby giving a quantitative account of learning effects on the underlying decision process
Fractional Poincaré inequalities for general measures
Mouhot, Clément; Russ, Emmanuel; Sire, Yannick
2011-01-01
on the fractional derivative in terms of a weight growing at infinity. The proof goes through the introduction of the generator of the Ornstein-Uhlenbeck semigroup and some careful estimates of its powers. To our knowledge this is the first proof of fractional
Lyapunov exponent of the random frequency oscillator: cumulant expansion approach
Anteneodo, C; Vallejos, R O
2010-01-01
We consider a one-dimensional harmonic oscillator with a random frequency, focusing on both the standard and the generalized Lyapunov exponents, λ and λ* respectively. We discuss the difficulties that arise in the numerical calculation of λ* in the case of strong intermittency. When the frequency corresponds to an Ornstein-Uhlenbeck process, we compute λ* analytically by using a cumulant expansion including terms up to fourth order. Connections with the problem of finding an analytical estimate for the largest Lyapunov exponent of a many-body system with smooth interactions are discussed.
McLean, Bryan S; Helgen, Kristofer M; Goodwin, H Thomas; Cook, Joseph A
2018-03-01
Our understanding of mechanisms operating over deep timescales to shape phenotypic diversity often hinges on linking variation in one or few trait(s) to specific evolutionary processes. When distinct processes are capable of similar phenotypic signatures, however, identifying these drivers is difficult. We explored ecomorphological evolution across a radiation of ground-dwelling squirrels whose history includes convergence and constraint, two processes that can yield similar signatures of standing phenotypic diversity. Using four ecologically relevant trait datasets (body size, cranial, mandibular, and molariform tooth shape), we compared and contrasted variation, covariation, and disparity patterns in a new phylogenetic framework. Strong correlations existed between body size and two skull traits (allometry) and among skull traits themselves (integration). Inferred evolutionary modes were also concordant across traits (Ornstein-Uhlenbeck with two adaptive regimes). However, despite these broad similarities, we found divergent dynamics on the macroevolutionary landscape, with phenotypic disparity being differentially shaped by convergence and conservatism. Such among-trait heterogeneity in process (but not always pattern) reiterates the mosaic nature of morphological evolution, and suggests ground squirrel evolution is poorly captured by single process descriptors. Our results also highlight how use of single traits can bias macroevolutionary inference, affirming the importance of broader trait-bases in understanding phenotypic evolutionary dynamics. © 2018 The Author(s). Evolution © 2018 The Society for the Study of Evolution.
Biyajima, M.; Ide, M.; Mizoguchi, T.; Suzuki, N.
2002-01-01
Recently, interesting data on dN_ch/dη in Au-Au collisions (η = -ln tan(θ/2)) with centrality cuts have been reported by the PHOBOS and BRAHMS Collaborations. Their data are usually divided by the number of participants (nucleons) in collisions. Instead of this, using the total multiplicity N_ch = ∫(dN_ch/dη)dη, we find that there are scaling phenomena among (N_ch)^(-1) dN_ch/dη = dn/dη with different centrality cuts at √s_NN = 130 GeV and 200 GeV, respectively. To explain these scaling behaviors of dn/dη, we consider the stochastic approach named the Ornstein-Uhlenbeck process with two sources. The Langevin equation is adopted for the present explanation. Between dn/dη at 130 GeV and 200 GeV, no significant difference has been found. A possible detection method for the quark-gluon plasma (QGP) through dN_ch/dη is presented. (author)
Some continual integrals from gaussian forms
Mazmanishvili, A.S.
1985-01-01
A summary of results on the continual integration of Gaussian functionals is given. The summary contains 124 continual integrals, which are the mathematical expectations of the corresponding Gaussian forms over the continuum of random trajectories of four types: the real-valued Ornstein-Uhlenbeck process, the Wiener process, the complex-valued Ornstein-Uhlenbeck process, and the stochastic harmonic one. The summary includes both known continual integrals and previously unpublished ones. The mathematical results of the continual integration carried out in this work may be applied to problems in the theory of stochastic processes that reduce to finding means of Gaussian forms over measures generated by the stochastic processes listed above.
The stochastic versus the Euclidean approach to quantum fields on a static space-time
De Angelis, G.F.; de Falco, D.
1986-01-01
Equations are presented which modify the definition of the Gaussian field in the Rindler chart in order to make contact with the Wightman state, the Hartle-Hawking state, and the Euclidean field. By taking Ornstein-Uhlenbeck processes the authors have chosen, in the sense of stochastic mechanics, to place precisely the Fulling modes in their harmonic oscillator ground state. In this respect, together with the periodicity of Minkowski space-time, the authors observe that the covariance of the Ornstein-Uhlenbeck process can be obtained by analytical continuation of the Wightman function of the harmonic oscillator at zero temperature
Non-Parametric Estimation of Diffusion-Paths Using Wavelet Scaling Methods
Høg, Esben
2003-01-01
In continuous time, diffusion processes have been used for modelling financial dynamics for a long time. For example the Ornstein-Uhlenbeck process (the simplest mean--reverting process) has been used to model non-speculative price processes. We discuss non--parametric estimation of these processes...
Evolutionary patterns and processes in the radiation of phyllostomid bats
Monteiro Leandro R
2011-05-01
Background The phyllostomid bats present the most extensive ecological and phenotypic radiation known among mammal families. This group is an important model system for studies of cranial ecomorphology and functional optimisation because of the constraints imposed by the requirements of flight. A number of studies supporting phyllostomid adaptation have focused on qualitative descriptions or correlating functional variables and diet, but explicit tests of possible evolutionary mechanisms and scenarios for phenotypic diversification have not been performed. We used a combination of morphometric and comparative methods to test hypotheses regarding the evolutionary processes behind the diversification of phenotype (mandible shape and size and diet during the phyllostomid radiation. Results The different phyllostomid lineages radiate in mandible shape space, with each feeding specialisation evolving towards different axes. Size and shape evolve quite independently, as the main directions of shape variation are associated with mandible elongation (nectarivores or the relative size of tooth rows and mandibular processes (sanguivores and frugivores, which are not associated with size changes in the mandible. The early period of phyllostomid diversification is marked by a burst of shape, size, and diet disparity (before 20 Mya, larger than expected by neutral evolution models, settling later to a period of relative phenotypic and ecological stasis. The best fitting evolutionary model for both mandible shape and size divergence was an Ornstein-Uhlenbeck process with five adaptive peaks (insectivory, carnivory, sanguivory, nectarivory and frugivory. Conclusions The radiation of phyllostomid bats presented adaptive and non-adaptive components nested together through the time frame of the family's evolution. The first 10 My of the radiation were marked by strong phenotypic and ecological divergence among ancestors of modern lineages, whereas the
On the dependence structure of Gaussian queues
Es-Saghouani, A.; Mandjes, M.R.H.
2009-01-01
In this article we study Gaussian queues (that is, queues fed by Gaussian processes, such as fractional Brownian motion (fBm) and the integrated Ornstein-Uhlenbeck (iOU) process), with a focus on the dependence structure of the workload process. The main question is to what extent does the workload
Random attractors for stochastic lattice reversible Gray-Scott systems with additive noise
Hongyan Li
2015-10-01
In this article, we prove the existence of a random attractor of the stochastic three-component reversible Gray-Scott system on an infinite lattice with additive noise. We use an additive transformation involving the Ornstein-Uhlenbeck process to prove the pullback absorbing property and the pullback asymptotic compactness of the reaction-diffusion system with cubic nonlinearity.
On drift estimation for non-ergodic ...
Es-Sebaiy, Khalifa; Ndiaye, Djibril (Afrika Statistika, ISSN 2316-090X)
Key words: Drift estimation; Discrete observations; Ornstein-Uhlenbeck process; Non-ergodicity. AMS 2010 Mathematics Subject Classification: 60G22; 62M05; 62F12.
The Morris-Lecar neuron model embeds a leaky integrate-and-fire model
Ditlevsen, Susanne; Greenwood, Priscilla
2013-01-01
We show that the stochastic Morris–Lecar neuron, in a neighborhood of its stable point, can be approximated by a two-dimensional Ornstein-Uhlenbeck (OU) modulation of a constant circular motion. The associated radial OU process is an example of a leaky integrate-and-fire (LIF) model prior to firing...
Frank, T.D.
2006-01-01
First-order approximations of time-dependent solutions are determined for stochastic systems perturbed by time-delayed feedback forces. To this end, the theory of delay Fokker-Planck equations is applied in combination with Bayes' theorem. Applications to a time-delayed Ornstein-Uhlenbeck process and the geometric Brownian walk of financial physics are discussed
Veestraeten, D.
2015-01-01
The Laplace transforms of the transition probability density and distribution functions for the Ornstein-Uhlenbeck process contain the product of two parabolic cylinder functions, namely Dv(x)Dv(y) and Dv(x)Dv−1(y), respectively. The inverse transforms of these products have as yet not been
Equations involving Malliavin calculus operators applications and numerical approximation
Levajković, Tijana
2017-01-01
This book provides a comprehensive and unified introduction to stochastic differential equations and related optimal control problems. The material is new and the presentation is reader-friendly. A major contribution of the book is the development of generalized Malliavin calculus in the framework of white noise analysis, based on chaos expansion representation of stochastic processes and its application for solving several classes of stochastic differential equations with singular data involving the main operators of Malliavin calculus. In addition, applications in optimal control and numerical approximations are discussed. The book is divided into four chapters. The first, entitled White Noise Analysis and Chaos Expansions, includes notation and provides the reader with the theoretical background needed to understand the subsequent chapters. In Chapter 2, Generalized Operators of Malliavin Calculus, the Malliavin derivative operator, the Skorokhod integral and the Ornstein-Uhlenbeck operator are introdu...
General distributions in process algebra
Katoen, Joost P.; d' Argenio, P.R.; Brinksma, Hendrik; Hermanns, H.; Katoen, Joost P.
2001-01-01
This paper is an informal tutorial on stochastic process algebras, i.e., process calculi where action occurrences may be subject to a delay that is governed by a (mostly continuous) random variable. Whereas most stochastic process algebras consider delays determined by negative exponential
Experiments to Distribute Map Generalization Processes
Berli, Justin; Touya, Guillaume; Lokhat, Imran; Regnauld, Nicolas
2018-05-01
Automatic map generalization requires computationally intensive processes that are often unable to deal with large datasets. Distributing the generalization process is the only way to make it scalable and usable in practice. But map generalization is a highly contextual process: the surroundings of a map feature need to be known to generalize it, which is a problem, as distribution may partition the dataset and parallelize the processing of each part. This paper proposes experiments to evaluate past propositions to distribute map generalization and to identify the main remaining issues. The past propositions to distribute map generalization are first discussed, and then the experiment hypotheses and apparatus are described. The experiments confirmed that regular partitioning was the quickest strategy, but also the least effective at taking context into account. The geographical partitioning, though less effective for now, is quite promising regarding the quality of the results, as it better integrates the geographical context.
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
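The integrated Ornstein-Uhlenbeck movement model mentioned above can be sketched in one dimension: velocity is a stationary OU process and position is its time integral, giving ballistic motion at short lags and diffusion at long lags. An illustrative sketch, not the authors' framework (all parameter values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
dt, tau, var_v, n = 0.01, 1.0, 1.0, 400_000  # assumed illustrative values
a = np.exp(-dt / tau)
v = np.empty(n)
v[0] = rng.normal(0.0, np.sqrt(var_v))       # stationary start
for i in range(1, n):
    v[i] = a * v[i - 1] + rng.normal(0.0, np.sqrt(var_v * (1.0 - a**2)))
x = np.cumsum(v) * dt                        # position = integrated OU velocity

def msd(x, lag):
    """Mean square displacement at the given lag (in steps)."""
    d = x[lag:] - x[:-lag]
    return np.mean(d * d)

# theory: MSD(t) = 2 * var_v * tau**2 * (t/tau - 1 + exp(-t/tau)),
# i.e. ~ var_v * t**2 (ballistic) for t << tau and
# ~ 2 * var_v * tau * t (diffusive) for t >> tau
```

The crossover time tau is exactly the kind of kinematic constraint the maximum-entropy construction encodes, and the continuity of velocity is what distinguishes this model from plain Brownian motion.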
Rough electricity: a new fractal multi-factor model of electricity spot prices
Bennedsen, Mikkel
We introduce a new mathematical model of electricity spot prices which accounts for the most important stylized facts of these time series: seasonality, spikes, stochastic volatility and mean reversion. Empirical studies have found a possible fifth stylized fact, fractality, and our approach...... explicitly incorporates this into the model of the prices. Our setup generalizes the popular Ornstein Uhlenbeck-based multi-factor framework of Benth et al. (2007) and allows us to perform statistical tests to distinguish between an Ornstein Uhlenbeck-based model and a fractal model. Further, through...... the multi-factor approach we account for seasonality and spikes before estimating - and making inference on - the degree of fractality. This is novel in the literature and we present simulation evidence showing that these precautions are crucial to accurate estimation. Lastly, we estimate our model...
The multivariate supOU stochastic volatility model
Barndorff-Nielsen, Ole; Stelzer, Robert
Using positive semidefinite supOU (superposition of Ornstein-Uhlenbeck type) processes to describe the volatility, we introduce a multivariate stochastic volatility model for financial data which is capable of modelling long range dependence effects. The finiteness of moments and the second order...... structure of the volatility, the log returns, as well as their "squares" are discussed in detail. Moreover, we give several examples in which long memory effects occur and study how the model as well as the simple Ornstein-Uhlenbeck type stochastic volatility model behave under linear transformations....... In particular, the models are shown to be preserved under invertible linear transformations. Finally, we discuss how (sup)OU stochastic volatility models can be combined with a factor modelling approach....
General Notes on Processes and Their Spectra
Gustav Cepciansky
2012-01-01
The frequency spectrum is one of the main characteristics of a process. The aim of the paper is to show the coherence between a process and its spectrum, and how the behaviour and properties of a process can be deduced from its spectrum. Processes are categorized, and general principles for calculating and recognizing their spectra are given. The main stress is put on the power spectra of electric and optic signals, as they also represent a kind of process. These spectra can be directly measured, observed and examined by means of spectrum analyzers, and they are very important characteristics that cannot be omitted in transmission techniques in telecommunication technologies. Further, the paper also deals with non-electric processes, mainly processes and spectra in mass servicing, and how these spectra can be utilised in practice.
Optimal consumption problem in the Vasicek model
Jakub Trybuła
2015-01-01
We consider the problem of an optimal consumption strategy on the infinite time horizon based on the hyperbolic absolute risk aversion utility when the interest rate is an Ornstein-Uhlenbeck process. Using the method of subsolution and supersolution we obtain the existence of solutions of the dynamic programming equation. We illustrate the paper with a numerical example of the optimal consumption strategy and the value function.
A generalized integral fluctuation theorem for general jump processes
Liu Fei; Ouyang Zhongcan; Luo Yupin; Huang Mingchang
2009-01-01
Using the Feynman-Kac and Cameron-Martin-Girsanov formulae, we obtain a generalized integral fluctuation theorem (GIFT) for discrete jump processes by constructing a time-invariable inner product. The existing discrete IFTs can be derived as its specific cases. A connection between our approach and the conventional time-reversal method is also established. Unlike the latter approach that has been extensively employed in the existing literature, our approach can naturally bring out the definition of a time reversal of a Markovian stochastic system. Additionally, we find that the robust GIFT usually does not result in a detailed fluctuation theorem. (fast track communication)
OVPD-processed OLED for general lighting
Bösing, Manuel
2012-01-01
Due to continuous advancements of materials for organic light emitting diodes (OLED) a new field of application currently opens up for OLED technology: General lighting. A significant reduction of OLED production cost might be achieved by employing organic vapor phase deposition (OVPD). OVPD is a novel process for depositing organic thin films from the gas phase. In contrast to the well established process of vacuum thermal evaporation (VTE), OVPD allows to achieve much higher deposition rate...
General simulation algorithm for autocorrelated binary processes.
Serinaldi, Francesco; Lombardo, Federico
2017-02-01
The apparent ubiquity of binary random processes in physics and many other fields has attracted considerable attention from the modeling community. However, generation of binary sequences with prescribed autocorrelation is a challenging task owing to the discrete nature of the marginal distributions, which makes the application of classical spectral techniques problematic. We show that such methods can effectively be used if we focus on the parent continuous process of beta distributed transition probabilities rather than on the target binary process. This change of paradigm results in a simulation procedure effectively embedding a spectrum-based iterative amplitude-adjusted Fourier transform method devised for continuous processes. The proposed algorithm is fully general, requires minimal assumptions, and can easily simulate binary signals with power-law and exponentially decaying autocorrelation functions corresponding, for instance, to Hurst-Kolmogorov and Markov processes. An application to rainfall intermittency shows that the proposed algorithm can also simulate surrogate data preserving the empirical autocorrelation.
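A much simpler scheme than the paper's beta-transition-probability algorithm also illustrates the clipping idea: thresholding a stationary Gaussian AR(1) process at zero yields a symmetric binary sequence whose lag-k autocorrelation is (2/π)·arcsin(ρ^k), i.e. approximately exponentially decaying. This sketch is illustrative only and is not the authors' algorithm:

```python
import numpy as np

def clipped_gaussian_binary(n, rho, seed=None):
    """0/1 sequence obtained by thresholding a stationary Gaussian AR(1)
    process at zero. For this symmetric (p = 0.5) case the lag-k
    autocorrelation of the binary sequence is (2/pi) * arcsin(rho**k),
    so it decays roughly exponentially in k."""
    rng = np.random.default_rng(seed)
    g = np.empty(n)
    g[0] = rng.normal()                  # stationary start, unit variance
    for i in range(1, n):
        g[i] = rho * g[i - 1] + np.sqrt(1.0 - rho**2) * rng.normal()
    return (g > 0).astype(int)

b = clipped_gaussian_binary(200_000, rho=0.8, seed=3)
# lag-1 autocorrelation should be near (2/pi) * arcsin(0.8) ≈ 0.59
```

The arcsine distortion of the parent correlation is exactly why prescribing an arbitrary target autocorrelation requires the iterative spectrum-matching machinery the abstract describes.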
Quantum thermodynamics of general quantum processes.
Binder, Felix; Vinjanampathy, Sai; Modi, Kavan; Goold, John
2015-03-01
Accurately describing work extraction from a quantum system is a central objective for the extension of thermodynamics to individual quantum systems. The concepts of work and heat are surprisingly subtle when generalizations are made to arbitrary quantum states. We formulate an operational thermodynamics suitable for application to an open quantum system undergoing quantum evolution under a general quantum process by which we mean a completely positive and trace-preserving map. We derive an operational first law of thermodynamics for such processes and show consistency with the second law. We show that heat, from the first law, is positive when the input state of the map majorizes the output state. Moreover, the change in entropy is also positive for the same majorization condition. This makes a strong connection between the two operational laws of thermodynamics.
A general software reliability process simulation technique
Tausworthe, Robert C.
1991-01-01
The structure and rationale of the generalized software reliability process, together with the design and implementation of a computer program that simulates this process are described. Given assumed parameters of a particular project, the users of this program are able to generate simulated status timelines of work products, numbers of injected anomalies, and the progress of testing, fault isolation, repair, validation, and retest. Such timelines are useful in comparison with actual timeline data, for validating the project input parameters, and for providing data for researchers in reliability prediction modeling.
Noise suppression via generalized-Markovian processes
Marshall, Jeffrey; Campos Venuti, Lorenzo; Zanardi, Paolo
2017-11-01
It is by now well established that noise itself can be useful for performing quantum information processing tasks. We present results which show how one can effectively reduce the error rate associated with a noisy quantum channel by counteracting its detrimental effects with another form of noise. In particular, we consider the effect of adding on top of a purely Markovian (Lindblad) dynamics, a more general form of dissipation, which we refer to as generalized-Markovian noise. This noise has an associated memory kernel and the resulting dynamics are described by an integrodifferential equation. The overall dynamics are characterized by decay rates which depend not only on the original dissipative time scales but also on the new integral kernel. We find that one can engineer this kernel such that the overall rate of decay is lowered by the addition of this noise term. We illustrate this technique for the case where the bare noise is described by a dephasing Pauli channel. We analytically solve this model and show that one can effectively double (or even triple) the length of the channel, while achieving the same fidelity, entanglement, and error threshold. We numerically verify this scheme can also be used to protect against thermal Markovian noise (at nonzero temperature), which models spontaneous emission and excitation processes. A physical interpretation of this scheme is discussed, whereby the added generalized-Markovian noise causes the system to become periodically decoupled from the background Markovian noise.
Generalized epidemic process on modular networks.
Chung, Kihong; Baek, Yongjoo; Kim, Daniel; Ha, Meesoon; Jeong, Hawoong
2014-05-01
Social reinforcement and modular structure are two salient features observed in the spreading of behavior through social contacts. In order to investigate the interplay between these two features, we study the generalized epidemic process on modular networks with equal-sized finite communities and adjustable modularity. Using the analytical approach originally applied to clique-based random networks, we show that the system exhibits a bond-percolation type continuous phase transition for weak social reinforcement, whereas a discontinuous phase transition occurs for sufficiently strong social reinforcement. Our findings are numerically verified using the finite-size scaling analysis and the crossings of the bimodality coefficient.
General programmed system for physiological signal processing
Tournier, E; Monge, J; Magnet, C; Sonrel, C
1975-01-01
Improvements made to the general programmed signal acquisition and processing system, Plurimat S, are described, the aim being to obtain a less specialized system adapted to the biological and medical field. In the modified system, acquisition will be simplified. The standard processing routines offered will be integrated into a true high-level language enabling users to create their own routines, the loss of speed being compensated by greater flexibility and universality. The observation screen will be large and the recording quality high, so that a large fraction of the signal may be displayed. The data will be easily indexed and filed for subsequent display and processing. The system will be used for two kinds of task: it can either be specialized as an integral part of measurement and diagnostic equipment used routinely in clinical work (e.g. vectorcardiographic examination), or its versatility can be exploited in studies of limited duration to gain information in a given field or to evaluate new diagnostic or treatment methods.
Pincheira-Donoso, Daniel; Harvey, Lilly P; Ruta, Marcello
2015-08-07
radiations in continents, but may emerge less frequently (compared to islands) when major events (e.g., climatic, geographic) significantly modify environments. In contrast, body size diversification conforms to an Ornstein-Uhlenbeck model with multiple trait optima. Despite this asymmetric diversification between both lineages and phenotype, links are expected to exist between the two processes, as shown by our trait-dependent analyses of diversification. We finally suggest that the definition of adaptive radiation should not be conditioned by the existence of early-bursts of diversification, and should instead be generalized to lineages in which species and ecological diversity have evolved from a single ancestor.
Møller, Jonas B; Overgaard, Rune V; Madsen, Henrik; Hansen, Torben; Pedersen, Oluf; Ingwersen, Steen H
2010-02-01
Several articles have investigated stochastic differential equations (SDEs) in PK/PD models, but few have quantitatively investigated the benefits to predictive performance of models based on real data. Estimation of first phase insulin secretion, which reflects beta-cell function, using models of the OGTT is a difficult problem in need of further investigation. The present work aimed at investigating the power of SDEs to predict the first phase insulin secretion (AIR (0-8)) in the IVGTT based on parameters obtained from the minimal model of the OGTT, published by Breda et al. (Diabetes 50(1):150-158, 2001). In total 174 subjects underwent both an OGTT and a tolbutamide modified IVGTT. Estimation of parameters in the oral minimal model (OMM) was performed using the FOCE method in NONMEM VI on insulin and C-peptide measurements. The suggested SDE models were based on a continuous AR(1) process, i.e. the Ornstein-Uhlenbeck process, and the extended Kalman filter was implemented in order to estimate the parameters of the models. Inclusion of the Ornstein-Uhlenbeck (OU) process improved the description of the variation in the data, as measured by the autocorrelation function (ACF) of one-step prediction errors. A main result was that application of SDE models improved the correlation between the individual first phase indexes obtained from the OGTT and AIR (0-8) (r = 0.36 to r = 0.49 and r = 0.32 to r = 0.47 with C-peptide and insulin measurements, respectively). Beyond the increased correlation, the indexes obtained using the SDE models also more faithfully reflected the properties of the first phase indexes obtained from the IVGTT. In general, it is concluded that the presented SDE approach not only decreased the autocorrelation of errors but also improved the estimation of clinical measures obtained from the glucose tolerance tests. Since the estimation time of the extended models was not greatly increased compared to the basic models, the applied method
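The continuous AR(1) noise model referred to above is the Ornstein-Uhlenbeck process, whose Gaussian transition density admits an exact discretization. A minimal simulation sketch (parameter values and names are illustrative, not those of the paper):

```python
import numpy as np

def simulate_ou(theta, mu, sigma, x0, dt, n, seed=0):
    """Exact discretization of the OU SDE dX = theta*(mu - X) dt + sigma dW.
    The transition density is Gaussian, so the update is an AR(1) recursion
    with no time-discretization error."""
    rng = np.random.default_rng(seed)
    a = np.exp(-theta * dt)                           # AR(1) coefficient
    sd = sigma * np.sqrt((1 - a**2) / (2 * theta))    # conditional std. dev.
    x = np.empty(n)
    x[0] = x0
    for t in range(1, n):
        x[t] = a * x[t - 1] + mu * (1 - a) + sd * rng.standard_normal()
    return x

# Stationary mean is mu and stationary variance is sigma**2 / (2*theta).
path = simulate_ou(theta=2.0, mu=1.0, sigma=0.5, x0=0.0, dt=0.01, n=50_000)
```

Because the recursion is exact, the same code is valid for arbitrarily coarse sampling grids, which is why the OU process maps directly onto a continuous-time AR(1) model.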
Negative ion formation processes: A general review
Alton, G.D.
1990-01-01
The principal negative ion formation processes will be briefly reviewed. Primary emphasis will be placed on the more efficient and universal processes of charge transfer and secondary ion formation through non-thermodynamic surface ionization. 86 refs., 20 figs
General Process for Business Idea Generation
Halinen, Anu
2017-01-01
This thesis presents a process for generating ideas with the intent to propagate new business within a micro-company. Utilizing this newly proposed process, generation of new ideas will be initiated allowing for subsequent business plans to be implemented to grow the existing customer base. Cloudberrywind is a family-owned and family-operated micro company in the Finnish region that offers information technology consulting services and support for project management to improve company efficie...
Distribution of return point memory states for systems with stochastic inputs
Amann, A; Brokate, M; Rachinskii, D; Temnov, G
2011-01-01
We consider the long term effect of stochastic inputs on the state of an open loop system which exhibits the so-called return point memory. An example of such a system is the Preisach model; more generally, systems with a Preisach type input-state relationship, such as spin-interaction models, are considered. We focus on the characterisation of the expected memory configuration after the system has been affected by the input for a sufficiently long period of time. In the case where the input is given by a discrete time random walk process, or the Wiener process, simple closed form expressions for the probability density of the vector of the main input extrema recorded by the memory state, and scaling laws for the dimension of this vector, are derived. If the input is given by a general continuous Markov process, we show that the distribution of previous memory elements can be obtained from a Markov chain scheme which is derived from the solution of an associated one-dimensional escape type problem. Formulas for the transition probabilities defining this Markov chain scheme are presented. Moreover, explicit formulas for the conditional probability densities of previous main extrema are obtained for the Ornstein-Uhlenbeck input process. The analytical results are confirmed by numerical experiments.
Barndorff-Nielsen, Ole Eiler; Maejima, M.; Sato, K.
2006-01-01
The class of distributions on R generated by convolutions of Γ-distributions and the class generated by convolutions of mixtures of exponential distributions are generalized to higher dimensions and denoted by T(Rd) and B(Rd). From the Lévy process {Xt(μ)} on Rd with distribution μ at t=1, Υ(μ) is defined as the distribution of the stochastic integral ∫01 log(1/t) dXt(μ). This mapping is a generalization of the mapping Υ introduced by Barndorff-Nielsen and Thorbjørnsen in one dimension. It is proved that Υ(ID(Rd)) = B(Rd) and Υ(L(Rd)) = T(Rd), where ID(Rd) and L(Rd) are the classes of infinitely divisible distributions and of self-decomposable distributions on Rd, respectively. The relations with the mapping Φ from μ to the distribution at each time of the stationary process of Ornstein-Uhlenbeck type with background driving Lévy process {Xt(μ)} are studied. Developments of these results...
Estimation of stochastic volatility with long memory for index prices of FTSE Bursa Malaysia KLCI
Chen, Kho Chia; Kane, Ibrahim Lawal; Rahman, Haliza Abd [Department of Mathematical Sciences, Faculty of Science, Universiti Teknologi Malaysia, 81310, Johor Bahru (Malaysia); Bahar, Arifah [UTM Centre for Industrial and Applied Mathematics (UTM-CIAM), Universiti Teknologi Malaysia, 81310, Johor Bahru and Department of Mathematical Sciences, Faculty of Science, Universiti Teknologi Malaysia, 81310, Johor Bahru (Malaysia); Ting, Chee-Ming [Center for Biomedical Engineering, Universiti Teknologi Malaysia, 81310, Johor Bahru (Malaysia)
2015-02-03
In recent years, modeling in long memory properties or fractionally integrated processes in stochastic volatility has been applied in the financial time series. A time series with structural breaks can generate a strong persistence in the autocorrelation function, which is an observed behaviour of a long memory process. This paper considers the structural break of data in order to determine true long memory time series data. Unlike usual short memory models for log volatility, the fractional Ornstein-Uhlenbeck process is neither a Markovian process nor can it be easily transformed into a Markovian process. This makes the likelihood evaluation and parameter estimation for the long memory stochastic volatility (LMSV) model challenging tasks. The drift and volatility parameters of the fractional Ornstein-Uhlenbeck model are estimated separately using the least square estimator (lse) and the quadratic generalized variations (qgv) method, respectively. Finally, the empirical distribution of unobserved volatility is estimated using particle filtering with the sequential importance sampling-resampling (SIR) method. The mean square error (MSE) between the estimated and empirical volatility indicates that the performance of the model on the index prices of FTSE Bursa Malaysia KLCI is fairly good.
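The quadratic-variation idea behind qgv-type estimators can be illustrated on simulated fractional Gaussian noise: mean squared increments of the integrated path grow with the lag as lag^(2H), so comparing two lags yields the Hurst parameter. The sketch below is a hedged, generic illustration, not the paper's estimator: the Cholesky simulator and the two-scale estimator are textbook constructions and all names are ours.

```python
import numpy as np

def fgn_cholesky(n, H, seed=0):
    """Fractional Gaussian noise via Cholesky factorization of its
    (Toeplitz) autocovariance; exact but O(n^3), fine for small n."""
    rng = np.random.default_rng(seed)
    k = np.arange(n)
    acov = 0.5 * ((k + 1.0)**(2*H) + np.abs(k - 1.0)**(2*H) - 2.0 * k**(2*H))
    C = acov[np.abs(k[:, None] - k[None, :])]    # Toeplitz covariance matrix
    L = np.linalg.cholesky(C + 1e-10 * np.eye(n))
    return L @ rng.standard_normal(n)

def hurst_qv(x):
    """Two-scale quadratic-variation estimate of H: for the integrated path,
    mean squared increments at lag 2 exceed those at lag 1 by 2**(2H)."""
    b = np.concatenate(([0.0], np.cumsum(x)))    # fBm-like path
    v1 = np.mean(np.diff(b)**2)                  # lag-1 quadratic variation
    v2 = np.mean(np.diff(b[::2])**2)             # lag-2 quadratic variation
    return 0.5 * np.log2(v2 / v1)

h = hurst_qv(fgn_cholesky(1024, H=0.75))
```

For long memory (H > 1/2) the estimate lands above one half; short memory gives values near or below it.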
Level crossings and excess times due to a superposition of uncorrelated exponential pulses
Theodorsen, A.; Garcia, O. E.
2018-01-01
A well-known stochastic model for intermittent fluctuations in physical systems is investigated. The model is given by a superposition of uncorrelated exponential pulses, and the degree of pulse overlap is interpreted as an intermittency parameter. Expressions for excess time statistics, that is, the rate of level crossings above a given threshold and the average time spent above the threshold, are derived from the joint distribution of the process and its derivative. Limits of both high and low intermittency are investigated and compared to previously known results. In the case of a strongly intermittent process, the distribution of times spent above threshold is obtained analytically. This expression is verified numerically, and the distribution of times above threshold is explored for other intermittency regimes. The numerical simulations compare favorably to known results for the distribution of times above the mean threshold for an Ornstein-Uhlenbeck process. This contribution generalizes the excess time statistics for the stochastic model, which find applications in a wide diversity of natural and technological systems.
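The stochastic model in question, a superposition of uncorrelated exponential pulses (a filtered Poisson process), is straightforward to simulate, and excess-time statistics can be read off directly from a sample path. A minimal sketch under illustrative parameter choices (unit pulse duration, unit-mean exponential amplitudes; names and values are ours, not the paper's):

```python
import numpy as np

def shot_noise(gamma, T=2000.0, dt=0.01, seed=0):
    """Superposition of uncorrelated one-sided exponential pulses:
    Poisson arrivals with rate `gamma`, unit pulse duration, unit-mean
    exponential amplitudes.  `gamma` is the pulse-overlap (intermittency)
    parameter and also the stationary mean of the process."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    n_pulses = rng.poisson(gamma * T)
    bins = rng.integers(0, n, n_pulses)          # uniform arrival times
    amps = rng.exponential(1.0, n_pulses)
    inj = np.bincount(bins, weights=amps, minlength=n)
    decay = np.exp(-dt)                          # per-step exponential decay
    phi = np.empty(n)
    phi[0] = gamma                               # start near the mean
    for i in range(1, n):
        phi[i] = phi[i - 1] * decay + inj[i]
    return phi[int(20 / dt):]                    # drop the initial transient

phi = shot_noise(gamma=5.0)
frac_above = (phi > 5.0).mean()   # fraction of time spent above the mean level
```

Counting threshold upcrossings and the durations of excursions in the same sample path gives the empirical excess-time statistics that the analytical expressions describe.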
Coupling regularizes individual units in noisy populations
Ly, Cheng; Ermentrout, G. Bard
2010-01-01
The regularity of a noisy system can be modulated in various ways. It is well known that coupling in a population can lower the variability of the entire network; the collective activity is more regular. Here, we show that diffusive (reciprocal) coupling of two simple Ornstein-Uhlenbeck (O-U) processes can regularize the individual, even when it is coupled to a noisier process. In cellular networks, the regularity of individual cells is important when a select few play a significant role. The regularizing effect of coupling surprisingly applies also to general nonlinear noisy oscillators. However, unlike with the O-U process, coupling-induced regularity is robust to different kinds of coupling. With two coupled noisy oscillators, we derive an asymptotic formula, assuming weak noise and coupling, for the variance of the period (i.e., spike times) that accurately captures this effect. Moreover, we find that reciprocal coupling can regularize the individual period of higher dimensional oscillators such as the Morris-Lecar and Brusselator models, even when coupled to noisier oscillators. Coupling can have a counterintuitive and beneficial effect on noisy systems. These results have implications for the role of connectivity with noisy oscillators and the modulation of variability of individual oscillators.
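The regularizing effect described above can be checked numerically for two diffusively coupled O-U processes: with suitable coupling, the stationary variance of the individual process drops below its uncoupled value even when the partner carries stronger noise. A sketch using plain Euler-Maruyama (parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def coupled_ou(kappa, sx, sy, T=1000.0, dt=0.01, seed=0):
    """Euler-Maruyama for two diffusively coupled OU processes:
    dX = [-X + kappa*(Y - X)] dt + sx dWx,
    dY = [-Y + kappa*(X - Y)] dt + sy dWy."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.zeros(n)
    y = np.zeros(n)
    sq = np.sqrt(dt)
    for i in range(1, n):
        x[i] = x[i-1] + (-x[i-1] + kappa * (y[i-1] - x[i-1])) * dt \
               + sx * sq * rng.standard_normal()
        y[i] = y[i-1] + (-y[i-1] + kappa * (x[i-1] - y[i-1])) * dt \
               + sy * sq * rng.standard_normal()
    return x[n // 10:], y[n // 10:]              # discard the transient

x_c, _ = coupled_ou(kappa=2.0, sx=1.0, sy=1.2)   # coupled to a noisier partner
x_u, _ = coupled_ou(kappa=0.0, sx=1.0, sy=1.2)   # uncoupled reference
```

In the sum/difference coordinates, the difference mode relaxes at the faster rate 1 + 2*kappa, which is what pulls the individual variance below its uncoupled level for sufficiently strong coupling.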
Wu, Wei; Wang, Jin
2014-01-01
We have established a general non-equilibrium thermodynamic formalism consistently applicable to both spatially homogeneous and, more importantly, spatially inhomogeneous systems, governed by the Langevin and Fokker-Planck stochastic dynamics with multiple state transition mechanisms, using the potential-flux landscape framework as a bridge connecting stochastic dynamics with non-equilibrium thermodynamics. A set of non-equilibrium thermodynamic equations, quantifying the relations of the non-equilibrium entropy, entropy flow, entropy production, and other thermodynamic quantities, together with their specific expressions, is constructed from a set of dynamical decomposition equations associated with the potential-flux landscape framework. The flux velocity plays a pivotal role on both the dynamic and thermodynamic levels. On the dynamic level, it represents a dynamic force breaking detailed balance, entailing the dynamical decomposition equations. On the thermodynamic level, it represents a thermodynamic force generating entropy production, manifested in the non-equilibrium thermodynamic equations. The Ornstein-Uhlenbeck process and more specific examples, the spatial stochastic neuronal model, in particular, are studied to test and illustrate the general theory. This theoretical framework is particularly suitable to study the non-equilibrium (thermo)dynamics of spatially inhomogeneous systems abundant in nature. This paper is the second of a series
Renewal processes based on generalized Mittag-Leffler waiting times
Cahoy, Dexter O.; Polito, Federico
2013-03-01
The fractional Poisson process has recently attracted experts from several fields of study. Its natural generalization of the ordinary Poisson process made the model more appealing for real-world applications. In this paper, we generalized the standard and fractional Poisson processes through the waiting time distribution, and showed their relations to an integral operator with a generalized Mittag-Leffler function in the kernel. The waiting times of the proposed renewal processes have the generalized Mittag-Leffler and stretched-squashed Mittag-Leffler distributions. Note that the generalizations naturally provide greater flexibility in modeling real-life renewal processes. Algorithms to simulate sample paths and to estimate the model parameters are derived. Note also that these procedures are necessary to make these models more usable in practice. State probabilities and other qualitative or quantitative features of the models are also discussed.
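A commonly cited way to simulate Mittag-Leffler waiting times, and hence sample paths of the fractional Poisson renewal process, is the Kozubowski-Rachev-type inversion formula. The sketch below follows that formula under one common placement of the scale parameter; it is an illustrative assumption, not the algorithm of the paper, and the convention should be checked against the source before use.

```python
import numpy as np

def ml_waiting_times(beta, gamma, size, seed=0):
    """Mittag-Leffler(beta) waiting times via an inversion formula of
    Kozubowski-Rachev type; beta = 1 recovers the exponential waiting
    times of the ordinary Poisson process.  The placement of the scale
    `gamma` is one common convention (an assumption here)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=size)
    v = rng.uniform(size=size)
    if beta == 1.0:
        return -gamma * np.log(u)
    bracket = np.sin(beta*np.pi) / np.tan(beta*np.pi*v) - np.cos(beta*np.pi)
    # `bracket` equals sin(beta*pi*(1-v)) / sin(beta*pi*v), hence positive
    return -gamma * np.log(u) * bracket ** (1.0 / beta)

w = ml_waiting_times(beta=0.6, gamma=1.0, size=10_000)
events = np.cumsum(w)       # renewal epochs of a fractional Poisson sample path
```

For beta < 1 the waiting times are heavy tailed with exponent beta, so the mean is infinite; sample statistics should be interpreted accordingly.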
A general model for membrane-based separation processes
Soni, Vipasha; Abildskov, Jens; Jonsson, Gunnar Eigil
2009-01-01
behaviour will play an important role. In this paper, modelling of membrane-based processes for the separation of gas and liquid mixtures is considered. Two general models, one for membrane-based liquid separation processes (with phase change) and another for membrane-based gas separation, are presented. The separation processes covered are: membrane-based gas separation processes, pervaporation and various types of membrane distillation processes. The specific model for each type of membrane-based process is generated from the two general models by applying the specific system descriptions and the corresponding...
Wittmann, René; Maggi, C.; Sharma, A.; Scacchi, A.; Brader, J. M.; Marini Bettolo Marconi, U.
2017-11-01
The equations of motion of active systems can be modeled in terms of Ornstein-Uhlenbeck processes (OUPs) with appropriate correlators. For further theoretical studies, these should be approximated to yield a Markovian picture for the dynamics and a simplified steady-state condition. We perform a comparative study of the unified colored noise approximation (UCNA) and the approximation scheme by Fox recently employed within this context. We review the approximations necessary to define effective interaction potentials in the low-density limit and study the conditions for which these represent the behavior observed in two-body simulations for the OUPs model and active Brownian particles. The demonstrated limitations of the theory for potentials with a negative slope or curvature can be qualitatively corrected by a new empirical modification. In general, we find that in the presence of translational white noise the Fox approach is more accurate. Finally, we examine an alternative way to define a force-balance condition in the limit of small activity.
Scales, Jeffrey A; Butler, Marguerite A
2016-01-01
Despite the complexity of nature, most comparative studies of phenotypic evolution consider selective pressures in isolation. When competing pressures operate on the same system, it is commonly expected that trade-offs will occur that limit the evolution of phenotypic diversity; however, it is possible that interactions among selective pressures may promote diversity instead. We explored the evolution of locomotor performance in lizards in relation to possible selective pressures using the Ornstein-Uhlenbeck process. Here, we show that a combination of selection based on foraging mode and predator escape is required to explain variation in performance phenotypes. Surprisingly, habitat use contributed little explanatory power. We find that it is possible to evolve very different abilities in performance which were previously thought to be tightly correlated, supporting a growing literature that explores the many-to-one mapping of morphological design. Although we generally find the expected trade-off between maximal exertion and speed, this relationship surprisingly disappears when species experience selection for both performance types. We conclude that functional integration need not limit adaptive potential, and that an integrative approach considering multiple major influences on a phenotype allows a more complete understanding of adaptation and the evolution of diversity. © 2015 The Author(s). Evolution © 2015 The Society for the Study of Evolution.
The best of both worlds: Phylogenetic eigenvector regression and mapping
José Alexandre Felizola Diniz Filho
2015-09-01
Eigenfunction analyses have been widely used to model patterns of autocorrelation in time, space and phylogeny. In a phylogenetic context, Diniz-Filho et al. (1998) proposed what they called Phylogenetic Eigenvector Regression (PVR), in which pairwise phylogenetic distances among species are submitted to a Principal Coordinate Analysis, and eigenvectors are then used as explanatory variables in regression, correlation or ANOVAs. More recently, a new approach called Phylogenetic Eigenvector Mapping (PEM) was proposed, with the main advantage of explicitly incorporating a model-based warping of phylogenetic distance, in which an Ornstein-Uhlenbeck (O-U) process is fitted to the data before eigenvector extraction. Here we compared PVR and PEM with respect to estimated phylogenetic signal, correlated evolution under alternative evolutionary models, and phylogenetic imputation, using simulated data. Despite the similarity between the two approaches, PEM has a slightly higher prediction ability and is more general than the original PVR. Even so, in a conceptual sense, PEM may provide a technique in the best of both worlds, combining the flexibility of data-driven, empirical eigenfunction analyses and the sound insights provided by evolutionary models well known in comparative analyses.
Dynamic Looping of a Free-Draining Polymer
Ye, Felix X. -F.; Stinis, Panos; Qian, Hong
2018-01-11
Here, we revisit the celebrated Wilemski-Fixman (WF) treatment for the looping time of a free-draining polymer. The WF theory introduces a sink term into the Fokker-Planck equation for the $3(N+1)$-dimensional Ornstein-Uhlenbeck process of polymer dynamics, which accounts for the appropriate boundary condition due to the formation of a loop. The assumption of WF theory is considerably relaxed. A perturbation method approach is developed that justifies and generalizes the previous results using either a delta sink or a Heaviside sink. For both types of sinks, we show that under the condition of a small dimensionless $\epsilon$, the ratio of capture radius to the Kuhn length, we are able to systematically produce all known analytical and asymptotic results obtained by other methods. This includes most notably the transition regime between the $N^2$ scaling of Doi, and the $N\sqrt{N}/\epsilon$ scaling of Szabo, Schulten, and Schulten. The mathematical issue at play is the nonuniform convergence of $\epsilon \to 0$ and $N \to \infty$, the latter being an inherent part of the theory of a Gaussian polymer. Our analysis yields a novel term in the analytical expression for the looping time with small $\epsilon$, which was previously unknown. Monte Carlo numerical simulations corroborate the analytical findings. The systematic method developed here can be applied to other systems modeled by multidimensional Smoluchowski equations.
Seyrich, Maximilian; Sornette, Didier
2016-04-01
We present a plausible micro-founded model for the previously postulated power law finite time singular form of the crash hazard rate in the Johansen-Ledoit-Sornette (JLS) model of rational expectation bubbles. The model is based on a percolation picture of the network of traders and the concept that clusters of connected traders share the same opinion. The key ingredient is the notion that a shift of position from buyer to seller of a sufficiently large group of traders can trigger a crash. This provides a formula to estimate the crash hazard rate by summation over percolation clusters above a minimum size of a power s^a (with a > 1) of the cluster sizes s, similarly to a generalized percolation susceptibility. The power s^a of the cluster sizes emerges from the super-linear dependence of group activity as a function of group size, previously documented in the literature. The crash hazard rate exhibits explosive finite time singular behavior when the control parameter (fraction of occupied sites, or density of traders in the network) approaches the percolation threshold p_c. Realistic dynamics are generated by modeling the density of traders on the percolation network by an Ornstein-Uhlenbeck process, whose memory controls the spontaneous excursion of the control parameter close to the critical region of bubble formation. Our numerical simulations recover the main stylized properties of the JLS model, with intermittent explosive super-exponential bubbles interrupted by crashes.
20 CFR 405.701 - Expedited appeals process-general.
2010-04-01
20 CFR 405.701, Employees' Benefits, Social Security Administration, Administrative Review Process for Adjudicating Initial Disability Claims, Expedited Appeals Process for Constitutional Issues: Expedited appeals process-general.
Learning Theory Estimates with Observations from General Stationary Stochastic Processes.
Hang, Hanyuan; Feng, Yunlong; Steinwart, Ingo; Suykens, Johan A K
2016-12-01
This letter investigates the supervised learning problem with observations drawn from certain general stationary stochastic processes. Here by general, we mean that many stationary stochastic processes can be included. We show that when the stochastic processes satisfy a generalized Bernstein-type inequality, a unified treatment on analyzing the learning schemes with various mixing processes can be conducted and a sharp oracle inequality for generic regularized empirical risk minimization schemes can be established. The obtained oracle inequality is then applied to derive convergence rates for several learning schemes such as empirical risk minimization (ERM), least squares support vector machines (LS-SVMs) using given generic kernels, and SVMs using gaussian kernels for both least squares and quantile regression. It turns out that for independent and identically distributed (i.i.d.) processes, our learning rates for ERM recover the optimal rates. For non-i.i.d. processes, including geometrically [Formula: see text]-mixing Markov processes, geometrically [Formula: see text]-mixing processes with restricted decay, [Formula: see text]-mixing processes, and (time-reversed) geometrically [Formula: see text]-mixing processes, our learning rates for SVMs with gaussian kernels match, up to some arbitrarily small extra term in the exponent, the optimal rates. For the remaining cases, our rates are at least close to the optimal rates. As a by-product, the assumed generalized Bernstein-type inequality also provides an interpretation of the so-called effective number of observations for various mixing processes.
Visual Processing in Generally Gifted and Mathematically Excelling Adolescents
Paz-Baruch, Nurit; Leikin, Roza; Leikin, Mark
2016-01-01
Little empirical data are available concerning the cognitive abilities of gifted individuals in general and especially those who excel in mathematics. We examined visual processing abilities distinguishing between general giftedness (G) and excellence in mathematics (EM). The research population consisted of 190 students from four groups of 10th-…
A general conservative extension theorem in process algebras with inequalities
d' Argenio, P.R.; Verhoef, Chris
1997-01-01
We prove a general conservative extension theorem for transition system based process theories with easy-to-check and reasonable conditions. The core of this result is another general theorem which gives sufficient conditions for a system of operational rules and an extension of it in order to
Amplitudes of solar p modes: Modelling of the eddy time-correlation function
Belkacem, K [Institut d' Astrophysique et de Geophysique, Universite de Liege, Allee du 6 Aout 17-B 4000 Liege (Belgium); Samadi, R; Goupil, M J, E-mail: Kevin.Belkacem@ulg.ac.BE [LESIA, UMR8109, Universite Pierre et Marie Curie, Universite Denis Diderot, Obs. de Paris, 92195 Meudon Cedex (France)
2011-01-01
Modelling the amplitudes of stochastically excited oscillations in stars is a powerful tool for understanding the properties of convective zones. For instance, it gives us information on the way turbulent eddies are temporally correlated in a very-large-Reynolds-number regime. We discuss the way the time correlation between eddies is modelled and we present recent theoretical developments as well as observational results. Finally, we discuss the underlying physical meaning of the results by introducing the Ornstein-Uhlenbeck process, a sub-class of Gaussian Markov processes.
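The Ornstein-Uhlenbeck process that recurs throughout these records has an exact Gaussian transition density, so sample paths can be drawn without discretization error. A minimal sketch (function and parameter names are illustrative, not taken from any of the papers above):

```python
import numpy as np

def simulate_ou(theta, mu, sigma, x0, dt, n, rng=None):
    """Simulate an Ornstein-Uhlenbeck path via its exact Gaussian transition.

    dX_t = theta*(mu - X_t) dt + sigma dW_t
    X_{t+dt} | X_t is Gaussian with mean mu + (X_t - mu)*exp(-theta*dt)
    and variance sigma^2/(2*theta) * (1 - exp(-2*theta*dt)).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.empty(n + 1)
    x[0] = x0
    a = np.exp(-theta * dt)
    sd = sigma * np.sqrt((1.0 - a * a) / (2.0 * theta))
    for i in range(n):
        x[i + 1] = mu + (x[i] - mu) * a + sd * rng.standard_normal()
    return x
```

Because the transition is sampled exactly, the step size `dt` affects only the temporal resolution of the output, not its distributional accuracy.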
A feedback control strategy for the airfoil system under non-Gaussian colored noise excitation
Huang, Yong; Tao, Gang [School of Energy and Power Engineering, Nanjing University of Science and Technology, 200 XiaoLingwei Street, Nanjing 210094 (China)]
2014-09-01
The stability of a binary airfoil with feedback control under stochastic disturbances, a non-Gaussian colored noise, is studied in this paper. First, based on approximation theories and methods, the non-Gaussian colored noise is approximated by an Ornstein-Uhlenbeck process. Then, via the stochastic averaging method and the logarithmic polar transformation, a one-dimensional diffusion process is obtained. Finally, by applying the boundary conditions, the largest Lyapunov exponent, which determines the almost-sure stability of the system, and the effective region of the control parameters are calculated.
Entropy Production and Fluctuation Theorems for Active Matter
Mandal, Dibyendu; Klymko, Katherine; DeWeese, Michael R.
2017-12-01
Active biological systems reside far from equilibrium, dissipating heat even in their steady state, thus requiring an extension of conventional equilibrium thermodynamics and statistical mechanics. In this Letter, we have extended the emerging framework of stochastic thermodynamics to active matter. In particular, for the active Ornstein-Uhlenbeck model, we have provided consistent definitions of thermodynamic quantities such as work, energy, heat, entropy, and entropy production at the level of single, stochastic trajectories and derived related fluctuation relations. We have developed a generalization of the Clausius inequality, which is valid even in the presence of the non-Hamiltonian dynamics underlying active matter systems. We have illustrated our results with explicit numerical studies.
THE VOLATILITY OF TEMPERATURE AND PRICING OF WEATHER DERIVATIVES
Benth, Fred Espen; Saltyte-Benth, Jurate
2005-01-01
We propose an Ornstein-Uhlenbeck process with seasonal volatility to model the time dynamics of daily average temperatures. The model is fitted to almost 43 years of daily observations recorded in Stockholm, one of the European cities for which there is a trade in weather futures and options on the Chicago Mercantile Exchange (CME). Explicit pricing dynamics for futures contracts written on the number of heating/cooling degree-days (so-called HDD/CDD-futures) and the cumulative average daily ...
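The HDD/CDD payoffs mentioned above are simple functionals of a simulated temperature path. A hedged sketch of both pieces, using a generic Euler-Maruyama step for an OU model with seasonal mean and volatility (the function names, the base temperature of 18 degrees, and the parameterization are illustrative assumptions, not the calibrated model of the paper):

```python
import numpy as np

def simulate_temperature(kappa, season_mean, season_vol, t0_temp, dt, n, rng=None):
    """Euler-Maruyama sketch of dT = kappa*(m(t) - T) dt + s(t) dW,
    where m(t) and s(t) are user-supplied seasonal mean and volatility."""
    rng = np.random.default_rng() if rng is None else rng
    temps = np.empty(n + 1)
    temps[0] = t0_temp
    for i in range(n):
        t = i * dt
        temps[i + 1] = (temps[i]
                        + kappa * (season_mean(t) - temps[i]) * dt
                        + season_vol(t) * np.sqrt(dt) * rng.standard_normal())
    return temps

def degree_days(daily_temps, base=18.0):
    """Heating and cooling degree-days over a period, relative to a base temperature."""
    temps = np.asarray(daily_temps, dtype=float)
    hdd = np.maximum(base - temps, 0.0).sum()
    cdd = np.maximum(temps - base, 0.0).sum()
    return hdd, cdd
```

Monte Carlo pricing of an HDD futures contract would then average `degree_days` over many simulated paths under an appropriate pricing measure.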
Only through perturbation can relaxation times be estimated
Ditlevsen, Susanne; Lansky, Petr
2012-01-01
Estimation of model parameters is as important as model building, but is often neglected in model studies. Here we show that despite the existence of well known results on parameter estimation in a simple homogenous Ornstein-Uhlenbeck process, in most practical situations the methods suffer greatly...... on computer experiments based on applications in neuroscience and pharmacokinetics, which show a striking improvement of the quality of estimation. The results are important for judicious designs of experiments to obtain maximal information from each data point, especially when samples are expensive...
Fractional Number Operator and Associated Fractional Diffusion Equations
Rguigui, Hafedh
2018-03-01
In this paper, we study the fractional number operator as an analog of the finite-dimensional fractional Laplacian. An important relation with the Ornstein-Uhlenbeck process is given. Using a semigroup approach, the solution of the Cauchy problem associated to the fractional number operator is presented. By means of the Mittag-Leffler function and the Laplace transform, we give the solution of the Caputo time fractional diffusion equation and Riemann-Liouville time fractional diffusion equation in infinite dimensions associated to the fractional number operator.
A General Accelerated Degradation Model Based on the Wiener Process.
Liu, Le; Li, Xiaoyang; Sun, Fuqiang; Wang, Ning
2016-12-06
Accelerated degradation testing (ADT) is an efficient tool to conduct material service reliability and safety evaluations by analyzing performance degradation data. Traditional stochastic process models are mainly for linear or linearization degradation paths. However, those methods are not applicable for the situations where the degradation processes cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed to solve the problem for accelerated degradation data analysis. The general model can consider the unit-to-unit variation and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. The statistical inference is given to estimate the unknown parameters in both constant stress and step stress ADT. The simulation example and two real applications demonstrate that the proposed method can yield reliable lifetime evaluation results compared with the existing linear and time-scale transformation Wiener processes in both linear and nonlinear ADT analyses.
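A nonlinear Wiener degradation path of the kind described above is often written as X(t) = drift * L(t) + sigma * B(L(t)) for a monotone time-scale transformation L. A minimal simulation sketch under that assumption (the function name and the power-law choice of L are illustrative, not the paper's fitted model):

```python
import numpy as np

def simulate_wiener_degradation(drift, sigma, times, time_scale=lambda t: t, rng=None):
    """Sample one degradation path X(t) = drift*L(t) + sigma*B(L(t)) at the given
    observation times. L must be monotone increasing; the identity recovers the
    classical linear Wiener model, while e.g. L(t) = t**c gives a nonlinear path."""
    rng = np.random.default_rng() if rng is None else rng
    lam = np.array([time_scale(t) for t in times], dtype=float)
    dlam = np.diff(np.concatenate(([0.0], lam)))  # nonnegative for monotone L
    increments = drift * dlam + sigma * np.sqrt(dlam) * rng.standard_normal(len(dlam))
    return np.cumsum(increments)
```

With `sigma = 0` the path collapses to the deterministic trend `drift * L(t)`, which is a convenient sanity check on the time-scale handling.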
Generalized Poisson processes in quantum mechanics and field theory
Combe, P.; Rodriguez, R.; Hoegh-Krohn, R.; Sirugue, M.; Sirugue-Collin, M. [Centre National de la Recherche Scientifique, 13 - Marseille]
1981-01-01
In section 2 we describe more carefully the generalized Poisson processes, giving a realization of the underlying probability space, and we characterize these processes by their characteristic functionals. Section 3 is devoted to the proof of the previous formula for quantum mechanical systems, with possibly velocity dependent potentials and in section 4 we give an application of the previous theory to some relativistic Bose field models. (orig.)
General Template for the FMEA Applications in Primary Food Processing.
Özilgen, Sibel; Özilgen, Mustafa
Data on the hazards involved in the primary steps of processing cereals, fruit and vegetables, milk and milk products, meat and meat products, and fats and oils are compiled with a wide-ranging literature survey. After determining the common factors from these data, a general FMEA template is offered, and its use is explained with a case study on pasteurized milk production.
Process error rates in general research applications to the Human ...
Objective. To examine process error rates in applications for ethics clearance of health research. Methods. Minutes of 586 general research applications made to a human health research ethics committee (HREC) from April 2008 to March 2009 were examined. Rates of approval were calculated and reasons for requiring ...
Weldability of general purpose heat source new-process iridium
Kanne, W.R.
1987-01-01
Weldability tests on General Purpose Heat Source (GPHS) iridium capsules showed that a new iridium fabrication process reduced susceptibility to underbead cracking. Seventeen capsules were welded (a total of 255 welds) in four categories and the number of cracks in each weld was measured
Domain-General Factors Influencing Numerical and Arithmetic Processing
André Knops
2017-12-01
This special issue contains 18 articles that address the question of how numerical processes interact with domain-general factors. We start the editorial with a discussion of how to define domain-general versus domain-specific factors and then discuss the contributions to this special issue, grouped into two core numerical domains that are subject to domain-general influences (see Figure 1). The first group of contributions addresses the question of how numbers interact with spatial factors. The second group of contributions is concerned with factors that determine and predict arithmetic understanding, performance and development. This special issue shows that domain-general (Table 1a) as well as domain-specific (Table 1b) abilities influence numerical and arithmetic performance at virtually all levels, and makes it clear that for the field of numerical cognition a sole focus on one or several domain-specific factors, like the approximate number system or spatial-numerical associations, is not sufficient. Conversely, in most studies that included domain-general and domain-specific variables, domain-specific numerical variables predicted arithmetic performance above and beyond domain-general variables. Therefore, a sole focus on domain-general aspects such as, for example, working memory, to explain, predict and foster arithmetic learning is also not sufficient. Based on the articles in this special issue we conclude that both domain-general and domain-specific factors contribute to numerical cognition. But the how, why and when of their contribution still needs to be better understood. We hope that this special issue may be helpful to readers in constraining future theory and model building about the interplay of domain-specific and domain-general factors.
Integrable Floquet dynamics, generalized exclusion processes and "fused" matrix ansatz
Vanicat, Matthieu
2018-04-01
We present a general method for constructing integrable stochastic processes, with two-step discrete-time Floquet dynamics, from the transfer matrix formalism. The models can be interpreted as a discrete-time parallel update. The method can be applied for both periodic and open boundary conditions. We also show how the stationary distribution can be built as a matrix product state. As an illustration we construct parallel discrete-time dynamics associated with the R-matrix of the SSEP and of the ASEP, and provide the associated stationary distributions in matrix product form. We use this general framework to introduce new integrable generalized exclusion processes, where a fixed number of particles is allowed on each lattice site, in contrast to the (single-particle) exclusion process models. They are constructed using the fusion procedure of R-matrices (and K-matrices for open boundary conditions) for the SSEP and ASEP. We develop a new method, which we call the "fused" matrix ansatz, to build the stationary distribution explicitly in matrix product form. We use this algebraic structure to compute physical observables such as correlation functions and the mean particle current.
MOTRIMS as a generalized probe of AMO processes
Bredy, R.; Nguyen, H.; Camp, H.; Flechard, X.; De Paola, B.D.
2003-01-01
Magneto-optical trap recoil ion momentum spectroscopy (MOTRIMS) is one of the newest offshoots of the generalized TRIMS approach to ion-atom collisions. By using lasers instead of the more usual supersonic expansion to cool the target, MOTRIMS has demonstrated two distinct advantages over conventional TRIMS. The first is better resolution, now limited by the detectors instead of the target temperature. The second is its suitability for studies of laser-excited targets. Here we present a third advantage: the use of MOTRIMS as a general-purpose probe of AMO processes in cold clouds of atoms and molecules. Specifically, the projectile ion beam can be used to probe processes as diverse as target dressing by femtosecond optical pulses, photo-association (laser-assisted cold collisions), photo-ionization, and electromagnetically induced transparency. We present data for the processes we have investigated, and speculations on what we expect to see for the processes we plan to investigate in the future
Markov Jump Processes Approximating a Non-Symmetric Generalized Diffusion
Limić, Nedžad
2011-01-01
Consider a non-symmetric generalized diffusion X(·) in ℝ^d determined by the differential operator A(x) = -∑_{ij} ∂_i a_{ij}(x) ∂_j + ∑_i b_i(x) ∂_i. In this paper the diffusion process is approximated by Markov jump processes X_n(·), on homogeneous and isotropic grids G_n ⊂ ℝ^d, which converge in distribution in the Skorokhod space D([0,∞), ℝ^d) to the diffusion X(·). The generators of X_n(·) are constructed explicitly. Due to the homogeneity and isotropy of the grids, the proposed method for d ≥ 3 can be applied to processes for which the diffusion tensor {a_{ij}(x)}_{i,j=1}^{d} fulfills an additional condition. The proposed construction offers a simple method for simulation of sample paths of a non-symmetric generalized diffusion. Simulations are carried out in terms of the jump processes X_n(·). For piecewise constant functions a_{ij} on ℝ^d and piecewise continuous functions a_{ij} on ℝ^2 the construction and principal algorithm are described, enabling an easy implementation into a computer code.
Real-time radar signal processing using GPGPU (general-purpose graphic processing unit)
Kong, Fanxing; Zhang, Yan Rockee; Cai, Jingxiao; Palmer, Robert D.
2016-05-01
This study introduces a practical approach to developing a real-time signal-processing chain for general phased-array radar on NVIDIA GPUs (Graphics Processing Units) using CUDA (Compute Unified Device Architecture) libraries such as cuBLAS and cuFFT, which are adopted from open source libraries and optimized for NVIDIA GPUs. The processed results are rigorously verified against those from the CPUs. Performance, benchmarked as computation time for various input data cube sizes, is compared across GPUs and CPUs. Through the analysis, it is demonstrated that GPGPU (General-Purpose GPU) real-time processing of the array radar data is possible with relatively low-cost commercial GPUs.
Lévy processes on a generalized fractal comb
Sandev, Trifce; Iomin, Alexander; Méndez, Vicenç
2016-09-01
Comb geometry, constituted of a backbone and fingers, is one of the simplest paradigms of a two-dimensional structure where anomalous diffusion can be realized in the framework of Markov processes. However, the intrinsic properties of the structure can destroy this Markovian transport. These effects can be described by memory and spatial kernels. In particular, the fractal structure of the fingers, which is controlled by the spatial kernel in both the real and the Fourier spaces, leads to Lévy processes (Lévy flights) and superdiffusion. This generalization of fractional diffusion is described by the Riesz space fractional derivative. In the framework of this generalized fractal comb model, Lévy processes are considered, and exact solutions for the probability distribution functions are obtained in terms of the Fox H-function for a variety of memory kernels, and the rate of the superdiffusive spreading is studied by calculating the fractional moments. For a special form of the memory kernels, we also observe a competition between long rests and long jumps. Finally, we consider the fractal structure of the fingers controlled by a Weierstrass function, which leads to a power-law kernel in the Fourier space. This is a special case in which the second moment exists for superdiffusion in this competition between long rests and long jumps.
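The Lévy flights discussed above have heavy-tailed, alpha-stable step distributions. One standard way to sample such steps is the Chambers-Mallows-Stuck transformation; the sketch below is the symmetric (skewness zero) case only and is an illustrative aside, not the comb-model dynamics of the paper:

```python
import numpy as np

def alpha_stable_steps(alpha, n, rng=None):
    """Symmetric alpha-stable increments via the Chambers-Mallows-Stuck method
    (skewness beta = 0). For 0 < alpha < 2 these are heavy-tailed Levy-flight
    steps; alpha = 2 recovers a Gaussian (with variance 2)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(-np.pi / 2, np.pi / 2, n)   # uniform phase
    w = rng.exponential(1.0, n)                 # unit-mean exponential
    return (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
            * (np.cos(u - alpha * u) / w) ** ((1.0 - alpha) / alpha))

def levy_flight(alpha, n, rng=None):
    """Cumulative sum of stable steps: a one-dimensional Levy-flight trajectory."""
    return np.cumsum(alpha_stable_steps(alpha, n, rng))
```

For `alpha < 2` the step variance is infinite, which is exactly the mechanism behind the superdiffusive spreading described in the abstract.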
The role of culture in the general practice consultation process.
Ali, Nasreen; Atkin, Karl; Neal, Richard
2006-11-01
In this paper, we will examine the importance of culture and ethnicity in the general practice consultation process. Good communication is associated with positive health outcomes. We will, by presenting qualitative material from an empirical study, examine the way in which communication within the context of a general practitioner (GP) consultation may be affected by ethnicity and cultural factors. The aim of the study was to provide a detailed understanding of the ways in which white and South Asian patients communicate with white GPs and to explore any similarities and differences in communication. This paper reports on South Asian and white patients' explanations of recent videotaped consultations with their GP. We specifically focus on the ways in which issues of ethnic identity impacted upon the GP consultation process, by exploring how our sample of predominantly white GPs interacted with their South Asian patients and the extent to which the GP listened to the patients' needs, gave patients information, engaged in social conversation and showed friendliness. We then go on to examine patients' suggestions on improvements (if any) to the consultation. We conclude, by showing how a non-essentialist understanding of culture helps to comprehend the consultation process when the patients are from Great Britain's ethnicised communities. Our findings, however, raise generic issues of relevance to all multi-racial and multi-ethnic societies.
GENERAL ISSUES CONSIDERING BRAND EQUITY WITHIN THE NATION BRANDING PROCESS
Denisa, COTÎRLEA
2014-11-01
This paper provides an overview of the intangible values that actively contribute to brand-capital formation within the nation-branding process. The author emphasizes the differences between brand capital and brand equity in the context of nation branding, which has become a widely approached subject in both the national and international literature. The evolution of brand capital and brand equity is also traced, in order to identify and explain their components and their role, highlighting the entire process of their evolution as a sequence of steps. The results of this paper focus on the identification of a structured flowchart through which the process of nation branding, and brand capital itself, can be perceived as holistic, integrative and inter-correlated concepts that are easily understood. The methodology comprises the methods and techniques used for collecting and processing empirical data and information: observing, sorting, correlating, categorizing, comparing and analyzing data, so that the theoretical elements addressed could be grounded. At the center of the qualitative thematic research addressed in this article lie general elements of Romania's image and identity promotion.
A Poisson process approximation for generalized K-5 confidence regions
Arsham, H.; Miller, D. R.
1982-01-01
One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. The band width of such regions becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This approximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault-tolerant systems.
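For contrast with the tail-narrowing regions of the paper, the classical baseline is a one-sided band of uniform width around the empirical CDF. A sketch of that baseline via the one-sided Dvoretzky-Kiefer-Wolfowitz inequality (this is a standard textbook construction, not the paper's generalized region or its Poisson approximation):

```python
import numpy as np

def one_sided_ecdf_band(sample, alpha=0.05):
    """Uniform-width one-sided confidence band from the one-sided
    Dvoretzky-Kiefer-Wolfowitz inequality: with probability >= 1 - alpha,
    F(x) <= F_n(x) + eps for all x, where eps = sqrt(log(1/alpha) / (2n)).
    Returns the sorted sample points and the upper band evaluated there."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    eps = np.sqrt(np.log(1.0 / alpha) / (2.0 * n))
    ecdf = np.arange(1, n + 1) / n
    upper = np.minimum(ecdf + eps, 1.0)
    return x, upper
```

The generalized regions of the paper improve on this by letting the band width shrink in a tail, at the cost of the harder critical-value computations the Poisson approximation is designed to avoid.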
Information in general medical practices: the information processing model.
Crowe, Sarah; Tully, Mary P; Cantrill, Judith A
2010-04-01
The need for effective communication and handling of secondary care information in general practices is paramount. To explore practice processes on receiving secondary care correspondence in a way that integrates the information needs and perceptions of practice staff both clinical and administrative. Qualitative study using semi-structured interviews with a wide range of practice staff (n = 36) in nine practices in the Northwest of England. Analysis was based on the framework approach using N-Vivo software and involved transcription, familiarization, coding, charting, mapping and interpretation. The 'information processing model' was developed to describe the six stages involved in practice processing of secondary care information. These included the amendment or updating of practice records whilst simultaneously or separately actioning secondary care recommendations, using either a 'one-step' or 'two-step' approach, respectively. Many factors were found to influence each stage and impact on the continuum of patient care. The primary purpose of processing secondary care information is to support patient care; this study raises the profile of information flow and usage within practices as an issue requiring further consideration.
Generalized Hofmann quantum process fidelity bounds for quantum filters
Sedlák, Michal; Fiurášek, Jaromír
2016-04-01
We propose and investigate bounds on the quantum process fidelity of quantum filters, i.e., probabilistic quantum operations represented by a single Kraus operator K . These bounds generalize the Hofmann bounds on the quantum process fidelity of unitary operations [H. F. Hofmann, Phys. Rev. Lett. 94, 160504 (2005), 10.1103/PhysRevLett.94.160504] and are based on probing the quantum filter with pure states forming two mutually unbiased bases. Determination of these bounds therefore requires far fewer measurements than full quantum process tomography. We find that it is particularly suitable to construct one of the probe bases from the right eigenstates of K , because in this case the bounds are tight in the sense that if the actual filter coincides with the ideal one, then both the lower and the upper bounds are equal to 1. We theoretically investigate the application of these bounds to a two-qubit optical quantum filter formed by the interference of two photons on a partially polarizing beam splitter. For an experimentally convenient choice of factorized input states and measurements we study the tightness of the bounds. We show that more stringent bounds can be obtained by more sophisticated processing of the data using convex optimization and we compare our methods for different choices of the input probe states.
Use of general purpose graphics processing units with MODFLOW
Hughes, Joseph D.; White, Jeremy T.
2013-01-01
To evaluate the use of general-purpose graphics processing units (GPGPUs) to improve the performance of MODFLOW, an unstructured preconditioned conjugate gradient (UPCG) solver has been developed. The UPCG solver uses a compressed sparse row storage scheme and includes Jacobi, zero fill-in incomplete, and modified-incomplete lower-upper (LU) factorization, and generalized least-squares polynomial preconditioners. The UPCG solver also includes options for sequential and parallel solution on the central processing unit (CPU) using OpenMP. For simulations utilizing the GPGPU, all basic linear algebra operations are performed on the GPGPU; memory copies between the CPU and GPGPU occur prior to the first iteration of the UPCG solver and after satisfying head and flow criteria or exceeding a maximum number of iterations. The efficiency of the UPCG solver for GPGPU and CPU solutions is benchmarked using simulations of a synthetic, heterogeneous unconfined aquifer with tens of thousands to millions of active grid cells. Testing indicates GPGPU speedups on the order of 2 to 8, relative to the standard MODFLOW preconditioned conjugate gradient (PCG) solver, can be achieved when (1) memory copies between the CPU and GPGPU are optimized, (2) the percentage of time performing memory copies between the CPU and GPGPU is small relative to the calculation time, (3) high-performance GPGPU cards are utilized, and (4) CPU-GPGPU combinations are used to execute sequential operations that are difficult to parallelize. Furthermore, UPCG solver testing indicates GPGPU speedups exceed parallel CPU speedups achieved using OpenMP on multicore CPUs for preconditioners that can be easily parallelized.
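The Jacobi option mentioned above is the simplest of the listed preconditioners. A minimal CPU-side NumPy sketch of Jacobi-preconditioned conjugate gradients for a symmetric positive-definite system (dense here for clarity; the actual solver uses compressed sparse row storage and GPU kernels):

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-10, maxiter=500):
    """Conjugate gradients with Jacobi (diagonal) preconditioning for SPD A.
    M^{-1} is simply 1/diag(A), applied elementwise to the residual."""
    x = np.zeros_like(b)
    r = b - A @ x                  # initial residual
    minv = 1.0 / np.diag(A)        # Jacobi preconditioner
    z = minv * r
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)      # step length along search direction
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p  # conjugate search direction update
        rz = rz_new
    return x
```

Jacobi is trivially parallel (an elementwise multiply), which is why diagonal-type preconditioners tend to benefit most from the GPGPU speedups reported in the abstract.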
Kolmogorov's refined similarity hypotheses for turbulence and general stochastic processes
Stolovitzky, G.; Sreenivasan, K.R.
1994-01-01
Kolmogorov's refined similarity hypotheses are shown to hold true for a variety of stochastic processes besides high-Reynolds-number turbulent flows, for which they were originally proposed. In particular, just as hypothesized for turbulence, there exists a variable V whose probability density function attains a universal form. Analytical expressions for the probability density function of V are obtained for Brownian motion as well as for the general case of fractional Brownian motion, the latter under some mild assumptions justified a posteriori. The properties of V for the case of antipersistent fractional Brownian motion with the Hurst exponent of 1/3 are similar in many details to those of high-Reynolds-number turbulence in atmospheric boundary layers a few meters above the ground. The one conspicuous difference between turbulence and the antipersistent fractional Brownian motion is that the latter does not possess the required skewness. Broad implications of these results are discussed
Cortical processes of speech illusions in the general population.
Schepers, E; Bodar, L; van Os, J; Lousberg, R
2016-10-18
There is evidence that experimentally elicited auditory illusions in the general population index risk for psychotic symptoms. As little is known about the underlying cortical mechanisms of auditory illusions, an experiment was conducted to analyze the processing of auditory illusions in a general population sample. In a follow-up design with two measurement moments (baseline and 6 months), participants (n = 83) underwent the White Noise task under simultaneous recording with a 14-lead EEG. An auditory illusion was defined as hearing any speech in a sound fragment containing white noise. A total of 256 speech illusions (SIs) were observed over the two measurements, with a high degree of stability of SIs over time. There were 7 main effects of speech illusion on the EEG alpha band; the most significant indicated a decrease in activity at T3 (t = -4.05). The other EEG frequency bands (slow beta, fast beta, gamma, delta, theta) showed no significant associations with SIs. SIs are characterized by reduced alpha activity in non-clinical populations. Given the association of SIs with psychosis, follow-up research is required to examine the possibility of reduced alpha activity mediating SIs in high-risk and symptomatic populations.
A general theory for radiative processes in rare earth compounds
Acevedo, R.; Meruane, T.
1998-01-01
The formal theory of radiative processes in centrosymmetric coordination compounds of trivalent lanthanide ions (Ln³⁺, with ligands X⁻ = Cl⁻, Br⁻) is put forward, based on a symmetry-vibronic crystal field-ligand polarisation model. This research considers a truncated basis set for the intermediate states of the central metal ion and derives general master equations to account for both the overall observed spectral intensities and the measured relative vibronic intensity distributions for parity-forbidden but vibronically allowed electronic transitions. In addition, a procedure which includes the closure approximation over the intermediate electronic states is included in order to estimate the quantitative crystal-field contribution to the total transition dipole moments of various selected electronic transitions. This formalism is both general and flexible, and it may be employed for any electronic excitations involving f^N-type configurations of the rare earths in centrosymmetric coordination compounds in cubic environments, and also in doped host crystals belonging to the space group Fm3m. (author)
Sobre uma nova teoria de precificação de opções e outros derivativos [On a new theory for pricing options and other derivatives]
Ailton Cassettari
2001-09-01
This paper develops a new theory of derivative-securities pricing and implements it for the specific case of European call options on a hypothetical non-dividend-paying stock. The basic premise is that the drift of the underlying asset plays a very important role in the pricing process, in the context of transport phenomena. A systematic comparison with the well-known Black-Scholes and bivariate trending Ornstein-Uhlenbeck models is also carried out, supporting the plausibility and effectiveness of this approach.
A unified view on weakly correlated recurrent networks
Dmytro eGrytskyy
2013-10-01
The diversity of neuron models used in contemporary theoretical neuroscience to investigate specific properties of covariances in spiking activity raises the question of how these models relate to each other. In particular, it is hard to distinguish between generic properties of covariances and peculiarities due to the abstracted model. Here we present a unified view on pairwise covariances in recurrent networks in the irregular regime. We consider the binary neuron model, the leaky integrate-and-fire model, and the Hawkes process. We show that linear approximation maps each of these models to one of two classes of linear rate models, including the Ornstein-Uhlenbeck process as a special case. The distinction between the two classes is the location of additive noise in the rate dynamics, which is on the output side for spiking models and on the input side for the binary model. Both classes allow closed-form solutions for the covariance. For output noise it separates into an echo term and a term due to correlated input. The unified framework enables us to transfer results between models. For example, we generalize the binary model and the Hawkes process to the situation with synaptic conduction delays and simplify derivations of established results. Our approach is applicable to general network structures and suitable for the calculation of population averages. The derived averages are exact for fixed out-degree network architectures and approximate for fixed in-degree. We demonstrate how taking fluctuations into account in the linearization procedure increases the accuracy of the effective theory, and we explain the class-dependent differences between covariances in the time and the frequency domain. Finally we show that the oscillatory instability emerging in networks of integrate-and-fire models with delayed inhibitory feedback is a model-invariant feature: the same structure of poles in the complex frequency plane determines the
Information processing during general anesthesia: Evidence for unconscious memory
A.E. Bonebakker (Annette); B. Bonke (Benno); J. Klein (Jan); G. Wolters (G.); Th. Stijnen (Theo); J. Passchier (Jan); P.M. Merikle (P.)
1996-01-01
Memory for words presented during general anesthesia was studied in two experiments. In Experiment 1, surgical patients (n=80) undergoing elective procedures under general anesthesia were presented shortly before and during surgery with words via headphones. At the earliest convenient
Noise in strong laser-atom interactions: Phase telegraph noise
Eberly, J.H.; Wodkiewicz, K.; Shore, B.W.
1984-01-01
We discuss strong laser-atom interactions that are subjected to jump-type (random telegraph) random-phase noise. Physically, the jumps may arise from laser fluctuations, from collisions of various kinds, or from other external forces. Our discussion is carried out in two stages. First, direct and partially heuristic calculations determine the laser spectrum and also give a third-order differential equation for the average inversion of a two-level atom on resonance. At this stage a number of general features of the interaction can be studied easily. The optical analog of motional narrowing, for example, is clearly predicted. Second, we show that the theory of generalized Poisson processes allows laser-atom interactions in the presence of random telegraph noise of all kinds (not only phase noise) to be treated systematically, by means of a master equation first used in the context of quantum optics by Burshtein. We use the Burshtein equation to obtain an exact expression for the two-level atom's steady-state resonance fluorescence spectrum when the exciting laser exhibits phase telegraph noise. Some comparisons are made with results obtained from other noise models. Detailed treatments are given of the effects of random jumps, either as a model of collisions or as a model of finite laser bandwidth effects, in which the laser frequency exhibits random jumps. We show that these two types of frequency noise can be distinguished in light-scattering spectra. We also discuss examples which demonstrate both temporal and spectral motional narrowing, nonexponential correlations, and non-Lorentzian spectra. Its exact solubility in finite terms makes the frequency-telegraph noise model an attractive alternative to the white-noise Ornstein-Uhlenbeck frequency noise model which has been previously applied to laser-atom interactions
Outcrossings of safe regions by generalized hyperbolic processes
Klüppelberg, Claudia; Rasmussen, Morten Grud
2013-01-01
We present a simple Gaussian mixture model in space and time with generalized hyperbolic marginals. Starting with Rice’s celebrated formula for level upcrossings and outcrossings of safe regions we investigate the consequences of the mean-variance mixture model on such quantities. We obtain...
Schiefer, Jonathan; Niederbühl, Alexander; Pernice, Volker; Lennartz, Carolin; Hennig, Jürgen; LeVan, Pierre; Rotter, Stefan
2018-03-01
Knowing brain connectivity is of great importance both in basic research and for clinical applications. We are proposing a method to infer directed connectivity from zero-lag covariances of neuronal activity recorded at multiple sites. This allows us to identify causal relations that are reflected in neuronal population activity. To derive our strategy, we assume a generic linear model of interacting continuous variables, the components of which represent the activity of local neuronal populations. The suggested method for inferring connectivity from recorded signals exploits the fact that the covariance matrix derived from the observed activity contains information about the existence, the direction and the sign of connections. Assuming a sparsely coupled network, we disambiguate the underlying causal structure via L1-minimization, which is known to prefer sparse solutions. In general, this method is suited to infer effective connectivity from resting state data of various types. We show that our method is applicable over a broad range of structural parameters regarding network size and connection probability of the network. We also explored parameters affecting its activity dynamics, like the eigenvalue spectrum. Also, based on the simulation of suitable Ornstein-Uhlenbeck processes to model BOLD dynamics, we show that with our method it is possible to estimate directed connectivity from zero-lag covariances derived from such signals. In this study, we consider measurement noise and unobserved nodes as additional confounding factors. Furthermore, we investigate the amount of data required for a reliable estimate. Additionally, we apply the proposed method on full-brain resting-state fast fMRI datasets. The resulting network exhibits a tendency for close-by areas being connected as well as inter-hemispheric connections between corresponding areas. In addition, we found that a surprisingly large fraction of more than one third of all identified connections were of
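The forward relation this method inverts — that the zero-lag covariance of a linear network with Ornstein-Uhlenbeck-type dynamics encodes the existence, direction, and sign of connections — can be sketched in a few lines. The 3-node connectivity matrix below is an illustrative assumption, not a network from the study:

```python
import numpy as np

# Toy 3-node linear network dx = A x dt + dW: stable self-leak on the
# diagonal, one excitatory (+0.8) and one inhibitory (-0.6) connection.
A = np.array([[-1.0,  0.0,  0.0],
              [ 0.8, -1.0,  0.0],   # node 0 excites node 1
              [ 0.0, -0.6, -1.0]])  # node 1 inhibits node 2
D = np.eye(3)                       # unit white-noise drive on every node

# The stationary zero-lag covariance C solves the Lyapunov equation
#   A C + C A^T + D = 0,
# solved here via the Kronecker identity vec(AC + CA^T) = (A(x)I + I(x)A) vec(C).
n = A.shape[0]
M = np.kron(A, np.eye(n)) + np.kron(np.eye(n), A)
C = np.linalg.solve(M, -D.ravel()).reshape(n, n)

# C already reflects the sign of each connection, which is what the
# L1-based inversion exploits when recovering a sparse A from data.
print(np.sign(C[0, 1]), np.sign(C[1, 2]))  # prints: 1.0 -1.0
```

Recovering a sparse A from a measured C is the underdetermined inverse problem that the abstract addresses via L1-minimization.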
General framework for adsorption processes on dynamic interfaces
Schmuck, Markus; Kalliadasis, Serafim
2016-01-01
We propose a novel and general variational framework modelling particle adsorption mechanisms on evolving immiscible fluid interfaces. A by-product of our thermodynamic approach is that we systematically obtain analytic adsorption isotherms for given equilibrium interfacial geometries. We validate computationally our mathematical methodology by demonstrating the fundamental properties of decreasing interfacial free energies by increasing interfacial particle densities and of decreasing surface pressure with increasing surface area. (paper)
General birth-death processes: probabilities, inference, and applications
Crawford, Forrest Wrenn
2012-01-01
A birth-death process is a continuous-time Markov chain that counts the number of particles in a system over time. Each particle can give birth to another particle or die, and the rate of births and deaths at any given time depends on how many extant particles there are. Birth-death processes are popular modeling tools in evolution, population biology, genetics, epidemiology, and ecology. Despite the widespread interest in birth-death models, no efficient method exists to evaluate the fini...
Chidume, C.E.; Ofoedu, E.U.
2007-07-01
In this paper, we introduce a new iteration process and prove that it converges strongly to a common fixed point for a finite family of generalized Lipschitz nonlinear mappings in a real reflexive Banach space E with a uniformly Gâteaux differentiable norm, provided that at least one member of the family is pseudo-contractive. We also prove that a slight modification of the process converges to a common zero for a finite family of generalized Lipschitz accretive operators defined on E. Results for nonexpansive families are obtained as easy corollaries. Finally, our new iteration process and our method of proof are of independent interest. (author)
A General Representation Theorem for Integrated Vector Autoregressive Processes
Franchi, Massimo
We study the algebraic structure of an I(d) vector autoregressive process, where d is restricted to be an integer. This is useful to characterize its polynomial cointegrating relations and its moving average representation, that is to prove a version of the Granger representation theorem valid...
Tuned with a tune: Talker normalization via general auditory processes
Erika J C Laing
2012-06-01
Voices have unique acoustic signatures, contributing to the acoustic variability listeners must contend with in perceiving speech, and it has long been proposed that listeners normalize speech perception to information extracted from a talker’s speech. Initial attempts to explain talker normalization relied on extraction of articulatory referents, but recent studies of context-dependent auditory perception suggest that general auditory referents such as the long-term average spectrum (LTAS) of a talker’s speech similarly affect speech perception. The present study aimed to differentiate the contributions of articulatory/linguistic versus auditory referents for context-driven talker normalization effects and, more specifically, to identify the specific constraints under which such contexts impact speech perception. Synthesized sentences manipulated to sound like different talkers influenced categorization of a subsequent speech target only when differences in the sentences’ LTAS were in the frequency range of the acoustic cues relevant for the target phonemic contrast. This effect was true both for speech targets preceded by spoken sentence contexts and for targets preceded by nonspeech tone sequences that were LTAS-matched to the spoken sentence contexts. Specific LTAS characteristics, rather than perceived talker, predicted the results suggesting that general auditory mechanisms play an important role in effects considered to be instances of perceptual talker normalization.
Neural Generalized Predictive Control of a non-linear Process
Sørensen, Paul Haase; Nørgård, Peter Magnus; Ravn, Ole
1998-01-01
The use of neural networks in non-linear control is made difficult by the fact that stability and robustness are not guaranteed and that the implementation in real time is non-trivial. In this paper we introduce a predictive controller based on a neural network model which has promising stability qualities. The controller is a non-linear version of the well-known generalized predictive controller developed in linear control theory. It involves minimization of a cost function which in the present case has to be done numerically. Therefore, we develop the necessary numerical algorithms in substantial detail and discuss the implementation difficulties. The neural generalized predictive controller is tested on a pneumatic servo system.
DYNSYL: a general-purpose dynamic simulator for chemical processes
Patterson, G.K.; Rozsa, R.B.
1978-01-01
Lawrence Livermore Laboratory is conducting a safeguards program for the Nuclear Regulatory Commission. The goal of the Material Control Project of this program is to evaluate material control and accounting (MCA) methods in plants that handle special nuclear material (SNM). To this end we designed and implemented the dynamic chemical plant simulation program DYNSYL. This program can be used to generate process data or to provide estimates of process performance; it simulates both steady-state and dynamic behavior. The MCA methods that may have to be evaluated range from sophisticated on-line material trackers such as Kalman filter estimators, to relatively simple material balance procedures. This report describes the overall structure of DYNSYL and includes some example problems. The code is still in the experimental stage and revision is continuing
Towards Device-Independent Information Processing on General Quantum Networks
Lee, Ciarán M.; Hoban, Matty J.
2018-01-01
The violation of certain Bell inequalities allows for device-independent information processing secure against nonsignaling eavesdroppers. However, this only holds for the Bell network, in which two or more agents perform local measurements on a single shared source of entanglement. To overcome the practical constraints that entangled systems can only be transmitted over relatively short distances, large-scale multisource networks have been employed. Do there exist analogs of Bell inequalities for such networks, whose violation is a resource for device independence? In this Letter, the violation of recently derived polynomial Bell inequalities will be shown to allow for device independence on multisource networks, secure against nonsignaling eavesdroppers.
Chidume, C.E.; Ofoedu, E.U.
2007-07-01
Let K be a nonempty closed convex subset of a real Banach space E. Let T : K → K be a generalized Lipschitz pseudo-contractive mapping such that F(T) := {x ∈ K : Tx = x} ≠ ∅. Let {αₙ}, {λₙ} and {θₙ} be real sequences in (0, 1) such that αₙ = o(θₙ), lim_{n→∞} λₙ = 0 and λₙ(αₙ + θₙ) < 1. For arbitrary x₁ ∈ K, let the sequence {xₙ} be iteratively generated by xₙ₊₁ = (1 − λₙαₙ)xₙ + λₙαₙTxₙ − λₙθₙ(xₙ − x₁), n ≥ 1. Then {xₙ} is bounded. Moreover, if E is a reflexive Banach space with a uniformly Gâteaux differentiable norm and if Σₙ₌₁^∞ λₙθₙ = ∞ is additionally assumed, then, under mild conditions, {xₙ} converges strongly to some x* ∈ F(T). (author)
On the 2-orthogonal polynomials and the generalized birth and death processes
Zerouki Ebtissem
2006-01-01
We discuss the connections between the 2-orthogonal polynomials and the generalized birth and death processes. Afterwards, we find sufficient conditions for an integral representation of the transition probabilities of these processes.
2010-11-23
... the Attorney General; Certification Process for State Capital Counsel Systems; Removal of Final Rule... only if the Attorney General has certified ``that [the] State has established a mechanism for providing... State to qualify for the special habeas procedures, the Attorney General must determine that ``the State...
2010-05-25
... Office of the Attorney General; Certification Process for State Capital Counsel Systems; Removal of Final Rule AGENCY: Office of the Attorney General, Department of Justice. ACTION: Notice of proposed... the Attorney General has certified ``that [the] State has established a mechanism for providing...
Ekici, Didem Inel
2016-01-01
This study aimed to determine Turkish junior high-school students' perceptions of the general problem-solving process. These perceptions were examined in relation to the students' gender, grade level, age, and grade point average in the science course identified in the…
Hirschmann, H.
1983-06-01
The consequences of the basic assumptions of the semi-Markov process, as defined from a homogeneous renewal process with a stationary Markov condition, are reviewed. The notion of the semi-Markov process is generalized by redefining it from a nonstationary Markov renewal process. For both the nongeneralized and the generalized case, a representation of the first-order conditional state probabilities is derived in terms of the transition probabilities of the Markov renewal process. Some useful calculation rules (regeneration rules) are derived for the conditional state probabilities of the semi-Markov process. Compared to the semi-Markov process in its usual definition, the generalized process allows the analysis of a larger class of systems. For instance, systems with arbitrarily distributed lifetimes of their components can be described. It also becomes possible to describe systems which are modified over time by forces or manipulations from outside. (Auth.)
Toward a General Research Process for Using Dubin's Theory Building Model
Holton, Elwood F.; Lowe, Janis S.
2007-01-01
Dubin developed a widely used methodology for theory building, which describes the components of the theory building process. Unfortunately, he does not define a research process for implementing his theory building model. This article proposes a seven-step general research process for implementing Dubin's theory building model. An example of a…
GENERAL ALGORITHMIC SCHEMA OF THE PROCESS OF THE CHILL AUXILIARIES PROJECTION
A. N. Chichko
2006-01-01
A general algorithmic diagram systematizing the existing approaches to the design process is offered, laying the foundation for a computer system for the construction of chill-mold tooling.
PECULIARITIES OF GENERALIZATION OF SIMILAR PHENOMENA IN THE PROCESS OF FISH HEAT TREATMENT
V. A. Pokhol’chenko
2015-01-01
The theoretical prerequisites for generalizing, and establishing similarity between, the processes of dehydration and of heating wet materials are studied in this article. It is proposed to generalize these processes using dimensionless similarity numbers. Through a detailed analysis of the regularities of fish heat-treatment processes in different modes, a significant amount of experimental material was successfully generalized on the basis of dimensionless simplexes (similarity numbers). Using the dimensionless simplexes made it possible to obtain a number of simple mathematical models for the studied phenomena: generalized kinetic models of fish dehydration, generalized dynamic models (the change of moisture diffusion coefficients), and generalized kinetic models of fish heating (the change of the temperature field in the product's thickness, volume average and center). These generalized mathematical models also showed the relationship between dehydration and heating in semi-hot and hot smoking (drying and frying) of fish. The article relates the results to the physical nature of the dehydration process, including the change in the binding energy of the moisture with the material as the process proceeds and the impact of shrinkage on the rate of moisture removal from the product. The factors influencing changes in the internal structure and properties of the raw material and retarding dehydration are described. The heating rate of fish products was found to depend on the chemical composition, the geometric dimensions of the object of heating, and the coolant regime parameters. The generalized models, combined with empirically derived equations and an engineering calculation technique for these processes, open a unique opportunity to design rational heat-treatment modes for raw materials and to optimize the performance of thermal equipment.
A generalized fluctuation-dissipation theorem for the one-dimensional diffusion process
Okabe, Y.
1985-01-01
The [α,β,γ]-Langevin equation describes the time evolution of a real stationary process with T-positivity (reflection positivity) originating in the axiomatic quantum field theory. For this [α,β,γ]-Langevin equation a generalized fluctuation-dissipation theorem is proved. We shall obtain, as its application, a generalized fluctuation-dissipation theorem for the one-dimensional non-linear diffusion process, which presents one solution of Ryogo Kubo's problem in physics. (orig.)
Directed transport of confined Brownian particles with torque
Radtke, Paul K.; Schimansky-Geier, Lutz
2012-05-01
We investigate the influence of an additional torque on the motion of Brownian particles confined in a channel geometry with varying width. The particles are driven by random fluctuations modeled by an Ornstein-Uhlenbeck process with given correlation time τc. The latter causes persistent motion and is implemented as (i) thermal noise in equilibrium and (ii) noisy propulsion in nonequilibrium. In the nonthermal process a directed transport emerges; its properties are studied in detail with respect to the correlation time, the torque, and the channel geometry. Eventually, the transport mechanism is traced back to a persistent sliding of particles along the even boundaries in contrast to scattered motion at uneven or rough ones.
Estimation of mean-reverting oil prices: a laboratory approach
Bjerksund, P.; Stensland, G.
1993-12-01
Many economic decision support tools developed for the oil industry are based on the future oil price dynamics being represented by some specified stochastic process. To meet the demand for necessary data, much effort is allocated to parameter estimation based on historical oil price time series. The approach in this paper is to implement a complex future oil market model, and to condense the information from the model to parameter estimates for the future oil price. In particular, we use the Lensberg and Rasmussen stochastic dynamic oil market model to generate a large set of possible future oil price paths. Given the hypothesis that the future oil price is generated by a mean-reverting Ornstein-Uhlenbeck process, we obtain parameter estimates by a maximum likelihood procedure. We find a substantial degree of mean-reversion in the future oil price, which in some of our decision examples leads to an almost negligible value of flexibility. 12 refs., 2 figs., 3 tabs
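The maximum-likelihood step described here has a closed form for a regularly sampled Ornstein-Uhlenbeck path, because the exact discretization of the process is an AR(1) regression. A minimal sketch, with a simulated path standing in for the model-generated oil-price paths (all parameter values illustrative):

```python
import numpy as np

def fit_ou_mle(x, dt):
    """Closed-form ML estimates (kappa, mu, sigma) for an OU process
    dX = kappa*(mu - X) dt + sigma dW from a regularly sampled path x,
    using the exact AR(1) discretization X[t+dt] = a + b*X[t] + eps."""
    x0, x1 = x[:-1], x[1:]
    b, a = np.polyfit(x0, x1, 1)          # OLS slope b and intercept a
    kappa = -np.log(b) / dt               # mean-reversion speed
    mu = a / (1.0 - b)                    # long-run mean
    resid = x1 - (a + b * x0)
    sigma = np.sqrt(resid.var() * 2.0 * kappa / (1.0 - b**2))
    return kappa, mu, sigma

# quick self-check on a simulated path with known parameters
rng = np.random.default_rng(0)
kappa_true, mu_true, sigma_true, dt, n = 2.0, 1.0, 0.3, 0.01, 200_000
b = np.exp(-kappa_true * dt)
sd = sigma_true * np.sqrt((1 - b**2) / (2 * kappa_true))
x = np.empty(n); x[0] = mu_true
for i in range(n - 1):
    x[i + 1] = mu_true + b * (x[i] - mu_true) + sd * rng.standard_normal()
print(fit_ou_mle(x, dt))  # roughly (2.0, 1.0, 0.3)
```

A strong estimated kappa, as the paper finds for oil prices, shrinks the value of waiting and hence the value of flexibility.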
Integer valued autoregressive processes with generalized discrete Mittag-Leffler marginals
Kanichukattu K. Jose
2013-05-01
In this paper we consider a generalization of discrete Mittag-Leffler distributions. We introduce and study the properties of a new distribution called the geometric generalized discrete Mittag-Leffler distribution. Autoregressive processes with geometric generalized discrete Mittag-Leffler marginals are developed and studied. The distributions are further extended to develop a more general class of geometric generalized discrete semi-Mittag-Leffler distributions. The processes are also extended to higher orders. An application to empirical data on customer arrivals at a bank counter is given. Various areas of potential application, such as human resource development, insect growth, epidemic modeling, industrial risk modeling, insurance and actuarial science, and town planning, are also discussed.
SIMULATION AND PREDICTION OF THE PROCESS BASED ON THE GENERAL LOGISTIC MAPPING
V. V. Skalozub
2013-11-01
Purpose. The aim of the research is to build a model of the generalized logistic mapping and to assess the possibilities of its use for forming mathematical descriptions, as well as operational forecasts, of the parameters of complex dynamic processes described by time series. Methodology. The research results are obtained on the basis of mathematical modeling and simulation of nonlinear systems using the tools of chaotic dynamics. Findings. A model of the generalized logistic mapping, used to interpret the characteristics of dynamic processes, is proposed. We consider examples of representations of processes based on the generalized logistic mapping for varying values of the model parameters. Procedures are proposed for modeling and interpreting data on the investigated processes represented by time series, as well as for operational forecasting of parameters using the generalized logistic mapping model. Originality. The paper proposes an improved mathematical model, the generalized logistic mapping, designed for the study of nonlinear discrete dynamic processes. Practical value. The research carried out using the generalized logistic mapping on railway transport processes, in particular on the assessment of traffic volume parameters, indicates its great potential for practical application in solving problems of analysis, modeling and forecasting of complex nonlinear discrete dynamical processes. The proposed model can be used under conditions of uncertainty, irregularity and manifestations of chaotic behavior in technical, economic and other processes, including railway ones.
Modeling sheep pox disease from the 1994-1998 epidemic in Evros Prefecture, Greece.
Malesios, C; Demiris, N; Abas, Z; Dadousis, K; Koutroumanidis, T
2014-10-01
Sheep pox is a highly transmissible disease which can cause serious loss of livestock and can therefore have major economic impact. We present data from sheep pox epidemics which occurred between 1994 and 1998. The data include weekly records of infected farms as well as a number of covariates. We implement Bayesian stochastic regression models which, in addition to various explanatory variables like seasonal and environmental/meteorological factors, also contain serial correlation structure based on variants of the Ornstein-Uhlenbeck process. We take a predictive view in model selection by utilizing deviance-based measures. The results indicate that seasonality and the number of infected farms are important predictors for sheep pox incidence. Copyright © 2014 Elsevier Ltd. All rights reserved.
Modeling and Forecasting Average Temperature for Weather Derivative Pricing
Zhiliang Wang
2015-01-01
The main purpose of this paper is to present a feasible model for the daily average temperature in the area of Zhengzhou and to apply it to weather-derivative pricing. We start by exploring the background of the weather derivatives market and then use 62 years of daily historical data to fit a mean-reverting Ornstein-Uhlenbeck process describing the evolution of the temperature. Finally, Monte Carlo simulations are used to price a heating degree day (HDD) call option for this city; the slow convergence of the HDD call price is apparent even after 100,000 simulations. The methods of this research provide a framework for modeling temperature and pricing weather derivatives in other similar places in China.
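A sketch of the pricing recipe this abstract outlines: simulate daily average temperature as a mean-reverting OU process around a seasonal mean, accumulate heating degree days (HDD) against an 18 °C base, and Monte Carlo-average the option payoff. All numerical parameters below (seasonal curve, kappa, sigma, tick, strike) are illustrative placeholders, not the paper's fitted Zhengzhou values:

```python
import numpy as np

def hdd_call_mc(t0_day, ndays, kappa, sigma, tick, strike, base=18.0,
                n_paths=100_000, seed=1):
    """Monte Carlo price of an HDD call under a mean-reverting OU model
    dT = kappa*(theta(t) - T) dt + sigma dW with a sinusoidal seasonal
    mean theta(t); returns (price, standard error)."""
    rng = np.random.default_rng(seed)
    days = t0_day + np.arange(ndays + 1)
    theta = 14.0 + 10.0 * np.sin(2 * np.pi * (days - 100) / 365.25)  # assumed seasonal mean
    b = np.exp(-kappa)                        # exact daily step, dt = 1
    sd = sigma * np.sqrt((1 - b**2) / (2 * kappa))
    T = np.full(n_paths, theta[0])
    hdd = np.zeros(n_paths)
    for i in range(1, ndays + 1):
        T = theta[i] + b * (T - theta[i]) + sd * rng.standard_normal(n_paths)
        hdd += np.maximum(base - T, 0.0)      # accumulate heating degree days
    payoff = tick * np.maximum(hdd - strike, 0.0)
    return payoff.mean(), payoff.std() / np.sqrt(n_paths)

# illustrative 90-day winter contract
price, se = hdd_call_mc(t0_day=330, ndays=90, kappa=0.25, sigma=3.0,
                        tick=20.0, strike=900.0)
```

The standard error returned alongside the price makes the slow Monte Carlo convergence noted in the abstract directly visible.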
Value of the future: Discounting in random environments
Farmer, J. Doyne; Geanakoplos, John; Masoliver, Jaume; Montero, Miquel; Perelló, Josep
2015-05-01
We analyze how to value future costs and benefits when they must be discounted relative to the present. We introduce the subject for the nonspecialist and take into account the randomness of the economic evolution by studying the discount function of three widely used processes for the dynamics of interest rates: Ornstein-Uhlenbeck, Feller, and log-normal. Besides obtaining exact expressions for the discount function and simple asymptotic approximations, we show that historical average interest rates overestimate long-run discount rates and that this effect can be large. In other words, long-run discount rates should be substantially less than the average rate observed in the past, otherwise any cost-benefit calculation would be biased in favor of the present and against interventions that may protect the future.
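For the Ornstein-Uhlenbeck (Vasicek) rate model the discount function is available in closed form, and it makes the paper's central point explicit: the long-run discount rate equals θ − σ²/(2κ²), strictly below the mean rate θ. A sketch under standard Vasicek assumptions, with illustrative parameter values:

```python
import numpy as np

def ou_discount(t, r0, kappa, theta, sigma):
    """Exact discount function E[exp(-∫0^t r ds)] when the short rate
    follows the OU (Vasicek) process dr = kappa*(theta - r) dt + sigma dW."""
    B = (1.0 - np.exp(-kappa * t)) / kappa
    r_inf = theta - sigma**2 / (2.0 * kappa**2)   # long-run discount rate
    A = r_inf * (B - t) - sigma**2 * B**2 / (4.0 * kappa)
    return np.exp(A - B * r0)

# the long-run yield -ln D(t)/t tends to r_inf, below the mean rate theta
t = 200.0
y = -np.log(ou_discount(t, r0=0.04, kappa=0.3, theta=0.04, sigma=0.02)) / t
print(y)  # about 0.0378, below theta = 0.04
```

Even with the long-run mean of the rate equal to 4%, the appropriate long-run discount rate here is lower, in line with the paper's argument against using historical average rates.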
Fast shuttling of a particle under weak spring-constant noise of the moving trap
Lu, Xiao-Jing; Ruschhaupt, A.; Muga, J. G.
2018-05-01
We investigate the excitation of a quantum particle shuttled in a harmonic trap with weak spring-constant colored noise. The Ornstein-Uhlenbeck model for the noise correlation function describes a wide range of possible noises, in particular, for short correlation times, the white-noise limit examined by Lu et al. [Phys. Rev. A 89, 063414 (2014)] and, by averaging over correlation times, 1/f "flicker" noise. We find expressions for the excitation energy in terms of static (independent of trap motion) and dynamical sensitivities, with opposite behavior with respect to shuttling time, and demonstrate that the excitation can be reduced by proper process timing and design of the trap trajectory.
Relaxometry imaging of superparamagnetic magnetite nanoparticles at ambient conditions
Finkler, Amit; Schmid-Lorch, Dominik; Häberle, Thomas; Reinhard, Friedemann; Zappe, Andrea; Slota, Michael; Bogani, Lapo; Wrachtrup, Jörg
We present a novel technique to image superparamagnetic iron oxide nanoparticles via their fluctuating magnetic fields. The detection is based on the nitrogen-vacancy (NV) color center in diamond, which allows optically detected magnetic resonance (ODMR) measurements on its electron spin structure. In combination with an atomic force microscope, this atomic-sized color center maps ambient magnetic fields in a wide frequency range from DC up to several GHz, while retaining a high spatial resolution in the sub-nanometer range. We demonstrate imaging of single 10 nm sized magnetite nanoparticles using this spin noise detection technique. By fitting simulations (an Ornstein-Uhlenbeck process) to the data, we are able to infer additional information on such a particle and its dynamics, like the attempt frequency and the anisotropy constant. This is of high interest to the proposed application of magnetite nanoparticles as an alternative MRI contrast agent or to the field of particle-aided tumor hyperthermia.
Digital hardware implementation of a stochastic two-dimensional neuron model.
Grassia, F; Kohno, T; Levi, T
2016-11-01
This study explores the feasibility of stochastic neuron simulation in digital systems (FPGA), which realizes an implementation of a two-dimensional neuron model. The stochasticity is added by a source of current noise in the silicon neuron using an Ornstein-Uhlenbeck process. This approach uses digital computation to emulate individual neuron behavior using fixed point arithmetic operation. The neuron model's computations are performed in arithmetic pipelines. It was designed in VHDL language and simulated prior to mapping in the FPGA. The experimental results confirmed the validity of the developed stochastic FPGA implementation, which makes the implementation of the silicon neuron more biologically plausible for future hybrid experiments. Copyright © 2017 Elsevier Ltd. All rights reserved.
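The OU current-noise source used in such silicon-neuron designs is typically generated with the exact one-step update, which is stable for any time step and maps directly onto a fixed-step digital pipeline. A floating-point sketch of that update (the FPGA implementation itself uses fixed-point arithmetic; parameter values illustrative):

```python
import numpy as np

def ou_noise(n_steps, dt, tau, sigma, seed=0):
    """Generate an OU noise current with correlation time tau and stationary
    standard deviation sigma, using the exact one-step update
        I[k+1] = a*I[k] + sigma*sqrt(1 - a^2)*xi,   a = exp(-dt/tau),
    which, unlike a naive Euler step, is stable for any dt."""
    rng = np.random.default_rng(seed)
    a = np.exp(-dt / tau)
    noise_scale = sigma * np.sqrt(1.0 - a * a)
    out = np.empty(n_steps)
    out[0] = sigma * rng.standard_normal()   # start in the stationary law
    for k in range(n_steps - 1):
        out[k + 1] = a * out[k] + noise_scale * rng.standard_normal()
    return out

# one noise trace, e.g. to inject into a neuron model's current equation
i_noise = ou_noise(n_steps=100_000, dt=0.1, tau=5.0, sigma=0.5)
```

Per step this is one multiply-accumulate plus one scaled Gaussian draw, which is why it fits naturally into an arithmetic pipeline.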
Hu, D. L.; Liu, X. B.
Both periodic loading and random forces commonly co-exist in real engineering applications. However, the dynamic behavior, and especially the dynamic stability, of systems under combined parametric periodic and random excitations has received little attention in the literature. In this study, the moment Lyapunov exponent and stochastic stability of a binary airfoil under combined harmonic and non-Gaussian colored noise excitations are investigated. The noise is simplified to an Ornstein-Uhlenbeck process by applying the path-integral method. Via the singular perturbation method, second-order expansions of the moment Lyapunov exponent are obtained, which agree well with results from Monte Carlo simulation. Finally, the effects of the noise and of parametric resonance (such as subharmonic resonance and combination additive resonance) on the stochastic stability of the binary airfoil system are discussed.
Chen, H L; Wang, J K; Zhang, L L; Wu, Z Y
2000-04-01
Objective: to determine and compare the contents of total flavonoids in four kinds of differently processed products of Epimedium acuminatum, with contents determined by ultraviolet spectrophotometry. The contents were found in the following sequence: unprocessed product, clearly-fried product, alcohol-broiled product, salt-broiled product, sheep-fat-broiled product. The average recovery rate was 96.01%, with a 0.74% RSD (n = 5). Heating causes the contents of total flavonoids in the processed products to decrease. These processed products are still often used in clinical treatment, because the adjuvant features certain coordinating and promoting functions. The study is to be pursued further.
The Burden of the Fellowship Interview Process on General Surgery Residents and Programs.
Watson, Shawna L; Hollis, Robert H; Oladeji, Lasun; Xu, Shin; Porterfield, John R; Ponce, Brent A
This study evaluated the effect of the fellowship interview process in a cohort of general surgery residents. We hypothesized that the interview process would be associated with significant clinical time lost, monetary expenses, and increased need for shift coverage. An online anonymous survey link was sent via e-mail to general surgery program directors in June 2014. Program directors distributed an additional survey link to current residents in their program who had completed the fellowship interview process. United States allopathic general surgery programs. Overall, 50 general surgery program directors; 72 general surgery residents. Program directors reported a fellowship application rate of 74.4%. Residents most frequently attended 8 to 12 interviews (35.2%). Most residents (57.7%) reported missing 7 or more days of clinical training to attend interviews; these shifts were largely covered by other residents. Most residents (62.3%) spent over $4000 on the interview process. Program directors rated fellowship burden as an average of 6.7 on a 1 to 10 scale of disruption, with 10 being a significant disruption. Most of the residents (57.3%) were in favor of change in the interview process. We identified potential areas for improvement, including options for coordinated interviews and improved content on program websites. The surgical fellowship match is relatively burdensome to residents and programs alike, and merits critical assessment for potential improvement. Published by Elsevier Inc.
Cherstvy, Andrey G; Metzler, Ralf
2015-01-01
We study generalized anomalous diffusion processes whose diffusion coefficient D(x, t) ∼ D_0 |x|^α t^β depends on both the position x of the test particle and the process time t. This process thus combines the features of scaled Brownian motion and heterogeneous diffusion parent processes. We compute the ensemble and time averaged mean squared displacements of this generalized diffusion process. The scaling exponent of the ensemble averaged mean squared displacement is shown to be the product of the critical exponents of the parent processes, and describes both subdiffusive and superdiffusive systems. We quantify the amplitude fluctuations of the time averaged mean squared displacement as a function of the length of the time series and the lag time. In particular, we observe a weak ergodicity breaking of this generalized diffusion process: even in the long time limit the ensemble and time averaged mean squared displacements are strictly disparate. When we start to observe this process some time after its initiation we observe distinct features of ageing. We derive a universal ageing factor for the time averaged mean squared displacement containing all information on the ageing time and the measurement time. External confinement is shown to alter the magnitudes and statistics of the ensemble and time averaged mean squared displacements. (paper)
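A process with D(x, t) ∼ D_0 |x|^α t^β can be sampled with a straightforward Euler-Maruyama scheme. The sketch below assumes the Itô interpretation and illustrative parameter values; it is not the authors' numerical procedure.

```python
import math
import random

def hdp_sbm_path(alpha, beta, d0=1.0, dt=1e-3, n=5000, x0=0.1, seed=2):
    """Euler-Maruyama path of dx = sqrt(2 D(x, t)) dW with the
    position- and time-dependent coefficient D(x, t) = d0 |x|^alpha t^beta."""
    rng = random.Random(seed)
    x, t = x0, dt            # start the clock one step in so t**beta is finite
    path = [x0]
    for _ in range(n):
        d = d0 * abs(x) ** alpha * t ** beta
        x += math.sqrt(2.0 * d * dt) * rng.gauss(0.0, 1.0)
        t += dt
        path.append(x)
    return path

path = hdp_sbm_path(alpha=0.5, beta=0.25)   # illustrative exponents
```

Averaging `x**2` over many such paths (varying `seed`) would give an empirical ensemble-averaged mean squared displacement to compare against the predicted scaling exponent.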
Generalized Inferences about the Mean Vector of Several Multivariate Gaussian Processes
Pilar Ibarrola
2015-01-01
We consider in this paper the problem of comparing the means of several multivariate Gaussian processes. It is assumed that the means depend linearly on an unknown vector parameter θ and that nuisance parameters appear in the covariance matrices. More precisely, we deal with the problem of testing hypotheses, as well as obtaining confidence regions for θ. Both methods will be based on the concepts of generalized p value and generalized confidence region adapted to our context.
Using Self-Reflection To Increase Science Process Skills in the General Chemistry Laboratory
Veal, William R.; Taylor, Dawne; Rogers, Amy L.
2009-03-01
Self-reflection is a tool of instruction that has been used in the science classroom. Research has shown great promise in using video as a learning tool in the classroom. However, the integration of self-reflective practice using video in the general chemistry laboratory to help students develop process skills has not been done. Immediate video feedback and direct instruction were employed in a general chemistry laboratory course to improve students' mastery and understanding of basic and advanced process skills. Qualitative results and statistical analysis of quantitative data proved that self-reflection significantly helped students develop basic and advanced process skills, yet did not seem to influence the general understanding of the science content.
A generalized logarithmic image processing model based on the gigavision sensor model.
Deng, Guang
2012-03-01
The logarithmic image processing (LIP) model is a mathematical theory providing generalized linear operations for image processing. The gigavision sensor (GVS) is a new imaging device that can be described by a statistical model. In this paper, by studying these two seemingly unrelated models, we develop a generalized LIP (GLIP) model. With the LIP model being its special case, the GLIP model not only provides new insights into the LIP model but also defines new image representations and operations for solving general image processing problems that are not necessarily related to the GVS. A new parametric LIP model is also developed. To illustrate the application of the new scalar multiplication operation, we propose an energy-preserving algorithm for tone mapping, which is a necessary step in image dehazing. By comparing with results using two state-of-the-art algorithms, we show that the new scalar multiplication operation is an effective tool for tone mapping.
The process of patient enablement in general practice nurse consultations: a grounded theory study.
Desborough, Jane; Banfield, Michelle; Phillips, Christine; Mills, Jane
2017-05-01
The aim of this study was to gain insight into the process of patient enablement in general practice nursing consultations. Enhanced roles for general practice nurses may benefit patients through a range of mechanisms, one of which may be increasing patient enablement. In studies with general practitioners enhanced patient enablement has been associated with increases in self-efficacy and skill development. This study used a constructivist grounded theory design. In-depth interviews were conducted with 16 general practice nurses and 23 patients from 21 general practices between September 2013 - March 2014. Data generation and analysis were conducted concurrently using constant comparative analysis and theoretical sampling focussing on the process and outcomes of patient enablement. Use of the storyline technique supported theoretical coding and integration of the data into a theoretical model. A clearly defined social process that fostered and optimised patient enablement was constructed. The theory of 'developing enabling healthcare partnerships between nurses and patients in general practice' incorporates three stages: triggering enabling healthcare partnerships, tailoring care and the manifestation of patient enablement. Patient enablement was evidenced through: 1. Patients' understanding of their unique healthcare requirements informing their health seeking behaviours and choices; 2. Patients taking an increased lead in their partnership with a nurse and seeking choices in their care and 3. Patients getting health care that reflected their needs, preferences and goals. This theoretical model is in line with a patient-centred model of health care and is particularly suited to patients with chronic disease. © 2016 John Wiley & Sons Ltd.
Olexandr Tyhorskyy
2015-08-01
Purpose: to improve the training methods of highly skilled bodybuilders during the general preparatory phase. Material and Methods: the study involved eight highly skilled athletes, members of the national bodybuilding team of Ukraine. Results: the most commonly used training methods in bodybuilding were compared. An optimal training method for highly skilled bodybuilders during the general preparatory phase of the preparatory period was developed and substantiated, which can increase athletes' body weight through its muscle component. Conclusions: based on these studies, an optimal training method for highly skilled bodybuilders is recommended, depending on the mesocycles and microcycles of the general preparatory phase.
Schenck, Natalya A.; Horvath, Philip A.; Sinha, Amit K.
2018-02-01
While the literature on the price discovery process and information flow between dominant and satellite markets is extensive, most studies have applied an approach that can be traced back to Hasbrouck (1995) or Gonzalo and Granger (1995). In this paper, however, we propose a generalized Langevin process with an asymmetric double-well potential function, with co-integrated time series and interconnected diffusion processes, to model the information flow and price discovery process in two interconnected markets, a dominant and a satellite one. A simulated illustration of the model is also provided.
Profile of science process skills of Preservice Biology Teacher in General Biology Course
Susanti, R.; Anwar, Y.; Ermayanti
2018-04-01
This study aims to obtain portrayal images of science process skills among preservice biology teachers. This research took place in Sriwijaya University and involved 41 participants. To collect the data, this study used a multiple choice test comprising 40 items to measure the mastery of science process skills. The data were then analyzed in a descriptive manner. The results showed that the communication aspect outperformed the other skills at 81%, while the lowest were identifying variables and predicting (59%). In addition, mastery of basic science process skills was 72%, whereas that of integrated skills was a bit lower, 67%. In general, the capability of doing science process skills varies among preservice biology teachers.
General induction at companies - between an administrative process and a sociological phenomenon
Héctor L. Bermúdez Restrepo
2012-12-01
Starting from the example of the general induction process in organizations, and drawing on certain sociological resources, this paper shows the paradox faced by specialists in human management: their task is to care for the motivation and welfare of workers in order to achieve high performance, loyalty and tenure at the company. However, the current mutations of the social architecture in general, and of work in particular, as a structure of organized action, suggest that organizational loyalty tends to be increasingly unlikely and that, conversely, current personnel administration processes are built on inappropriate notions and appear to contribute directly to the adversities of human beings in organizational settings.
Alloza, Clara; Cox, Simon R; Duff, Barbara; Semple, Scott I; Bastin, Mark E; Whalley, Heather C; Lawrie, Stephen M
2016-08-30
Several authors have proposed that schizophrenia is the result of impaired connectivity between specific brain regions rather than differences in local brain activity. White matter abnormalities have been suggested as the anatomical substrate for this dysconnectivity hypothesis. Information processing speed may act as a key cognitive resource facilitating higher order cognition by allowing multiple cognitive processes to be simultaneously available. However, there is a lack of established associations between these variables in schizophrenia. We hypothesised that the relationship between white matter and general intelligence would be mediated by processing speed. White matter water diffusion parameters were studied using Tract-based Spatial Statistics and computed within 46 regions-of-interest (ROI). Principal component analysis was conducted on these white matter ROI for fractional anisotropy (FA) and mean diffusivity, and on neurocognitive subtests to extract general factors of white matter structure (gFA, gMD), general intelligence (g) and processing speed (gspeed). There was a positive correlation between g and gFA (r = 0.67, p = 0.001) that was partially and significantly mediated by gspeed (56.22%, CI: 0.10-0.62). These findings suggest a plausible model of structure-function relations in schizophrenia, whereby white matter structure may provide a neuroanatomical substrate for general intelligence, which is partly supported by speed of information processing. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
On-line validation of linear process models using generalized likelihood ratios
Tylee, J.L.
1981-12-01
A real-time method for testing the validity of linear models of nonlinear processes is described and evaluated. Using generalized likelihood ratios, the model dynamics are continually monitored to see if the process has moved far enough away from the nominal linear model operating point to justify generation of a new linear model. The method is demonstrated using a seventh-order model of a natural circulation steam generator
General definitions of chaos for continuous and discrete-time processes
Vieru, Andrei
2008-01-01
A precise definition of chaos for discrete processes based on iteration already exists. We shall first reformulate it in a more general frame, taking into account the fact that discrete chaotic behavior is neither necessarily based on iteration nor strictly related to compact metric spaces or to bounded functions. Then we shall apply the central idea of this definition to continuous processes. We shall try to see what chaos is, regardless of the way it is generated.
2011-01-25
... and Development (HFM-40), Center for Biologics Evaluation and Research (CBER), Food and Drug...] Guidance for Industry on Process Validation: General Principles and Practices; Availability AGENCY: Food... of Drug Information, Center for Drug Evaluation and Research, Food and Drug Administration, 10903 New...
Henderson, Emily J; Rubin, Greg P
2013-05-01
To evaluate the utility of Isabel, an online diagnostic decision support system developed by Isabel Healthcare primarily for secondary medical care, in the general practice setting. Focus groups were conducted with clinicians to understand why and how they used the system. A modified online post-use survey asked practitioners about its impact on their decision-making. Normalization process theory (NPT) was used as a theoretical framework to determine whether the system could be incorporated into routine clinical practice. The system was introduced by NHS County Durham and Darlington in the UK in selected general practices as a three-month pilot. General practitioners and nurse practitioners who had access to Isabel as part of the Primary Care Trust's pilot. General practitioners' views, experiences and usage of the system. Seven general practices agreed to pilot Isabel. Two practices did not subsequently use it. The remaining five practices conducted searches on 16 patients. Post-use surveys (n = 10) indicated that Isabel had little impact on diagnostic decision-making. Focus group participants stated that, although the diagnoses produced by Isabel in general did not have an impact on their decision-making, they would find the tool useful if it were better tailored to the primary care setting. Our analysis concluded that normalization was not likely to occur in its current form. Isabel was of limited utility in this short pilot study and may need further modification for use in general practice.
Process mapping as a framework for performance improvement in emergency general surgery.
DeGirolamo, Kristin; D'Souza, Karan; Hall, William; Joos, Emilie; Garraway, Naisan; Sing, Chad Kim; McLaughlin, Patrick; Hameed, Morad
2018-02-01
Emergency general surgery conditions are often thought of as being too acute for the development of standardized approaches to quality improvement. However, process mapping, a concept that has been applied extensively in manufacturing quality improvement, is now being used in health care. The objective of this study was to create process maps for small bowel obstruction in an effort to identify potential areas for quality improvement. We used the American College of Surgeons Emergency General Surgery Quality Improvement Program pilot database to identify patients who received nonoperative or operative management of small bowel obstruction between March 2015 and March 2016. This database, patient charts and electronic health records were used to create process maps from the time of presentation to discharge. Eighty-eight patients with small bowel obstruction (33 operative; 55 nonoperative) were identified. Patients who received surgery had a complication rate of 32%. The processes of care from the time of presentation to the time of follow-up were highly elaborate and variable in terms of duration; however, the sequences of care were found to be consistent. We used data visualization strategies to identify bottlenecks in care, and they showed substantial variability in terms of operating room access. Variability in the operative care of small bowel obstruction is high and represents an important improvement opportunity in general surgery. Process mapping can identify common themes, even in acute care, and suggest specific performance improvement measures.
Is general intelligence little more than the speed of higher-order processing?
Schubert, Anna-Lena; Hagemann, Dirk; Frischkorn, Gidon T
2017-10-01
Individual differences in the speed of information processing have been hypothesized to give rise to individual differences in general intelligence. Consistent with this hypothesis, reaction times (RTs) and latencies of event-related potentials have been shown to be moderately associated with intelligence. These associations have been explained either in terms of individual differences in some brain-wide property such as myelination, the speed of neural oscillations, or white-matter tract integrity, or in terms of individual differences in specific processes such as the signal-to-noise ratio in evidence accumulation, executive control, or the cholinergic system. Here we show in a sample of 122 participants, who completed a battery of RT tasks at 2 laboratory sessions while an EEG was recorded, that more intelligent individuals have a higher speed of higher-order information processing that explains about 80% of the variance in general intelligence. Our results do not support the notion that individuals with higher levels of general intelligence show advantages in some brain-wide property. Instead, they suggest that more intelligent individuals benefit from a more efficient transmission of information from frontal attention and working memory processes to temporal-parietal processes of memory storage. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Roopa Shivashankar
2016-01-01
Aim: To assess the level of adherence to diabetes care processes, and associated clinic and patient factors, at general practices in Delhi, India. Methods: We interviewed physicians (n = 23) and patients with diabetes (n = 406), and reviewed patient charts at general practices (government = 5; private = 18). We examined diabetes care processes, specifically measurement of weight, blood pressure (BP), glycated hemoglobin (HbA1c), lipids, electrocardiogram, and dilated eye and foot examinations in the last one year. We analyzed clinic and patient factors associated with the number of care processes achieved using a multilevel Poisson regression model. Results: The average number of clinic visits per patient was 8.8/year (standard deviation = 5.7), and physicians had access to patients' previous records for only 19.7% of patients. Dilated eye exam, foot exam, and electrocardiogram were completed in 7.4%, 15.1%, and 29.1% of patients, respectively. An estimated 51.7%, 88.4%, and 28.1% had ≥1 measurement of HbA1c, BP, and lipids, respectively. Private clinics, physician access to patients' previous records, use of nonphysicians, patient education, and the presence of diabetes complications were positively associated with the number of care processes in the multivariable model. Conclusion: Adherence to diabetes care processes was suboptimal. Encouraging the implementation of quality improvement strategies, such as Chronic Care Model elements, at general practices may improve diabetes care.
Generalization bounds of ERM-based learning processes for continuous-time Markov chains.
Zhang, Chao; Tao, Dacheng
2012-12-01
Many existing results on statistical learning theory are based on the assumption that samples are independently and identically distributed (i.i.d.). However, the assumption of i.i.d. samples is not suitable for practical application to problems in which samples are time dependent. In this paper, we are mainly concerned with the empirical risk minimization (ERM) based learning process for time-dependent samples drawn from a continuous-time Markov chain. This learning process covers many kinds of practical applications, e.g., the prediction for a time series and the estimation of channel state information. Thus, it is significant to study its theoretical properties including the generalization bound, the asymptotic convergence, and the rate of convergence. It is noteworthy that, since samples are time dependent in this learning process, the concerns of this paper cannot (at least straightforwardly) be addressed by existing methods developed under the sample i.i.d. assumption. We first develop a deviation inequality for a sequence of time-dependent samples drawn from a continuous-time Markov chain and present a symmetrization inequality for such a sequence. By using the resultant deviation inequality and symmetrization inequality, we then obtain the generalization bounds of the ERM-based learning process for time-dependent samples drawn from a continuous-time Markov chain. Finally, based on the resultant generalization bounds, we analyze the asymptotic convergence and the rate of convergence of the learning process.
Lawrence I. EDET
2015-09-01
The general account of Nigeria's post-independence electoral processes has always been characterized by violence. Nigeria's 2015 general elections marked the fifth multi-party elections in the country and the second handover between civilian administrations since the inception of the Fourth Republic democratic experiment in 1999. This account cannot be analyzed without issues of electoral violence. Electoral violence has been a permanent feature of Nigeria's democratic process, except in the 2015 general elections, which international observers described as a "significant improvement" over previous elections in terms of violence-related cases. Election-related violence in the country, particularly in 2011, reached an unprecedented dimension, resulting in the destruction of lives and property worth millions of naira. This paper expatiates on electoral violence and its general implications for the democratization process in the country, with major emphasis on the 2011 and 2015 general elections. The paper argues that the high incidence of pre- and post-electoral violence in the country within these periods has to do with the way Nigerian politicians regard politics, weak political institutions, a weak electoral management body, and the biased conduct of the security agencies. The paper also examines the general implications of electoral violence for the democratization process and how the country can manage the electoral process to avoid the threats associated with electoral violence. Archival analysis, which drew widely on data from newspapers, journals, workshop papers, books, and publications of non-governmental organizations, was adopted for the study. The major significance of this study is to expose the negative implications associated with electoral violence and how it can be curbed. The position canvassed in this paper will serve as useful political literature for political leaders, policy makers and the general reading public.
Burgers' turbulence problem with linear or quadratic external potential
Barndorff-Nielsen, Ole Eiler; Leonenko, N.N.
2005-01-01
We consider solutions of Burgers' equation with linear or quadratic external potential and stationary random initial conditions of Ornstein-Uhlenbeck type. We study a class of limit laws that correspond to a scale renormalization of the solutions.
Random Young diagrams in a Rectangular Box
Beltoft, Dan; Boutillier, Cédric; Enriquez, Nathanaël
We exhibit the limit shape of random Young diagrams having a distribution proportional to the exponential of their area, and confined in a rectangular box. The Ornstein-Uhlenbeck bridge arises from the fluctuations around the limit shape.
Toward a model framework of generalized parallel componential processing of multi-symbol numbers.
Huber, Stefan; Cornelsen, Sonja; Moeller, Korbinian; Nuerk, Hans-Christoph
2015-05-01
In this article, we propose and evaluate a new model framework of parallel componential multi-symbol number processing, generalizing the idea of parallel componential processing of multi-digit numbers to the case of negative numbers by considering the polarity signs similar to single digits. In a first step, we evaluated this account by defining and investigating a sign-decade compatibility effect for the comparison of positive and negative numbers, which extends the unit-decade compatibility effect in 2-digit number processing. Then, we evaluated whether the model is capable of accounting for previous findings in negative number processing. In a magnitude comparison task, in which participants had to single out the larger of 2 integers, we observed a reliable sign-decade compatibility effect with prolonged reaction times for incompatible (e.g., -97 vs. +53; in which the number with the larger decade digit has the smaller, i.e., negative polarity sign) as compared with sign-decade compatible number pairs (e.g., -53 vs. +97). Moreover, an analysis of participants' eye fixation behavior corroborated our model of parallel componential processing of multi-symbol numbers. These results are discussed in light of concurrent theoretical notions about negative number processing. On the basis of the present results, we propose a generalized integrated model framework of parallel componential multi-symbol processing. (c) 2015 APA, all rights reserved.
Bio-inspired Artificial Intelligence: A Generalized Net Model of the Regularization Process in MLP
Stanimir Surchev
2013-10-01
Many objects and processes inspired by nature have been recreated by scientists. The inspiration to create a multilayer neural network came from the human brain. It possesses a complicated structure that is difficult to recreate, because of the many processes involved that require different solving methods. The aim of this paper is to describe one of the methods that improves the learning process of an artificial neural network. The proposed generalized net method models the regularization process in a multilayer neural network. The purpose of this verification is to protect the neural network from overfitting, and regularization is commonly used in the neural network training process. Among the many verification methods available, the subject of interest here is the one known as regularization, which uses a penalty function to keep weights and biases at smaller values and so protect against overfitting.
Generalization of the photo process window and its application to OPC test pattern design
Eisenmann, Hans; Peter, Kai; Strojwas, Andrzej J.
2003-07-01
From the early development phase up to the production phase, test patterns play a key role in microlithography. The requirement for test patterns is to represent the design well and to cover the space of all process conditions, e.g. to investigate the full process window and all other process parameters. This paper shows that current state-of-the-art test patterns do not address these requirements sufficiently, and makes suggestions for a better selection of test patterns. We present a new methodology to analyze an existing layout (e.g. logic library, test patterns or full chip) for critical layout situations which does not need precise process data. We call this method "process space decomposition", because it is aimed at decomposing the process impact on a layout feature into a sum of single independent contributions, the dimensions of the process space. This is a generalization of the classical process window, which examines the defocus and exposure dependency of given test patterns, e.g. the CD value of dense and isolated lines. In our process space we additionally define the dimensions of resist effects, etch effects, mask error and misalignment, which describe the deviation of the printed silicon pattern from its target. We further extend it by the pattern space using a product-based layout (library, full chip or synthetic test patterns). The criticality of patterns is defined by their deviation due to the aerial image, their sensitivity to the respective dimension, or several combinations of these. By exploring the process space for a given design, the method allows finding the most critical patterns independent of specific process parameters. The paper provides examples for different applications of the method: (1) selection of design-oriented test patterns for lithography development; (2) test pattern reduction in process characterization; (3) verification/optimization of printability and performance of post-processing procedures (like OPC); (4) creation of a sensitive process
Generalized random walk algorithm for the numerical modeling of complex diffusion processes
Vamos, C; Vereecken, H
2003-01-01
A generalized form of the random walk algorithm to simulate diffusion processes is introduced. Unlike the usual approach, at a given time all the particles from a grid node are simultaneously scattered using the Bernoulli repartition. This procedure saves memory and computing time and no restrictions are imposed for the maximum number of particles to be used in simulations. We prove that for simple diffusion the method generalizes the finite difference scheme and gives the same precision for large enough number of particles. As an example, simulations of diffusion in random velocity field are performed and the main features of the stochastic mathematical model are numerically tested.
Generalized random walk algorithm for the numerical modeling of complex diffusion processes
Vamos, Calin; Suciu, Nicolae; Vereecken, Harry
2003-01-01
A generalized form of the random walk algorithm to simulate diffusion processes is introduced. Unlike the usual approach, at a given time all the particles from a grid node are simultaneously scattered using the Bernoulli repartition. This procedure saves memory and computing time and no restrictions are imposed for the maximum number of particles to be used in simulations. We prove that for simple diffusion the method generalizes the finite difference scheme and gives the same precision for large enough number of particles. As an example, simulations of diffusion in random velocity field are performed and the main features of the stochastic mathematical model are numerically tested
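The simultaneous-scattering step described in these two records can be sketched in one dimension: instead of moving particles one at a time, all particles at a node are split in a single multinomial draw. The grid size, jump probability and particle count below are illustrative assumptions, and the paper's boundary conditions and velocity-field coupling are not reproduced.

```python
import random

def grw_step(counts, p=0.25, rng=None):
    """One step of the generalized random walk on a 1-D grid with
    reflecting ends: the m particles at node i are scattered together
    (left w.p. p, right w.p. p, stay w.p. 1 - 2p)."""
    rng = rng or random.Random(0)
    n = len(counts)
    new = [0] * n
    for i, m in enumerate(counts):
        # sequential-conditioning multinomial draw:
        # left ~ Bin(m, p), then right ~ Bin(m - left, p / (1 - p))
        left = sum(1 for _ in range(m) if rng.random() < p)
        right = sum(1 for _ in range(m - left) if rng.random() < p / (1 - p))
        new[max(i - 1, 0)] += left
        new[min(i + 1, n - 1)] += right
        new[i] += m - left - right          # particles that stay put
    return new

rng = random.Random(3)
grid = [0] * 21
grid[10] = 10_000                           # all particles start at the centre
for _ in range(50):
    grid = grw_step(grid, rng=rng)
```

A production implementation would draw the binomial counts directly (e.g. `numpy.random.Generator.binomial`) rather than summing Bernoulli trials; that is where the memory and computing-time savings over per-particle tracking come from.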
Loughry, Thomas A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]
2015-02-01
As the volume of data acquired by space-based sensors increases, mission data compression/decompression and forward error correction code processing performance must likewise scale. This competency development effort was explored using the General Purpose Graphics Processing Unit (GPGPU) to accomplish high-rate Rice Decompression and high-rate Reed-Solomon (RS) decoding at the satellite mission ground station. Each algorithm was implemented and benchmarked on a single GPGPU. Distributed processing across one to four GPGPUs was also investigated. The results show that the GPGPU has considerable potential for performing satellite communication Data Signal Processing, with three times or better performance improvements and up to ten times reduction in cost over custom hardware, at least in the case of Rice Decompression and Reed-Solomon Decoding.
General purpose graphic processing unit implementation of adaptive pulse compression algorithms
Cai, Jingxiao; Zhang, Yan
2017-07-01
This study introduces a practical approach to implement real-time signal processing algorithms for general surveillance radar based on NVIDIA graphical processing units (GPUs). The pulse compression algorithms are implemented using compute unified device architecture (CUDA) libraries such as CUDA basic linear algebra subroutines and CUDA fast Fourier transform library, which are adopted from open source libraries and optimized for the NVIDIA GPUs. For more advanced, adaptive processing algorithms such as adaptive pulse compression, customized kernel optimization is needed and investigated. A statistical optimization approach is developed for this purpose without needing much knowledge of the physical configurations of the kernels. It was found that the kernel optimization approach can significantly improve the performance. Benchmark performance is compared with the CPU performance in terms of processing accelerations. The proposed implementation framework can be used in various radar systems including ground-based phased array radar, airborne sense and avoid radar, and aerospace surveillance radar.
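The core of pulse compression is frequency-domain matched filtering, which is exactly the FFT-multiply-IFFT structure that libraries such as cuFFT accelerate on the GPU. A minimal NumPy sketch follows; the chirp waveform, lengths and target delay are illustrative assumptions, not the paper's radar parameters:

```python
import numpy as np

def pulse_compress(rx, tx):
    """Frequency-domain matched filtering: multiply the received-signal
    spectrum by the conjugate spectrum of the transmitted waveform and
    transform back. This is the kernel a GPU implementation parallelizes."""
    n = len(rx) + len(tx) - 1
    nfft = 1 << (n - 1).bit_length()        # next power of two, avoids wrap-around
    RX = np.fft.fft(rx, nfft)
    TX = np.fft.fft(tx, nfft)
    return np.fft.ifft(RX * np.conj(TX))[:n]

# Linear-FM (chirp) pulse; a point target delayed by 300 samples.
t = np.arange(256)
tx = np.exp(1j * np.pi * 0.002 * t**2)
rx = np.zeros(1024, dtype=complex)
rx[300:300 + 256] = tx
out = np.abs(pulse_compress(rx, tx))
print(int(np.argmax(out)))  # 300 -- the compressed peak sits at the target delay
```

Adaptive pulse compression replaces the fixed matched filter with a data-dependent one, which is why the paper needs customized kernels beyond the stock FFT libraries.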
Audit and account billing process in a private general hospital: a case study
Raquel Silva Bicalho Zunta
2017-12-01
Our study aimed to map, describe, and validate the audit, account billing, and billing report processes in a large private general hospital. This was an exploratory, descriptive case study. We conducted non-participatory observations in the hospital's Internal Audit and Billing Report sectors in order to map the processes under study. The data obtained were validated by internal and external specialists in hospital bill auditing. The processes, described and illustrated in three flowcharts, help professionals rationalize their activities and the time spent on hospital billing, avoiding or minimizing flaws and generating more effective financial results. The mapping, description, and validation of the audit and billing processes and the billing reports gave more visibility and legitimacy to the actions developed by auditor nurses.
Discrete-Event Execution Alternatives on General Purpose Graphical Processing Units
Perumalla, Kalyan S.
2006-01-01
Graphics cards, traditionally designed as accelerators for computer graphics, have evolved to support more general-purpose computation. General Purpose Graphical Processing Units (GPGPUs) are now being used as highly efficient, cost-effective platforms for executing certain simulation applications. While most of these applications belong to the category of time-stepped simulations, little is known about the applicability of GPGPUs to discrete event simulation (DES). Here, we identify some of the issues and challenges that the GPGPU stream-based interface raises for DES, and present some possible approaches to moving DES to GPGPUs. Initial performance results on simulation of a diffusion process show that DES-style execution on GPGPU runs faster than DES on CPU and also significantly faster than time-stepped simulations on either CPU or GPGPU.
Introduction of a pyramid guiding process for general musculoskeletal physical rehabilitation
Stark, Timothy W
2006-06-01
Successful instruction of a subject as complicated as physical rehabilitation demands organization. Understanding the principles and processes of such a field demands a hierarchy of steps to achieve the intended outcome. This paper introduces a proposed pyramid scheme of general physical rehabilitation principles. The purpose of the pyramid scheme is to allow a greater understanding for both the student and the patient. Much as the respected Food Guide Pyramid does, it helps the student appreciate and apply supported physical rehabilitation principles, and helps the patient understand that there is a progressive method to their functional healing process.
Ruslan Skrynkovskyy
2017-12-01
The purpose of the article is to improve the model of the management process of an enterprise (institution, organization) on the basis of general management functions. A graphic model of the management process according to process-structured management is presented. It has been established that in today's business environment the model of the management process should include the following general management functions: (1) controlling the achievement of results; (2) planning based on the main goal; (3) coordination and corrective actions (in the system of organization of work and production); (4) action as a form of act (conscious, volitional, directed); (5) the accounting system (accounting, statistical, operational-technical, and managerial); and (6) diagnosis (economic, legal), with subfunctions such as identification of the state and capabilities, analysis (economic, legal, systemic) with argumentation, and assessment of the state, trends, and prospects of development. Prospects for further research in this direction are: (1) the formation of a system of interrelations between management functions and methods, taking into account the presented research results; (2) the development of a model of an effective and efficient communication business process for the enterprise.
MacNamara, Annmarie; Proudfit, Greg Hajcak
2014-01-01
Generalized Anxiety Disorder (GAD) may be characterized by emotion regulation deficits attributable to an imbalance between top-down (i.e., goal-driven) and bottom-up (i.e., stimulus-driven) attention. In prior work, these attentional processes were examined by presenting unpleasant and neutral pictures within a working memory paradigm. The late positive potential (LPP) measured attention toward task-irrelevant pictures. Results from this prior work showed that working memory load reduced the...
The Green-Kubo formula for general Markov processes with a continuous time parameter
Yang Fengxia; Liu Yong; Chen Yong
2010-01-01
For general Markov processes, the Green-Kubo formula is shown to be valid under a mild condition. A class of stochastic evolution equations on a separable Hilbert space and three typical infinite systems of locally interacting diffusions on Z^d (irreversible in most cases) are shown to satisfy the Green-Kubo formula, and the Einstein relations for these stochastic evolution equations are shown explicitly as a corollary.
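For orientation, the prototypical Green-Kubo relation expresses a transport coefficient as the time integral of an equilibrium autocorrelation function. For the self-diffusion coefficient of a tagged particle with velocity v(t), the standard textbook form (not the paper's Hilbert-space generality) reads:

```latex
D \;=\; \int_0^{\infty} \langle v(0)\, v(t) \rangle \, dt
```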
Deschaud, B.; Peyrusse, O.; Rosmej, F.B.
2014-01-01
Generalized atomic processes are proposed to establish a consistent description from the free-atom approach down to the heated and even the cold solid. The description is based on a rigorous introduction of Fermi-Dirac statistics and Pauli blocking factors, and on respecting the principle of detailed balance via the introduction of direct and inverse processes. A probability formalism driven by the degeneracy of the free electrons makes it possible to establish atomic rates valid from the heated atom up to the cold solid. This allows photoionization processes to be described in atomic population kinetics, together with the subsequent heating of solid matter on a femtosecond time scale. The Auger effect is linked to three-body recombination via a generalized three-body recombination, which is identified as a key mechanism, along with collisional ionization, following energy deposition by photoionization of inner shells when short, intense, high-energy radiation interacts with matter. Detailed simulations carried out for aluminum highlight the importance of the generalized approach. (authors)
General methodology for exergy balance in ProSimPlus® process simulator
Ghannadzadeh, Ali; Thery-Hetreux, Raphaële; Baudouin, Olivier; Baudet, Philippe; Floquet, Pascal; Joulia, Xavier
2012-01-01
This paper presents a general methodology for exergy balances in chemical and thermal processes, integrated in ProSimPlus® as a well-adapted process simulator for energy efficiency analysis. In this work, along with the general expressions for heat and work streams, the whole exergy balance is carried out within a single piece of software in order to fully automate exergy analysis. In addition to the exergy balance, the essential elements for exergy analysis, such as the sources of irreversibility, are presented to help the user modify either the process or the utility system. The applicability of the proposed methodology in ProSimPlus® is shown through a simple scheme of a Natural Gas Liquids (NGL) recovery process and its steam utility system. The methodology does not only provide the user with the necessary exergetic criteria to pinpoint the sources of exergy losses; it also helps the user find ways to reduce them. These features of the proposed exergy calculator make it well suited for implementation in ProSimPlus® to define the most realistic and profitable retrofit projects for existing chemical and thermal plants. -- Highlights: ► A set of new expressions for the calculation of the exergy of material streams is developed. ► A general methodology for exergy balances in ProSimPlus® is presented. ► A panel of solutions based on exergy analysis is provided to help the user modify process flowsheets. ► The exergy efficiency is chosen as a variable in a bi-criteria optimization.
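As background, the physical exergy of a material stream is conventionally computed from its enthalpy and entropy relative to the environmental dead state (T_0, P_0). A standard textbook form, not necessarily the exact expression implemented in ProSimPlus, is:

```latex
Ex_{\mathrm{ph}} \;=\; (H - H_0) \;-\; T_0\,(S - S_0)
```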
Cao, Robin; Pastukhov, Alexander; Mattia, Maurizio; Braun, Jochen
2016-06-29
The timing of perceptual decisions depends on both deterministic and stochastic factors, as the gradual accumulation of sensory evidence (deterministic) is contaminated by sensory and/or internal noise (stochastic). When human observers view multistable visual displays, successive episodes of stochastic accumulation culminate in repeated reversals of visual appearance. Treating reversal timing as a "first-passage time" problem, we ask how the observed timing densities constrain the underlying stochastic accumulation. Importantly, mean reversal times (i.e., deterministic factors) differ enormously between displays/observers/stimulation levels, whereas the variance and skewness of reversal times (i.e., stochastic factors) keep characteristic proportions of the mean. What sort of stochastic process could reproduce this highly consistent "scaling property"? Here we show that the collective activity of a finite population of bistable units (i.e., a generalized Ehrenfest process) quantitatively reproduces all aspects of the scaling property of multistable phenomena, in contrast to other processes under consideration (Poisson, Wiener, or Ornstein-Uhlenbeck process). The postulated units express the spontaneous dynamics of attractor assemblies transitioning between distinct activity states. Plausible candidates are cortical columns, or clusters of columns, as they are preferentially connected and spontaneously explore a restricted repertoire of activity states. Our findings suggest that perceptual representations are granular, probabilistic, and operate far from equilibrium, thereby offering a suitable substrate for statistical inference. Spontaneous reversals of high-level perception, so-called multistable perception, conform to highly consistent and characteristic statistics, constraining plausible neural representations. We show that the observed perceptual dynamics would be reproduced quantitatively by a finite population of distinct neural assemblies, each with
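The finite-population idea can be illustrated with a toy Gillespie simulation of an Ehrenfest-type process. All names, rates and the threshold below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def reversal_times(n_units=50, rate_on=1.0, rate_off=0.5,
                   threshold=40, trials=500, seed=0):
    """First-passage times of a finite population of bistable units
    (an Ehrenfest-type process): k of n_units are 'on'; each off unit
    turns on at rate_on, each on unit turns off at rate_off. The time
    for k to first reach `threshold` plays the role of a perceptual
    reversal time."""
    rng = np.random.default_rng(seed)
    times = np.empty(trials)
    for i in range(trials):
        k, t = 0, 0.0
        while k < threshold:
            r_on = (n_units - k) * rate_on
            r_off = k * rate_off
            total = r_on + r_off
            t += rng.exponential(1.0 / total)              # Gillespie waiting time
            k += 1 if rng.random() < r_on / total else -1  # which event fired
        times[i] = t
    return times

times = reversal_times()
cv = times.std() / times.mean()  # the 'scaling property' statistic
print(times.mean() > 0)
```

In the paper's argument, it is the ratio of the standard deviation (and skewness) to the mean of such first-passage times that stays fixed while the mean itself varies widely with the parameters.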
ABOUT THE GENERAL CONCEPT OF THE UNIVERSAL STORAGE SYSTEM AND PRACTICE-ORIENTED DATA PROCESSING
L. V. Rudikova
2017-01-01
The evolution of approaches to data accumulation in warehouses and the subsequent use of Data Mining is promising, given that a Belarusian segment of such IT development is taking shape. The article describes a general concept for creating a system for the storage and practice-oriented analysis of data, based on data warehousing technology. The main design principle, at the storage layer and in working with data, is an extended data warehouse built on a universal platform for stored data: it grants access to storage and subsequent analysis of data of different structures and subject domains, has connection points (nodes), and offers extended functionality with a choice of data structure for storage and subsequent intra-system integration. The general architecture of the universal system for the storage and analysis of practice-oriented data and its structural elements are described. The main components of the system are: online data sources, the ETL process, the data warehouse, the analysis subsystem, and users. An important place in the system is occupied by analytical data processing, information search, document storage, and a software interface providing external access to the system's functionality. A universal system based on the described concept will allow collecting information from different subject domains, obtaining analytical summaries, processing data, and applying appropriate Data Mining methods and algorithms.
Mahomed, Rosemary; St John, Winsome; Patterson, Elizabeth
2012-11-01
To investigate the process of patient satisfaction with nurse-led chronic disease management in Australian general practice. Nurses working in the primary care context of general practice, referred to as practice nurses, are expanding their role in chronic disease management; this is relatively new to Australia. Therefore, determining patient satisfaction with this trend is pragmatically and ethically important. However, the concept of patient satisfaction is not well understood, particularly in relation to care provided by practice nurses. A grounded theory study underpinned by a relativist ontological position and a relativist epistemology. Grounded theory was used to develop a theory from data collected through in-depth interviews with 38 participants between November 2007 and April 2009. Participants were drawn from a larger project that trialled a practice nurse-led, collaborative model of chronic disease management in three Australian general practices. Theoretical sampling, data collection, and analysis were conducted concurrently, consistent with grounded theory methods. Patients undergo a cyclical process of Navigating Care involving three stages: Determining Care Needs, Forming Relationship, and Having Confidence. The latter two processes are inter-related, and a feedback loop from them informs subsequent cycles of Determining Care Needs. If any of these steps fails to develop adequately, patients are likely to opt out of nurse-led care. Navigating Care explains how and why time, communication, continuity, and trust in general practitioners and nurses are important to patient satisfaction. It can be used to identify suitable patients for practice nurse-led care and to inform the practice and organization of practice nurse-led care so as to enhance patient satisfaction. © 2012 Blackwell Publishing Ltd.
41 CFR 102-37.50 - What is the general process for requesting surplus property for donation?
2010-07-01
... process for requesting surplus property for donation? 102-37.50 Section 102-37.50 Public Contracts and... REGULATION PERSONAL PROPERTY 37-DONATION OF SURPLUS PERSONAL PROPERTY General Provisions Donation Overview § 102-37.50 What is the general process for requesting surplus property for donation? The process for...
Generalized enthalpy model of a high-pressure shift freezing process
Smith, N. A. S.
2012-05-02
High-pressure freezing processes are a novel emerging technology in food processing, offering significant improvements to the quality of frozen foods. To be able to simulate plateau times and thermal history under different conditions, in this work, we present a generalized enthalpy model of the high-pressure shift freezing process. The model includes the effects of pressure on conservation of enthalpy and incorporates the freezing point depression of non-dilute food samples. In addition, the significant heat-transfer effects of convection in the pressurizing medium are accounted for by solving the two-dimensional Navier-Stokes equations. We run the model for several numerical tests where the food sample is agar gel, and find good agreement with experimental data from the literature. © 2012 The Royal Society.
Svitlana G. Lytvynova
2018-04-01
The article analyzes the historical formation of computer modeling as one of the promising directions of educational process development. The notion of a "system of computer modeling" (SCMod) is grounded, along with its conceptual model, its components (mathematical, animation, graphic, strategic), and the functions, principles, and purposes of its use. The features of organizing students' work with SCMod, individual and group work, and the formation of subject competencies are described, and the aspect of students' motivation to learn is considered. It is established that educational institutions can use SCMod at different levels and stages of training and in different contexts, which consist of interrelated physical, social, cultural, and technological aspects. The use of SCMod in general secondary schools would increase teachers' capacity to improve the training of students in natural and mathematical subjects and contribute to the individualization of the learning process, so as to match the pace, educational interests, and capabilities of each particular student. It is substantiated that the use of SCMod in the study of natural and mathematical subjects contributes to the formation of subject competencies, develops analysis and decision-making skills, increases the level of digital communication, develops vigilance, raises the level of knowledge, and increases students' attention span. Further research should justify the process of forming students' competencies in natural and mathematical subjects and the design of cognitive tasks using SCMod.
Moule, Pam; Clompus, Susan; Fieldhouse, Jon; Ellis-Jones, Julie; Barker, Jacqueline
2018-05-25
Underuse of anticoagulants in atrial fibrillation is known to increase the risk of stroke and is an international problem. The National Institute for Health and Care Excellence guidance CG180 seeks to reduce atrial fibrillation related strokes through prescriptions of non-vitamin K antagonist oral anticoagulants. A quality improvement programme was established by the West of England Academic Health Science Network (West of England AHSN) to implement this guidance in general practice. A realist evaluation identified whether the quality improvement programme worked, determining how and in what circumstances. Six general practices in one region became the case study sites. Quality improvement team, doctor, and pharmacist meetings within each of the practices were recorded at three stages: initial planning, review, and final. Additionally, 15 interviews with the practice leads explored their experiences of the quality improvement process. Observation and interview data were analysed and compared against the initial programme theory. The quality improvement resources available were used variably, with the training being valued by all. The initial programme theories were refined. In particular, local workload pressures and individual general practitioners' experiences and preconceived ideas were acknowledged. Where key motivators were in place, such as prior experience, the programme achieved optimal outcomes and secured a lasting quality improvement legacy. A quality improvement programme can deliver practice change and a lasting improvement legacy when particular mechanisms are employed and in contexts where there is a commitment to improve the service. © 2018 John Wiley & Sons, Ltd.
Mutual information identifies spurious Hurst phenomena in resting state EEG and fMRI data
von Wegner, Frederic; Laufs, Helmut; Tagliazucchi, Enzo
2018-02-01
Long-range memory in time series is often quantified by the Hurst exponent H, a measure of the signal's variance across several time scales. We analyze neurophysiological time series from electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) resting state experiments with two standard Hurst exponent estimators and with the time-lagged mutual information function applied to discretized versions of the signals. A confidence interval for the mutual information function is obtained from surrogate Markov processes with equilibrium distribution and transition matrix identical to the underlying signal. For EEG signals, we construct an additional mutual information confidence interval from a short-range correlated, tenth-order autoregressive model. We reproduce the previously described Hurst phenomenon (H > 0.5) in the analytical amplitude of alpha frequency band oscillations, in EEG microstate sequences, and in fMRI signals, but we show that the Hurst phenomenon occurs without long-range memory in the information-theoretical sense. We find that the mutual information function of neurophysiological data behaves differently from fractional Gaussian noise (fGn), for which the Hurst phenomenon is a sufficient condition to prove long-range memory. Two other well-characterized, short-range correlated stochastic processes (Ornstein-Uhlenbeck, Cox-Ingersoll-Ross) also yield H > 0.5, whereas their mutual information functions lie within the Markovian confidence intervals, similar to neural signals. In these processes, which do not have long-range memory by construction, a spurious Hurst phenomenon occurs due to slow relaxation times and heteroscedasticity (time-varying conditional variance). In summary, we find that mutual information correctly distinguishes long-range from short-range dependence in the theoretical and experimental cases discussed. Our results also suggest that the stationary fGn process is not sufficient to describe neural data, which
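The spurious Hurst phenomenon for a short-range correlated process is easy to reproduce. The sketch below applies a textbook rescaled-range (R/S) estimator to a slowly relaxing AR(1) discretization of an Ornstein-Uhlenbeck process; the window grid and relaxation parameter are illustrative assumptions, not the paper's estimators or data:

```python
import numpy as np

def hurst_rs(x):
    """Rescaled-range (R/S) estimate of the Hurst exponent: the slope of
    log(R/S) against log(window size) over a grid of window sizes."""
    n = len(x)
    sizes = np.unique(np.logspace(np.log10(8), np.log10(n // 4), 10).astype(int))
    rs = []
    for s in sizes:
        chunks = x[: (n // s) * s].reshape(-1, s)
        dev = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
        rng_ = dev.max(axis=1) - dev.min(axis=1)   # range of cumulative deviations
        sd = chunks.std(axis=1)
        rs.append((rng_[sd > 0] / sd[sd > 0]).mean())
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope

# A slowly relaxing Ornstein-Uhlenbeck process: short-range correlated by
# construction, yet R/S reports H well above 0.5 at scales below the
# relaxation time -- the spurious Hurst phenomenon discussed in the paper.
rng = np.random.default_rng(1)
theta, n = 0.005, 20000
x = np.zeros(n)
for i in range(1, n):
    x[i] = x[i - 1] * (1 - theta) + rng.normal()
h_ou = hurst_rs(x)
h_wn = hurst_rs(rng.normal(size=n))  # white-noise control, H near 0.5
print(h_ou > h_wn)
```

The slow relaxation (here about 1/theta = 200 samples) makes short windows look like a random walk, inflating the fitted slope even though the process is Markovian.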
A Dirichlet process mixture of generalized Dirichlet distributions for proportional data modeling.
Bouguila, Nizar; Ziou, Djemel
2010-01-01
In this paper, we propose a clustering algorithm based on both Dirichlet processes and generalized Dirichlet distribution which has been shown to be very flexible for proportional data modeling. Our approach can be viewed as an extension of the finite generalized Dirichlet mixture model to the infinite case. The extension is based on nonparametric Bayesian analysis. This clustering algorithm does not require the specification of the number of mixture components to be given in advance and estimates it in a principled manner. Our approach is Bayesian and relies on the estimation of the posterior distribution of clusterings using Gibbs sampler. Through some applications involving real-data classification and image databases categorization using visual words, we show that clustering via infinite mixture models offers a more powerful and robust performance than classic finite mixtures.
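The nonparametric prior underlying such infinite mixtures can be sketched via the standard stick-breaking construction of the Dirichlet process. The truncation level and concentration parameter below are illustrative; this is not the authors' Gibbs sampler:

```python
import numpy as np

def stick_breaking(alpha, n_atoms, rng):
    """Stick-breaking construction of Dirichlet process weights:
    v_k ~ Beta(1, alpha), w_k = v_k * prod_{j<k} (1 - v_j).
    Truncated at n_atoms components; for large n_atoms the leftover
    stick mass is negligible."""
    v = rng.beta(1.0, alpha, size=n_atoms)
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return w

rng = np.random.default_rng(0)
w = stick_breaking(alpha=2.0, n_atoms=1000, rng=rng)
print(round(w.sum(), 3))  # 1.0 -- the weights form a discrete distribution
```

In the infinite mixture, each weight w_k is paired with a generalized Dirichlet component drawn from the base measure, so the effective number of clusters is inferred rather than fixed in advance.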
Forbidden Raman scattering processes. I. General considerations and E1--M1 scattering
Harney, R.C.
1979-01-01
The generalized theory of forbidden Raman scattering processes is developed in terms of the multipole expansion of the electromagnetic interaction Hamiltonian. Using the general expressions, the theory of electric dipole--magnetic dipole (E1--M1) Raman scattering is derived in detail. The ^1S_0 → ^3P_1 E1--M1 Raman scattering cross section in atomic magnesium is calculated for two applicable laser wavelengths using published f-value data. Since resonantly enhanced cross sections larger than 10^-29 cm^2/sr are predicted, it should be possible to observe this scattering phenomenon experimentally. In addition, by measuring the frequency dependence of the cross section near resonance, it may be possible to directly determine the relative magnitudes of the A·p and A·A contributions to the scattering cross section. Finally, possible applications of the effect in atomic and molecular physics are discussed.
Non-rigid ultrasound image registration using generalized relaxation labeling process
Lee, Jong-Ha; Seong, Yeong Kyeong; Park, MoonHo; Woo, Kyoung-Gu; Ku, Jeonghun; Park, Hee-Jun
2013-03-01
This research proposes a novel non-rigid registration method for ultrasound images. The most predominant anatomical features in medical images are tissue boundaries, which appear as edges. In ultrasound images, however, other features can be identified as well due to the specular reflections that appear as bright lines superimposed on the ideal edge location. In this work, an image's local phase information (via the frequency domain) is used to find the ideal edge location. The generalized relaxation labeling process is then formulated to align the feature points extracted from the ideal edge location. In this work, the original relaxation labeling method was generalized by taking n compatibility coefficient values to improve non-rigid registration performance. This contextual information combined with a relaxation labeling process is used to search for a correspondence. Then the transformation is calculated by the thin plate spline (TPS) model. These two processes are iterated until the optimal correspondence and transformation are found. We have tested our proposed method and the state-of-the-art algorithms with synthetic data and bladder ultrasound images of in vivo human subjects. Experiments show that the proposed method improves registration performance significantly, as compared to other state-of-the-art non-rigid registration algorithms.
Crawford, Forrest W.; Suchard, Marc A.
2011-01-01
A birth-death process is a continuous-time Markov chain that counts the number of particles in a system over time. In the general process with n current particles, a new particle is born with instantaneous rate λn and a particle dies with instantaneous rate μn. Currently no robust and efficient method exists to evaluate the finite-time transition probabilities in a general birth-death process with arbitrary birth and death rates. In this paper, we first revisit the theory of continued fractions to obtain expressions for the Laplace transforms of these transition probabilities and make explicit an important derivation connecting transition probabilities and continued fractions. We then develop an efficient algorithm for computing these probabilities that analyzes the error associated with approximations in the method. We demonstrate that this error-controlled method agrees with known solutions and outperforms previous approaches to computing these probabilities. Finally, we apply our novel method to several important problems in ecology, evolution, and genetics. PMID:21984359
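For a truncated state space, such finite-time transition probabilities can be checked against a brute-force matrix exponential of the generator, which is the approach the continued-fraction method is designed to outperform for large or arbitrary-rate systems. The rates and truncation level below are illustrative:

```python
import numpy as np
from scipy.linalg import expm

def bd_transition_matrix(birth, death, t):
    """Finite-time transition probabilities of a truncated birth-death
    chain: build the tridiagonal generator Q and exponentiate it.
    P(t) = exp(Q t), so row i gives the distribution at time t starting
    from i particles."""
    n = len(birth)
    Q = np.zeros((n, n))
    for i in range(n):
        if i + 1 < n:
            Q[i, i + 1] = birth[i]   # birth: i -> i+1 at rate lambda_i
        if i > 0:
            Q[i, i - 1] = death[i]   # death: i -> i-1 at rate mu_i
        Q[i, i] = -Q[i].sum()        # rows of a generator sum to zero
    return expm(Q * t)

# Linear birth-death rates (lambda_n = 0.5 n, mu_n = 0.3 n), truncated at 50.
n = 50
birth = 0.5 * np.arange(n)
death = 0.3 * np.arange(n)
P = bd_transition_matrix(birth, death, t=1.0)
print(bool(np.allclose(P.sum(axis=1), 1.0)))  # True -- rows are distributions
```

The matrix exponential scales poorly with the truncation size and can be inaccurate for stiff rate sequences, which is precisely the gap the paper's continued-fraction evaluation of the Laplace transforms addresses.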
Kerr-de Sitter spacetime, Penrose process, and the generalized area theorem
Bhattacharya, Sourav
2018-04-01
We investigate various aspects of energy extraction via the Penrose process in the Kerr-de Sitter spacetime. We show that an increase in the value of a positive cosmological constant, Λ, always reduces the efficiency of this process. The Kerr-de Sitter spacetime has two ergospheres, associated with the black hole and the cosmological event horizons. We prove, by analyzing the turning points of the trajectory, that the Penrose process in the cosmological ergoregion is never possible. We next show that in this process the areas of both the black hole and the cosmological event horizons increase; the latter occurs when the particle coming from the black hole ergoregion escapes through the cosmological event horizon. We identify a new, local mass function, instead of the mass parameter, to prove this generalized area theorem. This mass function accounts for the local spacetime energy due to the cosmological constant as well, including that which arises from the frame-dragging effect of spacetime rotation. While the current observed value of Λ is quite small, its effect on this process could be considerable in the early Universe scenario, where its value is much larger and the two horizons could have comparable sizes. In particular, the various results obtained here are also evaluated in a triply degenerate limit of the Kerr-de Sitter spacetime that we find, in which the radial values of the inner, black hole, and cosmological event horizons are nearly coincident.
Olexandr Tyhorskyy
2015-10-01
Purpose: to improve the training methods of highly skilled bodybuilders. Material and Methods: the study involved eight highly skilled athletes, members of the Ukrainian national bodybuilding team. Results: the most commonly used methods of organizing the training process in bodybuilding are compared. An optimal training method for highly skilled bodybuilders during the general preparatory phase of the preparatory period was developed and substantiated; it can increase athletes' body weight through its muscle component. Conclusions: using the dynamic load factor to raise the intensity of training loads helps, in particular, to increase the volume of the shoulder muscles.
Correia, J R C C C; Martins, C J A P
2017-10-01
Topological defects unavoidably form at symmetry breaking phase transitions in the early universe. To probe the parameter space of theoretical models and set tighter experimental constraints (exploiting the recent advances in astrophysical observations), one requires more and more demanding simulations, and therefore more hardware resources and computation time. Improving the speed and efficiency of existing codes is essential. Here we present a general purpose graphics-processing-unit implementation of the canonical Press-Ryden-Spergel algorithm for the evolution of cosmological domain wall networks. This is ported to the Open Computing Language standard, and as a consequence significant speedups are achieved both in two-dimensional (2D) and 3D simulations.
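The kernel that such a GPU port parallelizes is a simple stencil update of the damped field equation. Below is a minimal 2D NumPy sketch of a PRS-style step; the damping coefficient, potential, grid and time step are illustrative assumptions, not the paper's code:

```python
import numpy as np

def prs_step(phi, phi_dot, dx, dt, eta, alpha=2.0, lam=1.0):
    """One explicit update of a PRS-style equation for a domain-wall
    field phi in 2D: phi'' + (alpha/eta) phi' = laplacian(phi) - V'(phi),
    with the double-well potential V(phi) = lam * (phi^2 - 1)^2 and a
    semi-implicit treatment of the Hubble-like damping term."""
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
           np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi) / dx**2
    dV = 4.0 * lam * phi * (phi**2 - 1.0)    # V'(phi)
    damp = alpha / eta
    phi_dot = ((1.0 - 0.5 * damp * dt) * phi_dot +
               dt * (lap - dV)) / (1.0 + 0.5 * damp * dt)
    return phi + dt * phi_dot, phi_dot

# Random initial conditions relax into domains separated by walls.
rng = np.random.default_rng(0)
phi = rng.uniform(-1.0, 1.0, (64, 64))
phi_dot = np.zeros_like(phi)
eta, dt = 1.0, 0.1
for _ in range(100):
    phi, phi_dot = prs_step(phi, phi_dot, dx=1.0, dt=dt, eta=eta)
    eta += dt
print(bool(np.isfinite(phi).all()))
```

Every lattice site is updated independently from its neighbours' previous values, which is why the scheme maps so naturally onto OpenCL work-items.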
Convexity of Ruin Probability and Optimal Dividend Strategies for a General Lévy Process
Chuancun Yin
2015-01-01
We consider the optimal dividends problem for a company whose cash reserves follow a general Lévy process with certain positive jumps and arbitrary negative jumps. The objective is to find a policy which maximizes the expected discounted dividends until the time of ruin. Under appropriate conditions, we use some recent results in the theory of potential analysis of subordinators to obtain the convexity properties of probability of ruin. We present conditions under which the optimal dividend strategy, among all admissible ones, takes the form of a barrier strategy.
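A barrier strategy is easy to explore numerically. The sketch below values dividends by Monte Carlo under a Brownian-motion stand-in for the Lévy reserve process; the diffusion approximation itself and all parameter values are illustrative assumptions, not the paper's model:

```python
import numpy as np

def barrier_dividend_value(x0, barrier, mu, sigma, rate, T, dt, paths, seed=0):
    """Monte Carlo value of a barrier dividend strategy: reserves follow
    an arithmetic Brownian motion; anything above `barrier` is paid out
    immediately as a dividend, and payments stop at ruin (reserves <= 0).
    Returns the mean discounted dividend stream over the horizon T."""
    rng = np.random.default_rng(seed)
    x = np.full(paths, float(x0))
    alive = np.ones(paths, dtype=bool)
    value = np.zeros(paths)
    for k in range(int(T / dt)):
        dx = mu * dt + sigma * np.sqrt(dt) * rng.normal(size=paths)
        x[alive] += dx[alive]
        excess = np.where(alive, np.maximum(x - barrier, 0.0), 0.0)
        value += np.exp(-rate * k * dt) * excess   # discounted dividend payout
        x = np.minimum(x, barrier)                 # reflect reserves at the barrier
        alive &= x > 0.0                           # ruin stops the dividend stream
    return value.mean()

v = barrier_dividend_value(x0=5.0, barrier=10.0, mu=1.0, sigma=2.0,
                           rate=0.05, T=20.0, dt=0.01, paths=2000)
print(v > 0.0)
```

Sweeping the barrier level in such a simulation illustrates the trade-off the paper formalizes: a low barrier pays dividends early but hastens ruin, while a high barrier delays payouts.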
Convexity of Ruin Probability and Optimal Dividend Strategies for a General Lévy Process
Yuen, Kam Chuen; Shen, Ying
2015-01-01
We consider the optimal dividends problem for a company whose cash reserves follow a general Lévy process with certain positive jumps and arbitrary negative jumps. The objective is to find a policy which maximizes the expected discounted dividends until the time of ruin. Under appropriate conditions, we use some recent results in the theory of potential analysis of subordinators to obtain the convexity properties of probability of ruin. We present conditions under which the optimal dividend strategy, among all admissible ones, takes the form of a barrier strategy. PMID:26351655
Correia, J. R. C. C. C.; Martins, C. J. A. P.
2017-10-01
Topological defects unavoidably form at symmetry breaking phase transitions in the early universe. To probe the parameter space of theoretical models and set tighter experimental constraints (exploiting the recent advances in astrophysical observations), one requires more and more demanding simulations, and therefore more hardware resources and computation time. Improving the speed and efficiency of existing codes is essential. Here we present a general purpose graphics-processing-unit implementation of the canonical Press-Ryden-Spergel algorithm for the evolution of cosmological domain wall networks. This is ported to the Open Computing Language standard, and as a consequence significant speedups are achieved both in two-dimensional (2D) and 3D simulations.
Generalized Fractional Processes with Long Memory and Time Dependent Volatility Revisited
M. Shelton Peiris
2016-09-01
In recent years, fractionally-differenced processes have received a great deal of attention due to their flexibility in financial applications with long memory. This paper revisits the class of generalized fractionally-differenced processes generated by Gegenbauer polynomials and the ARMA structure (GARMA), with both long memory and time-dependent innovation variance. We establish the existence and uniqueness of second-order solutions. We also extend this family to allow innovations to follow GARCH and stochastic volatility (SV) processes. Under certain regularity conditions, we give asymptotic results for the approximate maximum likelihood estimator for the GARMA-GARCH model. We discuss a Monte Carlo likelihood method for the GARMA-SV model and investigate finite sample properties via Monte Carlo experiments. Finally, we illustrate the usefulness of this approach using monthly inflation rates for France, Japan and the United States.
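The long-memory part of a GARMA model is the Gegenbauer filter (1 - 2uB + B^2)^(-d). As a small illustration (not the paper's estimation method), its MA-infinity weights can be generated with the standard three-term Gegenbauer polynomial recursion; the function name and interface are our own:

```python
def gegenbauer_weights(d, u, n):
    """Return c_0..c_n with (1 - 2*u*B + B**2)**(-d) = sum_j c_j * B**j.
    c_j is the Gegenbauer polynomial C_j^(d)(u), built by the recursion
    j*c_j = 2*u*(j + d - 1)*c_{j-1} - (j + 2*d - 2)*c_{j-2}."""
    c = [1.0, 2.0 * d * u]
    for j in range(2, n + 1):
        c.append((2.0 * u * (j + d - 1.0) * c[-1]
                  - (j + 2.0 * d - 2.0) * c[-2]) / j)
    return c[:n + 1]
```

For u = 1 the filter collapses to the ARFIMA operator (1 - B)^(-2d); for instance d = 0.5, u = 1 gives the weights of (1 - B)^(-1), which are all equal to 1.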
General description of few-body break-up processes at threshold
Barrachina, R.O.
2004-01-01
In this communication we present a general description of the behavior of fragmentation processes near threshold by analyzing the break-up into two, three and N bodies in steps of increasing complexity. In particular, we describe the effects produced by an N-body threshold behavior in N+1 body break-up processes, as it occurs in situations where one of the fragments acquires almost all the excess energy of the system. Furthermore, we relate the appearance of cusps and discontinuities in single-particle multiply differential cross sections to the threshold behavior of the remaining particles, and apply these ideas to different systems from atomic, molecular and nuclear collision physics. We finally show that, even though the study of ultracold collisions represents the direct way of gathering information on a break-up system near threshold, the analysis of high-energy collisions provides an alternative, and sometimes advantageous, approach.
Naylor, Larissa; Coombes, Martin; Sewell, Jack; White, Anissia
2014-05-01
Coastal processes shape the coast into a variety of eye-catching and enticing landforms that attract people to marvel at, relax and enjoy coastal geomorphology. Field guides explaining these processes (and the geodiversity that results) to the general public and children are few and far between. In contrast, there is a relative wealth of resources and organised activities introducing people to coastal wildlife, especially on rocky shores. These biological resources typically focus on the biology and climatic controls on their distribution, rather than how the biology interacts with its physical habitat. As an outcome of two recent rock coast biogeomorphology projects (www.biogeomorph.org/coastal/coastaldefencedbiodiversity and www.biogeomorph.org/coastal/bioprotection), we produced the first known guide to understanding how biogeomorphological processes help create coastal landforms. The 'Shore Shapers' guide (www.biogeomorph.org/coastal/shoreshapers) is designed to: a) bring biotic-geomorphic interactions to life and b) introduce some of the geomorphological and geological controls on biogeomorphic processes and landform development. The guide provides scientific information in an accessible and interactive way, to help sustain children's interest and extend their learning. We tested a draft version of our guide with children, the general public and volunteers on rocky shore rambles using social science techniques; of 74 respondents, 75.6% were more interested in understanding how rock pools (i.e. coastal landforms) develop after seeing the guide. Respondents' opinions about key bioprotective species also changed as a result of seeing the guide: 58% of people found barnacles unattractive before they saw the guide, whilst 36% of respondents were more interested in barnacles after seeing the guide. These results demonstrate that there is considerable interest in more educational materials on coastal biogeomorphology and geodiversity.
Degradation data analysis based on a generalized Wiener process subject to measurement error
Li, Junxing; Wang, Zhihua; Zhang, Yongbo; Fu, Huimin; Liu, Chengrui; Krishnaswamy, Sridhar
2017-09-01
Wiener processes have received considerable attention in degradation modeling over the last two decades. In this paper, we propose a generalized Wiener process degradation model that takes unit-to-unit variation, time-correlated structure and measurement error into consideration simultaneously. The constructed methodology subsumes a series of models studied in the literature as limiting cases. A simple method is given to determine the transformed time scale forms of the Wiener process degradation model. Then model parameters can be estimated based on a maximum likelihood estimation (MLE) method. The cumulative distribution function (CDF) and the probability density function (PDF) of the Wiener process with measurement errors are given based on the concept of the first hitting time (FHT). The percentiles of performance degradation (PD) and failure time distribution (FTD) are also obtained. Finally, a comprehensive simulation study is accomplished to demonstrate the necessity of incorporating measurement errors in the degradation model and the efficiency of the proposed model. Two illustrative real applications involving the degradation of carbon-film resistors and the wear of sliding metal are given. The comparative results show that the constructed approach can derive a reasonable result and an enhanced inference precision.
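The model structure can be mimicked in a few lines: a linear-drift Wiener process with unit-to-unit drift variation, observed through additive measurement error, with failure defined by first hitting of a threshold. This is an illustrative simulation only; the parameter values, names, and time scale are our assumptions, not the paper's.

```python
import random

def mean_first_hitting_time(n_units=200, n_steps=100, dt=0.1, mu=1.0,
                            mu_sd=0.2, sigma=0.3, meas_sd=0.1,
                            threshold=5.0, seed=0):
    """Simulate degradation X(t) = drift*t + sigma*B(t) per unit, with
    drift ~ N(mu, mu_sd) (unit-to-unit variation). The observed value is
    Y = X + N(0, meas_sd) (measurement error). Returns the average first
    hitting time of the threshold over the units that fail in time."""
    rng = random.Random(seed)
    hits = []
    for _ in range(n_units):
        drift = rng.gauss(mu, mu_sd)
        x = 0.0
        for k in range(1, n_steps + 1):
            x += drift * dt + sigma * rng.gauss(0.0, 1.0) * dt ** 0.5
            y = x + rng.gauss(0.0, meas_sd)   # what a noisy gauge would record
            if x >= threshold:
                hits.append(k * dt)
                break
    return sum(hits) / len(hits) if hits else None
```

With these defaults the estimated mean first hitting time lands near threshold/mu = 5, the value a deterministic drift-only model would predict.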
Allu, Srikanth [ORNL]; Velamur Asokan, Badri [Exxon Mobil Research and Engineering]; Shelton, William A [Louisiana State University]; Philip, Bobby [ORNL]; Pannala, Sreekanth [ORNL]
2014-01-01
A generalized three-dimensional computational model based on a unified formulation of the electrode-electrolyte-electrode system of an electric double layer supercapacitor has been developed. The model accounts for charge transport across the solid-liquid system. This formulation, based on a volume averaging process, is a widely used concept for multiphase flow equations ([28] [36]) and is analogous to the porous media theory typically employed for electrochemical systems [22] [39] [12]. This formulation is extended to the electrochemical equations for a supercapacitor in a consistent fashion, which allows for a single-domain approach with no need for explicit interfacial boundary conditions as previously employed ([38]). In this model it is easy to introduce spatio-temporal variations and anisotropies of physical properties, and it is also conducive to introducing any upscaled parameters from lower length-scale simulations and experiments. Due to irregular geometric configurations, including the porous electrode, the charge transport and subsequent performance characteristics of the supercapacitor can be easily captured in higher dimensions. A generalized model of this nature also provides insight into the applicability of 1D models ([38]) and where multidimensional effects need to be considered. In addition, a simple sensitivity analysis on key input parameters is performed in order to ascertain the dependence of the charge and discharge processes on these parameters. Finally, we demonstrated how this new formulation can be applied to non-planar supercapacitors.
Michele Biasutti
2017-06-01
Improvisation is an articulated multidimensional activity based on an extemporaneous creative performance. Practicing improvisation, participants expand sophisticated skills such as sensory and perceptual encoding, memory storage and recall, motor control, and performance monitoring. Improvisation abilities have been developed following several methodologies mainly with a product-oriented perspective. A model framed under the socio-cultural theory of learning for designing didactic activities on processes instead of outcomes is presented in the current paper. The challenge is to overcome the mere instructional dimension of some practices of teaching improvisation by designing activities that stimulate self-regulated learning strategies in the students. In the article the present thesis is declined in three ways, concerning the following three possible areas of application: (1) high-level musical learning, (2) musical pedagogy with children, (3) general pedagogy. The applications in the music field focusing mainly on an expert's use of improvisation are discussed. The last section considers how these ideas should transcend music studies, presenting the benefits and the implications of improvisation activities for general learning. Moreover, the application of music education to the following cognitive processes are discussed: anticipation, use of repertoire, emotive communication, feedback and flow. These characteristics could be used to outline a pedagogical method for teaching music improvisation based on the development of reflection, reasoning, and meta-cognition.
DNA Processing and Reassembly on General Purpose FPGA-based Development Boards
SZÁSZ Csaba
2017-05-01
The great majority of researchers involved in microelectronics generally agree that many scientific challenges in the life sciences have associated with them a powerful computational requirement that must be solved before scientific progress can be made. The current trend in Deoxyribonucleic Acid (DNA) computing technologies is to develop special hardware platforms capable of providing the needed processing performance at lower cost. In this endeavor, FPGA-based (Field Programmable Gate Array) configurations aimed at accelerating genome sequencing and reassembly play a leading role. This paper emphasizes the benefits and advantages of using general purpose FPGA-based development boards in DNA reassembly applications, besides special hardware architecture solutions. An original approach is unfolded which outlines the versatility of high performance ready-to-use manufacturer development platforms endowed with powerful hardware resources fully optimized for high speed processing applications. The theoretical arguments are supported via an intuitive implementation example where the designer is discharged from any hardware development effort and can concentrate exclusively on software design issues, providing greatly reduced application development cycles. The experiments prove that such boards available on the market are suitable to fulfill a wide range of DNA sequencing and reassembly applications.
General emotion processing in social anxiety disorder: neural issues of cognitive control.
Brühl, Annette Beatrix; Herwig, Uwe; Delsignore, Aba; Jäncke, Lutz; Rufer, Michael
2013-05-30
Anxiety disorders are characterized by deficient emotion regulation prior to and in anxiety-evoking situations. Patients with social anxiety disorder (SAD) have increased brain activation also during the anticipation and perception of non-specific emotional stimuli, pointing to biased general emotion processing. In the current study we addressed the neural correlates of emotion regulation by cognitive control during the anticipation and perception of non-specific emotional stimuli in patients with SAD. Thirty-two patients with SAD underwent functional magnetic resonance imaging during the announced anticipation and perception of emotional stimuli. Half of them were trained and instructed to apply reality-checking as a control strategy; the others anticipated and perceived the stimuli. Reality checking significantly modulated brain activation during the anticipation and perception of negative emotional stimuli. The medial prefrontal cortex was comparably active in both groups (p>0.50). The results suggest that cognitive control in patients with SAD influences emotion processing structures, supporting the usefulness of emotion regulation training in the psychotherapy of SAD. In contrast to studies in healthy subjects, cognitive control was not associated with increased activation of prefrontal regions in SAD. This points to possibly disturbed general emotion regulating circuits in SAD. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Biasutti, Michele
2017-01-01
Improvisation is an articulated multidimensional activity based on an extemporaneous creative performance. Practicing improvisation, participants expand sophisticated skills such as sensory and perceptual encoding, memory storage and recall, motor control, and performance monitoring. Improvisation abilities have been developed following several methodologies mainly with a product-oriented perspective. A model framed under the socio-cultural theory of learning for designing didactic activities on processes instead of outcomes is presented in the current paper. The challenge is to overcome the mere instructional dimension of some practices of teaching improvisation by designing activities that stimulate self-regulated learning strategies in the students. In the article the present thesis is declined in three ways, concerning the following three possible areas of application: (1) high-level musical learning, (2) musical pedagogy with children, (3) general pedagogy. The applications in the music field focusing mainly on an expert's use of improvisation are discussed. The last section considers how these ideas should transcend music studies, presenting the benefits and the implications of improvisation activities for general learning. Moreover, the application of music education to the following cognitive processes are discussed: anticipation, use of repertoire, emotive communication, feedback and flow. These characteristics could be used to outline a pedagogical method for teaching music improvisation based on the development of reflection, reasoning, and meta-cognition.
Perusich, Stephen; Moos, Thomas; Muscatello, Anthony
2011-01-01
This innovation provides the user with autonomous on-screen monitoring, embedded computations, and tabulated output for two new processes. The software was originally written for the Continuous Lunar Water Separation Process (CLWSP), but was found to be general enough to be applicable to the Lunar Greenhouse Amplifier (LGA) as well, with minor alterations. The resultant program should have general applicability to many laboratory processes (see figure). The objective for these programs was to create a software application that would provide both autonomous monitoring and data storage, along with manual manipulation. The software also allows operators the ability to input experimental changes and comments in real time without modifying the code itself. Common process elements, such as thermocouples, pressure transducers, and relative humidity sensors, are easily incorporated into the program in various configurations, along with specialized devices such as photodiode sensors. The goal of the CLWSP research project is to design, build, and test a new method to continuously separate, capture, and quantify water from a gas stream. The application is any In-Situ Resource Utilization (ISRU) process that desires to extract or produce water from lunar or planetary regolith. The present work is aimed at circumventing current problems and ultimately producing a system capable of continuous operation at moderate temperatures that can be scaled over a large capacity range depending on the ISRU process. The goal of the LGA research project is to design, build, and test a new type of greenhouse that could be used on the moon or Mars. The LGA uses super greenhouse gases (SGGs) to absorb long-wavelength radiation, thus creating a highly efficient greenhouse at a future lunar or Mars outpost. Silica-based glass, although highly efficient at trapping heat, is heavy, fragile, and not suitable for space greenhouse applications. Plastics are much lighter and resilient, but are not
A national general pediatric clerkship curriculum: the process of development and implementation.
Olson, A L; Woodhead, J; Berkow, R; Kaufman, N M; Marshall, S G
2000-07-01
To describe a new national general pediatrics clerkship curriculum, the development process that built national support for its use, and current progress in implementing the curriculum in pediatric clerkships at US allopathic medical schools. CURRICULUM DEVELOPMENT: A curriculum project team of pediatric clerkship directors and an advisory committee representing professional organizations invested in pediatric student education developed the format and content in collaboration with pediatric educators from the Council on Medical Student Education in Pediatrics (COMSEP) and the Ambulatory Pediatric Association (APA). An iterative process of review by clerkship directors, pediatric departmental chairs, and students finalized the content and built support for the final product. The national dissemination process resulted in consensus among pediatric educators that this curriculum should be used as the national curricular guideline for clerkships. MONITORING IMPLEMENTATION: Surveys were mailed to all pediatric clerkship directors before dissemination (November 1994), and in the first and third academic years after national dissemination (March 1996 and September 1997). The 3 surveys assessed schools' implementation of specific components of the curriculum. The final survey also assessed ways the curriculum was used and barriers to implementation. The final curriculum provided objectives and competencies for attitudes, skills, and 18 knowledge areas of general pediatrics. A total of 216 short clinical cases were also provided as an alternative learning method. An accompanying resource manual provided suggested strategies for implementation, teaching, and evaluation. A total of 103 schools responded to survey 1; 84 schools to survey 2; and 85 schools responded to survey 3 from the 125 medical schools surveyed. Before dissemination, 16% of schools were already using the clinical cases. In the 1995-1996 academic year, 70% of schools were using some or all of the curricular
Generalized hardware post-processing technique for chaos-based pseudorandom number generators
Barakat, Mohamed L.
2013-06-01
This paper presents a generalized post-processing technique for enhancing the pseudorandomness of digital chaotic oscillators through a nonlinear XOR-based operation with rotation and feedback. The technique allows full utilization of the chaotic output as pseudorandom number generators and improves throughput without a significant area penalty. Digital design of a third-order chaotic system with maximum function nonlinearity is presented with verified chaotic dynamics. The proposed post-processing technique eliminates statistical degradation in all output bits, thus maximizing throughput compared to other processing techniques. Furthermore, the technique is applied to several fully digital chaotic oscillators with performance surpassing previously reported systems in the literature. The enhancement in the randomness is further examined in a simple image encryption application resulting in a better security performance. The system is verified through experiment on a Xilinx Virtex 4 FPGA with throughput up to 15.44 Gbit/s and logic utilization less than 0.84% for 32-bit implementations. © 2013 ETRI.
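A simplified software analogue of the described post-processing (a nonlinear XOR-based operation with rotation and feedback) can be written as follows. The logistic map is only a stand-in raw source for illustration, not the paper's third-order chaotic oscillator, and all names are ours:

```python
def rotl32(x, r):
    """Rotate a 32-bit word left by r bits."""
    return ((x << r) | (x >> (32 - r))) & 0xFFFFFFFF

def postprocess(words, rot=7):
    """XOR each raw 32-bit sample with a rotated copy of the previous
    processed output (the feedback path), breaking up statistical
    structure in the raw chaotic bits."""
    state, out = 0, []
    for w in words:
        state = (w ^ rotl32(state, rot)) & 0xFFFFFFFF
        out.append(state)
    return out

def logistic_words(n, x=0.123456, r=3.99):
    """Crude 32-bit samples from a logistic map (illustration only;
    not a vetted entropy source)."""
    ws = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        ws.append(int(x * (1 << 32)) & 0xFFFFFFFF)
    return ws
```

In hardware the same XOR/rotate/feedback step costs little area, which is why the paper reports throughput gains without a significant area penalty.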
A Business Process Management System based on a General Optimum Criterion
Vasile MAZILESCU
2009-01-01
Business Process Management Systems (BPMS) provide a broad range of facilities to manage operational business processes. These systems should provide support for the complete Business Process Management (BPM) life-cycle [16]: (re)design, configuration, execution, control, and diagnosis of processes. BPMS can be seen as successors of Workflow Management (WFM) systems. However, already in the seventies people were working on office automation systems which are comparable with today's WFM systems. Recently, WFM vendors started to position their systems as BPMS. Our paper's goal is a proposal for a Tasks-to-Workstations Assignment Algorithm (TWAA) for assembly lines, which is a special implementation of a stochastic descent technique in the context of BPMS, especially at the control level. Both cases, single and mixed-model, are treated. For a family of product models having the same generic structure, the mixed-model assignment problem can be formulated through an equivalent single-model problem. A general optimum criterion is considered. As with assembly line balancing, this kind of optimisation problem leads to a graph partitioning problem meeting precedence and feasibility constraints. The proposed definition for the "neighbourhood" function involves an efficient way of treating the partition and precedence constraints. Moreover, the Stochastic Descent Technique (SDT) allows an implicit treatment of the feasibility constraint. The proposed algorithm converges with probability 1 to an optimal solution.
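The flavour of stochastic descent used by a tasks-to-workstations assignment algorithm can be sketched on a toy version of the problem. The precedence and feasibility constraints of the paper's TWAA are omitted here; only the descent mechanics (propose a random neighbour, keep non-worsening moves) are illustrated, and all names and parameters are our assumptions:

```python
import random

def stochastic_descent(times, n_stations, iters=2000, seed=0):
    """Assign tasks to workstations so that the maximum station workload
    (the cycle time) shrinks: propose a random single-task move and keep
    it only if the cycle time does not increase."""
    rng = random.Random(seed)
    assign = [i % n_stations for i in range(len(times))]

    def cycle_time(a):
        loads = [0.0] * n_stations
        for task, st in enumerate(a):
            loads[st] += times[task]
        return max(loads)

    best = cycle_time(assign)
    for _ in range(iters):
        task = rng.randrange(len(times))
        old = assign[task]
        assign[task] = rng.randrange(n_stations)   # random neighbour
        cost = cycle_time(assign)
        if cost <= best:
            best = cost                            # accept non-worsening move
        else:
            assign[task] = old                     # revert worsening move
    return best, assign
```

Accepting equal-cost moves lets the search drift across plateaus, which is one reason such descent schemes can reach an optimum with probability 1 given enough iterations.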
Nascimento, M.A.C. do
1992-01-01
A Generalized Multi Structural (GMS) wave function is presented which combines the advantages of the SCF-MO and VB models, preserving the classical chemical structures but optimizing the orbitals in a self-consistent way. This wave function is particularly suitable to treat situations where the description of the molecular state requires localized wave functions. It also provides a very convenient way of treating the electron correlation problem, avoiding large CI expansions. The final wave functions are much more compact and easier to interpret than the ones obtained by the conventional methods, using orthogonal orbitals. Applications of the GMS wave function to the study of the photoelectron spectra of the trans-glyoxal molecule and to electron impact excitation processes in the nitrogen molecule are presented as an illustration of the method. (author)
'Sheiva': a general purpose multi-parameter data acquisition and processing system at VECC
Viyogi, Y.P.; Ganguly, N.K.
1982-01-01
A general purpose interactive software to be used with the PDP-15/76 on-line computer at VEC Centre for the acquisition and processing of data in nuclear physics experiments is described. The program can accommodate a maximum of thirty two inputs although the present hardware limits the number of inputs to eight. Particular emphasis is given to the problems of flexibility and ease of operation, memory optimisation and techniques dealing with experimenter-computer interaction. Various graphical methods for one- and two-dimensional data presentation are discussed. Specific problems of particle identification using detector telescopes have been dealt with carefully to handle experiments using several detector telescopes and those involving light particle-heavy particle coincidence studies. Steps needed to tailor this program towards utilisation for special experiments are also described. (author)
Attention allocation: Relationships to general working memory or specific language processing.
Archibald, Lisa M D; Levee, Tyler; Olino, Thomas
2015-11-01
Attention allocation, updating working memory, and language processing are interdependent cognitive tasks related to the focused direction of limited resources, refreshing and substituting information in the current focus of attention, and receiving/sending verbal communication, respectively. The current study systematically examined the relationship among executive attention, working memory executive skills, and language abilities while adjusting for individual differences in short-term memory. School-age children completed a selective attention task requiring them to recall whether a presented shape was in the same place as a previous target shape shown in an array imposing a low or high working memory load. Results revealed a selective attention cost when working above but not within memory span capacity. Measures of general working memory were positively related to overall task performance, whereas language abilities were related to response time. In particular, higher language skills were associated with faster responses under low load conditions. These findings suggest that attentional control and storage demands have an additive impact on working memory resources but provide only limited evidence for a domain-general mechanism in language learning. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.
Yang, Chih-Hao; Huang, Chiung-Chun; Hsu, Kuei-Sen
2011-09-01
Repetitive replay of fear memories may precipitate the occurrence of post-traumatic stress disorder and other anxiety disorders. Hence, the suppression of fear memory retrieval may help prevent and treat these disorders. The formation of fear memories is often linked to multiple environmental cues and these interconnected cues may act as reminders for the recall of traumatic experiences. However, as a convenience, a simple paradigm of one cue pairing with the aversive stimulus is usually used in studies of fear conditioning in animals. Here, we built a more complex fear conditioning model by presenting several environmental stimuli during fear conditioning and characterized the effectiveness of extinction training and the disruption of the reconsolidation process on the expression of learned fear responses. We demonstrate that extinction training with a single paired cue resulted in cue-specific attenuation of fear responses, but responses to other cues were unchanged. The cue-specific nature of the extinction persisted even when training sessions were combined with D-cycloserine treatment, revealing a significant weakness in extinction-based treatment. In contrast, the inhibition of the dorsal hippocampus (DH)- but not the basolateral amygdala (BLA)-dependent memory reconsolidation process using either protein synthesis inhibitors or genetic disruption of cAMP-response-element-binding protein-mediated transcription comprehensively disrupted the learned connections between fear responses and all paired environmental cues. These findings emphasize the distinct roles of the DH and the BLA in the reconsolidation process of fear memories and further indicate that disruption of the memory reconsolidation process in the DH may result in generalization of fear inhibition.
A general-purpose process modelling framework for marine energy systems
Dimopoulos, George G.; Georgopoulou, Chariklia A.; Stefanatos, Iason C.; Zymaris, Alexandros S.; Kakalis, Nikolaos M.P.
2014-01-01
Highlights: • Process modelling techniques applied in marine engineering. • Systems engineering approaches to manage the complexity of modern ship machinery. • General purpose modelling framework called COSSMOS. • Mathematical modelling of conservation equations and related chemical-transport phenomena. • Generic library of ship machinery component models.
Abstract: High fuel prices, environmental regulations and current shipping market conditions require ships to operate in a more efficient and greener way. These drivers lead to the introduction of new technologies, fuels, and operations, increasing the complexity of modern ship energy systems. As a means to manage this complexity, in this paper we present the introduction of systems engineering methodologies in marine engineering via the development of a general-purpose process modelling framework for ships, named DNV COSSMOS. Shifting the focus from components, the standard approach in shipping, to systems widens the space for optimal design and operation solutions. The associated computer implementation of COSSMOS is a platform that models, simulates and optimises integrated marine energy systems with respect to energy efficiency, emissions, safety/reliability and costs, under both steady-state and dynamic conditions. DNV COSSMOS can be used in the assessment and optimisation of design and operation problems in existing vessels, new builds as well as new technologies. The main features and our modelling approach are presented, and key capabilities are illustrated via two studies on the thermo-economic design and operation optimisation of a combined cycle system for large bulk carriers, and the transient operation simulation of an electric marine propulsion system.
Generalized Least Energy of Separation for Desalination and Other Chemical Separation Processes
Karan H. Mistry
2013-05-01
Increasing global demand for fresh water is driving the development and implementation of a wide variety of seawater desalination technologies driven by different combinations of heat, work, and chemical energy. This paper develops a consistent basis for comparing the energy consumption of such technologies using Second Law efficiency. The Second Law efficiency for a chemical separation process is defined in terms of the useful exergy output, which is the minimum least work of separation required to extract a unit of product from a feed stream of a given composition. For a desalination process, this is the minimum least work of separation for producing one kilogram of product water from feed of a given salinity. While definitions in terms of work and heat input have been proposed before, this work generalizes the Second Law efficiency to allow for systems that operate on a combination of energy inputs, including fuel. The generalized equation is then evaluated through a parametric study considering work input, heat inputs at various temperatures, and various chemical fuel inputs. Further, since most modern, large-scale desalination plants operate in cogeneration schemes, a methodology for correctly evaluating Second Law efficiency for the desalination plant based on primary energy inputs is demonstrated. It is shown that, from a strictly energetic point of view and based on currently available technology, cogeneration using electricity to power a reverse osmosis system is energetically superior to thermal systems such as multiple effect distillation and multistage flash distillation, despite the very low grade heat input normally applied in those systems.
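The denominator of the Second Law efficiency, the minimum least work of separation, can be estimated with a back-of-the-envelope sketch. The version below uses an ideal dilute solution with van 't Hoff osmotic pressure and full NaCl dissociation, which is a rough assumption of ours, not the seawater property model used in the paper:

```python
def least_work_zero_recovery(salinity_g_kg=35.0, T=298.15):
    """Minimum least work of separation (kJ per kg of product water) in
    the zero-recovery limit, approximated as the feed osmotic pressure
    times the specific volume of water. Assumes an ideal dilute solution
    with van 't Hoff osmotic pressure pi = i*c*R*T and full NaCl
    dissociation (i = 2)."""
    R = 8.314                              # J/(mol K)
    M_NaCl = 58.44                         # g/mol
    i = 2.0                                # Na+ and Cl-
    c = salinity_g_kg / M_NaCl * 1000.0    # mol/m^3 (dilute approximation)
    pi = i * c * R * T                     # osmotic pressure, Pa
    v = 1.0e-3                             # m^3 per kg of water
    return pi * v / 1000.0                 # kJ/kg
```

For a 35 g/kg feed this gives roughly 3 kJ/kg (about 0.8 kWh per cubic metre), the familiar order of magnitude for seawater; any real process consumes more, and the ratio of this minimum to the actual primary energy input is the Second Law efficiency discussed above.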
Pomeroy, Sylvia E M; Cant, Robyn P
2010-01-01
The aim of this project was to describe general practitioners' (GPs') decision-making process for reducing nutrition risk in cardiac patients through referring a patient to a dietitian. The setting was primary care practices in Victoria. The method we employed was mixed methods research: in Study 1, 30 GPs were interviewed. Recorded interviews were transcribed and narratives analysed thematically. Study 2 involved a survey of statewide random sample of GPs. Frequencies and analyses of variance were used to explore the impact of demographic variables on decisions to refer. We found that the referral decision involved four elements: (i) synthesising management information; (ii) forecasting outcomes; (iii) planning management; and (iv) actioning referrals. GPs applied cognitive and collaborative strategies to develop a treatment plan. In Study 2, doctors (248 GPs, 30%) concurred with identified barriers/enabling factors for patients' referral. There was no association between GPs' sex, age or hours worked per week and referral factors. We conclude that a GP's judgment to offer a dietetic referral to an adult patient is a four element reasoning process. Attention to how these elements interact may assist clinical decision making. Apart from the sole use of prescribed medications/surgical procedures for cardiac care, patients offered a dietetic referral were those who were considered able to commit to dietary change and who were willing to attend a dietetic consultation. Improvements in provision of patients' nutrition intervention information to GPs are needed. Further investigation is justified to determine how to resolve this practice gap.
The amblyopic deficit for 2nd order processing: Generality and laterality.
Gao, Yi; Reynaud, Alexandre; Tang, Yong; Feng, Lixia; Zhou, Yifeng; Hess, Robert F
2015-09-01
A number of previous reports have suggested that the processing of second-order stimuli by the amblyopic eye (AE) is defective and that the fellow non-amblyopic eye (NAE) also exhibits an anomaly. Second-order stimuli involve extra-striate as well as striate processing and provide a means of exploring the extent of the cortical anomaly in amblyopia using psychophysics. We use a range of different second-order stimuli to investigate how general the deficit is for detecting second-order stimuli in adult amblyopes. We compare these results to our previously published adult normative database using the same stimuli and approach to determine the extent to which the detection of these stimuli is defective for both amblyopic and non-amblyopic eye stimulation. The results suggest that the second-order deficit affects a wide range of second-order stimuli, and by implication a large area of extra-striate cortex, both dorsally and ventrally. The NAE is affected only in motion-defined form judgments, suggesting a difference in the degree to which ocular dominance is disrupted in dorsal and ventral extra-striate regions. Copyright © 2014 Elsevier Ltd. All rights reserved.
A Fast General-Purpose Clustering Algorithm Based on FPGAs for High-Throughput Data Processing
Annovi, A; The ATLAS collaboration; Castegnaro, A; Gatta, M
2012-01-01
We present a fast general-purpose algorithm for high-throughput clustering of data with a two-dimensional organization. The algorithm is designed to be implemented with FPGAs or custom electronics. The key feature is a processing time that scales linearly with the amount of data to be processed. This means that clustering can be performed in pipeline with the readout, without suffering from combinatorial delays due to looping multiple times through all the data. This feature makes the algorithm especially well suited for problems where the data have high density, e.g. in the case of tracking devices working under high-luminosity conditions such as those of the LHC or Super-LHC. The algorithm is organized in two steps: the first step (core) clusters the data; the second step analyzes each cluster of data to extract the desired information. The current algorithm is developed as a clustering device for modern high-energy physics pixel detectors. However, the algorithm has a much broader field of applications. In ...
TRIO-EF a general thermal hydraulics computer code applied to the Avlis process
Magnaud, J.P.; Claveau, M.; Coulon, N.; Yala, P.; Guilbaud, D.; Mejane, A.
1993-01-01
TRIO-EF is a general-purpose 3D finite element fluid mechanics code. The system capabilities cover areas such as steady-state or transient, laminar or turbulent, isothermal or temperature-dependent fluid flows; it is applicable to the study of coupled thermo-fluid problems involving heat conduction and possibly radiative heat transfer. It has been used to study the thermal behaviour of the AVLIS process separation module. In this process, a linear electron beam impinges on the free surface of a uranium ingot, generating a two-dimensional curtain emission of vapour from a water-cooled crucible. The energy transferred to the metal causes its partial melting, forming a pool where strong convective motion increases heat transfer towards the crucible. In the upper part of the separation module, the internal structures are devoted to two main functions: vapour containment and reflux, and irradiation and physical separation. They are subjected to very high temperature levels and heat transfer occurs mainly by radiation. Moreover, special attention has to be paid to electron backscattering. These two major points have been simulated numerically with TRIO-EF, and the paper presents and comments on the results of each computation. After a brief overview of the computer code, two examples of the TRIO-EF capabilities are given: a crucible thermal hydraulics model and a thermal analysis of the internal structures
Giménez, Nuria; Pedrazas, David; Redondo, Susana; Quintana, Salvador
2016-10-01
Adequate information for patients and respect for their autonomy are mandatory in research. This article examined insights of researchers, patients and general practitioners (GPs) on the informed consent process in clinical trials, and the role of the GP. A cross-sectional study using three questionnaires, informed consent reviews, medical records, and hospital discharge reports. GPs, researchers and patients involved in clinical trials. In total, 504 GPs, 108 researchers, and 71 patients were included. Consulting the GP was recommended in 50% of the informed consents. Participation in clinical trials was shown in 33% of the medical records and 3% of the hospital discharge reports. GPs scored 3.54 points (on a 1-10 scale) on the assessment of the information received from the principal investigator. The readability of the informed consent sheet was rated 8.03 points by researchers, and the understanding was rated 7.68 points by patients. Patient satisfaction was positively associated with more time for reflection. GPs were not satisfied with the information received on the participation of patients under their care in clinical trials. Researchers were satisfied with the information they offered to patients, and were aware of the need to improve the information GPs received. Patients collaborated greatly towards biomedical research, expressed satisfaction with the overall process, and minimised the difficulties associated with participation. Copyright © 2015 Elsevier España, S.L.U. All rights reserved.
Jarrold, Christopher; Tam, Helen; Baddeley, Alan D; Harvey, Caroline E
2011-05-01
Two studies that examine whether the forgetting caused by the processing demands of working memory tasks is domain-general or domain-specific are presented. In each, separate groups of adult participants were asked to carry out either verbal or nonverbal operations on exactly the same processing materials while maintaining verbal storage items. The imposition of verbal processing tended to produce greater forgetting even though verbal processing operations took no longer to complete than did nonverbal processing operations. However, nonverbal processing did cause forgetting relative to baseline control conditions, and evidence from the timing of individuals' processing responses suggests that individuals in both processing groups slowed their responses in order to "refresh" the memoranda. Taken together the data suggest that processing has a domain-general effect on working memory performance by impeding refreshment of memoranda but can also cause effects that appear domain-specific and that result from either blocking of rehearsal or interference.
Halcomb, Elizabeth J; Furler, John S; Hermiz, Oshana S; Blackberry, Irene D; Smith, Julie P; Richmond, Robyn L; Zwar, Nicholas A
2015-08-01
Support in primary care can assist smokers to quit successfully, but there are barriers to general practitioners (GPs) providing this support routinely. Practice nurses (PNs) may be able to effectively take on this role. The aim of this study was to perform a process evaluation of a PN-led smoking cessation intervention being tested in a randomized controlled trial in Australian general practice. Process evaluation was conducted by means of semi-structured telephone interviews with GPs and PNs allocated to the intervention arm (Quit with PN) of the Quit in General Practice trial. Interviews focussed on nurse training, content and implementation of the intervention. Twenty-two PNs and 15 GPs participated in the interviews. The Quit with PN intervention was viewed positively. Most PNs were satisfied with the training and the materials provided. Some challenges in managing patient data and follow-up were identified. The Quit with PN intervention was acceptable to participating PNs and GPs. Issues to be addressed in the planning and wider implementation of future trials of nurse-led intervention in general practice include providing ongoing mentoring support, integration into practice management systems and strategies to promote greater collaboration in GP and PN teams in general practice. The ongoing feasibility of the intervention was impacted by the funding model supporting PN employment and the competing demands on the PNs' time. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Joaquim Bento de Souza Ferreira Filho
1999-12-01
This paper deals with the effects of trade liberalization and the Mercosur integration process upon the Brazilian economy, with emphasis on the agricultural and agroindustrial production sectors, under the hypothesis that those phenomena could be another step in the rural-urban transfer process in Brazil. The analysis is conducted through an applied general equilibrium model. Results suggest that trade liberalization would hardly generate a widespread process of rural-urban transfers, although Brazilian agriculture shows up as a loser in the process. Notwithstanding that fact, there are transfers inside the agricultural sectors, where, besides the losses in the value added of the grain production sectors, there would be gains for the livestock and for the "other crops" sectors. The agroindustry, in contrast, seems to gain both in Brazil and Argentina. Model results also suggest that Brazilian society would benefit as a whole from the integration, despite the losses in the agricultural sector.
Gusev, E Yu; Chereshnev, V A
2013-01-01
Theoretical and methodological approaches to the description of systemic inflammation as a general pathological process are discussed. It is shown that a wide range of research types must be integrated in order to develop a model of systemic inflammation.
Conway, Andrew R. A.; Cowan, Nelson; Bunting, Michael F.; Therriault, David J.; Minkoff, Scott R. B.
2002-01-01
Studied the interrelationships among general fluid intelligence, short-term memory capacity, working memory capacity, and processing speed in 120 young adults and used structural equation modeling to determine the best predictor of general fluid intelligence. Results suggest that working memory capacity, but not short-term memory capacity or…
Verkuyten, Maykel
1988-01-01
Examined lack of differences in general self-esteem between adolescents of ethnic minorities and Dutch adolescents, focusing on reflected appraisal process. Found significant relationship between general self-esteem and perceived evaluation of family members (and no such relationship with nonfamily members) for ethnic minority adolescents;…
Generalized renewal process for repairable systems based on finite Weibull mixture
Veber, B.; Nagode, M.; Fajdiga, M.
2008-01-01
Repairable systems can be brought to one of three possible states following a repair: 'as good as new', 'as bad as old' and 'better than old but worse than new'. The probabilistic models traditionally used to estimate the expected number of failures account for the first two states, but they do not properly apply to the last one, which is the most realistic in practice. In this paper, a probabilistic model that is applicable to all three after-repair states, called the generalized renewal process (GRP), is applied. In essence, GRP addresses the repair assumption by introducing the concept of virtual age into stochastic point processes, enabling them to represent the full spectrum of repair assumptions. The shape of measured or design life distributions of systems can vary considerably and therefore frequently cannot be approximated by simple distribution functions. The scope of the paper is to prove that a finite Weibull mixture, with positive component weights only, can be used as the underlying distribution of the time to first failure (TTFF) of the GRP model, provided that the unknown parameters can be estimated. To support the main idea, three examples are presented. In order to estimate the unknown parameters of the GRP model with an m-fold Weibull mixture, the EM algorithm is applied. The GRP model with m mixture components is compared to the standard GRP model based on the two-parameter Weibull distribution by calculating the expected number of failures. It can be concluded that the suggested GRP model with a Weibull mixture with an arbitrary but finite number of components is suitable for predicting failures based on the past performance of the system
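The virtual-age mechanism at the heart of GRP is compact enough to sketch. The following simulation is a hypothetical illustration rather than the paper's model: it uses a Kijima Type I virtual age with a single two-parameter Weibull time to first failure (not the m-fold mixture), and the repair-effectiveness parameter `q` interpolates between 'as good as new' (q = 0) and 'as bad as old' (q = 1).

```python
import math
import random

def grp_failure_times(beta, eta, q, horizon, rng):
    """Simulate failure times of a generalized renewal process with a
    Kijima Type I virtual age and Weibull(beta, eta) time to first failure.
    q = 0 reduces to an ordinary renewal process ('as good as new');
    q = 1 reduces to a power-law NHPP ('as bad as old')."""
    t, v, times = 0.0, 0.0, []
    while True:
        u = rng.random()
        # inverse-transform sample of the next inter-failure time,
        # conditional on the system having virtual age v:
        # R(v + x) / R(v) = u  for the Weibull survival function R
        x = eta * ((v / eta) ** beta - math.log(u)) ** (1.0 / beta) - v
        t += x
        if t > horizon:
            return times
        times.append(t)
        v += q * x  # each repair removes a fraction (1 - q) of the ageing
```

Averaging `len(grp_failure_times(...))` over many replications gives a Monte Carlo estimate of the expected number of failures, the quantity the paper computes to compare models.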
Tong, Seng Fah; Ng, Chirk Jenn; Lee, Verna Kar Mun; Lee, Ping Yein; Ismail, Irmi Zarina; Khoo, Ee Ming; Tahir, Noor Azizah; Idris, Iliza; Ismail, Mastura; Abdullah, Adina
2018-01-01
The participation of general practitioners (GPs) in primary care research is variable and often poor. We aimed to develop a substantive and empirical theoretical framework to explain GPs' decision-making process to participate in research. We used the grounded theory approach to construct a substantive theory to explain the decision-making process of GPs to participate in research activities. Five in-depth interviews and four focus group discussions were conducted among 21 GPs. Purposeful sampling followed by theoretical sampling were used to attempt saturation of the core category. Data were collected using semi-structured open-ended questions. Interviews were recorded, transcribed verbatim and checked prior to analysis. Open line-by-line coding followed by focus coding were used to arrive at a substantive theory. Memoing was used to help bring concepts to higher abstract levels. The GPs' decision to participate in research was attributed to their inner drive and appreciation for primary care research and their confidence in managing their social and research environments. The drive and appreciation for research motivated the GPs to undergo research training to enhance their research knowledge, skills and confidence. However, the critical step in the GPs' decision to participate in research was their ability to align their research agenda with priorities in their social environment, which included personal life goals, clinical practice and organisational culture. Perceived support for research, such as funding and technical expertise, facilitated the GPs' participation in research. In addition, prior experiences participating in research also influenced the GPs' confidence in taking part in future research. The key to GPs deciding to participate in research is whether the research agenda aligns with the priorities in their social environment. Therefore, research training is important, but should be included in further measures and should comply with GPs' social
MacNamara, Annmarie; Proudfit, Greg Hajcak
2014-08-01
Generalized anxiety disorder (GAD) may be characterized by emotion regulation deficits attributable to an imbalance between top-down (i.e., goal-driven) and bottom-up (i.e., stimulus-driven) attention. In prior work, these attentional processes were examined by presenting unpleasant and neutral pictures within a working memory paradigm. The late positive potential (LPP) measured attention toward task-irrelevant pictures. Results from this prior work showed that working memory load reduced the LPP across participants; however, this effect was attenuated for individuals with greater self-reported state anxiety, suggesting reduced top-down control. In the current study, the same paradigm was used with 106 medication-free female participants (71 with GAD and 35 without GAD). Unpleasant pictures elicited larger LPPs, and working memory load reduced the picture-elicited LPP. Compared with healthy controls, participants with GAD showed large LPPs to unpleasant pictures presented under high working memory load. Self-reported symptoms of anhedonic depression were related to a reduced effect of working memory load on the LPP elicited by neutral pictures. These results indicate that individuals with GAD show less flexible modulation of attention when confronted with unpleasant stimuli. Furthermore, among those with GAD, anhedonic depression may broaden attentional deficits to neutral distracters. (c) 2014 APA, all rights reserved.
Zhen Chen
2016-01-01
Accelerated degradation testing (ADT) has been widely used to assess highly reliable products' lifetime. To conduct an ADT, an appropriate degradation model and test plan should be determined in advance. Although many historical studies have proposed quite a few models, there is still room for improvement. Hence we propose a Nonlinear Generalized Wiener Process (NGWP) model with consideration of the effects of stress level, product-to-product variability, and measurement errors for a higher estimation accuracy and a wider range of use. Then under the constraints of sample size, test duration, and test cost, the plans of constant-stress ADT (CSADT) with multiple stress levels based on the NGWP are designed by minimizing the asymptotic variance of the reliability estimation of the products under normal operation conditions. An optimization algorithm is developed to determine the optimal stress levels, the number of units allocated to each level, inspection frequency, and measurement times simultaneously. In addition, a comparison based on degradation data of LEDs is made to show better goodness-of-fit of the NGWP than that of other models. Finally, optimal two-level and three-level CSADT plans under various constraints and a detailed sensitivity analysis are demonstrated through examples in this paper.
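As a rough sketch of the Wiener-process ingredients mentioned here — a nonlinear time scale, path-to-path stochastic variation, and measurement error — the code below simulates one measured degradation path. The power-law time scale t^b and all parameter names are illustrative assumptions, not the NGWP model as specified in the paper.

```python
import math
import random

def degradation_path(mu, sigma, b, sigma_eps, t_grid, rng):
    """One measured degradation path of the assumed form
    X(t) = mu * t**b + sigma * B(t**b) + measurement error,
    where B is standard Brownian motion and t**b is a nonlinear
    (power-law) transformed time scale."""
    x, lam_prev, path = 0.0, 0.0, []
    for t in t_grid:
        lam = t ** b                       # transformed time Lambda(t)
        dlam = lam - lam_prev
        # exact Wiener increment over the transformed-time step
        x += mu * dlam + sigma * math.sqrt(dlam) * rng.gauss(0.0, 1.0)
        path.append(x + rng.gauss(0.0, sigma_eps))   # noisy observation
        lam_prev = lam
    return path
```

Repeating this over many simulated units (and stress levels) is how one would evaluate, by Monte Carlo, the estimation variance that the paper's test plans minimize analytically.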
Lensky, Vadim; Hagelstein, Franziska; Pascalutsa, Vladimir; Vanderhaeghen, Marc
2018-04-01
We derive two new sum rules for the unpolarized doubly virtual Compton scattering process on a nucleon, which establish novel low-Q2 relations involving the nucleon's generalized polarizabilities and moments of the nucleon's unpolarized structure functions F1(x, Q2) and F2(x, Q2). These relations facilitate the determination of some structure constants which can only be accessed in off-forward doubly virtual Compton scattering, not experimentally accessible at present. We perform an empirical determination for the proton and compare our results with a next-to-leading-order chiral perturbation theory prediction. We also show how these relations may be useful for a model-independent determination of the low-Q2 subtraction function in the Compton amplitude, which enters the two-photon-exchange contribution to the Lamb shift of (muonic) hydrogen. An explicit calculation of the Δ(1232)-resonance contribution to the muonic-hydrogen 2P-2S Lamb shift yields -1 ± 1 μeV, confirming the previously conjectured smallness of this effect.
Gao, Yunjiao; Wong, Dennis S W; Yu, Yanping
2016-01-01
Using a sample of 1,163 adolescents from four middle schools in China, this study explores the intervening process of how adolescent maltreatment is related to delinquency within the framework of general strain theory (GST) by comparing two models. The first model is Agnew's integrated model of GST, which examines the mediating effects of social control, delinquent peer affiliation, state anger, and depression on the relationship between maltreatment and delinquency. Based on this model, with the intent to further explore the mediating effects of state anger and depression and to investigate whether their effects on delinquency can be demonstrated more through delinquent peer affiliation and social control, an extended model (Model 2) is proposed by the authors. The second model relates state anger to delinquent peer affiliation and state depression to social control. By comparing the fit indices and the significance of the hypothesized paths of the two models, the study found that the extended model can better reflect the mechanism of how maltreatment contributes to delinquency, whereas the original integrated GST model only receives partial support because of its failure to find the mediating effects of state negative emotions. © The Author(s) 2014.
Geometric correction of radiographic images using general purpose image processing program
Kim, Eun Kyung; Cheong, Ji Seong; Lee, Sang Hoon
1994-01-01
The present study was undertaken to compare geometrically corrected images produced with general-purpose image processing programs for the Apple Macintosh II computer (NIH Image, Adobe Photoshop) with standardized images produced with an individualized, custom-fabricated alignment instrument. Two non-standardized periapical films with an XCP film holder only were taken at the lower molar portion of 19 volunteers. Two standardized periapical films with a customized XCP film holder with impression material on the bite-block were taken for each person. Geometric correction was performed with Adobe Photoshop and the NIH Image program. Specifically, the arbitrary image rotation function of Adobe Photoshop and the subtraction-with-transparency function of NIH Image were utilized. The standard deviations of grey values of subtracted images were used to measure image similarity. The average standard deviation of grey values of subtracted images of the standardized group was slightly lower than that of the corrected group. However, the difference was found to be statistically insignificant (p>0.05). It is considered that the NIH Image and Adobe Photoshop programs can be used for correction of non-standardized films taken with an XCP film holder at the lower molar portion.
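The similarity measure used in this study — the standard deviation of the grey values of the subtracted image — is straightforward to compute once two radiographs are aligned. A minimal sketch, assuming the images are given as equally sized lists of grey-value rows:

```python
import math

def subtraction_std(img_a, img_b):
    """Standard deviation of the grey-value difference image between two
    aligned radiographs; lower values mean the images are more alike
    (identical images give 0)."""
    diffs = [a - b for row_a, row_b in zip(img_a, img_b)
                   for a, b in zip(row_a, row_b)]
    mean = sum(diffs) / len(diffs)
    return math.sqrt(sum((d - mean) ** 2 for d in diffs) / len(diffs))
```

Note that the mean of the difference image is subtracted first, so a uniform exposure offset between the two films does not inflate the measure.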
Murdoch, Jamie; Varley, Anna; Fletcher, Emily; Britten, Nicky; Price, Linnie; Calitri, Raff; Green, Colin; Lattimer, Valerie; Richards, Suzanne H; Richards, David A; Salisbury, Chris; Taylor, Rod S; Campbell, John L
2015-04-10
Telephone triage represents one strategy to manage demand for face-to-face GP appointments in primary care. However, limited evidence exists of the challenges GP practices face in implementing telephone triage. We conducted a qualitative process evaluation alongside a UK-based cluster randomised trial (ESTEEM) which compared the impact of GP-led and nurse-led telephone triage with usual care on primary care workload, cost, patient experience, and safety for patients requesting a same-day GP consultation. The aim of the process study was to provide insights into the observed effects of the ESTEEM trial from the perspectives of staff and patients, and to specify the circumstances under which triage is likely to be successfully implemented. Here we report perspectives of staff. The intervention comprised implementation of either GP-led or nurse-led telephone triage for a period of 2-3 months. A qualitative evaluation was conducted using staff interviews recruited from eight general practices (4 GP triage, 4 nurse triage) in the UK, implementing triage as part of the ESTEEM trial. Qualitative interviews were undertaken with 44 staff members in GP triage and nurse triage practices (16 GPs, 8 nurses, 7 practice managers, 13 administrative staff). Staff reported diverse experiences and perceptions regarding the implementation of telephone triage, its effects on workload, and the benefits of triage. Such diversity was explained by the different ways triage was organised, the staffing models used to support triage, how the introduction of triage was communicated across practice staff, and how staff roles were reconfigured as a result of implementing triage. The findings from the process evaluation offer insight into the range of ways GP practices participating in ESTEEM implemented telephone triage, and the circumstances under which telephone triage can be successfully implemented beyond the context of a clinical trial. Staff experiences and perceptions of telephone
Jarrold, Christopher; Tam, Helen; Baddeley, Alan D.; Harvey, Caroline E.
2011-01-01
Two studies that examine whether the forgetting caused by the processing demands of working memory tasks is domain-general or domain-specific are presented. In each, separate groups of adult participants were asked to carry out either verbal or nonverbal operations on exactly the same processing materials while maintaining verbal storage items.…
Rosalind Adam
Delayed cancer diagnosis leads to poorer patient outcomes. During short consultations, general practitioners (GPs) make quick decisions about the likelihood of cancer. Patients' facial cues are processed rapidly and may influence diagnosis. To investigate whether patients' facial characteristics influence immediate perception of cancer risk by GPs. Web-based binary forced-choice experiment with GPs from Northeast Scotland. GPs were presented with a series of pairs of face prototypes and asked to quickly select the patient more likely to have cancer. Faces were modified with respect to age, gender, and ethnicity. Choices were analysed using Chi-squared goodness-of-fit statistics with Bonferroni corrections. Eighty-two GPs participated. GPs were significantly more likely to suspect cancer in older patients. Gender influenced GP cancer suspicion, but this was modified by age: the male face was chosen as more likely to have cancer than the female face for young (72% of GPs; 95% CI 61.0-87.0) and middle-aged faces (65.9%; 95% CI 54.7-75.5), but 63.4% (95% CI 52.2-73.3) decided the older female was more likely to have cancer than the older male (p = 0.015). GPs were significantly more likely to suspect cancer in the young Caucasian male (65.9%; 95% CI 54.7-75.5) compared to the young Asian male (p = 0.004). GPs' first impressions about cancer risk are influenced by patient age, gender, and ethnicity. Tackling GP cognitive biases could be a promising way of reducing cancer diagnostic delays, particularly for younger patients.
Monk, J.; Zhu, Y.; Koons, P. O.; Segee, B. E.
2009-12-01
With the introduction of the G8X series of cards by nVidia, an architecture called CUDA was released; virtually all subsequent video cards have had CUDA support. With this new architecture nVidia provided extensions for C/C++ that create an Application Programming Interface (API) allowing code to be executed on the GPU. Since then the concept of GPGPU (general-purpose graphics processing unit) computing has been growing: the idea that the GPU is very good at algebra and at running things in parallel, so we should make use of that power for other applications. This is highly appealing in the area of geodynamic modeling, as multiple parallel solutions of the same differential equations at different points in space lead to a large speedup in simulation speed. Another benefit of CUDA is a programmatic method of transferring large amounts of data between the computer's main memory and the dedicated GPU memory located on the video card. In addition to being able to compute and render on the video card, the CUDA framework allows for a large speedup in situations, such as with a tiled display wall, where the rendered pixels are to be displayed in a different location than where they are rendered. A CUDA extension for VirtualGL was developed allowing for faster read-back at high resolutions. This paper examines several aspects of rendering OpenGL graphics on large displays using VirtualGL and VNC. It demonstrates how performance can be significantly improved in rendering on a tiled monitor wall. We present a CUDA-enhanced version of VirtualGL as well as the advantages of having multiple VNC servers. We discuss restrictions caused by read-back and blitting rates and how they are affected by different sizes of virtual displays being rendered.
Plint, Simon; Patterson, Fiona
2010-06-01
The UK national recruitment process into general practice training has been developed over several years, with incremental introduction of stages which have been piloted and validated. Previously independent processes, which encouraged multiple applications and produced inconsistent outcomes, have been replaced by a robust national process which has high reliability and predictive validity, and is perceived to be fair by candidates and allocates applicants equitably across the country. Best selection practice involves a job analysis which identifies required competencies, then designs reliable assessment methods to measure them, and over the long term ensures that the process has predictive validity against future performance. The general practitioner recruitment process introduced machine markable short listing assessments for the first time in the UK postgraduate recruitment context, and also adopted selection centre workplace simulations. The key success factors have been identified as corporate commitment to the goal of a national process, with gradual convergence maintaining locus of control rather than the imposition of change without perceived legitimate authority.
Kitis, G.; Furetta, C.; Azorin, J.
2003-01-01
Synthetic thermoluminescent (TL) glow peaks following second- and general-order kinetics have been generated by computer. The general properties of the peaks so generated have been investigated over several orders of magnitude of simulated doses. Some unusual results which, to the best knowledge of the authors, are not reported in the literature, are obtained and discussed. (Author)
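A synthetic general-order glow peak of the kind described can be generated numerically from the standard general-order kinetics expression; the sketch below (not the authors' code) assumes kinetics order b ≠ 1, a linear heating rate β, and illustrative trap parameters E and s.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def glow_peak(E, s, b, T0=300.0, T1=650.0, beta=1.0, n=2000):
    """Synthetic general-order glow peak per unit initial occupancy:
    I(T) = s*exp(-E/kT) * [1 + (b-1)*(s/beta) * Int_T0^T exp(-E/kT')dT']
           ** (-b/(b-1)),
    with the temperature integral accumulated by the trapezoid rule.
    Parameters are illustrative (E in eV, s in 1/s, heating rate beta in K/s).
    Requires b != 1 (first order is the separate limiting case)."""
    dT = (T1 - T0) / n
    integral, prev = 0.0, math.exp(-E / (K_B * T0))
    Ts, Is = [], []
    for i in range(n + 1):
        T = T0 + i * dT
        cur = math.exp(-E / (K_B * T))
        if i > 0:
            integral += 0.5 * (prev + cur) * dT   # trapezoid step
        prev = cur
        Ts.append(T)
        Is.append(s * cur * (1.0 + (b - 1.0) * (s / beta) * integral)
                  ** (-b / (b - 1.0)))
    return Ts, Is
```

Sweeping the initial occupancy (i.e., simulated dose) over several orders of magnitude and recording the peak height and position reproduces the kind of numerical experiment the abstract describes.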
Schellevis, F.G.; Eijk, J.T.M. van; Lisdonk, E.H. van de; Velden, J. van der; Weel, C. van
1994-01-01
In a prospective longitudinal study over 21 months the performance of general practitioners and the disease status of their patients was measured during the formulation and implementation of guidelines on follow-up care. Data on 15 general practitioners and on 613 patients with hypertension, 95 with
UN Secretary-General Normative Capability to Influence The Security Council Decision-Making Process
Dmitry Guennadievich Novik
2016-01-01
The present article studies the interrelation between the senior UN official, the Secretary-General, and the main UN body, the Security Council. The nature of the Secretary-General's role has been ambiguous since the very creation of the UN. On one hand, the Secretary-General leads the Secretariat, the body that carries out technical and subsidiary functions in relation to the other UN main bodies; this is how the position was initially viewed by the UN's authors. On the other hand, the UN Charter contains certain provisions that, under a certain interpretation, give the Secretary-General vigorous powers, including political ones. Since the very beginning of the UN's operation, Secretaries-General have tried to define the nature of these auxiliary powers and to formalize the practice of their use. A special place among these powers is held by the provisions of Article 99 of the Charter. This article gives the Secretary-General the right to appeal directly to the Security Council and draw its attention to any situation that, in his (the Secretary-General's) opinion, may threaten international peace and security. This right was used by several Secretaries-General during crises that occurred after the creation of the UN. The present article covers, in turn, the Congo crisis, the Iran hostage crisis and the situation in Lebanon: the three situations that led Secretaries-General Hammarskjold, Waldheim and de Cuellar to explicitly invoke their right to appeal to the Security Council. Other cases in UN history in which the Secretary-General appealed to the Security Council while mentioning Article 99 cannot be considered invocations of the article in the full sense of its spirit, as such cases were preceded by appeals to the Council on the same situations by other subjects (notably, UN member states) or by other actions that left the Secretary-General merely performing a technical function. The main research problem here is
Generalized enthalpy model of a high-pressure shift freezing process
Smith, N. A. S.; Peppin, S. S. L.; Ramos, A. M.
2012-01-01
High-pressure freezing processes are a novel emerging technology in food processing, offering significant improvements to the quality of frozen foods. To be able to simulate plateau times and thermal history under different conditions, in this work
The theory, practice, and future of process improvement in general thoracic surgery.
Freeman, Richard K
2014-01-01
Process improvement, in its broadest sense, is the analysis of a given set of actions with the aim of elevating quality and reducing costs. The tenets of process improvement have been applied to medicine in increasing frequency for at least the last quarter century including thoracic surgery. This review outlines the theory underlying process improvement, the currently available data sources for process improvement and possible future directions of research. Copyright © 2015 Elsevier Inc. All rights reserved.
Delgado, Francisco
2017-12-01
Quantum information processing should be generated through control of quantum evolution for the physical systems being used as resources, such as superconducting circuits, spin-spin couplings in ions and artificial anyons in electron gases. Their quantum dynamics should be translated into languages more natural for quantum information processing. On this terrain, such a language should allow manipulation operations to be established on the associated quantum information states, as classical information processing does. This work shows how a kind of processing operation can be settled and implemented for quantum state design and quantum processing in systems fulfilling an SU(2) reduction in their dynamics.
Arbitrage free pricing of forward and futures in the energy market
Kloster, Kristian
2003-01-01
This thesis describes a method for arbitrage-free valuation of forward and futures contracts in the Nordic electricity market. This is a market in which one cannot hedge using the underlying asset, as one normally would. The electricity market is relatively new and less developed than the financial markets, and the pricing of energy and energy derivatives depends on factors such as production, transport and storage. There are different approaches to pricing a forward contract in an energy market. With motivation from interest rate theory, one could model the forward prices directly in the risk-neutral world. Another approach is to start out with a model for the spot prices in the physical world and then derive theoretical forward prices, which are then fitted to observed forward prices. These and other approaches are described by Clewlow and Strickland in their book Energy Derivatives. This thesis takes the approach of starting with a model for the spot price and deriving theoretical forward prices. I use a generalization of the multi-factor Schwartz model with seasonal trends and Ornstein-Uhlenbeck processes to model the spot prices for electricity; this continuous-time model also incorporates mean reversion, an important feature of energy prices. Historical spot price data are used to estimate the parameters of the multi-factor Schwartz model, after which arbitrage-free prices for forwards and futures can be specified based on the model. The result of this procedure is a joint spot and forward price model in both the risk-neutral and physical markets, together with knowledge of the equivalent martingale measure chosen by the market. This measure can be interpreted as the market price of risk, which is of interest for risk management. In this setup both futures and forward contracts have the same pricing dynamics, as the only difference between the two types of contracts is how the payment for the
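The mean-reverting spot dynamics described above can be sketched numerically. Below is a minimal illustration, not the thesis's multi-factor seasonal model: a one-factor Schwartz model in which the log spot price follows an Ornstein-Uhlenbeck process, sampled with its exact transition density and checked against the closed-form forward price E[S_T]. All parameter values are invented for illustration, and the market price of risk is ignored (physical measure).

```python
import numpy as np

# One-factor Schwartz sketch: X = ln S follows the OU process
#   dX = kappa * (theta - X) dt + sigma dW
# (illustrative parameters, physical measure, no risk premium).

def simulate_log_spot(x0, kappa, theta, sigma, T, n_paths, rng):
    """Draw X_T exactly from the OU transition density (no time stepping)."""
    decay = np.exp(-kappa * T)
    mean = x0 * decay + theta * (1.0 - decay)
    var = sigma**2 * (1.0 - decay**2) / (2.0 * kappa)
    return mean + np.sqrt(var) * rng.standard_normal(n_paths)

def forward_price(x0, kappa, theta, sigma, T):
    """Closed-form F(0, T) = E[S_T] for the lognormal S_T = exp(X_T)."""
    decay = np.exp(-kappa * T)
    mean = x0 * decay + theta * (1.0 - decay)
    var = sigma**2 * (1.0 - decay**2) / (2.0 * kappa)
    return np.exp(mean + 0.5 * var)

rng = np.random.default_rng(0)
x0, kappa, theta, sigma, T = np.log(25.0), 2.0, np.log(30.0), 0.5, 1.0
mc = np.exp(simulate_log_spot(x0, kappa, theta, sigma, T, 200_000, rng)).mean()
analytic = forward_price(x0, kappa, theta, sigma, T)
```

Under mean reversion, F(0, T) relaxes from the current spot toward a long-run level exp(theta + sigma^2/(4*kappa)) as T grows, which is one reason energy forward curves flatten at long maturities.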
A General Audiovisual Temporal Processing Deficit in Adult Readers with Dyslexia
Francisco, Ana A.; Jesse, Alexandra; Groen, Margriet A.; McQueen, James M.
2017-01-01
Purpose: Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. Method: We tested 39 typical readers and 51 adult readers with dyslexia on their sensitivity to the simultaneity of…
Syncope prevalence in the ED compared to general practice and population: a strong selection process
Olde Nordkamp, Louise R. A.; van Dijk, Nynke; Ganzeboom, Karin S.; Reitsma, Johannes B.; Luitse, Jan S. K.; Dekker, Lukas R. C.; Shen, Win-Kuang; Wieling, Wouter
2009-01-01
Objective: We assessed the prevalence and distribution of the different causes of transient loss of consciousness (TLOC) in the emergency department (ED) and chest pain unit (CPU) and estimated the proportion of persons with syncope in the general population who seek medical attention from either
van Velzen, Joke H.
2018-01-01
There were two purposes for this mixed methods study: to investigate (a) the realistic meaning of awareness and understanding as the underlying constructs of general knowledge of the learning process and (b) a procedure for data consolidation. The participants were 11th-grade high school and first-year university students. Integrated data…
Vnukov, V.S.; Rjazanov, B.G.; Sviridov, V.I.; Frolov, V.V.; Zubkov, Y.N.
1991-01-01
The paper describes the general principles of nuclear criticality safety for the handling, processing, transportation and storage of fissile materials. Measures to limit the consequences of criticality accidents are discussed for fuel processing plants and fissile material storage facilities. The system of scientific and technical measures on nuclear criticality safety, as well as the system of control and state supervision based on rules, limits and requirements, is described. Criticality safety aspects at the various stages of handling nuclear materials are considered. The paper describes the methods and approaches for criticality risk assessment for processing facilities, plants and storages. (Author)
Uranium tetrafluoride reduction closed bomb. Part I: Reduction process general conditions
Anca Abati, R.; Lopez Rodriguez, M.
1961-01-01
General conditions of the metallothermic reduction in small bombs (250 and 800 g of uranium) have been investigated. Factors such as the kind and granulometry of the magnesium used, the magnesium excess and the preheating temperature, which affect yield and metal quality, have been considered. Magnesium excess increased yields by 15% in the small bomb; as for the preheating temperature, there is a range within which yield and metal quality do not change. All tests were made with graphite linings. (Author) 18 refs
Wu, S. Q.; Cai, X.
2000-01-01
Four classical laws of black hole thermodynamics are extended from the exterior (event) horizon to the interior (Cauchy) horizon. In particular, the first law of classical thermodynamics for the Kerr-Newman black hole (KNBH) is generalized to quantum form. Five quantum conservation laws on the KNBH evaporation effect are then derived by virtue of thermodynamic equilibrium conditions. As a by-product, the Bekenstein-Hawking relation $S=A/4$ is exactly recovered.
Johnson, L; Stricker, R B
2009-05-01
Lyme disease is one of the most controversial illnesses in the history of medicine. In 2006 the Connecticut Attorney General launched an antitrust investigation into the Lyme guidelines development process of the Infectious Diseases Society of America (IDSA). In a recent settlement with IDSA, the Attorney General noted important commercial conflicts of interest and suppression of scientific evidence that had tainted the guidelines process. This paper explores two broad ethical themes that influenced the IDSA investigation. The first is the growing problem of conflicts of interest among guidelines developers, and the second is the increasing centralisation of medical decisions by insurance companies, which use treatment guidelines as a means of controlling the practices of individual doctors and denying treatment for patients. The implications of the first-ever antitrust investigation of medical guidelines and the proposed model to remediate the tainted IDSA guidelines process are also discussed.
Variational estimation of process parameters in a simplified atmospheric general circulation model
Lv, Guokun; Koehl, Armin; Stammer, Detlef
2016-04-01
Parameterizations are used to simulate the effects of unresolved sub-grid-scale processes in current state-of-the-art climate models. The values of the process parameters, which determine the model's climatology, are usually adjusted manually to reduce the difference between the model mean state and the observed climatology. This process requires detailed knowledge of the model and its parameterizations. In this work, a variational method was used to estimate process parameters in the Planet Simulator (PlaSim). The adjoint code was generated by automatic differentiation of the source code. Some hydrological processes were switched off to remove the influence of zero-order discontinuities. In addition, the nonlinearity of the model limits the feasible assimilation window to about one day, which is too short to tune the model's climatology. To extend the feasible assimilation window, nudging terms for all state variables were added to the model's equations, which essentially suppress all unstable directions. In identical-twin experiments, we found that the feasible assimilation window could be extended to over one year and that accurate parameters could be retrieved. Although the nudging terms transform into a damping of the adjoint variables and therefore tend to erase the information of the data over time, assimilating climatological information is shown to provide sufficient information on the parameters. The mechanism of this regularization is also discussed.
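The identical-twin idea described above can be illustrated on a toy scale. The sketch below is emphatically not PlaSim: it recovers a single damping parameter of a forced scalar ODE by descending a finite-difference gradient of the model-observation misfit (a stand-in for the adjoint-generated gradient), with a nudging term added to the forward model as in the abstract. All constants are invented.

```python
import numpy as np

# Identical-twin sketch: generate "observations" with a known parameter,
# then recover it by minimizing the misfit of a nudged forward run.
DT, N, GAMMA = 0.01, 1000, 1.0   # time step, steps, nudging strength

def run(p, obs=None):
    """Euler integration of dx/dt = -p*x + sin(t), optionally nudged toward obs."""
    x, traj = 1.0, np.empty(N)
    for i in range(N):
        traj[i] = x                      # record the pre-step state
        nudge = GAMMA * (obs[i] - x) if obs is not None else 0.0
        x += DT * (-p * x + np.sin(i * DT) + nudge)
    return traj

p_true = 0.5
obs = run(p_true)                        # synthetic "truth" trajectory

def cost(p):
    """Mean squared model-observation misfit of the nudged run."""
    return np.mean((run(p, obs) - obs) ** 2)

# Gradient descent with a central finite-difference gradient.
p, eps, lr = 1.0, 1e-5, 1.0
for _ in range(300):
    grad = (cost(p + eps) - cost(p - eps)) / (2.0 * eps)
    p -= lr * grad
```

Because the nudging term vanishes along the true trajectory, the cost still has its minimum at the true parameter; the nudging only damps the sensitivities, mirroring the regularization mechanism discussed in the abstract.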
Domain general sequence operations contribute to pre-SMA involvement in visuo-spatial processing
E. Charles eLeek
2016-01-01
This study used 3T MRI to elucidate the functional role of the supplementary motor area (SMA) in visuo-spatial processing. A localizer task contrasting sequential number subtraction and repetitive button pressing was used to functionally delineate non-motor sequence processing in pre-SMA and motor-sequencing activity in SMA-proper. Patterns of BOLD responses in these regions were then contrasted with those from two tasks of visuo-spatial processing. In one task, participants performed mental rotation, making recognition memory judgments on previously memorized 2D novel patterns across image-plane rotations. The other task involved abstract grid navigation, in which observers computed a series of imagined location shifts in response to directional (arrow) cues around a mental grid. The results showed overlapping activation in pre-SMA for sequential subtraction and both visuo-spatial tasks, suggesting that visuo-spatial processing is supported by non-motor sequence operations that involve pre-SMA. More broadly, these data further highlight the functional heterogeneity of pre-SMA and show that its role extends to processes beyond the planning and online control of movement.
Mo, Jian
2005-01-01
A great number of papers have shown that free radicals, like bioactive molecules, can act as mediators in a wide spectrum of biological processes. However, the biological actions and chemical reactivity of free radicals are quite different from those of bioactive molecules; a wide variety of bioactive molecules can be easily modified by free radicals because they carry redox-sensitive functional groups, and the significance of the interaction between free radicals and bioactive molecules in biological processes has been confirmed by in vitro and in vivo studies. Based on this evidence, this article presents a novel theory about the mediators of biological processes. Its essentials are: (a) mediators of biological processes can be classified into general and specific mediators; the general mediators comprise two types of free radicals, namely superoxide and nitric oxide, while the specific mediators include a wide variety of bioactive molecules, such as specific enzymes, transcription factors, cytokines and eicosanoids; (b) a general mediator can modify almost any class of biomolecules and thus act as a mediator in nearly every biological process via diverse mechanisms, whereas a specific mediator always acts selectively on certain classes of biomolecules and may mediate different biological processes via the same mechanism; (c) biological processes are mostly controlled by networks of their mediators, so free radicals can regulate the final outcome of a biological process by modifying some types of bioactive molecules, or in cooperation with these bioactive molecules; the biological actions of superoxide and nitric oxide may be synergistic or antagonistic. According to this theory, keeping the integrity of these networks and the balance between the free radicals and the bioactive molecules as well as the balance between the free radicals and the free radical scavengers
Collection, transport and general processing of clinical specimens in Microbiology laboratory.
Sánchez-Romero, M Isabel; García-Lechuz Moya, Juan Manuel; González López, Juan José; Orta Mira, Nieves
2018-02-06
The interpretation and the accuracy of the microbiological results still depend to a great extent on the quality of the samples and their processing within the Microbiology laboratory. The type of specimen, the appropriate time to obtain the sample, the way of sampling, the storage and transport are critical points in the diagnostic process. The availability of new laboratory techniques for unusual pathogens, makes necessary the review and update of all the steps involved in the processing of the samples. Nowadays, the laboratory automation and the availability of rapid techniques allow the precision and turn-around time necessary to help the clinicians in the decision making. In order to be efficient, it is very important to obtain clinical information to use the best diagnostic tools. Copyright © 2018 Elsevier España, S.L.U. and Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.
Enkhbat, S; Toyota, M; Yasuda, N; Ohara, H
1997-06-01
The objective of this study is to compare the influence on delays in the tuberculosis case-finding process according to the types of medical facilities initially visited. The subjects include 107 patients 16 years and older who were diagnosed with bacteriologically confirmed pulmonary tuberculosis at nine tuberculosis specialized facilities in Ulaanbaatar, Mongolia from May 1995 to March 1996. Patients were interviewed about their demographic and socioeconomic factors and their medical records were reviewed for measuring delays. Fifty-five patients initially consulted general physicians and the remaining 52 patients initially visited other types of facilities including tuberculosis specialized facilities. Patients who initially consulted general physicians had shorter patient's delays and longer doctor's delays than those who had visited other facilities first. Since the reduction of patient's delay outweighs the extension of doctor's delay among patients who initially consulted general physicians, their total delay was shorter than that of patients who visited other facilities first. The beneficial influence of consulting general physicians first on the total delay was observed after adjusting for patient's age, sex, residence area, family income and family history of tuberculosis. This finding indicates that general physicians play an important role in improving the passive case-finding process in Mongolia.
Analysis of Queues with Rational Arrival Process Components - A General Approach
Bean, Nigel; Nielsen, Bo Friis
In a previous paper we demonstrated that the well-known matrix-geometric solution of Quasi-Birth-and-Death processes is valid also if we introduce Rational Arrival Process (RAP) components. Here we extend those results and offer an alternative proof using results obtained by Tweedie. We prove the matrix-geometric form for a certain kind of operators on the stationary measure for discrete-time Markov chains of GI/M/1 type. We apply this result to an embedded chain with RAP components. We then discuss the straightforward modification of the standard algorithms for calculating the matrix R...
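The matrix R mentioned above is the minimal nonnegative solution of the quadratic matrix equation A0 + R*A1 + R^2*A2 = 0, where A0, A1, A2 are the level-up, local and level-down blocks of the QBD generator. A minimal sketch of the standard successive-substitution algorithm is shown below (the plain QBD case, not the paper's RAP extension); the 2-phase generator blocks are invented for illustration.

```python
import numpy as np

# Successive substitution for the minimal solution R of
#   A0 + R @ A1 + R @ R @ A2 = 0
# starting from R = 0 (converges for positive recurrent QBDs).
def solve_R(A0, A1, A2, tol=1e-12, max_iter=10_000):
    R = np.zeros_like(A0)
    A1_inv = np.linalg.inv(A1)
    for _ in range(max_iter):
        R_new = -(A0 + R @ R @ A2) @ A1_inv
        if np.max(np.abs(R_new - R)) < tol:
            return R_new
        R = R_new
    raise RuntimeError("no convergence")

# Illustrative 2-phase QBD: phase-dependent arrivals (rates 1 and 2),
# service rate 3, and phase switching at rate 0.5 in both directions.
A0 = np.array([[1.0, 0.0], [0.0, 2.0]])      # level up (arrivals)
A1 = np.array([[-4.5, 0.5], [0.5, -5.5]])    # local transitions
A2 = np.array([[3.0, 0.0], [0.0, 3.0]])      # level down (services)

R = solve_R(A0, A1, A2)
residual = np.max(np.abs(A0 + R @ A1 + R @ R @ A2))
```

Given R, the stationary vector satisfies pi_{k+1} = pi_k @ R level by level, which is the matrix-geometric form the paper extends to RAP components.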
Westmijze, Mark
2018-01-01
Commercial Off The Shelf (COTS) Chip Multi-Processor (CMP) systems are for cost reasons often used in industry for soft real-time stream processing. COTS CMP systems typically have a low timing predictability, which makes it difficult to develop software applications for these systems with tight
General classification of maturation reaction-norm shape from size-based processes
Christensen, Asbjørn; Andersen, Ken Haste
2011-01-01
…for growth and mortality is based on processes at the level of the individual, and is motivated by the energy budget of fish. MRN shape is a balance between opposing factors and depends on subtle details of the size dependence of growth and mortality. MRNs with both positive and negative slopes are predicted...
Distribution flow: a general process in the top layer of water repellent soils
Ritsema, C.J.; Dekker, L.W.
1995-01-01
Distribution flow is the process of water and solute flowing in a lateral direction over and through the very first millimetre or centimetre of the soil profile. A potassium bromide tracer was applied in two water-repellent sandy soils to follow the actual flow paths of water and solutes in the
Valery E. Tarabanko
2017-11-01
This review discusses principal patterns that govern the processes of lignins’ catalytic oxidation into vanillin (3-methoxy-4-hydroxybenzaldehyde) and syringaldehyde (3,5-dimethoxy-4-hydroxybenzaldehyde). It examines the influence of lignin and oxidant nature, temperature, mass transfer, and other factors on the yield of the aldehydes and the process selectivity. The review reveals that properly organized processes of catalytic oxidation of various lignins are only insignificantly (10–15%) inferior to oxidation by nitrobenzene in terms of yield and selectivity in vanillin and syringaldehyde. Very high consumption of oxygen (and consequently, of alkali) in the process—over 10 mol per mol of obtained vanillin—is highlighted as an unresolved and unexplored problem: the scientific literature reveals almost no studies devoted to the possibilities of decreasing the consumption of oxygen and alkali. Different hypotheses about the mechanism of lignin oxidation into the aromatic aldehydes are discussed, and the mechanism comprising steps of single-electron oxidation of phenolate anions, ending with a retroaldol reaction of a substituted coniferyl aldehyde, is pointed out as the most convincing one. The possibility and development prospects of single-stage oxidative processing of wood into the aromatic aldehydes and cellulose are analyzed.
Soobik, Mart
2014-01-01
The sustainability of technology education is related to a traditional understanding of craft and the methods used to teach it; however, the methods used in the teaching process have been influenced by the innovative changes accompanying the development of technology. In respect to social and economic development, it is important to prepare young…
The protection of fundamental human rights in criminal process General report
Brants, C.; Franken, Stijn
2009-01-01
This contribution examines the effect of the uniform standards of human rights in international conventions on criminal process in different countries and identifies factors inherent in national systems that influence the scope of international standards and the way in which they are implemented in
Macellini, S.; Maranesi, M.; Bonini, L.; Simone, L.; Rozzi, S.; Ferrari, P. F.; Fogassi, L.
2012-01-01
Macaques can efficiently use several tools, but their capacity to discriminate the relevant physical features of a tool and the social factors contributing to their acquisition are still poorly explored. In a series of studies, we investigated macaques' ability to generalize the use of a stick as a tool to new objects having different physical features (study 1), or to new contexts, requiring them to adapt the previously learned motor strategy (study 2). We then assessed whether the observation of a skilled model might facilitate tool-use learning by naive observer monkeys (study 3). Results of study 1 and study 2 showed that monkeys trained to use a tool generalize this ability to tools of different shape and length, and learn to adapt their motor strategy to a new task. Study 3 demonstrated that observing a skilled model increases the observers' manipulations of a stick, thus facilitating the individual discovery of the relevant properties of this object as a tool. These findings support the view that in macaques, the motor system can be modified through tool use and that it has a limited capacity to adjust the learnt motor skills to a new context. Social factors, although important to facilitate the interaction with tools, are not crucial for tool-use learning. PMID:22106424
2007-05-01
…BASED ENVIRONMENTAL IMPACT ANALYSIS PROCESS, LAUGHLIN AIR FORCE BASE, TEXAS. AGENCY: 47th Flying Training Wing (FTW), Laughlin Air Force Base (AFB), Texas. …effects and annoyance in that very few flight operations and ground engine runs occur between 2200 hours and 0700 hours. BMPs include restricting the
Neural responses to ambiguity involve domain-general and domain-specific emotion processing systems.
Neta, Maital; Kelley, William M; Whalen, Paul J
2013-04-01
Extant research has examined the process of decision making under uncertainty, specifically in situations of ambiguity. However, much of this work has been conducted in the context of semantic and low-level visual processing. An open question is whether ambiguity in social signals (e.g., emotional facial expressions) is processed similarly or whether a unique set of processors come on-line to resolve ambiguity in a social context. Our work has examined ambiguity using surprised facial expressions, as they have predicted both positive and negative outcomes in the past. Specifically, whereas some people tended to interpret surprise as negatively valenced, others tended toward a more positive interpretation. Here, we examined neural responses to social ambiguity using faces (surprise) and nonface emotional scenes (International Affective Picture System). Moreover, we examined whether these effects are specific to ambiguity resolution (i.e., judgments about the ambiguity) or whether similar effects would be demonstrated for incidental judgments (e.g., nonvalence judgments about ambiguously valenced stimuli). We found that a distinct task control (i.e., cingulo-opercular) network was more active when resolving ambiguity. We also found that activity in the ventral amygdala was greater to faces and scenes that were rated explicitly along the dimension of valence, consistent with findings that the ventral amygdala tracks valence. Taken together, there is a complex neural architecture that supports decision making in the presence of ambiguity: (a) a core set of cortical structures engaged for explicit ambiguity processing across stimulus boundaries and (b) other dedicated circuits for biologically relevant learning situations involving faces.
Wang, Rongming; Yang, Wantai; Song, Yuanjun; Shen, Xiaomiao; Wang, Junmei; Zhong, Xiaodi; Li, Shuai; Song, Yujun
2015-01-01
A new methodology based on core alloying and shell gradient-doping is developed for the synthesis of nanohybrids, realized by coupled competitive reactions, or a sequenced reducing-nucleation and co-precipitation reaction of mixed metal salts in a microfluidic and batch-cooling process. The latent time of nucleation and the growth of the nanohybrids can be well controlled owing to the formation of controllable intermediates in the coupled competitive reactions. Thus, spatiotemporal-resolved synthesi...
Østergaard, Jacob; Kramer, Mark A.; Eden, Uri T.
2018-01-01
…are separately applied; understanding the relationships between these modeling approaches remains an area of active research. In this letter, we examine this relationship using simulation. To do so, we first generate spike train data from a well-known dynamical model, the Izhikevich neuron, with a noisy input current. We then fit these spike train data with a statistical model (a generalized linear model, GLM, with multiplicative influences of past spiking). For different levels of noise, we show how the GLM captures both the deterministic features of the Izhikevich neuron and the variability driven by the noise. We conclude that the GLM captures essential features of the simulated spike trains, but for near-deterministic spike trains, goodness-of-fit analyses reveal that the model does not fit very well in a statistical sense; the essential random part of the GLM is not captured.
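The fitting step described in this abstract can be illustrated in miniature. The sketch below simulates Bernoulli spike trains whose rate depends on the previous bin (a stand-in for the Izhikevich-neuron data; the generative coefficients are invented) and recovers the history-dependent GLM coefficients with Newton's method (IRLS) on the logistic log-likelihood.

```python
import numpy as np

# Sketch: Bernoulli GLM with a single past-spike covariate, fit by IRLS.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
T = 100_000
b_true = np.array([-2.5, 1.5])       # [baseline, previous-bin weight] (invented)

# Simulate spikes whose probability depends on the previous bin.
spikes = np.zeros(T)
for t in range(1, T):
    p = sigmoid(b_true[0] + b_true[1] * spikes[t - 1])
    spikes[t] = float(rng.random() < p)

# Design matrix: intercept + previous-bin spike indicator.
X = np.column_stack([np.ones(T - 1), spikes[:-1]])
y = spikes[1:]

# Newton's method (IRLS) on the convex logistic log-likelihood.
b = np.zeros(2)
for _ in range(25):
    p = sigmoid(X @ b)
    grad = X.T @ (y - p)
    hess = X.T @ (X * (p * (1.0 - p))[:, None])
    b += np.linalg.solve(hess, grad)
```

With ample noisy data the estimate b approaches b_true; the paper's point is that for near-deterministic spike trains the stochastic GLM description breaks down in goodness-of-fit terms even when the mean behavior is captured.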
Wu Shan-Shan; Wang Lei; Yang De-Ren
2011-01-01
The behavior of wafers and solar cells from the border of a multicrystalline silicon (mc-Si) ingot, which contain deteriorated regions, is investigated. It is found that the diffusion length distribution of minority carriers in the cells is uniform, and high efficiency of the solar cells (about 16%) is achieved. It is considered that the quality of the deteriorated regions could be improved to be similar to that of adjacent regions. Moreover, it is indicated that during general solar cell fabrication, phosphorus gettering and hydrogen passivation could significantly improve the quality of deteriorated regions, while aluminum gettering by RTP could not. Therefore, it is suggested that the border of a mc-Si ingot could be used to fabricate high efficiency solar cells, which will increase mc-Si utilization effectively. (condensed matter: structure, mechanical and thermal properties)
Simulations of the general circulation of the Martian atmosphere. I - Polar processes
Pollack, James B.; Haberle, Robert M.; Schaeffer, James; Lee, Hilda
1990-01-01
Numerical simulations of the Martian atmosphere general circulation are carried out for 50 simulated days, using a three-dimensional model, based on the primitive equations of meteorology, which incorporated the radiative effects of atmospheric dust on solar and thermal radiation. A large number of numerical experiments were conducted for alternative choices of seasonal date and dust optical depth. It was found that, as the dust content of the winter polar region increased, the rate of atmospheric CO2 condensation increased sharply. It is shown that the strong seasonal variation in the atmospheric dust content observed might cause a number of hemispheric asymmetries. These asymmetries include the greater prevalence of polar hoods in the northern polar region during winter, the lower albedo of the northern polar cap during spring, and the total dissipation of the northern CO2 ice cap during the warmer seasons.
Generalized role for the cerebellum in encoding internal models: evidence from semantic processing.
Moberget, Torgeir; Gullesen, Eva Hilland; Andersson, Stein; Ivry, Richard B; Endestad, Tor
2014-02-19
The striking homogeneity of cerebellar microanatomy is strongly suggestive of a corresponding uniformity of function. Consequently, theoretical models of the cerebellum's role in motor control should offer important clues regarding cerebellar contributions to cognition. One such influential theory holds that the cerebellum encodes internal models, neural representations of the context-specific dynamic properties of an object, to facilitate predictive control when manipulating the object. The present study examined whether this theoretical construct can shed light on the contribution of the cerebellum to language processing. We reasoned that the cerebellum might perform a similar coordinative function when the context provided by the initial part of a sentence can be highly predictive of the end of the sentence. Using functional MRI in humans we tested two predictions derived from this hypothesis, building on previous neuroimaging studies of internal models in motor control. First, focal cerebellar activation-reflecting the operation of acquired internal models-should be enhanced when the linguistic context leads terminal words to be predictable. Second, more widespread activation should be observed when such predictions are violated, reflecting the processing of error signals that can be used to update internal models. Both predictions were confirmed, with predictability and prediction violations associated with increased blood oxygenation level-dependent signal in the posterior cerebellum (Crus I/II). Our results provide further evidence for cerebellar involvement in predictive language processing and suggest that the notion of cerebellar internal models may be extended to the language domain.
McCaskey, Ursina; von Aster, Michael; O’Gorman Tuura, Ruth; Kucian, Karin
McCaskey, Ursina; von Aster, Michael; O'Gorman Tuura, Ruth; Kucian, Karin
2017-01-01
The link between number and space has been discussed in the literature for some time, resulting in the theory that number, space and time might be part of a generalized magnitude system. To date, several behavioral and neuroimaging findings support the notion of a generalized magnitude system, although contradictory results showing a partial overlap or separate magnitude systems are also found. The possible existence of a generalized magnitude processing area leads to the question how individuals with developmental dyscalculia (DD), known for deficits in numerical-arithmetical abilities, process magnitudes. By means of neuropsychological tests and functional magnetic resonance imaging (fMRI) we aimed to examine the relationship between number and space in typical and atypical development. Participants were 16 adolescents with DD (14.1 years) and 14 typically developing (TD) peers (13.8 years). In the fMRI paradigm participants had to perform discrete (arrays of dots) and continuous magnitude (angles) comparisons as well as a mental rotation task. In the neuropsychological tests, adolescents with dyscalculia performed significantly worse in numerical and complex visuo-spatial tasks. However, they showed similar results to TD peers when making discrete and continuous magnitude decisions during the neuropsychological tests and the fMRI paradigm. A conjunction analysis of the fMRI data revealed commonly activated higher order visual (inferior and middle occipital gyrus) and parietal (inferior and superior parietal lobe) magnitude areas for the discrete and continuous magnitude tasks. Moreover, no differences were found when contrasting both magnitude processing conditions, favoring the possibility of a generalized magnitude system. Group comparisons further revealed that dyscalculic subjects showed increased activation in domain general regions, whilst TD peers activate domain specific areas to a greater extent. In conclusion, our results point to the existence of a
Cincotti, Silvano; Ponta, Linda; Raberto, Marco; Scalas, Enrico
2005-05-01
In this paper, empirical analyses and computational experiments are presented on high-frequency data for a double-auction (book) market. The main objective of the paper is to generalize the order waiting-time process in order to properly model this empirical evidence. The empirical study is performed on the best bid and best ask data of 7 U.S. financial markets, for 30-stock time series. In particular, statistical properties of trading waiting times have been analyzed, and the quality of fits is evaluated by suitable statistical tests, i.e., by comparing empirical distributions with theoretical models. Starting from the statistical studies on real data, attention has been focused on the reproducibility of such results in an artificial market. The computational experiments have been performed within the Genoa Artificial Stock Market. In the market model, heterogeneous agents trade one risky asset in exchange for cash. Agents have zero intelligence and issue random limit or market orders depending on their budget constraints. The price is cleared by means of a limit order book. The order generation is modelled with a renewal process. Based on empirical trading estimation, the distribution of waiting times between two consecutive orders is modelled by a mixture of exponential processes. Results show that the empirical waiting-time distribution can be considered a generalization of a Poisson process. Moreover, the renewal process can approximate real data, and its implementation in the artificial stock market can reproduce the trading activity in a realistic way.
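The mixture-of-exponentials waiting-time model described in the abstract can be sketched in a few lines. The weights and rates below are hypothetical illustrations, not the paper's fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_waiting_times(n, weights, rates):
    """Draw n inter-order waiting times from a mixture of exponential laws."""
    weights = np.asarray(weights, dtype=float)
    rates = np.asarray(rates, dtype=float)
    # Pick a mixture component for each draw, then sample that exponential.
    comps = rng.choice(len(rates), size=n, p=weights / weights.sum())
    return rng.exponential(1.0 / rates[comps])

# Hypothetical two-component mixture: frequent fast orders plus a slow tail.
wt = sample_waiting_times(10_000, weights=[0.7, 0.3], rates=[2.0, 0.2])
# Theoretical mean is 0.7/2.0 + 0.3/0.2 = 1.85 time units.
```

Such a mixture defines a simple renewal process that generalizes the single-rate Poisson case, which is recovered when all component rates coincide.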
Njau, E.C.
1987-10-01
Complete analytical expressions for the distortion signals introduced into analogue signals by sampling and quantization processes are developed. These expressions are made up of terms that are wholly functions of the parameters of the original signals involved and hence are easy to evaluate numerically. It is shown in Parts 2 and 3 of this series that these expressions may be successfully used in the design and development of some electronic devices whose operation depends upon the above-named distortion signals. (author). 7 refs.
Boman, R.; Papeleux, L.; Ponthot, J. P.
2007-01-01
In this paper, the Arbitrary Lagrangian Eulerian formalism is used to compute the steady state of a 2D metal cutting operation and a 3D U-shaped cold roll forming process. Compared to the Lagrangian case, this method allows the use of a refined mesh near the tools, leading to an accurate representation of the chip formation (metal cutting) and the bending of the sheet (roll forming) with a limited computational time. The main problem of this kind of simulation is the rezoning of the nodes on the free surfaces of the sheet. A modified iterative isoparametric smoother is used to manage this geometrically complex and CPU-expensive task.
Therdsak Maitaouthong
2011-11-01
This article presents the factors affecting the integration of information literacy in the teaching and learning processes of general education courses at an undergraduate level, where information literacy is used as a tool in the student-centered teaching approach. The research was divided into two phases: (1) a study of factors at the policy level, a qualitative method conducted through in-depth interviews with the vice president for academic affairs and the Director of the General Education Management Center; and (2) a survey of factors in the teaching and learning processes, conducted by questioning lecturers of general education courses and librarians. The qualitative data was subjected to content analysis, and the quantitative data was analyzed using descriptive statistics, weighted score prioritization and percentages. Two major categories were found to have an impact on integrating information literacy in the teaching and learning of general education courses at an undergraduate level: (1) six factors at the policy level, namely institutional policy, administrative structure and system, administrators' roles, resources and infrastructures, learning resources and supporting programs, and teacher evaluation and development; and (2) eleven instructional factors: roles of lecturers, roles of librarians, roles of learners, knowledge and understanding of information literacy of lecturers and librarians, cooperation between librarians and lecturers, learning outcomes, teaching plans, teaching methods, teaching activities, teaching aids, and student assessment and evaluation.
Hadzidiakos, Daniel; Horn, Nadja; Degener, Roland; Buchner, Axel; Rehberg, Benno
2009-08-01
There have been reports of memory formation during general anesthesia. The process-dissociation procedure has been used to determine whether these are controlled (explicit/conscious) or automatic (implicit/unconscious) memories. This study used the process-dissociation procedure with the original measurement model and with one corrected for guessing to determine whether more accurate results were obtained in this setting. A total of 160 patients scheduled for elective surgery were enrolled. Memory for words presented during propofol and remifentanil general anesthesia was tested postoperatively by using a word-stem completion task in a process-dissociation procedure. To assign possible memory effects to different levels of anesthetic depth, the authors measured depth of anesthesia using the BIS XP monitor (Aspect Medical Systems, Norwood, MA). Word-stem completion performance showed no evidence of memory for intraoperatively presented words. Nevertheless, an evaluation of these data using the original measurement model for process-dissociation data suggested evidence of controlled (C = 0.05; 95% confidence interval [CI] 0.02-0.08) and automatic (A = 0.11; 95% CI 0.09-0.12) memory processes, whereas no such evidence of memory processes was obtained with the model corrected for guessing. The authors report and discuss parallel findings for published data sets that were generated by using the process-dissociation procedure. Patients had no memories for auditory information presented during propofol/remifentanil anesthesia after midazolam premedication. The use of the process-dissociation procedure with the original measurement model erroneously detected memories, whereas the extended model, corrected for guessing, correctly revealed no memory.
The protection of fundamental human rights in criminal process
General report
Chrisje Brants
2009-10-01
This contribution examines the effect of the uniform standards of human rights in international conventions on criminal process in different countries and identifies factors inherent in national systems that influence the scope of international standards and the way in which they are implemented in a national context. Three overarching issues influence the reception of international fundamental rights and freedoms in criminal process: constitutional arrangements, legal tradition and culture, and practical circumstances. There is no such thing as the uniform implementation of convention standards; even in Europe, where the European Convention on Human Rights and Fundamental Freedoms and the case law of the European Court play a significant role, there is still much diversity in the actual implementation of international norms due to the influence of legal traditions, which form a counterforce to the weight of convention obligations. An even greater counterforce is at work in practical circumstances that can undermine international norms, most especially global issues of security, crime control and combating terrorism. Although convention norms are still in place, there is a very real risk that they are circumvented or at least diluted in order to increase effective crime control.
Generalization of the Poincare sphere to process 2D displacement signals
Sciammarella, Cesar A.; Lamberti, Luciano
2017-06-01
Traditionally the multiple phase method has been considered an essential tool for phase information recovery. The in-quadrature phase method, theoretically an alternative pathway to the same goal, has failed in actual applications. In a previous paper dealing with 1D signals, the authors showed that, properly implemented, the in-quadrature method yields phase values with the same accuracy as the multiple phase method. The present paper extends the methodology developed in 1D to 2D. This extension is not a straightforward process and requires the introduction of a number of additional concepts and developments. The concept of monogenic function provides the tools required for the extension process. The monogenic function has a graphic representation through the Poincare sphere, familiar in the field of photoelasticity and, through the developments introduced in this paper, connected to the analysis of displacement fringe patterns. The paper is illustrated with examples of application that show that the multiple phase method and the in-quadrature method are two aspects of the same basic theoretical model.
Ward nurses' experiences of the discharge process between intensive care unit and general ward.
Kauppi, Wivica; Proos, Matilda; Olausson, Sepideh
2018-05-01
Intensive care unit (ICU) discharges are challenging practices that carry risks for patients. Despite the existing body of knowledge, there are still difficulties in clinical practice concerning unplanned ICU discharges, specifically where there is no step-down unit. The aim of this study was to explore general ward nurses' experiences of caring for patients being discharged from an ICU. Data were collected from focus groups and in-depth interviews with a total of 16 nurses from three different hospitals in Sweden. An inductive qualitative design was chosen. The analysis revealed three themes that reflect the challenges in nursing former ICU patients: a vulnerable patient, nurses' powerlessness and organizational structure. The nurses described the challenge of nursing a fragile patient based on several aspects. They expressed feeling unrealistic demands when caring for a fragile former ICU patient. The demands were related to their own profession and knowledge regarding how to care for this group of patients. The organizational structure had an impact on how the nurses' caring practice could be realized. This evoked ethical concerns that the nurses had to cope with as the organization's care guidelines did not always favour the patients. The structure of the organization and its leadership appear to have a significant impact on the nurses' ability to offer patients the care they need. This study sheds light on the need for extended outreach services and intermediate care in order to meet the needs of patients after the intensive care period. © 2018 British Association of Critical Care Nurses.
Song, Yun S; Steinrücken, Matthias
2012-03-01
The transition density function of the Wright-Fisher diffusion describes the evolution of population-wide allele frequencies over time. This function has important practical applications in population genetics, but finding an explicit formula under a general diploid selection model has remained a difficult open problem. In this article, we develop a new computational method to tackle this classic problem. Specifically, our method explicitly finds the eigenvalues and eigenfunctions of the diffusion generator associated with the Wright-Fisher diffusion with recurrent mutation and arbitrary diploid selection, thus allowing one to obtain an accurate spectral representation of the transition density function. Simplicity is one of the appealing features of our approach. Although our derivation involves somewhat advanced mathematical concepts, the resulting algorithm is quite simple and efficient, only involving standard linear algebra. Furthermore, unlike previous approaches based on perturbation, which is applicable only when the population-scaled selection coefficient is small, our method is nonperturbative and is valid for a broad range of parameter values. As a by-product of our work, we obtain the rate of convergence to the stationary distribution under mutation-selection balance.
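The paper's spectral method is more involved, but the underlying Wright-Fisher model with diploid selection is straightforward to simulate directly. A minimal sketch follows; the fitness parametrization and all parameter values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def wright_fisher_diploid(p0, n_pop, s, h, generations):
    """Allele-frequency trajectory under a common diploid selection model.

    Genotype fitnesses are taken as AA = 1+s, Aa = 1+h*s, aa = 1.
    Each generation, selection shifts the frequency deterministically,
    then binomial sampling of 2*n_pop gametes adds genetic drift.
    """
    p = p0
    traj = [p]
    for _ in range(generations):
        w_bar = p * p * (1 + s) + 2 * p * (1 - p) * (1 + h * s) + (1 - p) ** 2
        p_sel = (p * p * (1 + s) + p * (1 - p) * (1 + h * s)) / w_bar
        p = rng.binomial(2 * n_pop, p_sel) / (2 * n_pop)
        traj.append(p)
    return np.array(traj)

traj = wright_fisher_diploid(p0=0.1, n_pop=500, s=0.05, h=0.5, generations=200)
```

The diffusion studied in the paper arises as the large-population limit of this discrete process, with time rescaled in units of 2N generations.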
Keanini, R.G.
2011-01-01
Research highlights: → Systematic approach for physically probing nonlinear and random evolution problems. → Evolution of vortex sheets corresponds to evolution of an Ornstein-Uhlenbeck process. → Organization of near-molecular scale vorticity mediated by hydrodynamic modes. → Framework allows calculation of vorticity evolution within random strain fields. - Abstract: A framework which combines Green's function (GF) methods and techniques from the theory of stochastic processes is proposed for tackling nonlinear evolution problems. The framework, established by a series of easy-to-derive equivalences between Green's function and stochastic representative solutions of linear drift-diffusion problems, provides a flexible structure within which nonlinear evolution problems can be analyzed and physically probed. As a preliminary test bed, two canonical nonlinear evolution problems - Burgers' equation and the nonlinear Schroedinger equation - are first treated. In the first case, the framework provides a rigorous, probabilistic derivation of the well known Cole-Hopf ansatz. Likewise, in the second, the machinery allows systematic recovery of a known soliton solution. The framework is then applied to a fairly extensive exploration of physical features underlying the evolution of randomly stretched and advected Burgers vortex sheets. Here, the governing vorticity equation corresponds to the Fokker-Planck equation of an Ornstein-Uhlenbeck process, a correspondence that motivates an investigation of sub-sheet vorticity evolution and organization. Under the assumption that weak hydrodynamic fluctuations organize disordered, near-molecular-scale, sub-sheet vorticity, it is shown that these modes consist of two weakly damped counter-propagating cross-sheet acoustic modes, a diffusive cross-sheet shear mode, and a diffusive cross-sheet entropy mode. Once a consistent picture of in-sheet vorticity evolution is established, a number of analytical results, describing the
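An Ornstein-Uhlenbeck process like the one underlying the vortex-sheet correspondence above can be sampled with the exact one-step update for dX = θ(μ − X)dt + σ dW. The parameters below are arbitrary illustrations, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_ou(x0, theta, mu, sigma, dt, n_steps):
    """Sample a path of dX = theta*(mu - X) dt + sigma dW.

    Uses the exact Gaussian transition density of the OU process,
    so the step size dt introduces no discretization bias.
    """
    a = np.exp(-theta * dt)
    noise_sd = sigma * np.sqrt((1.0 - a * a) / (2.0 * theta))
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        x[k + 1] = mu + (x[k] - mu) * a + noise_sd * rng.standard_normal()
    return x

path = simulate_ou(x0=1.0, theta=2.0, mu=0.0, sigma=0.5, dt=0.01, n_steps=20_000)
# After a transient, the sample variance approaches sigma**2/(2*theta) = 0.0625.
```

The exact update is preferable to naive Euler-Maruyama here because the OU transition density is available in closed form.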
NJOY-97, General ENDF/B Processing System for Reactor Design Problems
1999-01-01
1 - Description of program or function: The NJOY nuclear data processing system is a modular computer code used for converting evaluated nuclear data in the ENDF format into libraries useful for applications calculations. Because the Evaluated Nuclear Data File (ENDF) format is used all around the world (e.g., ENDF/B-VI in the US, JEF-2.2 in Europe, JENDL-3.2 in Japan, BROND-2.2 in Russia), NJOY gives its users access to a wide variety of the most up-to-date nuclear data. NJOY provides comprehensive capabilities for processing evaluated data, and it can serve applications ranging from continuous-energy Monte Carlo (MCNP), through deterministic transport codes (DANT, ANISN, DORT), to reactor lattice codes (WIMS, EPRI). NJOY handles a wide variety of nuclear effects, including resonances, Doppler broadening, heating (KERMA), radiation-damage, thermal scattering (even cold moderators), gas production, neutrons and charged particles, photo-atomic interactions, self shielding, probability tables, photon production, and high-energy interactions (to 150 MeV). Output can include printed listings, special library files for applications, and Postscript graphics (plus colour). More information on NJOY is available from the developer's home page at http://t2.lanl.gov. Follow the Tourbus section of the Tour area to find notes from the ICTP lectures held at Trieste in March 1998 on the ENDF format and on the NJOY code. 2 - Methods: NJOY97 consists of a set of modules, each performing a well-defined processing task. Each of these modules is essentially a separate computer program linked together by input and output files and a few common constants. The methods and instructions on how to use them are documented in the LA-12740-M report on NJOY91 and in the 'README' file. No new published document is yet available. NJOY97 is a cleaned up version of NJOY94.105 that features compatibility with a wider variety of compilers and machines, explicit double precision for 32-bit systems, a
Prototype performance studies of a Full Mesh ATCA-based General Purpose Data Processing Board
Okumura, Yasuyuki; Liu, Tiehui Ted; Yin, Hang
2013-01-01
High luminosity conditions at the LHC pose many unique challenges for potential silicon based track trigger systems. One of the major challenges is data formatting, where hits from thousands of silicon modules must first be shared and organized into overlapping eta-phi trigger towers. Communication between nodes requires high bandwidth, low latency, and flexible real time data sharing, for which a full mesh backplane is a natural solution. A custom Advanced Telecommunications Computing Architecture data processing board is designed with the goal of creating a scalable architecture abundant in flexible, non-blocking, high bandwidth board to board communication channels while keeping the design as simple as possible. We have performed the first prototype board testing and our first attempt at designing the prototype system has proven to be successful. Leveraging the experience we gained through designing, building and testing the prototype board system we are in the final stages of laying out the next generatio...
General information on licensing process in Bulgaria and disused sealed sources
Nizamska, M.
2003-01-01
The basic legal framework for radiation protection and the safety of radiation sources is given in the report. The authorisation process is described. Actual data for the system of authorisation about SIR during 2002/2003 are given. The planned activities related to RAW management are: commissioning of the complex for treatment, conditioning and storage of RAW in Kozloduy NPP - by the end of 2003; investigation of Gabra site for construction of institutional waste disposal facility - by the end of 2004; implementation of program for reconstruction and modernisation of Novi Han Repository - by the end of 2007; site selection for the national RAW disposal facility - by the end of 2008. The Nuclear Energy Act defines the following future activities: establishment of the State Enterprise 'RAW' in 2004; development of new secondary legislation for safe management of SF and RAW until July 2004; update of the National Strategy for Safe Management of SF and RAW until the end of 2003.
General description of few-body break-up processes at threshold
Barrachina, R.O.
2005-01-01
In this communication we describe the effects produced by an N-body threshold behavior in N + 1 body break-up processes, as it occurs in situations where one of the fragments acquires almost all the excess energy of the system. Furthermore, we relate the appearance of discontinuities in single-particle multiply differential cross sections to the threshold behavior of the remaining particles, and describe the applicability of these ideas to different systems from atomic, molecular and nuclear collision physics. We finally show that, even though the study of ultracold collisions represents the direct way of gathering information on a break-up system near threshold, the analysis of high-energy collisions provides an alternative, and sometimes advantageous, approach
Generalization of the Wide-Sense Markov Concept to a Widely Linear Processing
Espinosa-Pulido, Juan Antonio; Navarro-Moreno, Jesús; Fernández-Alcalá, Rosa María; Ruiz-Molina, Juan Carlos; Oya-Lechuga, Antonia; Ruiz-Fuentes, Nuria
2014-01-01
In this paper we show that the classical definition and the associated characterizations of wide-sense Markov (WSM) signals are not valid for improper complex signals. Accordingly, we propose an extension of the concept of WSM to a widely linear (WL) setting and study new characterizations. Specifically, we introduce a new class of signals, called widely linear Markov (WLM) signals, and we analyze some of their properties based either on second-order properties or on state-space models from a WL processing standpoint. The study is performed in both the forwards and backwards directions of time. Thus, we provide two forwards and backwards Markovian representations for WLM signals. Finally, different recursive estimation algorithms are obtained for these models.
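The gain of widely linear over strictly linear processing for an improper signal can be illustrated with a least-squares fit that also uses the conjugate regressor. The signal model below is a made-up example, not one from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# An improper complex signal: unequal real/imaginary variances.
n = 100_000
x = rng.standard_normal(n) + 1j * 0.2 * rng.standard_normal(n)
noise = 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
y = (1.5 + 0.5j) * x + 0.8 * np.conj(x) + noise

# Widely linear fit y ~ a*x + b*conj(x): regress on x and its conjugate.
X = np.column_stack([x, np.conj(x)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Strictly linear fit y ~ a*x for comparison.
a_lin = (x.conj() @ y) / (x.conj() @ x)

mse_wl = np.mean(np.abs(y - X @ coef) ** 2)
mse_lin = np.mean(np.abs(y - a_lin * x) ** 2)
```

Because the widely linear model contains the strictly linear one as a special case (b = 0), its residual is never larger, and for improper data like this it is strictly smaller; this is the practical motivation for the WL setting.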
A Full Mesh ATCA-based General Purpose Data Processing Board: Pulsar II
Olsen, J; Okumura, Y
2014-01-01
High luminosity conditions at the LHC pose many unique challenges for potential silicon based track trigger systems. Among those challenges is data formatting, where hits from thousands of silicon modules must first be shared and organized into overlapping trigger towers. Other challenges exist for Level-1 track triggers, where many parallel data paths may be used for high speed, time multiplexed data transfers. Communication between processing nodes requires high bandwidth, low latency, and flexible real time data sharing, for which a full mesh backplane is a natural fit. A custom full mesh enabled ATCA board called the Pulsar II has been designed with the goal of creating a scalable architecture abundant in flexible, non-blocking, high bandwidth board-to-board communication channels while keeping the design as simple as possible.
NJOY91, General ENDF/B Processing System for Reactor Design Problems
MacFarlane, R.E.; Barrett, R.J.; Muir, D.W.; Boicourt, R.M.
1997-01-01
1 - Description of problem or function: The NJOY nuclear data processing system is a comprehensive computer code package for producing pointwise and multigroup neutron, photon, and charged particle cross sections from ENDF/B evaluated nuclear data. NJOY-89 is a substantial upgrade of the previous release. It includes photon production and photon interaction capabilities, heating calculations, covariance processing, and thermal scattering capabilities. It is capable of processing data in ENDF/B-4, ENDF/B-5, and ENDF/B-6 formats for evaluated data (to the extent that the latter have been frozen at the time of this release). NJOY-91.118: This is the last in the NJOY-91 series. It uses the same module structure as the earlier versions and its graphics options depend on DISSPLA. NJOY91.118 includes bug fixes, improvements in several modules, and some new capabilities. Information on the changes is included in the README file. A new test problem was added to test some ENDF/B-6 features, including Reich-Moore resonance reconstruction, energy-angle matrices in GROUPR, and energy-angle distributions in ACER. The 91.118 release is basically configured for UNIX. Short descriptions of the different modules follow: RECONR Reconstructs pointwise (energy-dependent) cross sections from ENDF/B resonance parameters and interpolation schemes. BROADR Doppler broadens and thins pointwise cross sections. UNRESR Computes effective self-shielded pointwise cross sections in the unresolved-resonance region. HEATR Generates pointwise heat production cross sections (KERMA factors) and radiation-damage-energy production cross sections. THERMR Produces incoherent inelastic energy-to-energy matrices for free or bound scatterers, coherent elastic cross sections for hexagonal materials, and incoherent elastic cross sections. GROUPR Generates self-shielded multigroup cross sections, group-to-group neutron scattering matrices, and photon production matrices from pointwise input. GAMINR Calculates
Daniel Alfonso-Robaina
2011-09-01
To improve the process approach in the redesign of the company's organization, the adaptation of six phases is necessary. The proposal presents the activities of each phase of the Procedure of organizational redesign to improve the process approach, as well as their inputs and outputs. The procedure proposed in this research is the result of merging several of those studied, taking the procedure of Rummler and Brache (1995) [1] as its basis. Techniques such as interviews, brainstorming and bibliographical search proved useful in the research, along with tools such as the Process Map and the General Model of Organization. Using these techniques and tools, insufficient integrated management of processes was identified as the critical business issue at the Explomat company; this weakens the entity's ability to take advantage of the opportunities offered by its environment, endangering the fulfilment of its mission. Based on an analysis of the level of integration of the management system, using relationship matrices, improvements were projected by drawing up the "should be" state.
Abhinav Parihar
2018-04-01
Artificial neural networks can harness stochasticity in multiple ways to enable a vast class of computationally powerful models. Boltzmann machines and other stochastic neural networks have been shown to outperform their deterministic counterparts by allowing dynamical systems to escape local energy minima. Electronic implementation of such stochastic networks is currently limited to the addition of algorithmic noise to digital machines, which is inherently inefficient, although recent efforts to harness physical noise in devices for stochasticity have shown promise. To succeed in fabricating electronic neuromorphic networks we need experimental evidence of devices with measurable and controllable stochasticity, complemented by the development of reliable statistical models of such observed stochasticity. Current research literature has sparse evidence of the former and a complete lack of the latter. This motivates the current article, where we demonstrate a stochastic neuron using an insulator-metal-transition (IMT) device, based on an electrically induced phase transition, in series with a tunable resistance. We show that an IMT neuron has dynamics similar to a piecewise linear FitzHugh-Nagumo (FHN) neuron and incorporates all characteristics of a spiking neuron in the device phenomena. We experimentally demonstrate spontaneous stochastic spiking along with electrically controllable firing probabilities using Vanadium Dioxide (VO2) based IMT neurons, which show a sigmoid-like transfer function. The stochastic spiking is explained by two noise sources - thermal noise and threshold fluctuations - which act as precursors of bifurcation. As such, the IMT neuron is modeled as an Ornstein-Uhlenbeck (OU) process with a fluctuating boundary, resulting in transfer curves that closely match experiments. The moments of interspike intervals are calculated analytically by extending the first-passage-time (FPT) models for the Ornstein-Uhlenbeck (OU) process to include a
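A minimal sketch of the modeling idea in the abstract above, an OU membrane variable that spikes on hitting a boundary, can be written as follows. Here the boundary is fixed rather than fluctuating, and all parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

def ou_spike_times(theta, mu, sigma, v_reset, v_thresh, dt, t_max):
    """Spike times of an OU membrane voltage with threshold-and-reset.

    Euler-Maruyama integration of dV = theta*(mu - V) dt + sigma dW;
    crossing v_thresh emits a spike and resets V to v_reset.
    """
    v = v_reset
    spikes = []
    t = 0.0
    noise_sd = sigma * np.sqrt(dt)
    while t < t_max:
        v += theta * (mu - v) * dt + noise_sd * rng.standard_normal()
        t += dt
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
    return np.array(spikes)

spikes = ou_spike_times(theta=1.0, mu=0.9, sigma=0.3, v_reset=0.0,
                        v_thresh=1.0, dt=1e-3, t_max=100.0)
isi = np.diff(spikes)  # interspike intervals, the quantity FPT models describe
```

The statistics of `isi` are exactly what the first-passage-time analysis mentioned in the abstract predicts analytically for the constant-boundary case.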
Stochastic Models for Laser Propagation in Atmospheric Turbulence.
Leland, Robert Patton
In this dissertation, stochastic models for laser propagation in atmospheric turbulence are considered. A review of the existing literature on laser propagation in the atmosphere and white noise theory is presented, with a view toward relating the white noise integral and Ito integral approaches. The laser beam intensity is considered as the solution to a random Schroedinger equation, or forward scattering equation. This model is formulated in a Hilbert space context as an abstract bilinear system with a multiplicative white noise input, as in the literature. The model is also formulated in the Banach space of Fresnel class functions to allow the plane wave case and the application of path integrals. Approximate solutions to the Schroedinger equation of the Trotter-Kato product form are shown to converge for each white noise sample path. The product forms are shown to be physical random variables, allowing an Ito integral representation. The corresponding Ito integrals are shown to converge in mean square, providing a white noise basis for the Stratonovich correction term associated with this equation. Product form solutions for Ornstein-Uhlenbeck process inputs were shown to converge in mean square as the input bandwidth was expanded. A digital simulation of laser propagation in strong turbulence was used to study properties of the beam. Empirical distributions for the irradiance function were estimated from simulated data, and the log-normal and Rice-Nakagami distributions predicted by the classical perturbation methods were seen to be inadequate. A gamma distribution fit the simulated irradiance distribution well in the vicinity of the boresight. Statistics of the beam were seen to converge rapidly as the bandwidth of an Ornstein-Uhlenbeck process was expanded to its white noise limit. Individual trajectories of the beam were presented to illustrate the distortion and bending of the beam due to turbulence. Feynman path integrals were used to calculate an
Signal processing and general purpose data acquisition system for on-line tomographic measurements
Murari, A.; Martin, P.; Hemming, O.; Manduchi, G.; Marrelli, L.; Taliercio, C.; Hoffmann, A.
1997-01-01
New analog signal conditioning electronics and data acquisition systems have been developed for the soft x-ray and bolometric tomography diagnostic in the reversed field pinch experiment (RFX). For the soft x-ray detectors the analog signal processing includes a fully differential current to voltage conversion, with up to a 200 kHz bandwidth. For the bolometers, a 50 kHz carrier frequency amplifier allows a maximum bandwidth of 10 kHz. In both cases the analog signals are digitized with a 1 MHz sampling rate close to the diagnostic and are transmitted via a transparent asynchronous xmitter/receiver interface (TAXI) link to purpose-built Versa Module Europa (VME) modules which perform data acquisition. A software library has been developed for data preprocessing and tomographic reconstruction. It has been written in the C language and is self-contained, i.e., no additional mathematical library is required. The package is therefore platform-free: in particular, it can perform online analysis in a real-time application, such as continuous display and feedback, and is portable for long duration fusion or other physical experiments. Due to the modular organization of the library, new preprocessing and analysis modules can be easily integrated in the environment. This software is implemented in RFX over three different platforms: OpenVMS, Digital UNIX, and a VME 68040 CPU.
NJOY-94, General ENDF/B Processing System for Reactor Design Problems
1997-01-01
1 - Description of program or function: The NJOY nuclear data processing system is a comprehensive computer code system for producing pointwise and multigroup cross sections and related quantities from ENDF/B evaluated nuclear data in the ENDF format, including the latest US library, ENDF/B-VI. The NJOY code works with neutrons, photons, and charged particles and produces libraries for a wide variety of particle transport and reactor analysis codes. It is capable of processing data in ENDF/B-4, ENDF/B-5, and ENDF/B-6 formats for evaluated data. Short descriptions of the different modules follow:
RECONR reconstructs pointwise cross sections from ENDF/B resonance parameters and interpolation schemes.
BROADR Doppler broadens and thins pointwise cross sections.
UNRESR computes effective self-shielded pointwise cross sections in the unresolved-resonance region.
HEATR generates pointwise heat production cross sections and radiation-damage-energy production cross sections.
THERMR produces incoherent inelastic energy-to-energy matrices for free or bound scatterers, coherent elastic cross sections for hexagonal materials, and incoherent elastic cross sections.
GROUPR generates self-shielded multigroup cross sections, group-to-group neutron scattering matrices, and photon production matrices from pointwise input.
GAMINR calculates multigroup photon interaction cross sections and KERMA factors and group-to-group photon scattering matrices.
ERRORR produces multigroup covariance matrices from ENDF/B uncertainties.
COVR reads the output of ERRORR and performs covariance plotting and output-formatting operations.
DTFR formats multigroup data for transport codes such as DTF-IV and ANISN.
CCCCR formats multigroup data for the CCCC standard interface files ISOTXS, BRKOXS, and DLAYXS.
MATXSR formats multigroup data for the MATXS cross section interface file.
ACER prepares libraries for the Los Alamos continuous-energy Monte Carlo code MCNP.
POWR prepares libraries for the EPRI
Chu, Shih-I.; Telnov, Dmitry A.
2004-02-01
The advancement of high-power and short-pulse laser technology in the past two decades has generated considerable interest in the study of multiphoton and very high-order nonlinear optical processes of atomic and molecular systems in intense and superintense laser fields, leading to the discovery of a host of novel strong-field phenomena which cannot be understood by conventional perturbation theory. The Floquet theorem and the time-independent Floquet Hamiltonian method provide a powerful theoretical framework for the study of bound-bound multiphoton transitions driven by periodically time-dependent fields. However, a number of significant strong-field processes cannot be directly treated by the conventional Floquet methods. In this review article, we discuss several recent developments of generalized Floquet theorems, formalisms, and quasienergy methods, beyond the conventional Floquet theorem, for accurate nonperturbative treatment of a broad range of strong-field atomic and molecular processes and phenomena of current interest. Topics covered include (a) the artificial intelligence (AI) most-probable-path approach (MPPA) for effective treatment of ultralarge Floquet matrix problems; (b) non-Hermitian Floquet formalisms and complex quasienergy methods for nonperturbative treatment of bound-free and free-free processes such as multiphoton ionization (MPI) and above-threshold ionization (ATI) of atoms and molecules, multiphoton dissociation (MPD) and above-threshold dissociation (ATD) of molecules, chemical bond softening and hardening, charge-resonance enhanced ionization (CREI) of molecular ions, and multiple high-order harmonic generation (HHG); (c) the many-mode Floquet theorem (MMFT) for exact treatment of multiphoton processes in multi-color laser fields with a nonperiodic time-dependent Hamiltonian; (d) the Floquet-Liouville supermatrix (FLSM) formalism for exact nonperturbative treatment of the time-dependent Liouville equation (allowing for relaxations and
2008-06-01
This report evaluates alternative processes that could be used to produce Pu-238 fueled General Purpose Heat Sources (GPHS) for radioisotope thermoelectric generators (RTG). The current GPHS fabrication process has remained essentially unchanged since its development in the 1970s. Meanwhile, 30 years of technological advancements have been made in the fields of chemistry, manufacturing, ceramics, and control systems. At the Department of Energy’s request, alternate manufacturing methods were compared to current methods to determine if alternative fabrication processes could reduce the hazards, especially the production of respirable fines, while producing an equivalent GPHS product. An expert committee performed the evaluation with input from four national laboratories experienced in Pu-238 handling.
Ubiquitin-aldehyde: a general inhibitor of ubiquitin-recycling processes
Hershko, A.; Rose, I.A.
1987-01-01
The generation and characterization of ubiquitin (Ub)-aldehyde, a potent inhibitor of Ub-C-terminal hydrolase, has previously been reported. The authors examine the action of this compound on the Ub-mediated proteolytic pathway using the system derived from rabbit reticulocytes. Addition of Ub-aldehyde was found to strongly inhibit breakdown of added ¹²⁵I-labeled lysozyme, but inhibition was overcome by increasing concentrations of Ub. The following evidence shows the effect of Ub-aldehyde on protein breakdown to be indirectly caused by its interference with the recycling of Ub, leading to exhaustion of the supply of free Ub: (i) Ub-aldehyde markedly increased the accumulation of Ub-protein conjugates coincident with a much decreased rate of conjugate breakdown; (ii) release of Ub from isolated Ub-protein conjugates in the absence of ATP (and therefore not coupled to protein degradation) is markedly inhibited by Ub-aldehyde. On the other hand, the ATP-dependent degradation of the protein moiety of Ub conjugates, which is an integral part of the proteolytic process, is not inhibited by this agent; (iii) direct measurement of levels of free Ub showed a rapid disappearance caused by the inhibitor. The Ub is found to be distributed in derivatives of a wide range of molecular weight classes. It thus seems that Ub-aldehyde, previously demonstrated to inhibit the hydrolysis of Ub conjugates of small molecules, also inhibits the activity of a series of enzymes that regenerate free Ub from adducts with proteins and intermediates in protein breakdown
Sotirov, Sotir
2016-01-01
The book offers a comprehensive and timely overview of advanced mathematical tools for both uncertainty analysis and modeling of parallel processes, with a special emphasis on intuitionistic fuzzy sets and generalized nets. The different chapters, written by active researchers in their respective areas, are structured to provide a coherent picture of this interdisciplinary yet still evolving field of science. They describe key tools and give practical insights into and research perspectives on the use of Atanassov's intuitionistic fuzzy sets and logic, and generalized nets for describing and dealing with uncertainty in different areas of science, technology and business, in a single, to date unique book. Here, readers find theoretical chapters, dealing with intuitionistic fuzzy operators, membership functions and algorithms, among other topics, as well as application-oriented chapters, reporting on the implementation of methods and relevant case studies in management science, the IT industry, medicine and/or ...
Peters, H.P.; Renn, O.
1983-01-01
The perception of risk has become a major research field since scientists and politicians recognized that scientific risk studies, like the Rasmussen Report on nuclear energy, had little impact on public acceptance. With our surveys we aimed to combine two methodological approaches (object perception and attitude theory) and to develop a technique in which the psychological process by which the general public perceives and assesses risk objects could be traced and analyzed. Psychological experiments to isolate relevant factors of qualitative risk properties, as well as demographic surveys to measure belief structure, were carried out. Our results indicate that, contrary to the common conception among natural scientists, people in general have a good ability to estimate the expected value of different risks. But beyond this estimation of fatalities, people also use other criteria (such as personal control) to rank different objects with respect to their riskiness. The perceived risk is but one factor influencing attitude. A simplified model of the acceptance-building process is developed, showing that acceptance-building is not a purely individual process. Individuals are linked together by their social environment, so that every individual decision is influenced by the decisions of other people
Lizarralde, I; Fernández-Arévalo, T; Brouckaert, C; Vanrolleghem, P; Ikumi, D S; Ekama, G A; Ayesa, E; Grau, P
2015-05-01
This paper introduces a new general methodology for incorporating physico-chemical and chemical transformations into multi-phase wastewater treatment process models in a systematic and rigorous way under a Plant-Wide modelling (PWM) framework. The methodology presented in this paper requires the selection of the relevant biochemical, chemical and physico-chemical transformations taking place and the definition of the mass transport for the co-existing phases. As an example a mathematical model has been constructed to describe a system for biological COD, nitrogen and phosphorus removal, liquid-gas transfer, precipitation processes, and chemical reactions. The capability of the model has been tested by comparing simulated and experimental results for a nutrient removal system with sludge digestion. Finally, a scenario analysis has been undertaken to show the potential of the obtained mathematical model to study phosphorus recovery. Copyright © 2015 Elsevier Ltd. All rights reserved.
A comparison of ancestral state reconstruction methods for quantitative characters.
Royer-Carenzi, Manuela; Didier, Gilles
2016-09-07
Choosing an ancestral state reconstruction method among the alternatives available for quantitative characters may be puzzling. We present here a comparison of seven of them, namely the maximum likelihood, restricted maximum likelihood, generalized least squares under Brownian, Brownian-with-trend and Ornstein-Uhlenbeck models, phylogenetic independent contrasts and squared parsimony methods. A review of the relations between these methods shows that the maximum likelihood, the restricted maximum likelihood and the generalized least squares under Brownian model infer the same ancestral states and can only be distinguished by the distributions accounting for the reconstruction uncertainty which they provide. The respective accuracy of the methods is assessed over character evolution simulated under a Brownian motion with (and without) directional or stabilizing selection. We give the general form of ancestral state distributions conditioned on leaf states under the simulation models. Ancestral distributions are used first, to give a theoretical lower bound of the expected reconstruction error, and second, to develop an original evaluation scheme which is more efficient than comparing the reconstructed and the simulated states. Our simulations show that: (i) the distributions of the reconstruction uncertainty provided by the methods generally make sense (some more than others); (ii) it is essential to detect the presence of an evolutionary trend and to choose a reconstruction method accordingly; (iii) all the methods show good performances on characters under stabilizing selection; (iv) without trend or stabilizing selection, the maximum likelihood method is generally the most accurate. Copyright © 2016 Elsevier Ltd. All rights reserved.
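The simulation models named above (Brownian motion, with or without a directional trend or stabilizing selection) can be sketched along a single branch with a simple Euler-Maruyama scheme. This is an illustrative toy, not the authors' evaluation code; all parameter values are arbitrary:

```python
import numpy as np

def simulate_trait(model="BM", t=1.0, n=1000, sigma=1.0, trend=0.5,
                   alpha=2.0, optimum=0.0, x0=0.0, rng=None):
    """Simulate a quantitative character along one branch of length t.
    BM:    dX = sigma dW                      (pure Brownian motion)
    trend: dX = trend dt + sigma dW           (directional selection)
    OU:    dX = alpha*(optimum - X) dt + sigma dW  (stabilizing selection)
    Discretized with n Euler-Maruyama steps."""
    rng = np.random.default_rng(rng)
    dt = t / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        drift = {"BM": 0.0, "trend": trend,
                 "OU": alpha * (optimum - x[i])}[model]
        x[i + 1] = x[i] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x
```

Repeating this along every branch of a phylogeny, from root to leaves, yields leaf states on which the reconstruction methods can then be compared.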
2010-04-01
..., consent to service of process by a nonresident general partner of a broker-dealer firm. This form shall be... 17 Commodity and Securities Exchanges 3 2010-04-01 2010-04-01 false Form 10-M, consent to service of process by a nonresident general partner of a broker-dealer firm. 249.510 Section 249.510...
2010-04-01
... 17 Commodity and Securities Exchanges 3 2010-04-01 2010-04-01 false Form ADV-NR, appointment of agent for service of process by non-resident general partner and non-resident managing agent of an... agent for service of process by non-resident general partner and non-resident managing agent of an...
Adriaensens, Stefanie; Beyers, Wim; Struyf, Elke
2015-01-01
The theory that self-esteem is substantially constructed based on social interactions implies that having a stutter could have a negative impact on self-esteem. Specifically, self-esteem during adolescence, a period of life characterized by increased self-consciousness, could be at risk. In addition to studying mean differences between stuttering and non-stuttering adolescents, this article concentrates on the influence of stuttering severity on domain-specific and general self-esteem. Subsequently, we investigate if covert processes on negative communication attitudes, experienced stigma, non-disclosure of stuttering, and (mal)adaptive perfectionism mediate the relationship between stuttering severity and self-esteem. Our sample comprised 55 stuttering and 76 non-stuttering adolescents. They were asked to fill in a battery of questionnaires, consisting of: Subjective Screening of Stuttering, Self-Perception Profile for Adolescents, Erickson S-24, Multidimensional Perfectionism Scale, and the Stigmatization and Disclosure in Adolescents Who Stutter Scale. SEM (structural equation modeling) analyses showed that stuttering severity negatively influences adolescents' evaluations of social acceptance, school competence, the competence to experience a close friendship, and global self-esteem. Maladaptive perfectionism and especially negative communication attitudes fully mediate the negative influence of stuttering severity on self-esteem. Group comparison showed that the mediation model applies to both stuttering and non-stuttering adolescents. We acknowledge the impact of having a stutter on those domains of the self in which social interactions and communication matter most. We then accentuate that negative attitudes about communication situations and excessive worries about saying things in ways they perceive as wrong are important processes to consider with regard to the self-esteem of adolescents who stutter. Moreover, we provide evidence that these covert
Cervera Peris, Mercedes; Alonso Rorís, Víctor Manuel; Santos Gago, Juan Manuel; Álvarez Sabucedo, Luis; Wanden-Berghe, Carmina; Sanz-Valero, Javier
2018-04-03
Any system applied to the control of parenteral nutrition (PN) ought to prove that the process meets the established requirements and include a repository of records to allow evaluation of the information about PN processes at any time. The goal of the research was to evaluate the mobile health (mHealth) app and validate its effectiveness in monitoring the management of the PN process. We studied the evaluation and validation of the general process of PN using an mHealth app. The units of analysis were the PN bags prepared and administered at the Son Espases University Hospital, Palma, Spain, from June 1 to September 6, 2016. For the evaluation of the app, we used the Poststudy System Usability Questionnaire and subsequent analysis with the Cronbach alpha coefficient. Validation was performed by checking the compliance of control for all operations on each of the stages (validation and transcription of the prescription, preparation, conservation, and administration) and by monitoring the operative control points and critical control points. The results obtained from 387 bags were analyzed, with 30 interruptions of administration. The fulfillment of stages was 100%, including noncritical nonconformities in the storage control. The average deviation in the weight of the bags was less than 5%, and the infusion time did not present deviations greater than 1 hour. The developed app successfully passed the evaluation and validation tests and was implemented to perform the monitoring procedures for the overall PN process. A new mobile solution to manage the quality and traceability of sensitive medicines such as blood-derivative drugs and hazardous drugs derived from this project is currently being deployed. ©Mercedes Cervera Peris, Víctor Manuel Alonso Rorís, Juan Manuel Santos Gago, Luis Álvarez Sabucedo, Carmina Wanden-Berghe, Javier Sanz-Valero. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 03.04.2018.
Dost, A A; Redman, D; Cox, G
2000-08-01
This study assesses the current patterns and levels of exposure to rubber fume and rubber process dust in the British rubber industry and compares and contrasts the data obtained from the general rubber goods (GRG), retread tire (RT) and new tire (NT) sectors. A total of 179 rubber companies were visited and data were obtained from 52 general rubber goods, 29 retread tire and 7 new tire manufacturers. The survey was conducted using a questionnaire and included a walk-through inspection of the workplace to assess the extent of use of control measures and the nature of work practices being employed. The most recent (predominantly 1995-97) exposure monitoring data for rubber fume and rubber process dust were obtained from these companies; no additional sampling was conducted for the purpose of this study. In addition to the assessment of exposure data, evaluation of occupational hygiene reports for the quality of information and advice was also carried out. A comparison of the median exposures for processes showed that the order of exposure to rubber fume (E, in mg m(-3)) is: E(moulding) (0.40) approximately E(extrusion) (0.33) > E(milling) (0.18) for GRG; E(press) (0.32) > E(extrusion) (0.19) > E(autoclave) (0.10) for RT; and E(press) (0.22) approximately E(all other) (0.22) for NT. The order of exposure to rubber fume between sectors was E(GRG) (0.40) > E(RT) (0.32) > E(NT) (0.22). Median exposures to rubber process dust in the GRG sector were E(weighing) (4.2) > E(mixing) (1.2) approximately E(milling) (0.8) approximately E(extrusion) (0.8), with no significant difference (P=0.31) between GRG and NT sectors. The findings compare well with the study carried out in the Netherlands [Kromhout et al. (1994), Annals of Occupational Hygiene 38(1), 3-22], and it is suggested that the factors governing the significant differences noted between the three sectors relate principally to the production and task functions and also to the extent of controls employed. Evaluation of occupational
Professional orientation in the formative process for General Unified Baccalaureate students
Darwin Stalin Faz-Delgado
2016-11-01
In Ecuador, primary education aims to develop abilities, skills and linguistic competence in children and teenagers from the age of 5 until they reach high school. The main objective of high school is to provide students with a general, interdisciplinary preparation that guides them in elaborating their life projects so that they can fit into society as responsible, critical and solidary human beings. It also aims to develop students' abilities in knowledge acquisition and citizen competence and to prepare them for work, for further learning and for access to university; this establishes the importance of an adequate professional orientation that facilitates the conscious selection of their future profession and career. This article presents the theoretical basis of the formative process and professional orientation in high school, with attention to the Ecuadorian context.
Giménez-Alventosa, V; Ballester, F; Vijande, J
2016-12-01
The design and construction of geometries for Monte Carlo calculations is an error-prone, time-consuming, and complex step in simulations describing particle interactions and transport in the field of medical physics. The software VoxelMages has been developed to help the user in this task. It allows the user to design complex geometries and to process DICOM image files for simulations with the general-purpose Monte Carlo code PENELOPE in an easy and straightforward way. VoxelMages also allows importing DICOM-RT structure contour information as delivered by a treatment planning system. Its main characteristics, usage and performance benchmarking are described in detail. Copyright © 2016 Elsevier Ltd. All rights reserved.
Davies, Scott
2011-01-01
We combine the six-dimensional helicity formalism of Cheung and O'Connell with D-dimensional generalized unitarity to obtain a new formalism for computing one-loop amplitudes in dimensionally regularized QCD. With this procedure, we simultaneously obtain the pieces that are constructible from four-dimensional unitarity cuts and the rational pieces that are missed by them, while retaining a helicity formalism. We illustrate the procedure using four- and five-point one-loop amplitudes in QCD, including examples with external fermions. We also demonstrate the technique's effectiveness in next-to-leading order QCD corrections to Higgs processes by computing the next-to-leading order correction to the Higgs plus three positive-helicity gluons amplitude in the large top-quark mass limit.
Trinidade, A; Yung, M W
2014-04-01
A specialist balance clinic to effectively deal with dizzy patients is recommended by ENT-UK. We audit the patient pathway before and following the introduction of a consultant-led dedicated balance clinic. Process evaluation and audit. ENT outpatients department of a district general hospital. The journey of dizzy patients seen in the general ENT clinic was mapped from case notes and recorded retrospectively. A consultant-led, multidisciplinary balance clinic involving an otologist, a senior audiologist and a neurophysiotherapist was then set up, and the journey was prospectively recorded and compared with that before the change. Of the 44 dizzy patients seen in the general clinic, 41% had further follow-up consultations; 64% were given definitive or provisional diagnoses; 75% were discharged without a management plan. Oculomotor examination was not systematically performed. The mean interval between Visits 1 and 2 was 8.4 weeks and the mean number of visits was 3. In the consultant-led dedicated balance clinic, following Visit 1, only 8% of patients required follow-up; 97% received definitive diagnoses, which guided management; all patients left with definitive management plans in place. In all patients, oculomotor assessment was systematically performed and all patients received consultant and, where necessary, allied healthcare professional input. By standardising the management experience for dizzy patients, appropriate and timely treatment can be achieved, allowing for a more seamless and efficient patient journey from referral to treatment. A multidisciplinary balance clinic led by a consultant otologist is the ideal way to achieve this. © 2014 John Wiley & Sons Ltd.
Shen, C.; Fang, K.
2017-12-01
Deep Learning (DL) methods have made revolutionary strides in recent years. A core value proposition of DL is that abstract notions and patterns can be extracted purely from data, without the need for domain expertise. Process-based models (PBM), on the other hand, can be regarded as repositories of human knowledge or hypotheses about how systems function. Here, through computational examples, we argue that there is merit in integrating PBMs with DL due to the imbalance and lack of data in many situations, especially in hydrology. We trained a deep-in-time neural network, the Long Short-Term Memory (LSTM), to learn soil moisture dynamics from Soil Moisture Active Passive (SMAP) Level 3 product. We show that when PBM solutions are integrated into LSTM, the network is able to better generalize across regions. LSTM is able to better utilize PBM solutions than simpler statistical methods. Our results suggest PBMs have generalization value which should be carefully assessed and utilized. We also emphasize that when properly regularized, the deep network is robust and is of superior testing performance compared to simpler methods.
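To illustrate the kind of recurrence at work in an LSTM (this is a generic textbook cell, not the authors' SMAP model; the shapes and weights below are arbitrary), a single forward step can be written in NumPy:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One forward step of a standard LSTM cell.
    x: input (D,), h: hidden state (H,), c: cell state (H,).
    W: (4H, D), U: (4H, H), b: (4H,) hold the stacked gate parameters."""
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    # input, forget, output gates and candidate, from stacked pre-activations
    i, f, o, g = np.split(W @ x + U @ h + b, 4)
    c_new = sig(f) * c + sig(i) * np.tanh(g)  # gated memory update
    h_new = sig(o) * np.tanh(c_new)           # gated output
    return h_new, c_new
```

Unrolling this step over a daily forcing sequence, with PBM solutions appended to the input vector x, is one simple way the integration described above could be realized.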
Isis Didier Lins
2018-03-01
The Generalized Renewal Process (GRP is a probabilistic model for repairable systems that can represent the usual states of a system after a repair: as new, as old, or in a condition between new and old. It is often coupled with the Weibull distribution, widely used in the reliability context. In this paper, we develop novel GRP models based on probability distributions that stem from the Tsallis’ non-extensive entropy, namely the q-Exponential and the q-Weibull distributions. The q-Exponential and Weibull distributions can model decreasing, constant or increasing failure intensity functions. However, the power law behavior of the q-Exponential probability density function for specific parameter values is an advantage over the Weibull distribution when adjusting data containing extreme values. The q-Weibull probability distribution, in turn, can also fit data with bathtub-shaped or unimodal failure intensities in addition to the behaviors already mentioned. Therefore, the q-Exponential-GRP is an alternative for the Weibull-GRP model and the q-Weibull-GRP generalizes both. The method of maximum likelihood is used for their parameters’ estimation by means of a particle swarm optimization algorithm, and Monte Carlo simulations are performed for the sake of validation. The proposed models and algorithms are applied to examples involving reliability-related data of complex systems and the obtained results suggest GRP plus q-distributions are promising techniques for the analyses of repairable systems.
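For concreteness, the classical Weibull-GRP baseline can be simulated with a Kijima Type I virtual age and inverse-transform sampling from the conditional Weibull distribution. This sketch covers only the Weibull baseline, not the q-Exponential/q-Weibull variants proposed in the paper; note that q below is the Kijima repair-effectiveness parameter, not the Tsallis entropic index, and all values are illustrative:

```python
import numpy as np

def simulate_grp_kijima1(beta=1.5, eta=100.0, q=0.5, n_failures=10, rng=None):
    """Successive failure times of a Weibull-based GRP with Kijima Type I
    virtual age v_i = v_{i-1} + q * x_i, where q=0 means 'as good as new'
    (ordinary renewal) and q=1 means 'as bad as old' (minimal repair).
    Given virtual age v, the conditional Weibull survival is
    S(x|v) = exp[(v/eta)**beta - ((v+x)/eta)**beta], so inverse transform gives
    x = (v**beta - eta**beta * ln U)**(1/beta) - v."""
    rng = np.random.default_rng(rng)
    v = 0.0
    times, t = [], 0.0
    for _ in range(n_failures):
        u = rng.random()
        x = (v**beta - eta**beta * np.log(u)) ** (1.0 / beta) - v
        t += x
        times.append(t)
        v += q * x  # Kijima I virtual-age update
    return np.array(times)
```

Swapping the conditional Weibull sampler for a q-Exponential or q-Weibull one, as the paper proposes, would change only the inverse-transform line.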
Nicolae Răzvan Decuseară
2013-01-01
Due to limited resources, a company cannot serve all potential markets in the world in a way that satisfies all clients and achieves its business goals, which is why it should select the most appropriate markets. It can focus on a single product market serving many geographic areas, but it may also decide to serve different product markets in a group of selected geographic areas. Given the large number and diversity of markets to choose from, analyzing market attractiveness and selecting the most interesting markets is a complex process. The General Electric/McKinsey Matrix has two dimensions, market attractiveness and the competitive strength of the firm, and aims to analyze the strengths and weaknesses of the company in a variety of areas, allowing the company to identify the most attractive markets and guiding managers in allocating resources to these markets, improving the company's weaker competitive position in emerging markets, or withdrawing the firm from unattractive markets. It is a very efficient tool, used by international market specialists, on the one hand to select foreign markets for the company and, on the other hand, to determine the strategy the firm will use to internationalize in those markets. At the end of this paper we present part of a larger study in which we show how the General Electric/McKinsey Matrix is used specifically in selecting foreign markets.
A multiscale guide to Brownian motion
Grebenkov, Denis S; Belyaev, Dmitry; Jones, Peter W
2016-01-01
We revise the Lévy construction of Brownian motion as a simple though rigorous approach to operate with various Gaussian processes. A Brownian path is explicitly constructed as a linear combination of wavelet-based ‘geometrical features’ at multiple length scales with random weights. Such a wavelet representation gives a closed formula mapping of the unit interval onto the functional space of Brownian paths. This formula elucidates many classical results about Brownian motion (e.g., non-differentiability of its path), providing an intuitive feeling for non-mathematicians. The illustrative character of the wavelet representation, along with the simple structure of the underlying probability space, is different from the usual presentation of most classical textbooks. Similar concepts are discussed for the Brownian bridge, fractional Brownian motion, the Ornstein-Uhlenbeck process, Gaussian free fields, and fractional Gaussian fields. Wavelet representations and dyadic decompositions form the basis of many highly efficient numerical methods to simulate Gaussian processes and fields, including Brownian motion and other diffusive processes in confining domains. (topical review)
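The Lévy construction described above can be sketched in a few lines: start from the endpoints of [0, 1] and repeatedly insert Brownian-bridge midpoints, halving the scale of the added random detail at each level. This is a hypothetical minimal illustration of the dyadic picture behind the wavelet view, not the authors' code:

```python
import numpy as np

def levy_brownian(n_levels=10, rng=None):
    """Levy's midpoint-refinement construction of a Brownian path on [0, 1].
    Start with B(0)=0 and B(1)~N(0,1); at each level, for every interval
    [a, b] insert B((a+b)/2) = (B(a)+B(b))/2 + N(0, (b-a)/4), i.e. the
    bridge midpoint plus an independent Gaussian 'geometrical feature'."""
    rng = np.random.default_rng(rng)
    b = np.array([0.0, rng.standard_normal()])  # B(0), B(1)
    dt = 1.0
    for _ in range(n_levels):
        dt /= 2.0  # current half-interval length; midpoint variance is dt/2
        mid = (b[:-1] + b[1:]) / 2.0 + np.sqrt(dt / 2.0) * rng.standard_normal(len(b) - 1)
        out = np.empty(2 * len(b) - 1)
        out[0::2] = b    # existing points keep their positions
        out[1::2] = mid  # refined midpoints interleave between them
        b = out
    return b  # 2**n_levels + 1 samples of one Brownian path
```

Each refinement level corresponds to one dyadic scale of the wavelet decomposition, which is why truncating the construction at a finite level gives the efficient simulation schemes mentioned in the review.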
Werner, Gerhard
2009-04-01
In this theoretical and speculative essay, I propose that insights into certain aspects of neural system functions can be gained from viewing brain function in terms of the branch of Statistical Mechanics currently referred to as "Modern Critical Theory" [Stanley, H.E., 1987. Introduction to Phase Transitions and Critical Phenomena. Oxford University Press; Marro, J., Dickman, R., 1999. Nonequilibrium Phase Transitions in Lattice Models. Cambridge University Press, Cambridge, UK]. The application of this framework is here explored in two stages: in the first place, its principles are applied to state transitions in global brain dynamics, with benchmarks of Cognitive Neuroscience providing the relevant empirical reference points. The second stage generalizes to suggest in more detail how the same principles could also apply to the relation between other levels of the structural-functional hierarchy of the nervous system and between neural assemblies. In this view, state transitions resulting from the processing at one level are the input to the next, in the image of a 'bucket brigade', with the content of each bucket being passed on along the chain, after having undergone a state transition. The unique features of a process of this kind will be discussed and illustrated.
Salata, Brian M; Sterling, Madeline R; Beecy, Ashley N; Ullal, Ajayram V; Jones, Erica C; Horn, Evelyn M; Goyal, Parag
2018-05-01
Given high rates of heart failure (HF) hospitalizations and widespread adoption of the hospitalist model, patients with HF are often cared for on General Medicine (GM) services. Differences in discharge processes and 30-day readmission rates between patients on GM and those on Cardiology during the contemporary hospitalist era are unknown. The present study compared discharge processes and 30-day readmission rates of patients with HF admitted on GM services and those on Cardiology services. We retrospectively studied 926 patients discharged home after HF hospitalization. The primary outcome was 30-day all-cause readmission after discharge from index hospitalization. Although 60% of patients with HF were admitted to Cardiology services, 40% were admitted to GM services. The prevalence of cardiovascular and noncardiovascular co-morbidities was similar between patients admitted to GM services and Cardiology services. Discharge summaries for patients on GM services were less likely to have reassessments of ejection fraction, new study results, weights, discharge vital signs, discharge physical examinations, and scheduled follow-up cardiologist appointments. In a multivariable regression analysis, patients on GM services were more likely to experience 30-day readmissions compared with those on Cardiology services (odds ratio 1.43, 95% confidence interval 1.05 to 1.96, p = 0.02). In conclusion, outcomes are better among those admitted to Cardiology services, signaling the need for studies and interventions focusing on noncardiology hospital providers that care for patients with HF. Copyright © 2018 Elsevier Inc. All rights reserved.
Hama, Hiromitsu; Yamashita, Kazumi
1991-11-01
A new method for video signal processing is described in this paper. The goal is real-time image transformation with low-cost, low-power, small-size hardware, which is impossible without special-purpose hardware. Here a generalized digital differential analyzer (DDA) and control memory (CM) play a very important role. The processing, however, produces indentation, known as jaggies, on the boundary between the background and the foreground. Jaggies do not occur inside the transformed image because linear interpolation is adopted, but they inherently occur on the boundary between the background and the transformed image. They degrade image quality and must be avoided. There are two well-known ways to improve image quality, blurring and supersampling: the former has little effect, and the latter incurs a much higher computing cost. To resolve this problem, a method is proposed that searches for positions where jaggies may arise and smooths those points. Computer simulations based on real data from a VTR, one scene of a movie, are presented to demonstrate the proposed scheme using a DDA and CMs and to confirm its effectiveness on various transformations.
Gao, Lili; Zhou, Zai-Fa; Huang, Qing-An
2017-11-08
A microstructure beam is one of the fundamental elements in MEMS devices such as cantilever sensors, RF/optical switches, varactors, and resonators. It is still difficult to precisely predict the performance of MEMS beams with currently available simulators due to inevitable process deviations. Feasible numerical methods are required and can be used to improve the yield and profits of MEMS devices. In this work, process deviations are treated as stochastic variables, and a newly developed numerical method, generalized polynomial chaos (GPC), is applied to the simulation of the MEMS beam. A doubly clamped polybeam is used to verify the accuracy of GPC against our Monte Carlo (MC) approaches. Performance predictions are made for the residual stress by obtaining its distributions in GaAs Monolithic Microwave Integrated Circuit (MMIC)-based MEMS beams. The results show that the errors of the GPC approximations are within 1% of the MC simulations. Appropriate choices of 4th-order GPC expansions with orthogonal terms also succeed in reducing the MC simulation labor. The mean value of the residual stress concluded from experimental tests differs by about 1.1% from that of the 4th-order GPC method. The 4th-order GPC approximation attains the mean test value of the residual stress with a probability of about 54.3%, and the corresponding yield exceeds 90% within two standard deviations of the mean.
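The core GPC idea can be illustrated independently of the paper's beam model: expand a response of a Gaussian process deviation in probabilists' Hermite polynomials and read statistics off the coefficients. The sketch below uses a hypothetical smooth response `g` (an assumption for illustration, not the paper's residual-stress model) and checks the 4th-order GPC mean against Monte Carlo.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Hypothetical smooth response of a beam property to a single standard
# Gaussian process deviation xi -- NOT the paper's actual beam model.
g = lambda xi: np.exp(0.3 * xi)

# Project g onto probabilists' Hermite polynomials He_0..He_4:
#   c_k = E[g(xi) He_k(xi)] / k!    (since E[He_k^2] = k!)
# with Gauss-Hermite quadrature for the N(0,1) expectation.
nodes, weights = hermegauss(20)
weights = weights / np.sqrt(2 * np.pi)   # normalise to the N(0,1) density
order = 4
coeffs = []
for k in range(order + 1):
    basis = np.zeros(k + 1)
    basis[k] = 1.0                       # coefficient vector selecting He_k
    ck = np.sum(weights * g(nodes) * hermeval(nodes, basis)) / factorial(k)
    coeffs.append(ck)

# The GPC mean is just c_0; compare with a brute-force Monte Carlo mean.
mc = g(np.random.default_rng(1).standard_normal(200_000)).mean()
print(coeffs[0], mc)   # both ≈ exp(0.3**2 / 2) ≈ 1.046
```

A handful of quadrature evaluations replaces the large MC sample here, which is the labor saving the abstract reports.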
Yingjun Zheng
2016-11-01
Patients with schizophrenia exhibit consistent abnormalities in face-evoked N170. However, the relation between face-specific N170 abnormalities in schizophrenic patients and the clinical characteristics of schizophrenia, which is probably based on common neural mechanisms, remains largely unexplored. Using event-related potential (ERP) recordings in both schizophrenic patients and healthy controls, the amplitude and latency of N170 were recorded while participants passively watched face and non-face (table) pictures. The results showed face-specific N170 latency sluggishness in schizophrenic patients, i.e., the N170 latencies of schizophrenic patients were significantly longer than those of healthy controls under both upright-face and inverted-face conditions. Importantly, the face-related N170 latencies at the left temporo-occipital electrodes (P7 and PO7) were positively correlated with negative symptoms and general psychiatric symptoms. Beyond the latency analysis, the N170 amplitudes were weaker in schizophrenic patients under both inverted-face and inverted-table conditions, with left-hemisphere dominance. More interestingly, the FIEs (face inversion effects, the difference in N170 amplitudes between upright and inverted faces) were absent in schizophrenic patients, suggesting abnormal holistic face processing. These results reveal a marked symptom-relevant neural sluggishness of face-specific processing in schizophrenic patients, supporting the demyelinating hypothesis of schizophrenia.
Collins, Michael D; Jackson, Chris J; Walker, Benjamin R; O'Connor, Peter J; Gardiner, Elliroma
2017-01-01
Over the last 40 years or more the personality literature has been dominated by trait models based on the Big Five (B5). Trait-based models describe personality at the between-person level but cannot explain the within-person mental mechanisms responsible for personality. Nor can they adequately account for variations in emotion and behavior experienced by individuals across different situations and over time. An alternative, yet understated, approach to personality architecture can be found in neurobiological theories of personality, most notably reinforcement sensitivity theory (RST). In contrast to static trait-based personality models like the B5, RST provides a more plausible basis for a personality process model, namely, one that explains how emotions and behavior arise from the dynamic interaction between contextual factors and within-person mental mechanisms. In this article, the authors review the evolution of a neurobiologically based personality process model based on RST, the response modulation model and the context-appropriate balanced attention model. They argue that by integrating this complex literature, and by incorporating evidence from personality neuroscience, one can meaningfully explain personality at both the within- and between-person levels. This approach achieves a domain-general architecture based on RST and self-regulation that can be used to align within-person mental mechanisms, neurobiological systems and between-person measurement models. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Striegel, Deborah A; Hara, Manami; Periwal, Vipul
2015-08-01
Pancreatic islets of Langerhans consist of endocrine cells, primarily α, β and δ cells, which secrete glucagon, insulin, and somatostatin, respectively, to regulate plasma glucose. β cells form irregular locally connected clusters within islets that act in concert to secrete insulin upon glucose stimulation. Due to the central functional significance of this local connectivity in the placement of β cells in an islet, it is important to characterize it quantitatively. However, quantification of the seemingly stochastic cytoarchitecture of β cells in an islet requires mathematical methods that can capture topological connectivity in the entire β-cell population in an islet. Graph theory provides such a framework. Using large-scale imaging data for thousands of islets containing hundreds of thousands of cells in human organ donor pancreata, we show that quantitative graph characteristics differ between control and type 2 diabetic islets. Further insight into the processes that shape and maintain this architecture is obtained by formulating a stochastic theory of β-cell rearrangement in whole islets, just as the normal equilibrium distribution of the Ornstein-Uhlenbeck process can be viewed as the result of the interplay between a random walk and a linear restoring force. Requiring that rearrangements maintain the observed quantitative topological graph characteristics strongly constrained possible processes. Our results suggest that β-cell rearrangement is dependent on its connectivity in order to maintain an optimal cluster size in both normal and T2D islets.
HMM filtering and parameter estimation of an electricity spot price model
Erlwein, Christina; Benth, Fred Espen; Mamon, Rogemar
2010-01-01
In this paper we develop a model for electricity spot price dynamics. The spot price is assumed to follow an exponential Ornstein-Uhlenbeck (OU) process with an added compound Poisson process. In this way, the model allows for mean reversion and possible jumps. All parameters are modulated by a hidden Markov chain in discrete time. They are able to switch between different economic regimes representing the interaction of various factors. Through the application of the reference probability technique, adaptive filters are derived, which in turn provide optimal estimates for the state of the Markov chain and related quantities of the observation process. The EM algorithm is applied to find optimal estimates of the model parameters in terms of the recursive filters. We implement this self-calibrating model on a deseasonalised series of daily spot electricity prices from the Nordic exchange Nord Pool. On the basis of one-step-ahead forecasts, we found that the model is able to capture the empirical characteristics of Nord Pool spot prices. (author)
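The building block of the model, an exponential OU process with compound Poisson jumps, can be simulated with a simple Euler scheme. The sketch below omits the paper's regime-switching Markov chain and filters, and all parameter values are illustrative placeholders, not calibrated to Nord Pool data.

```python
import numpy as np

def simulate_exp_ou_jump(x0, kappa, theta, sigma, jump_rate, jump_scale,
                         t_max, dt, rng):
    """Euler scheme for an exponential OU spot price with jumps:
    dX = kappa*(theta - X) dt + sigma dW + dJ,   S = exp(X),
    where J is a compound Poisson process with exponential jump sizes."""
    n = round(t_max / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        jump = 0.0
        for _ in range(rng.poisson(jump_rate * dt)):   # jumps this step
            jump += rng.exponential(jump_scale)
        x[i + 1] = (x[i] + kappa * (theta - x[i]) * dt
                    + sigma * np.sqrt(dt) * rng.standard_normal() + jump)
    return np.exp(x)

rng = np.random.default_rng(42)
s = simulate_exp_ou_jump(x0=3.0, kappa=5.0, theta=3.0, sigma=0.5,
                         jump_rate=10.0, jump_scale=0.4,
                         t_max=2.0, dt=1 / 365, rng=rng)
```

Jumps push the log price up and mean reversion pulls it back toward `theta`, producing the spiky-but-reverting paths characteristic of electricity spot prices.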
Saida, Hiromi
2006-01-01
When a black hole is in an empty space in which there is no matter field except that of the Hawking radiation (Hawking field), then the black hole evaporates and the entropy of the black hole decreases. The generalized second law guarantees the increase of the total entropy of the whole system, which consists of the black hole and the Hawking field. That is, the increase of the entropy of the Hawking field is faster than the decrease of the black hole entropy. In a naive sense, one may expect that the entropy increase of the Hawking field is due to the self-interaction among the composite particles of the Hawking field, and that the self-relaxation of the Hawking field results in the entropy increase. Then, when one considers a non-self-interacting matter field as the Hawking field, it is obvious that self-relaxation does not take place, and one may think that the total entropy does not increase. However, using nonequilibrium thermodynamics which has been developed recently, we find for the non-self-interacting Hawking field that the rate of entropy increase of the Hawking field (the entropy emission rate by the black hole) grows faster than the rate of entropy decrease of the black hole during the black hole evaporation in empty space. The origin of the entropy increase of the Hawking field is the increase of the black hole temperature. Hence an understanding of the generalized second law in the context of nonequilibrium thermodynamics is suggested; even if the self-relaxation of the Hawking field does not take place, the temperature increase of the black hole during the evaporation process causes the entropy increase of the Hawking field, resulting in the increase of the total entropy.
Equivalence of interest rate models and lattice gases.
Pirjol, Dan
2012-04-01
We consider the class of short-rate interest rate models for which the short rate is proportional to the exponential of a Gaussian Markov process x(t) in the terminal measure, r(t) = a(t)exp[x(t)]. These models include the Black-Derman-Toy and Black-Karasinski models in the terminal measure. We show that such interest rate models are equivalent to lattice gases with attractive two-body interaction, V(t1,t2) = -Cov[x(t1),x(t2)]. We consider in some detail the Black-Karasinski model with x(t) an Ornstein-Uhlenbeck process, and show that it is similar to a lattice gas model considered by Kac and Helfand, with attractive long-range two-body interactions, V(x,y) = -α(e^(-γ|x-y|) - e^(-γ(x+y))). An explicit solution for the model is given as a sum over the states of the lattice gas, which is used to show that the model has a phase transition similar to that found previously in the Black-Derman-Toy model in the terminal measure.
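The Kac-Helfand potential quoted in the abstract is, up to the prefactor α, the covariance of an OU process started from a fixed point, which is easy to verify numerically. This sketch (illustrative parameters, not the paper's calibration) compares an Euler-simulated covariance against the closed form.

```python
import numpy as np

# Covariance of an OU process started at x(0) = 0:
#   Cov[x(t1), x(t2)] = (sigma^2 / (2*gamma)) * (e^(-gamma|t1-t2|) - e^(-gamma(t1+t2)))
# which matches the shape of the two-body interaction V(x, y) above.
gamma, sigma, dt = 1.0, 1.0, 0.01
n_steps, n_paths = 300, 200_000
rng = np.random.default_rng(7)

x = np.zeros(n_paths)
snapshots = {}
for step in range(1, n_steps + 1):
    # Euler-Maruyama update of all paths at once
    x += -gamma * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    if step in (100, 300):              # record x(t) at t1 = 1.0 and t2 = 3.0
        snapshots[step] = x.copy()

empirical = np.mean(snapshots[100] * snapshots[300])
exact = sigma**2 / (2 * gamma) * (np.exp(-gamma * 2.0) - np.exp(-gamma * 4.0))
print(empirical, exact)    # agree to within a few percent
```

The second exponential, e^(-γ(t1+t2)), is the memory of the fixed initial condition; for the stationary OU process only the e^(-γ|t1-t2|) term survives.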
Bilayer graphene lattice-layer entanglement in the presence of non-Markovian phase noise
Bittencourt, Victor A. S. V.; Blasone, Massimo; Bernardini, Alex E.
2018-03-01
The evolution of single particle excitations of bilayer graphene under effects of non-Markovian noise is described with focus on the decoherence process of lattice-layer (LL) maximally entangled states. Once the noiseless dynamics of an arbitrary initial state is identified by the correspondence between the tight-binding Hamiltonian for the AB-stacked bilayer graphene and the Dirac equation—which includes pseudovectorlike and tensorlike field interactions—the noisy environment is described as random fluctuations on bias voltage and mass terms. The inclusion of noisy dynamics reproduces Ornstein-Uhlenbeck processes: a non-Markovian noise model with a well-defined Markovian limit. Considering that an initial amount of entanglement will be dissipated by the noise, two profiles of dissipation are identified. On one hand, for eigenstates of the noiseless Hamiltonian, deaths and revivals of entanglement are identified along the oscillation pattern for long interaction periods. On the other hand, for departing LL Werner and Cat states, the entanglement is suppressed although, for both cases, some identified memory effects compete with the pure noise-induced decoherence in order to preserve the overall profile of a given initial state.
Mulder, Willem H; Crawford, Forrest W
2015-01-07
Efforts to reconstruct phylogenetic trees and understand evolutionary processes depend fundamentally on stochastic models of speciation and mutation. The simplest continuous-time model for speciation in phylogenetic trees is the Yule process, in which new species are "born" from existing lineages at a constant rate. Recent work has illuminated some of the structural properties of Yule trees, but it remains mostly unknown how these properties affect sequence and trait patterns observed at the tips of the phylogenetic tree. Understanding the interplay between speciation and mutation under simple models of evolution is essential for deriving valid phylogenetic inference methods and gives insight into the optimal design of phylogenetic studies. In this work, we derive the probability distribution of interspecies covariance under Brownian motion and Ornstein-Uhlenbeck models of phenotypic change on a Yule tree. We compute the probability distribution of the number of mutations shared between two randomly chosen taxa in a Yule tree under discrete Markov mutation models. Our results suggest summary measures of phylogenetic information content, illuminate the correlation between site patterns in sequences or traits of related organisms, and provide heuristics for experimental design and reconstruction of phylogenetic trees. Copyright © 2014 Elsevier Ltd. All rights reserved.
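The Yule pure-birth process at the heart of this abstract is simple to simulate: with k extant lineages, the waiting time to the next speciation is exponential with rate λk. The sketch below (illustrative parameters) checks the simulated mean time to reach n tips against the harmonic-sum formula E[T_n] = Σ_{k=1}^{n-1} 1/(λk).

```python
import numpy as np

def yule_time_to_n(birth_rate, n_tips, rng):
    """Time for a Yule tree to grow from 1 to n_tips lineages.
    With k lineages present, the next 'birth' occurs after an
    Exp(birth_rate * k) waiting time."""
    t = 0.0
    for k in range(1, n_tips):
        t += rng.exponential(1.0 / (birth_rate * k))
    return t

rng = np.random.default_rng(3)
lam, n = 1.0, 10
times = np.array([yule_time_to_n(lam, n, rng) for _ in range(50_000)])
expected = sum(1.0 / (lam * k) for k in range(1, n))   # harmonic sum ≈ 2.829
print(times.mean(), expected)
```

Recording the individual waiting times instead of their sum gives the full branching-time structure on which Brownian or OU trait models are then run.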
Framework based on communicability and flow to analyze complex network dynamics
Gilson, M.; Kouvaris, N. E.; Deco, G.; Zamora-López, G.
2018-05-01
Graph theory constitutes a widely used and established field providing powerful tools for the characterization of complex networks. The intricate topology of networks can also be investigated by means of the collective dynamics observed in the interactions of self-sustained oscillations (synchronization patterns) or propagationlike processes such as random walks. However, networks are often inferred from real data and form dynamic systems that differ from those employed to reveal their topological characteristics. This stresses the necessity for a theoretical framework dedicated to the mutual relationship between the structure and dynamics in complex networks, as two sides of the same coin. Here we propose a rigorous framework based on the network response over time (i.e., Green function) to study interactions between nodes across time. For this purpose we define the flow that describes the interplay between the network connectivity and external inputs. This multivariate measure relates to the concepts of graph communicability and the map equation. We illustrate our theory using the multivariate Ornstein-Uhlenbeck process, which describes stable and non-conservative dynamics, but the formalism can be adapted to other local dynamics for which the Green function is known. We provide applications to classical network examples, such as small-world ring and hierarchical networks. Our theory defines a comprehensive framework that is canonically related to directed and weighted networks, thus paving the way to revise the standards for network analysis, from the pairwise interactions between nodes to the global properties of networks, including community detection.
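For a multivariate Ornstein-Uhlenbeck process the Green function is just the matrix exponential of the Jacobian, so node-to-node responses over time can be sketched directly. The 3-node chain below is a toy example (weights assumed for illustration, not taken from the paper); its nilpotent connectivity lets the exponential be written in closed form.

```python
import numpy as np

# Toy 3-node chain 0 -> 1 -> 2 with unit leakage: dx/dt = J x + input,
# with J = -I + A (connection weights are illustrative).
A = np.array([[0.0, 0.0, 0.0],
              [0.8, 0.0, 0.0],
              [0.0, 0.8, 0.0]])
J = -np.eye(3) + A

def green(t):
    """Green function G(t) = exp(J t). Because A is nilpotent (A^3 = 0)
    and commutes with -I, the series truncates exactly:
    exp(J t) = e^(-t) * (I + A t + A^2 t^2 / 2)."""
    return np.exp(-t) * (np.eye(3) + A * t + (A @ A) * t**2 / 2)

# Response of each node to a unit impulse injected at node 0 at time 0:
for t in (0.5, 1.0, 2.0):
    print(t, green(t)[:, 0])

# The impulse reaches node 2 only via node 1, so node 2's response peaks
# later (t = 2 here) than node 1's (t = 1): the "flow" picture of
# communicability unfolding over time.
```

For a general (non-nilpotent) Jacobian one would use a generic matrix exponential routine instead; the interpretation of the columns of G(t) as impulse responses is unchanged.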
Front Propagation in Stochastic Neural Fields
Bressloff, Paul C.
2012-01-01
We analyze the effects of extrinsic multiplicative noise on front propagation in a scalar neural field with excitatory connections. Using a separation of time scales, we represent the fluctuating front in terms of a diffusive-like displacement (wandering) of the front from its uniformly translating position at long time scales, and fluctuations in the front profile around its instantaneous position at short time scales. One major result of our analysis is a comparison between freely propagating fronts and fronts locked to an externally moving stimulus. We show that the latter are much more robust to noise, since the stochastic wandering of the mean front profile is described by an Ornstein-Uhlenbeck process rather than a Wiener process, so that the variance in front position saturates in the long time limit rather than increasing linearly with time. Finally, we consider a stochastic neural field that supports a pulled front in the deterministic limit, and show that the wandering of such a front is now subdiffusive. © 2012 Society for Industrial and Applied Mathematics.
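The contrast between the two wandering regimes is easy to reproduce numerically: the variance of a Wiener process grows linearly in time, while the variance of an OU process saturates at σ²/(2k). A minimal sketch with illustrative parameters (not values from the paper):

```python
import numpy as np

# Front wandering: free front ~ Wiener process, stimulus-locked
# front ~ OU process with restoring rate k.
k, sigma, dt = 2.0, 1.0, 0.01
n_steps, n_paths = 1000, 50_000          # total time t = 10
rng = np.random.default_rng(11)

w = np.zeros(n_paths)    # Wiener: displacement of a free front
u = np.zeros(n_paths)    # OU: displacement of a stimulus-locked front
for _ in range(n_steps):
    w += sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    u += -k * u * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

print(w.var())   # ≈ sigma^2 * t = 10: grows linearly with time
print(u.var())   # ≈ sigma^2 / (2k) = 0.25: saturated, independent of t
```

Doubling the simulation time doubles the Wiener variance but leaves the OU variance essentially unchanged, which is exactly the robustness of stimulus-locked fronts described above.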
The effect of parity on morphological evolution among phrynosomatid lizards.
Oufiero, C E; Gartner, G E A
2014-11-01
The shift from egg laying to live-bearing is one of the most well-studied transitions in evolutionary biology. Few studies, however, have assessed the effect of this transition on morphological evolution. Here, we evaluated the effect of reproductive mode on the morphological evolution of 10 traits, among 108 species of phrynosomatid lizards. We assess whether the requirement for passing shelled eggs through the pelvic girdle has led to morphological constraints in oviparous species and whether long gestation times in viviparous species have led to constraints in locomotor morphology. We fit models to the data that vary both in their tempo (strength and rate of selection) and mode of evolution (Brownian or Ornstein-Uhlenbeck) and estimates of trait optima. We found that most traits are best fit by a generalized multipeak OU model, suggesting differing trait optima for viviparous vs. oviparous species. Additionally, rates (σ²) of both pelvic girdle and forelimb trait evolution varied with parity; viviparous species had higher rates. Hindlimb traits, however, exhibited no difference in σ² between parity modes. In a functional context, our results suggest that the passage of shelled eggs constrains the morphology of the pelvic girdle, but we found no evidence of morphological constraint of the locomotor apparatus in viviparous species. Our results are consistent with recent lineage diversification analyses, leading to the conclusion that transitions to viviparity increase both lineage and morphological diversification. Journal of Evolutionary Biology © 2014 European Society For Evolutionary Biology.
Gómez del Valle, L.
1998-01-01
In this paper, the value of a European option on a coupon-bearing bond is obtained explicitly. The term structure of interest rates is assumed to be determined by the instantaneous interest rate, which is a linear combination of n state variables. These factors are independent Ornstein-Uhlenbeck stochastic processes, and the market prices of risk are also stochastic. First, applying a result of Friedman (1975), a general valuation equation for interest rate derivatives is solved in closed form. This is then applied to the particular case of valuing options on zero-coupon bonds, and finally the value of options on coupon-bearing bonds is obtained. Langetieg (1980) carried out a study with this model, but with constant market prices of risk, using other procedures. This work generalizes Jamshidian (1989), who obtained the value of options on coupon-bearing bonds for a one-factor model.
English, Thomas
2005-01-01
A standard tool of reliability analysis used at NASA-JSC is the event tree. An event tree is simply a probability tree, with the probabilities determining the next step through the tree specified at each node. The nodal probabilities are determined by a reliability study of the physical system at work for a particular node. The reliability study performed at a node is typically referred to as a fault tree analysis, with the potential of a fault tree existing for each node on the event tree. When examining an event tree it is obvious why the event tree/fault tree approach has been adopted. Typical event trees are quite complex in nature, and the event tree/fault tree approach provides a systematic and organized approach to reliability analysis. The purpose of this study was twofold. Firstly, we wanted to explore the possibility that a semi-Markov process can create dependencies between sojourn times (the times it takes to transition from one state to the next) that can decrease the uncertainty when estimating time to failures. Using a generalized semi-Markov model, we studied a four-element reliability model and were able to demonstrate such sojourn time dependencies. Secondly, we wanted to study the use of semi-Markov processes to introduce a time variable into the event tree diagrams that are commonly developed in PRA (Probabilistic Risk Assessment) analyses. Event tree end states which change with time are more representative of failure scenarios than are the usual static probability-derived end states.
Porro, C; Biral, G P [Modena Univ. (Italy). Ist. di Fisiologia Umana]; Fonda, S; Baraldi, P [Modena Univ. (Italy). Lab. di Bioingegneria della Clinica Oculistica]; Cavazzuti, M [Modena Univ. (Italy). Clinica Neurologica]
1984-09-01
A general purpose image processing system is described including B/W TV camera, high resolution image processor and display system (TESAK VDC 501), computer (DEC PDP 11/23) and monochrome and color monitors. Images may be acquired from a microscope equipped with a TV camera or using the TV in direct viewing; the A/D converter and the image processor provides fast (40 ms) and precise (512x512 data points) digitization of TV signal with a 256 gray levels maximum resolution. Computer programs have been developed in order to perform qualitative and quantitative analyses of autoradiographs obtained with the 2-DG method, which are written in FORTRAN and MACRO 11 Assembly Language. They include: (1) procedures designed to recognize errors in acquisition due to possible image shading and correct them via software; (2) routines suitable for qualitative analyses of the whole image or selected regions of it, providing the opportunity for pseudocolor coding, statistics, graphic overlays; (3) programs permitting the conversion of gray levels into metabolic rates of glucose utilization and the display of gray- or color-coded metabolic maps.
Dynamic analysis of trapping and escaping in dual beam optical trap
Li, Wenqiang; Hu, Huizhu; Su, Heming; Li, Zhenggang; Shen, Yu
2016-10-01
In this paper, we simulate the dynamic movement of a dielectric sphere in an optical trap. This dynamic analysis can be used to calibrate optical forces, increase trapping efficiency, and measure the viscous coefficient of the surrounding medium. Since an accurate dynamic analysis is based on a detailed force calculation, we calculate all the forces the sphere receives. We obtain the dual-beam gradient radiation pressure forces on a micron-sized dielectric sphere in the ray-optics regime and use the Einstein-Ornstein-Uhlenbeck theory to model its Brownian motion forces. A hydrodynamic viscous force also acts when the sphere moves in liquid, and forces from buoyancy and gravity are taken into consideration as well. We then simulate the trajectory of a sphere subject to all these forces in a dual optical trap. From our dynamic analysis, the sphere can be trapped at an equilibrium point in static water, although it permanently fluctuates around the equilibrium point due to thermal effects. We go a step further to analyze the effects of misalignment of the two optical traps. Trapping and escaping of the sphere in flowing water are also simulated. In flowing water, the sphere is dragged away from the equilibrium point; this dragging distance increases as the optical power decreases, so the sphere escapes when the optical power falls below a threshold. In both the trapping and escaping processes we calculate the forces on and position of the sphere. Finally, we analyze the trapping region in dual optical tweezers.
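In the overdamped limit, the Einstein-Ornstein-Uhlenbeck picture of a trapped bead reduces to an OU process for the position, with relaxation rate κ/γ and stationary variance k_B T/κ fixed by equipartition. The sketch below uses order-of-magnitude guesses for the bead parameters (assumptions, not values from the paper) and checks the equipartition result.

```python
import numpy as np

# Overdamped Langevin dynamics of a bead in a harmonic optical trap:
#   gamma * dx/dt = -kappa * x + thermal noise
# All values are order-of-magnitude guesses for a ~1 um bead in water.
kB_T  = 4.11e-21      # J, thermal energy at room temperature
kappa = 1.0e-6        # N/m, trap stiffness
gamma = 1.7e-8        # N*s/m, Stokes drag coefficient
dt, n = 1.0e-5, 500_000

rng = np.random.default_rng(5)
kick = np.sqrt(2 * kB_T / gamma * dt)     # thermal kick amplitude per step
xi = rng.standard_normal(n - 1)
x = np.empty(n)
x[0] = 0.0
for i in range(n - 1):
    x[i + 1] = x[i] - (kappa / gamma) * x[i] * dt + kick * xi[i]

# Equipartition check: Var[x] should approach k_B*T / kappa.
print(x.var(), kB_T / kappa)    # both ≈ 4e-15 m^2
```

Fitting the measured position variance (or the exponential decay of the autocorrelation) to these OU formulas is the standard way such simulations are used to calibrate trap stiffness.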
Geometric capture and escape of a microswimmer colliding with an obstacle.
Spagnolie, Saverio E; Moreno-Flores, Gregorio R; Bartolo, Denis; Lauga, Eric
2015-05-07
Motivated by recent experiments, we consider the hydrodynamic capture of a microswimmer near a stationary spherical obstacle. Simulations of model equations show that a swimmer approaching a small spherical colloid is simply scattered. In contrast, when the colloid is larger than a critical size it acts as a passive trap: the swimmer is hydrodynamically captured along closed trajectories and endlessly orbits around the colloidal sphere. In order to gain physical insight into this hydrodynamic scattering problem, we address it analytically. We provide expressions for the critical trapping radius, the depth of the "basin of attraction," and the scattering angle, which show excellent agreement with our numerical findings. We also demonstrate and rationalize the strong impact of swimming-flow symmetries on the trapping efficiency. Finally, we give the swimmer an opportunity to escape the colloidal traps by considering the effects of Brownian, or active, diffusion. We show that in some cases the trapping time is governed by an Ornstein-Uhlenbeck process, which results in a trapping time distribution that is well-approximated as inverse-Gaussian. The predictions again compare very favorably with the numerical simulations. We envision applications of the theory to bioremediation, microorganism sorting techniques, and the study of bacterial populations in heterogeneous or porous environments.
LNG as a strategy to establish developing countries' gas markets: The Brazilian case
Alberto Rechelo Neto, Carlos; Sauer, Ildo Luis
2006-01-01
This paper aims to evaluate, from a Brazilian case study, whether the natural gas trade can be viewed as a good opportunity for developing countries located geographically close to the Western European and North American gas markets. Initially, the paper presents an overview of the Brazilian natural gas industry and evaluates the balance between supply and demand in each main region of Brazil. Then, it analyzes the evolution of the international gas trade, which is expected to increase rapidly (LNG particularly). Finally, the paper analyses the financial viability of the Brazilian LNG project in a context of high volatility of natural gas prices in the international market. To take this uncertainty into account, North American natural gas prices are modelled according to the Ornstein-Uhlenbeck process (with EIA data over the period 1985-2003). By using an approach based on Monte Carlo simulations and under the assumption that imports are guaranteed as long as the North American gas price is higher than the breakeven of the Brazilian project, the model aims to test the hypothesis that export can promote the development of the Brazilian Northeastern gas market. The LNG project is here compared to the Petrobras pipelines project, which is considered the immediate solution for the Northeastern gas shortage. As a conclusion, this study shows that the LNG export will be vulnerable to the risks associated with the natural gas price volatility observed on the international market.
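The probabilistic step can be sketched without the full project model: an OU process has a Gaussian transition density in closed form, so the probability that the price exceeds a breakeven at a horizon follows directly (Monte Carlo is only needed for path-dependent quantities). All parameters below are placeholders, not the paper's EIA calibration; note that a plain OU price can go negative, one reason log-price variants are also common.

```python
import numpy as np

# Illustrative OU price model: dP = kappa*(mu - P) dt + sigma dW
kappa, mu, sigma = 0.5, 4.0, 1.2    # mean reversion, long-run level, vol
p0, breakeven, horizon = 3.0, 4.5, 5.0
n_paths = 100_000
rng = np.random.default_rng(2024)

# Exact OU transition: P(horizon) | P(0)=p0 is Gaussian with
#   mean m = mu + (p0 - mu) e^(-kappa t)
#   var  v = sigma^2 (1 - e^(-2 kappa t)) / (2 kappa)
m = mu + (p0 - mu) * np.exp(-kappa * horizon)
v = sigma**2 * (1 - np.exp(-2 * kappa * horizon)) / (2 * kappa)
prices = m + np.sqrt(v) * rng.standard_normal(n_paths)

# Monte Carlo estimate of P(price at horizon > project breakeven)
prob_export_viable = np.mean(prices > breakeven)
print(prob_export_viable)
```

Sampling from the exact transition avoids the discretization bias of an Euler scheme, which matters when the horizon spans many mean-reversion times.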
Simplified rotor load models and fatigue damage estimates for offshore wind turbines.
Muskulus, M
2015-02-28
The aim of rotor load models is to characterize and generate the thrust loads acting on an offshore wind turbine. Ideally, the rotor simulation can be replaced by time series from a model with only a few parameters and state variables. Such models are used extensively in control system design and, as a potentially new application area, in structural optimization of support structures. Different rotor load models are evaluated here for a jacket support structure in terms of fatigue lifetimes of relevant structural variables. All models were found to be lacking in accuracy, with differences of more than 20% in fatigue load estimates. The most accurate models were the use of an effective thrust coefficient determined from a regression analysis of dynamic thrust loads, and a novel stochastic model in state-space form. The stochastic model explicitly models the quasi-periodic components obtained from rotational sampling of turbulent fluctuations. Its state variables follow a mean-reverting Ornstein-Uhlenbeck process. Although promising, more work is needed on how to determine the parameters of the stochastic model before accurate lifetime predictions can be obtained without comprehensive rotor simulations. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Stochastic particle acceleration and statistical closures
Dimits, A.M.; Krommes, J.A.
1985-10-01
In a recent paper, Maasjost and Elsasser (ME) concluded, from the results of numerical experiments and heuristic arguments, that the Bourret and the direct-interaction approximation (DIA) are "of no use in connection with the stochastic acceleration problem" because (1) their predictions were equivalent to that of the simpler Fokker-Planck (FP) theory, and (2) either all or none of the closures were in good agreement with the data. Here some analytically tractable cases are studied and used to test the accuracy of these closures. The cause of discrepancy (2) is found to be the highly non-Gaussian nature of the force used by ME, a point not stressed by them. For the case where the force is a position-independent Ornstein-Uhlenbeck (i.e., Gaussian) process, an effective Kubo number K can be defined. For K << 1 an FP description is adequate, and conclusion (1) of ME follows; however, for K greater than or equal to 1 the DIA behaves much better qualitatively than the other two closures. For the non-Gaussian stochastic force used by ME, all common approximations fail, in agreement with (2).
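For the analytically tractable case of a position-independent Ornstein-Uhlenbeck force, the Fokker-Planck prediction for velocity diffusion, ⟨v²⟩ ≈ 2Dt with D = σ_F²τ_c (where σ_F is the force amplitude and τ_c its correlation time, valid for t ≫ τ_c), is easy to check numerically. A minimal sketch, with arbitrary parameters chosen so that the effective Kubo number is small and the FP description should hold:

```python
import numpy as np

def velocity_variance_under_ou_force(sigma_f=1.0, tau=0.1, dt=0.01,
                                     n_steps=2000, n_paths=4000, seed=1):
    """Integrate dv/dt = F(t), where F is a position-independent OU force
    with std sigma_f and correlation time tau; return <v^2>(t) over time."""
    rng = np.random.default_rng(seed)
    a = np.exp(-dt / tau)                    # exact OU update for the force
    s = sigma_f * np.sqrt(1 - a**2)
    f = rng.normal(0.0, sigma_f, n_paths)    # start force in its stationary state
    v = np.zeros(n_paths)
    var = np.empty(n_steps)
    for k in range(n_steps):
        v += f * dt                          # rectangle-rule velocity update
        f = a * f + s * rng.standard_normal(n_paths)
        var[k] = (v**2).mean()
    return var

var = velocity_variance_under_ou_force()
t_end = 2000 * 0.01
d_fp = 1.0**2 * 0.1    # Fokker-Planck diffusion coefficient sigma_f^2 * tau
print(round(var[-1], 2), 2 * d_fp * t_end)
```

For this short-correlation-time regime the simulated ⟨v²⟩ tracks the FP line 2Dt closely; making tau comparable to the dynamical time is the regime where, per the abstract, the FP closure degrades relative to the DIA.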
Linear response and correlation of a self-propelled particle in the presence of external fields
Caprini, Lorenzo; Marini Bettolo Marconi, Umberto; Vulpiani, Angelo
2018-03-01
We study the nonequilibrium properties of noninteracting active Ornstein-Uhlenbeck particles (AOUP) subject to an external nonuniform field, using a Fokker-Planck approach with a focus on the linear response and time-correlation functions. In particular, we compare different methods of computing these functions, including the unified colored noise approximation (UCNA). The AOUP model, described by the position of the particle and the active force acting on it, is usually mapped into a Markovian process describing the motion of a fictitious passive particle in terms of its position and velocity, where the effect of the activity is transferred into a position-dependent friction. We show that the form of the response function of the AOUP depends on whether we perturb the position and keep the active force unperturbed in the original variables, or perturb the position and keep the velocity unperturbed in the transformed variables. Indeed, as a result of the change of variables, the perturbation on the position becomes a perturbation on both the position and the fictitious velocity. We test these predictions by considering the response for three types of convex potentials: quadratic, quartic, and double-well. Moreover, by comparing the response of the AOUP model with the corresponding response of the UCNA model, we conclude that although the stationary properties are fairly well approximated by the UCNA, the nonequilibrium properties are not, an effect which is not negligible when the persistence time is large.
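In the simplest setting of a quadratic potential, the AOUP admits a closed-form stationary variance: for ẋ = −ax + v, with OU active velocity v of persistence time τ and diffusivity D, one has ⟨x²⟩ = D/(a(1+aτ)). This is a standard textbook result, not taken verbatim from the paper; a quick numerical check:

```python
import numpy as np

def aoup_harmonic_variance(a=1.0, d=1.0, tau=0.5, dt=1e-3,
                           n_steps=20000, n_paths=2000, seed=2):
    """AOUP in a harmonic trap: dx/dt = -a*x + v, where the active velocity
    v is OU with correlation <v(t)v(0)> = (d/tau) * exp(-|t|/tau)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_paths)
    v = rng.normal(0.0, np.sqrt(d / tau), n_paths)   # stationary active noise
    for _ in range(n_steps):
        x += (-a * x + v) * dt
        v += -v / tau * dt + np.sqrt(2 * d / tau**2 * dt) * rng.standard_normal(n_paths)
    return (x**2).mean()

var_num = aoup_harmonic_variance()
var_theory = 1.0 / (1.0 * (1.0 + 1.0 * 0.5))   # d / (a * (1 + a*tau))
print(round(var_num, 3), round(var_theory, 3))
```

Increasing tau at fixed d/tau illustrates the large-persistence regime where, according to the abstract, the UCNA stationary properties remain accurate while nonequilibrium responses do not.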
Bouchaud, Jean-Philippe; Sornette, Didier
1994-06-01
The ability to price risks and devise optimal investment strategies in the presence of an uncertain "random" market is the cornerstone of modern finance theory. We first consider the simplest such problem, that of the so-called "European call option," initially solved by Black and Scholes using Ito stochastic calculus for markets modelled by a log-Brownian stochastic process. A simple and powerful formalism is presented which allows us to generalize the analysis to a large class of stochastic processes, such as ARCH, jump or Lévy processes. We also address the case of correlated Gaussian processes, which is shown to be a good description of three different market indices (MATIF, CAC40, FTSE100). Our main result is the introduction of the concept of an optimal strategy in the sense of (functional) minimization of the risk with respect to the portfolio. While the risk may be made to vanish for particular continuous uncorrelated 'quasi-Gaussian' stochastic processes (including the Black-Scholes model), this is no longer the case for more general stochastic processes. The value of the residual risk is obtained and suggests the concept of risk-corrected option prices. In the presence of very large deviations, such as in Lévy processes, new criteria for rational fixing of option prices are discussed. We also apply our method to other types of options ('Asian', 'American') and discuss new possibilities ('double-decker', ...). The inclusion of transaction costs leads to the appearance of a natural characteristic trading time scale.
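For the log-Brownian (Black-Scholes) reference case mentioned above, a Monte Carlo option price can be checked against the closed-form solution. A generic sketch with arbitrary contract parameters:

```python
import numpy as np
from math import log, sqrt, exp, erf

def bs_call(s0, k, r, sigma, t):
    """Closed-form Black-Scholes price of a European call."""
    n = lambda z: 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF
    d1 = (log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s0 * n(d1) - k * exp(-r * t) * n(d2)

def mc_call(s0, k, r, sigma, t, n_paths=200000, seed=3):
    """Monte Carlo price under the log-Brownian (geometric BM) model."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * sqrt(t) * z)
    return exp(-r * t) * np.maximum(st - k, 0.0).mean()

print(round(bs_call(100, 100, 0.05, 0.2, 1.0), 4),
      round(mc_call(100, 100, 0.05, 0.2, 1.0), 4))
```

The Monte Carlo route is the one that generalizes to the ARCH, jump, or Lévy dynamics discussed in the abstract, where no closed form exists and the residual risk no longer vanishes.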
Tax, N.; van Zelst, S.J.; Teinemaa, I.; Gulden, Jens; Reinhartz-Berger, Iris; Schmidt, Rainer; Guerreiro, Sérgio; Guédria, Wided; Bera, Palash
2018-01-01
A plethora of automated process discovery techniques have been developed which aim to discover a process model based on event data originating from the execution of business processes. The aim of the discovered process models is to describe the control-flow of the underlying business process. At the
Condon, Steven; Hendrick, Robert; Stark, Michael E.; Steger, Warren
1997-01-01
The Flight Dynamics Division (FDD) of NASA's Goddard Space Flight Center (GSFC) recently embarked on a far-reaching revision of its process for developing and maintaining satellite support software. The new process relies on an object-oriented software development method supported by a domain specific library of generalized components. This Generalized Support Software (GSS) Domain Engineering Process is currently in use at the NASA GSFC Software Engineering Laboratory (SEL). The key facets of the GSS process are (1) an architecture for rapid deployment of FDD applications, (2) a reuse asset library for FDD classes, and (3) a paradigm shift from developing software to configuring software for mission support. This paper describes the GSS architecture and process, results of fielding the first applications, lessons learned, and future directions
Nakamoto, S.; Saito, H.; Muneyama, K.; Sato, T.; PrasannaKumar, S.; Kumar, A.; Frouin, R.
-chemical system that supports steady carbon circulation in geological time scale in the world ocean using Mixed Layer-Isopycnal ocean General Circulation model with remotely sensed Coastal Zone Color Scanner (CZCS) chlorophyll pigment concentration....
Viacheslav Mulyk
2017-06-01
Purpose: substantiation of the methodology of the training process of qualified female athletes engaged in bodybuilding in the general preparatory stage of the preparatory period, taking into account the biological cycle. Material & Methods: 18 qualified female athletes engaged in bodybuilding, members of the Kharkov region bodybuilding team, participated in the study. Results: a comparative characterization of the most frequently used training methodologies in bodybuilding is presented. An optimal methodology for qualified female athletes engaged in bodybuilding has been developed and justified, depending on the initial form of the athlete at the beginning of the general preparatory stage of training. The dependence of the change in body weight of the female athletes on the training process is shown. Conclusion: on the basis of the study, the author suggests an optimal training methodology depending on the mesocycle of training in the preparatory period in the general preparatory stage.
Scott, Felipe; Aroca, Germán; Caballero, José Antonio; Conejeros, Raúl
2017-07-01
The aim of this study is to analyze the techno-economic performance of process configurations for ethanol production involving solid-liquid separators and reactors in the saccharification and fermentation stage, a family of process configurations for which few alternatives have been proposed. Since including these process alternatives creates a large number of possible process configurations, a framework for process synthesis and optimization is proposed. This approach is supported by kinetic models fed with experimental data and a plant-wide techno-economic model. Among 150 process configurations, 40 show an improved minimum ethanol selling price (MESP) compared to a well-documented base case (BC), almost all include solid separators, and some show energy retrieved in products 32% higher compared to the BC. Moreover, 16 of them also show a lower capital investment per unit of ethanol produced per year. Several of the process configurations found in this work have not been reported in the literature. Copyright © 2017 Elsevier Ltd. All rights reserved.
Darabi, Aubteen; Kalyuga, Slava
2012-01-01
The process of improving organizational performance through designing systemic interventions has remarkable similarities to designing instruction for improving learners' performance. Both processes deal with subjects (learners and organizations correspondingly) with certain capabilities that are exposed to novel information designed for producing…
Hiromori, Tomohito
2009-01-01
The purpose of this study is to examine a process model of L2 learners' motivation. To investigate the overall process of motivation, the motivation of 148 university students was analyzed. Data were collected on three variables from the pre-decisional phase of motivation (i.e., value, expectancy, and intention) and four variables from the…
Leusink, Peter; Teunissen, Doreth; Lucassen, Peter L.; Laan, Ellen T.; Lagro-Janssen, Antoine L.
2018-01-01
Background: The gap between the relatively high prevalence of provoked vulvodynia (PVD) in the general population and the low incidence in primary care can partly be explained by physicians' lack of knowledge about the assessment and management of PVD. Objectives: To recognize barriers and
Luis Emilio Caro Betancourt
2008-09-01
This article addresses the theoretical referents that sustain the professional pedagogical behavior of the General Comprehensive Teacher of Secondary School using computer science in the teaching-learning process, taking into account the introduction of scientific and technical developments (computer science) in education and the professional's role, starting from the demands of the model conceived for Secondary School Education.
Simon de Lusignan
2006-03-01
Conclusions: Routinely collected primary care data could contribute more to the process of health improvement; however, those working with these data need to understand fully the complexity of the context within which data entry takes place.
T. P. Varshanina
2016-01-01
This work substantiates the need to ontologically couple methods of prediction of geospace processes with the fundamental bases of the modern epistemological picture of the world. A method of structural masks of power geographical fields is offered. On its basis, a way of solving the problem of indeterminacy and of overcoming the influence of the nonlinearity of geospace processes, as well as methods for their point prediction, are developed.
Y.M. Furman
2014-10-01
Purpose: to examine the effect of the general physical training of young swimmers on the body under an artificially created state of hypercapnic normobaric hypoxia. Material: the study involved 21 swimmers aged 13-14 years holding the third and second sports categories. Results: an original method of working with young swimmers is presented. Studies were conducted over 16 weeks of the preparatory period of the annual macrocycle. The average result in the 800 m general endurance race improved by 2.80%, speed-strength endurance increased by 8.24%, and dynamic strength endurance increased by 18.77%. During the formative experiment, the speed, agility, static endurance, flexibility and explosive strength of the athletes of the first experimental group did not change significantly. Conclusions: it was found that the use of the proposed technique provides a statistically significant increase in overall endurance, speed-strength endurance and dynamic strength endurance.
Abraini, Jacques H; Marassio, Guillaume; David, Helene N; Vallone, Beatrice; Prangé, Thierry; Colloc'h, Nathalie
2014-11-01
The mechanisms by which general anesthetics, including xenon and nitrous oxide, act are only beginning to be discovered. However, structural approaches revealed weak but specific protein-gas interactions. To improve knowledge, we performed x-ray crystallography studies under xenon and nitrous oxide pressure in a series of 10 binding sites within four proteins. Whatever the pressure, we show (1) hydrophobicity of the gas binding sites has a screening effect on xenon and nitrous oxide binding, with a threshold value of 83% beyond which and below which xenon and nitrous oxide, respectively, binds to their sites preferentially compared to each other; (2) xenon and nitrous oxide occupancies are significantly correlated respectively to the product and the ratio of hydrophobicity by volume, indicating that hydrophobicity and volume are binding parameters that complement and oppose each other's effects; and (3) the ratio of occupancy of xenon to nitrous oxide is significantly correlated to hydrophobicity of their binding sites. These data demonstrate that xenon and nitrous oxide obey different binding mechanisms, a finding that argues against all unitary hypotheses of narcosis and anesthesia, and indicate that the Meyer-Overton rule of a high correlation between anesthetic potency and solubility in lipids of general anesthetics is often overinterpreted. This study provides evidence that the mechanisms of gas binding to proteins and therefore of general anesthesia should be considered as the result of a fully reversible interaction between a drug ligand and a receptor as this occurs in classical pharmacology.
Pretschner, D.P.; Pfeiffer, G.; Deutsches Elektronen-Synchrotron
1981-01-01
In the field of nuclear medicine, BASIC and FORTRAN are currently being favoured as higher-level programming languages for computer-aided signal processing, and most operating systems of so-called "freely programmable analyzers" in nuclear wards have compilers for this purpose. However, FORTRAN is not an interactive language and thus not suited for conversational computing as a man-machine interface. BASIC, on the other hand, although a useful starting language for beginners, is not sufficiently sophisticated for complex nuclear medicine problems involving detailed calculations. Integration of new methods of signal acquisition, processing and presentation into an existing system or generation of new systems is difficult in FORTRAN, BASIC or ASSEMBLER and can only be done by system specialists, not by nuclear physicians. This problem may be solved by suitable interactive systems that are easy to learn, flexible, transparent and user-friendly. An interactive system of this type, XDS, was developed in the course of a project on evaluation of radiological image sequences. An XDS-generated command processing system for signal and image processing in nuclear medicine is described. The system is characterized by interactive program development and execution, problem-relevant data types, a flexible procedure concept and an integrated system implementation language for modern image processing algorithms. The advantages of the interactive system are illustrated by an example of diagnosis by nuclear methods. (orig.)
Xia, Chuan
2016-12-30
We demonstrate a versatile top-down ion exchange process, done at ambient temperature, to form epitaxial chalcogenide films and devices, with nanometer scale thickness control. To demonstrate the versatility of our process we have synthesized (1) epitaxial chalcogenide metallic and semiconducting films and (2) free-standing chalcogenide films and (3) completed in situ formation of atomically sharp heterojunctions by selective ion exchange. Epitaxial NiCo2S4 thin films prepared by our process show 115 times higher mobility than NiCo2S4 pellets (23 vs 0.2 cm(2) V-1 s(-1)) prepared by previous reports. By controlling the ion exchange process time, we made free-standing epitaxial films of NiCo2S4 and transferred them onto different substrates. We also demonstrate in situ formation of atomically sharp, lateral Schottky diodes based on NiCo2O4/NiCo2S4 heterojunction, using a single ion exchange step. Additionally, we show that our approach can be easily extended to other chalcogenide semiconductors. Specifically, we used our process to prepare Cu1.8S thin films with mobility that matches single crystal Cu1.8S (25 cm(2) V-1 s(-1)), which is ca. 28 times higher than the previously reported Cu1.8S thin film mobility (0.58 cm(2) V-1 s(-1)), thus demonstrating the universal nature of our process. This is the first report in which chalcogenide thin films retain the epitaxial nature of the precursor oxide films, an approach that will be useful in many applications.
Eppelbaum, Lev
2016-04-01
the basis of multimodel (Eppelbaum and Yakubov, 2004), informational (Eppelbaum, 2014), or wavelet (Eppelbaum et al., 2011, 2014; Eppelbaum, 2015c) approaches. In Israel, a lot of positive results were derived from magnetic method employment with application of the abovementioned procedures at numerous archaeological sites (e.g., Eppelbaum, 2000; Eppelbaum et al., 2000, 2001; Eppelbaum and Itkis, 2003, 2003a; Eppelbaum et al., 2006, 2010; Eppelbaum, 2010a, 2011a, 2014, 2015a). Similar effective techniques were developed for the interpretation of microgravity anomalies (Eppelbaum, 2009b, 2011b, 2015b), temperature anomalies (Eppelbaum, 2009a, 2013a), self-potential anomalies (Eppelbaum et al., 2003b, 2004), induced polarization anomalies (Khesin et al., 1997; Eppelbaum, 2000), piezoelectric anomalies (Neishtadt and Eppelbaum, 2012), and Very Low Frequency (VLF) anomalies (Eppelbaum, 2000; Eppelbaum and Khesin, 2012). The theoretical analysis indicates that for all the aforementioned geophysical methods a common interpretation methodology may be applied. The main peculiarities of the developed non-conventional system for analysis of potential and quasi-potential geophysical fields are presented in Table 1. Table 1. Elements of the developed system of geophysical fields processing and interpretation under complicated environments (on the basis of Khesin et al., 1996; Eppelbaum and Khesin, 2001; Eppelbaum et al., 2000, 2001, 2004; Eppelbaum and Yakubov, 2004; Eppelbaum et al., 2006; Eppelbaum, 2009a, 2009b; Eppelbaum, 2010a, 2010b; Eppelbaum et al., 2010, 2011; Eppelbaum and Mishne, 2011; Eppelbaum, 2011a, 2011b; Neishtadt and Eppelbaum, 2012; Eppelbaum, 2013a, 2013b, 2014; Eppelbaum and Kutasov, 2014; Eppelbaum et al., 2014; Eppelbaum, 2015a, 2015b, 2015c). [Table 1 column content garbled in extraction; only the caption is recoverable.]
Xia, Chuan; Li, Peng; Li, Jun; Jiang, Qiu; Zhang, Xixiang; Alshareef, Husam N.
2016-01-01
) epitaxial chalcogenide metallic and semiconducting films and (2) free-standing chalcogenide films and (3) completed in situ formation of atomically sharp heterojunctions by selective ion exchange. Epitaxial NiCo2S4 thin films prepared by our process show 115
Thais Rabanea-Souza
2016-03-01
Conclusion: The present findings suggest that executive function deficits are present in chronic schizophrenic patients. In addition, specific executive processes might be associated with symptom remission. Future studies prospectively examining first-episode, drug-naive patients diagnosed with schizophrenia may be especially elucidative.
Willson-Conrad, Angela; Kowalske, Megan Grunert
2018-01-01
Retention of students who major in STEM continues to be a major concern for universities. Many students cite poor teaching and disappointing grades as reasons for dropping out of STEM courses. Current college chemistry courses often assess what a student has learned through summative exams. To understand students' experiences of the exam process,…
Ding, Junyan; Johnson, Edward A.; Martin, Yvonne E.
2018-03-01
The diffusive and advective erosion-created landscapes have similar structure (hillslopes and channels) across different scales regardless of variations in drivers and controls. The relative magnitude of diffusive erosion to advective erosion (D/K ratio) in a landscape development model controls hillslope length, shape, and drainage density, which regulate soil moisture variation, one of the critical resources of plants, through the contributing area (A) and local slope (S) represented by a topographic index (TI). Here we explore the theoretical relation between geomorphic processes, TI, and the abundance and distribution of plants. We derived an analytical model that expresses the TI with D, K, and A. This gives us the relation between soil moisture variation and geomorphic processes. Plant tolerance curves are used to link plant performance to soil moisture. Using the hypothetical tolerance curves of three plants, we show that the abundance and distribution of xeric, mesic, and hydric plants on the landscape are regulated by the D/K ratio. Where diffusive erosion is the major erosion process (large D/K ratio), mesic plants have higher abundance relative to xeric and hydric plants and the landscape has longer and convex-upward hillslope and low channel density. Increasing the dominance of advective erosion increases relative abundance of xeric and hydric plants dominance, and the landscape has short and concave hillslope and high channel density.
Yang, X. I. A.; Marusic, I.; Meneveau, C.
2016-06-01
Townsend [Townsend, The Structure of Turbulent Shear Flow (Cambridge University Press, Cambridge, UK, 1976)] hypothesized that the logarithmic region in high-Reynolds-number wall-bounded flows consists of space-filling, self-similar attached eddies. Invoking this hypothesis, we express streamwise velocity fluctuations in the inertial layer in high-Reynolds-number wall-bounded flows as a hierarchical random additive process (HRAP): u_z^+ = ∑_{i=1}^{N_z} a_i. Here u is the streamwise velocity fluctuation, + indicates normalization in wall units, z is the wall-normal distance, and the a_i are independent, identically distributed random additives, each associated with an attached eddy in the wall-attached hierarchy. The number of random additives is N_z ~ ln(δ/z), where δ is the boundary layer thickness and ln is the natural logarithm. Due to its simplified structure, such a process leads to predictions of the scaling behaviors of various turbulence statistics in the logarithmic layer. Besides reproducing the known logarithmic scaling of moments, structure functions, and the correlation function ⟨u_z(x) u_z(x+r)⟩, new logarithmic laws in two-point statistics such as ⟨u_z^2(x) u_z^2(x+r)⟩^{1/2}, ⟨u_z^3(x) u_z^3(x+r)⟩^{1/3}, etc. can be derived using the HRAP formalism. Supporting empirical evidence for the logarithmic scaling in such statistics is found in the Melbourne High Reynolds Number Boundary Layer Wind Tunnel measurements. We also show that, at high Reynolds numbers, the above-mentioned new logarithmic laws can be derived by assuming the arrival of an attached eddy at a generic point in the flow field to be a Poisson process [Woodcock and Marusic, Phys. Fluids 27, 015104 (2015), 10.1063/1.4905301]. Taken together, the results provide new evidence supporting the essential ingredients of the attached eddy hypothesis for describing streamwise velocity fluctuations of large, momentum-transporting eddies in wall-bounded turbulence, while observed deviations suggest the need for further extensions of the
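The HRAP construction itself is easy to sketch: drawing N_z ~ ln(δ/z) i.i.d. additives per sample makes the variance of u_z grow logarithmically as z decreases. A minimal illustration (standard-normal additives are an assumption made here for simplicity; the hypothesis only requires i.i.d. additives):

```python
import numpy as np

def hrap_variance(delta=1.0, z_values=(0.01, 0.05, 0.2),
                  n_samples=100000, seed=4):
    """Hierarchical random additive process: u(z) = sum of N_z i.i.d.
    additives with N_z = round(ln(delta / z)); returns var(u) per height."""
    rng = np.random.default_rng(seed)
    out = {}
    for z in z_values:
        n_z = max(1, int(np.rint(np.log(delta / z))))
        u = rng.standard_normal((n_samples, n_z)).sum(axis=1)
        out[z] = u.var()
    return out

v = hrap_variance()
print(v)  # variance tracks N_z ~ ln(delta/z): larger near the wall
```

With unit-variance additives, var(u_z) equals N_z exactly, reproducing the logarithmic law ⟨u²⟩ = B - A ln(z/δ) with slope set by the additive variance.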
The Development of a General Purpose ARM-based Processing Unit for the ATLAS TileCal sROD
Cox, Mitchell Arij; Reed, Robert; Mellado Garcia, Bruce Rafael
2014-01-01
The Large Hadron Collider at CERN generates enormous amounts of raw data which present a serious computing challenge. After Phase-II upgrades in 2022, the data output from the ATLAS Tile Calorimeter will increase by 200 times to 41 Tb/s! ARM processors are common in mobile devices due to their low cost, low energy consumption and high performance. It is proposed that a cost-effective, high data throughput Processing Unit (PU) can be developed by using several consumer ARM processors in a clus...
Generalized Superconductivity. Generalized Levitation
Ciobanu, B.; Agop, M.
2004-01-01
In recent papers, gravitational superconductivity has been described. We introduce the concept of generalized superconductivity, observing that any nongeodesic motion and, in particular, motion in an electromagnetic field, can be transformed into a geodesic motion by a suitable choice of the connection. In the present paper, the gravitoelectromagnetic London equations are obtained from the generalized Helmholtz vortex theorem using the generalized local equivalence principle. In this context, the gravitoelectromagnetic Meissner effect and, implicitly, gravitoelectromagnetic levitation are derived. (authors)
The development of a general purpose ARM-based processing unit for the ATLAS TileCal sROD
Cox, M. A.; Reed, R.; Mellado, B.
2015-01-01
After Phase-II upgrades in 2022, the data output from the LHC ATLAS Tile Calorimeter will increase significantly. ARM processors are common in mobile devices due to their low cost, low energy consumption and high performance. It is proposed that a cost-effective, high data throughput Processing Unit (PU) can be developed by using several consumer ARM processors in a cluster configuration to allow aggregated processing performance and data throughput while maintaining minimal software design difficulty for the end-user. This PU could be used for a variety of high-level functions on the high-throughput raw data such as spectral analysis and histograms to detect possible issues in the detector at a low level. High-throughput I/O interfaces are not typical in consumer ARM System on Chips but high data throughput capabilities are feasible via the novel use of PCI-Express as the I/O interface to the ARM processors. An overview of the PU is given and the results for performance and throughput testing of four different ARM Cortex System on Chips are presented.
The Development of a General Purpose ARM-based Processing Unit for the ATLAS TileCal sROD
Cox, Mitchell Arij; The ATLAS collaboration; Mellado Garcia, Bruce Rafael
2015-01-01
The Large Hadron Collider at CERN generates enormous amounts of raw data which present a serious computing challenge. After Phase-II upgrades in 2022, the data output from the ATLAS Tile Calorimeter will increase by 200 times to 41 Tb/s! ARM processors are common in mobile devices due to their low cost, low energy consumption and high performance. It is proposed that a cost-effective, high data throughput Processing Unit (PU) can be developed by using several consumer ARM processors in a cluster configuration to allow aggregated processing performance and data throughput while maintaining minimal software design difficulty for the end-user. This PU could be used for a variety of high-level functions on the high-throughput raw data such as spectral analysis and histograms to detect possible issues in the detector at a low level. High-throughput I/O interfaces are not typical in consumer ARM System on Chips but high data throughput capabilities are feasible via the novel use of PCI-Express as the I/O interface ...
The development of a general purpose ARM-based processing unit for the ATLAS TileCal sROD
Cox, M A; Reed, R; Mellado, B
2015-01-01
After Phase-II upgrades in 2022, the data output from the LHC ATLAS Tile Calorimeter will increase significantly. ARM processors are common in mobile devices due to their low cost, low energy consumption and high performance. It is proposed that a cost-effective, high data throughput Processing Unit (PU) can be developed by using several consumer ARM processors in a cluster configuration to allow aggregated processing performance and data throughput while maintaining minimal software design difficulty for the end-user. This PU could be used for a variety of high-level functions on the high-throughput raw data such as spectral analysis and histograms to detect possible issues in the detector at a low level. High-throughput I/O interfaces are not typical in consumer ARM System on Chips but high data throughput capabilities are feasible via the novel use of PCI-Express as the I/O interface to the ARM processors. An overview of the PU is given and the results for performance and throughput testing of four different ARM Cortex System on Chips are presented
The Development of a General Purpose ARM-based Processing Unit for the TileCal sROD
Cox, Mitchell A
2014-01-01
The Large Hadron Collider at CERN generates enormous amounts of raw data which present a serious computing challenge. After planned upgrades in 2022, the data output from the ATLAS Tile Calorimeter will increase by 200 times to 41 Tb/s! ARM processors are common in mobile devices due to their low cost, low energy consumption and high performance. It is proposed that a cost-effective, high data throughput Processing Unit (PU) can be developed by using several consumer ARM processors in a cluster configuration to allow aggregated processing performance and data throughput while maintaining minimal software design difficulty for the end-user. This PU could be used for a variety of high-level functions on the high-throughput raw data such as spectral analysis and histograms to detect possible issues in the detector at a low level. High-throughput I/O interfaces are not typical in consumer ARM System on Chips but high data throughput capabilities are feasible via the novel use of PCI-Express as the I/O interface t...
Hirsh, Jacob B; Galinsky, Adam D; Zhong, Chen-Bo
2011-09-01
Social power, alcohol intoxication, and anonymity all have strong influences on human cognition and behavior. However, the social consequences of each of these conditions can be diverse, sometimes producing prosocial outcomes and other times enabling antisocial behavior. We present a general model of disinhibition to explain how these seemingly contradictory effects emerge from a single underlying mechanism: The decreased salience of competing response options prevents activation of the Behavioral Inhibition System (BIS). As a result, the most salient response in any given situation is expressed, regardless of whether it has prosocial or antisocial consequences. We review three distinct routes through which power, alcohol intoxication, and anonymity reduce the salience of competing response options, namely, through Behavioral Approach System (BAS) activation, cognitive depletion, and reduced social desirability concerns. We further discuss how these states can both reveal and shape the person. Overall, our approach allows for multiple domain-specific models to be unified within a common conceptual framework that explains how both situational and dispositional factors can influence the expression of disinhibited behavior, producing both prosocial and antisocial outcomes. © Association for Psychological Science 2011.
Holder, N.D.; Strand, J.B.; Schwarz, F.A.; Drake, R.N.
1981-11-01
The Federal Republic of Germany (FRG) and the United States (US) are cooperating on certain aspects of gas-cooled reactor technology under an umbrella agreement. Under the spent fuel treatment development section of the agreement, both FRG mixed uranium/thorium and low-enriched uranium fuel spheres have been processed in the Department of Energy-sponsored cold pilot plant for high-temperature gas-cooled reactor (HTGR) fuel processing at General Atomic Company in San Diego, California. The FRG fuel spheres were crushed and burned to recover coated fuel particles suitable for further treatment for uranium recovery. Successful completion of the tests described in this paper demonstrated certain modifications to the US HTGR fuel burning process necessary for FRG fuel treatment. Results of the tests will be used in the design of a US/FRG joint prototype headend facility for HTGR fuel
Piriou, Pierre-Yves; Faure, Jean-Marc; Lesage, Jean-Jacques
2017-01-01
This paper presents a modeling framework that makes it possible to describe, in an integrated manner, the structure of the critical system to be analyzed (using an enriched fault tree), the dysfunctional behavior of its components (by means of Markov processes), and the reconfiguration strategies that have been planned to ensure safety and availability (with Moore machines). This framework has been developed from BDMP (Boolean logic Driven Markov Processes), a previous framework for dynamic repairable systems. First, the contribution is motivated by pinpointing the limitations of BDMP in modeling complex reconfiguration strategies and the failures of the control of these strategies. The syntax and semantics of GBDMP (Generalized Boolean logic Driven Markov Processes) are then formally defined; in particular, an algorithm to analyze the dynamic behavior of a GBDMP model is developed. The modeling capabilities of this framework are illustrated on three representative examples. Last, qualitative and quantitative analyses of GBDMP models highlight the benefits of the approach.
Holder, N.D.; Strand, J.B.; Schwarz, F.A.; Tischer, H.E.
1980-11-01
The Federal Republic of Germany (FRG) and the United States (US) are cooperating on certain aspects of gas-cooled reactor technology under an umbrella agreement. Under the spent fuel treatment section of the agreement, FRG fuel spheres were recently sent for processing in the Department of Energy sponsored cold pilot plant for High-Temperature Gas-Cooled Reactor (HTGR) fuel processing at General Atomic Company in San Diego, California. The FRG fuel spheres were crushed and burned to recover coated fuel particles. These particles were in turn crushed and burned to recover the fuel-bearing kernels for further treatment for uranium recovery. Successful completion of the tests described in this paper demonstrated the applicability of the US HTGR fuel treatment flowsheet to FRG fuel processing. 10 figures
Liu, Chunbo; Pan, Feng; Li, Yun
2016-07-29
Glutamate is of great importance in the food and pharmaceutical industries, yet effective statistical approaches for fault diagnosis in the glutamate fermentation process are still lacking. To date, a statistical approach based on a generalized additive model (GAM) and the bootstrap has not been used for fault diagnosis in fermentation processes, much less the fermentation of glutamate with small sample sets. A combined GAM and bootstrap approach was developed for online fault diagnosis in the fermentation process of glutamate with small sample sets. GAM was first used to model the relationship between glutamate production and different fermentation parameters using online data from four normal fermentation experiments of glutamate. The fitted GAM, with fermentation time, dissolved oxygen, oxygen uptake rate and carbon dioxide evolution rate as predictors, captured 99.6 % of the variance of glutamate production during the fermentation process. The bootstrap was then used to quantify the uncertainty of the production of glutamate estimated from the fitted GAM, via a 95 % confidence interval. The proposed approach was then applied to online fault diagnosis in abnormal fermentation processes of glutamate, a fault being defined as the estimated production of glutamate falling outside the 95 % confidence interval. The online fault diagnosis identified not only the start of a fault in the fermentation process, but also its end, once fermentation conditions returned to normal. The proposed approach required only a small sample set from normal fermentation experiments to establish the model, and thereafter only online recorded fermentation parameters for fault diagnosis in the fermentation process of glutamate. The proposed approach based on GAM and the bootstrap thus provides a new and effective way to diagnose faults in the fermentation process of glutamate with small sample sets.
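The fault rule described above (flag a time point whenever the estimate leaves the bootstrap 95 % band) can be sketched with a generic regression stand-in for the GAM. All data and the polynomial model below are synthetic and purely illustrative; they are not the paper's fitted GAM or its fermentation data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the fitted GAM: a polynomial of production vs. time,
# trained on "normal" runs (synthetic logistic-growth data).
t_norm = np.tile(np.linspace(0, 30, 60), 4)            # 4 normal runs
y_norm = 100 / (1 + np.exp(-(t_norm - 15) / 3))        # growth curve
y_norm += rng.normal(0, 2, t_norm.size)                # measurement noise

def fit_predict(t_fit, y_fit, t_new):
    """Fit the stand-in model and predict at new time points."""
    return np.polyval(np.polyfit(t_fit, y_fit, 5), t_new)

# Bootstrap resampling of the training points yields a 95 % band.
t_new = np.linspace(0, 30, 100)
boots = np.empty((500, t_new.size))
for b in range(500):
    idx = rng.integers(0, t_norm.size, t_norm.size)
    boots[b] = fit_predict(t_norm[idx], y_norm[idx], t_new)
lo, hi = np.percentile(boots, [2.5, 97.5], axis=0)

# A new run is flagged as faulty wherever it leaves the band;
# here production collapses after t = 20 (a simulated fault).
y_new = 100 / (1 + np.exp(-(t_new - 15) / 3)) - 25 * (t_new > 20)
fault = (y_new < lo) | (y_new > hi)
```

The same logic carries over when the stand-in polynomial is replaced by an actual GAM fit and the band is computed from the bootstrap replicates of that fit.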
Togelius, Julian; Yannakakis, Georgios N.; 2016 IEEE Conference on Computational Intelligence and Games (CIG)
2016-01-01
Arguably the grand goal of artificial intelligence research is to produce machines with general intelligence: the capacity to solve multiple problems, not just one. Artificial intelligence (AI) has investigated the general intelligence capacity of machines within the domain of games more than any other domain given the ideal properties of games for that purpose: controlled yet interesting and computationally hard problems. This line of research, however, has so far focuse...
Mander, Johannes V; Jacob, Gitta A; Götz, Lea; Sammet, Isa; Zipfel, Stephan; Teufel, Martin
2015-01-01
The study aimed at analyzing associations between Grawe's general mechanisms of change and Young's early maladaptive schemas (EMS). Therefore, 98 patients completed the Scale for the Multiperspective Assessment of General Change Mechanisms in Psychotherapy (SACiP), the Young Schema Questionnaire-Short Form Revised (YSQ S3R), and diverse outcome measures at the beginning and end of treatment. Our results are important for clinical applications, as we demonstrated strong predictive effects of change mechanisms on schema domains using regression analyses and cross-lagged panel models. Resource activation experiences seem to be especially crucial in fostering alterations in EMS, as this change mechanism demonstrated significant associations with several schema domains. Future research should investigate these aspects in more detail using observer-based micro-process analyses.
Jia, Ding
2017-12-01
The expected indefinite causal structure in quantum gravity poses a challenge to the notion of entanglement: If two parties are in an indefinite causal relation of being causally connected and not, can they still be entangled? If so, how does one measure the amount of entanglement? We propose to generalize the notions of entanglement and entanglement measure to address these questions. Importantly, the generalization opens the path to study quantum entanglement of states, channels, networks, and processes with definite or indefinite causal structure in a unified fashion, e.g., we show that the entanglement distillation capacity of a state, the quantum communication capacity of a channel, and the entanglement generation capacity of a network or a process are different manifestations of one and the same entanglement measure.
Ehrlich, Carolyn; Kendall, Elizabeth; St John, Winsome
2013-01-01
The aim of this study was to develop understanding about how a registered nurse-provided care coordination model can "fit" within organisational processes and professional relationships in general practice. In this project, registered nurses were involved in implementation of registered nurse-provided care coordination, which aimed to improve quality of care and support patients with chronic conditions to maintain their care and manage their lifestyle. Focus group interviews were conducted with nurses using a semi-structured interview protocol. Interpretive analysis of interview data was conducted using Normalization Process Theory to structure data analysis and interpretation. Three core themes emerged: (1) pre-requisites for care coordination, (2) the intervention in context, and (3) achieving outcomes. Pre-requisites were adequate funding mechanisms, engaging organisational power-brokers, leadership roles, and utilising and valuing registered nurses' broad skill base. To ensure registered nurse-provided care coordination processes were sustainable and embedded, mentoring and support as well as allocated time were required. Finally, when registered nurse-provided care coordination was supported, positive client outcomes were achievable, and transformation of professional practice and development of advanced nursing roles was possible. Registered nurse-provided care coordination could "fit" within the context of general practice if it was adequately resourced. However, the heterogeneity of general practice can create an impasse that could be addressed through close attention to shared and agreed understandings. Successful development and implementation of registered nurse roles in care coordination requires attention to educational preparation, support of the individual nurse, and attention to organisational structures, financial implications and team member relationships.
Sieng, Sokha; Hurst, Cameron
2017-08-07
This study compares a combination of processes of care and clinical targets among patients with type 2 diabetes mellitus (T2DM) between specialist diabetes clinics (SDCs) and general medical clinics (GMCs), and how differences between these two types of clinics vary with hospital type (community, provincial and regional). Type 2 diabetes mellitus patient medical records were collected from 595 hospitals (499 community, 70 provincial, 26 regional) in Thailand between April 1 and June 30, 2012, resulting in a cross-sectional sample of 26,860 patients. Generalized linear mixed modeling was conducted to examine associations between clinic type and quality of care. The outcome variables of interest were split into clinical targets and processes of care. A subsequent subgroup analysis examined whether the nature of clinical target and process-of-care differences between GMCs and SDCs varied with hospital type (regional, provincial, community). Regardless of hospital type (regional, provincial, or community), patients attending SDCs were considerably more likely to have eye and foot exams. In larger hospitals (regional and provincial), patients attending SDCs were more likely to achieve the HbA1c exam, All FACE exam, BP target, and the Num7Q. Interestingly, SDCs performed better than GMCs only at provincial hospitals for the LDL-C target and the All7Q. Finally, patients with T2DM who attended community hospital GMCs had a better chance of achieving the blood pressure target than patients who attended community hospital SDCs. Specialist diabetes clinics outperformed general medical clinics at both regional and provincial hospitals for all quality-of-care indicators, and the number of quality-of-care indicators achieved was never lower. However, this better performance of SDCs was not observed in community hospitals; indeed, GMCs outperformed SDCs for some quality-of-care indicators in the community-level setting.
Zhang, Jing; Liang, Lichen; Anderson, Jon R; Gatewood, Lael; Rottenberg, David A; Strother, Stephen C
2008-01-01
As functional magnetic resonance imaging (fMRI) becomes widely used, the demands for evaluation of fMRI processing pipelines and validation of fMRI analysis results are increasing rapidly. The current NPAIRS package, an IDL-based fMRI processing pipeline evaluation framework, lacks system interoperability and the ability to evaluate general linear model (GLM)-based pipelines using prediction metrics. Thus, it cannot fully evaluate fMRI analytical software modules such as FSL.FEAT and NPAIRS.GLM. In order to overcome these limitations, a Java-based fMRI processing pipeline evaluation system was developed. It integrated YALE (a machine learning environment) into Fiswidgets (an fMRI software environment) to obtain system interoperability and applied an algorithm to measure GLM prediction accuracy. The results demonstrated that the system can evaluate fMRI processing pipelines with univariate GLM and multivariate canonical variates analysis (CVA)-based models on real fMRI data based on prediction accuracy (classification accuracy) and statistical parametric image (SPI) reproducibility. In addition, a preliminary study was performed in which four fMRI processing pipelines with GLM and CVA modules such as FSL.FEAT and NPAIRS.CVA were evaluated with the system. The results indicated that (1) the system can compare different fMRI processing pipelines with heterogeneous models (NPAIRS.GLM, NPAIRS.CVA and FSL.FEAT) and rank their performance by automatic performance scoring, and (2) the ranking of pipeline performance is highly dependent on the preprocessing operations. These results suggest that the system will be of value for the comparison, validation, standardization and optimization of functional neuroimaging software packages and fMRI processing pipelines.
Maekawa, T.; Tanaka, H.; Uchida, M.; Igami, H.
2003-01-01
General properties of the scattering matrix, which governs the mode conversion process between electron Bernstein (B) waves and external electromagnetic (EM) waves in the presence of a steep density gradient, are theoretically analyzed. Based on the analysis, polarization adjustment of incident EM waves for optimal mode conversion to B waves is possible and effective for a range of density gradients near the upper hybrid resonance that is not covered by the previously proposed schemes of perpendicular injection of the X mode and oblique injection of the O mode. Furthermore, the analysis shows that the polarization of the EM waves externally emitted from B waves is uniquely related to the optimized polarization of incident EM waves for B wave heating, and that the mode conversion rate is the same for both the emission process and injection with the optimized polarization.
Weis, Daniel; Willems, Helmut
2017-06-01
The article deals with the question of how aggregated data which allow for generalizable insights can be generated from single-case based qualitative investigations. Thereby, two central challenges of qualitative social research are outlined: First, researchers must ensure that the single-case data can be aggregated and condensed so that new collective structures can be detected. Second, they must apply methods and practices to allow for the generalization of the results beyond the specific study. In the following, we demonstrate how and under what conditions these challenges can be addressed in research practice. To this end, the research process of the construction of an empirically based typology is described. A qualitative study, conducted within the framework of the Luxembourg Youth Report, is used to illustrate this process. Specifically, strategies are presented which increase the likelihood of generalizability or transferability of the results, while also highlighting their limitations.
Process generalization in conceptual models
Wieringa, Roelf J.
In conceptual modeling, the universe of discourse (UoD) is divided into classes which have a taxonomic structure. The classes are usually defined in terms of attributes (all objects in a class share attribute names) and possibly of events. For example, the class of employees is the set of objects to
Yeung, Anna; Hocking, Jane; Guy, Rebecca; Fairley, Christopher K; Smith, Kirsty; Vaisey, Alaina; Donovan, Basil; Imrie, John; Gunn, Jane; Temple-Smith, Meredith
2018-03-28
Chlamydia is the most common notifiable sexually transmissible infection in Australia. Left untreated, it can develop into pelvic inflammatory disease and infertility. The majority of notifications come from general practice and it is ideally situated to test young Australians. The Australian Chlamydia Control Effectiveness Pilot (ACCEPt) was a multifaceted intervention that aimed to reduce chlamydia prevalence by increasing testing in 16- to 29-year-olds attending general practice. GPs were interviewed to describe the effectiveness of the ACCEPt intervention in integrating chlamydia testing into routine practice using Normalization Process Theory (NPT). GPs were purposively selected based on age, gender, geographic location and size of practice at baseline and midpoint. Interview data were analysed regarding the intervention components and results were interpreted using NPT. A total of 44 GPs at baseline and 24 at midpoint were interviewed. Most GPs reported offering a test based on age at midpoint versus offering a test based on symptoms or patient request at baseline. Quarterly feedback was the most significant ACCEPt component for facilitating a chlamydia test. The ACCEPt intervention has been able to moderately normalize chlamydia testing among GPs, although the components had varying levels of effectiveness. NPT can demonstrate the effective implementation of an intervention in general practice and has been valuable in understanding which components are essential and which components can be improved upon.
Stochastic modeling of the Fermi/LAT γ-ray blazar variability
Sobolewska, M. A.; Siemiginowska, A. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Kelly, B. C. [Department of Physics, Broida Hall, University of California, Santa Barbara, CA 93107 (United States); Nalewajko, K., E-mail: malgosia@camk.edu.pl [JILA, University of Colorado and National Institute of Standards and Technology, 440 UCB, Boulder, CO 80309 (United States)
2014-05-10
We study the γ-ray variability of 13 blazars observed with the Fermi/Large Area Telescope (LAT). These blazars have the most complete light curves collected during the first four years of the Fermi sky survey. We model them with the Ornstein-Uhlenbeck (OU) process or a mixture of the OU processes. The OU process has power spectral density (PSD) proportional to 1/f {sup α} with α changing at a characteristic timescale, τ{sub 0}, from 0 (τ >> τ{sub 0}) to 2 (τ << τ{sub 0}). The PSD of the mixed OU process has two characteristic timescales and an additional intermediate region with 0 < α < 2. We show that the OU model provides a good description of the Fermi/LAT light curves of three blazars in our sample. For the first time, we constrain a characteristic γ-ray timescale of variability in two BL Lac sources, 3C 66A and PKS 2155-304 (τ{sub 0} ≅ 25 days and τ{sub 0} ≅ 43 days, respectively, in the observer's frame), which are longer than the soft X-ray timescales detected in blazars and Seyfert galaxies. We find that the mixed OU process approximates the light curves of the remaining 10 blazars better than the OU process. We derive limits on their long and short characteristic timescales, and infer that their Fermi/LAT PSD resemble power-law functions. We constrain the PSD slopes for all but one source in the sample. We find hints for sub-hour Fermi/LAT variability in four flat spectrum radio quasars. We discuss the implications of our results for theoretical models of blazar variability.
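The OU dynamics underlying this model, dX = -(X/τ₀) dt + σ dW, admit an exact discretization, so a light curve can be simulated at any sampling step without discretization bias. A minimal numpy sketch (the timescales and weights below are illustrative, not the paper's fitted values):

```python
import numpy as np

def simulate_ou(tau0, sigma, dt, n, x0=0.0, rng=None):
    """Exact discretization of dX = -(X/tau0) dt + sigma dW.

    The stationary variance is sigma**2 * tau0 / 2; the PSD is flat
    for frequencies well below 1/tau0 and falls off as 1/f**2 above it.
    """
    rng = np.random.default_rng(rng)
    phi = np.exp(-dt / tau0)                        # AR(1) coefficient
    sd = sigma * np.sqrt(tau0 / 2 * (1 - phi**2))   # innovation std dev
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = phi * x[i - 1] + sd * rng.standard_normal()
    return x

# A "mixed OU" light curve can be sketched as a weighted sum of OU
# processes with long and short characteristic timescales, giving a
# PSD with an intermediate region where 0 < alpha < 2:
lc = (0.5 * simulate_ou(25.0, 1.0, 1.0, 1000, rng=1)
      + 0.5 * simulate_ou(1.0, 1.0, 1.0, 1000, rng=2))
```

Because the AR(1) update uses the exact transition density, the same routine works for the irregular sampling typical of γ-ray light curves by recomputing `phi` and `sd` per step from the actual time gaps.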
Simon Gorin
Several models in the verbal domain of short-term memory (STM) consider a dissociation between item and order processing. This view is supported by data demonstrating that different types of time-based interference have a greater effect on memory for the order of to-be-remembered items than on memory for the items themselves. The present study investigated the domain-generality of the item versus serial order dissociation by comparing the differential effects of time-based interfering tasks, such as rhythmic interference and articulatory suppression, on item and order processing in verbal and musical STM domains. In Experiment 1, participants had to maintain sequences of verbal or musical information in STM, followed by a probe sequence, this under different conditions of interference (no-interference, rhythmic interference, articulatory suppression). They were required to decide whether all items of the probe list matched those of the memory list (item condition) or whether the order of the items in the probe sequence matched the order in the memory list (order condition). In Experiment 2, participants performed a serial order probe recognition task for verbal and musical sequences ensuring sequential maintenance processes, under no-interference or rhythmic interference conditions. For Experiment 1, serial order recognition was not significantly more impacted by interfering tasks than was item recognition, this for both verbal and musical domains. For Experiment 2, we observed selective interference of the rhythmic interference condition on both musical and verbal order STM tasks. Overall, the results suggest a similar and selective sensitivity to time-based interference for serial order STM in verbal and musical domains, but only when the STM tasks ensure sequential maintenance processes.
Gorin, Simon; Kowialiewski, Benjamin; Majerus, Steve
2016-01-01
Several models in the verbal domain of short-term memory (STM) consider a dissociation between item and order processing. This view is supported by data demonstrating that different types of time-based interference have a greater effect on memory for the order of to-be-remembered items than on memory for the items themselves. The present study investigated the domain-generality of the item versus serial order dissociation by comparing the differential effects of time-based interfering tasks, such as rhythmic interference and articulatory suppression, on item and order processing in verbal and musical STM domains. In Experiment 1, participants had to maintain sequences of verbal or musical information in STM, followed by a probe sequence, this under different conditions of interference (no-interference, rhythmic interference, articulatory suppression). They were required to decide whether all items of the probe list matched those of the memory list (item condition) or whether the order of the items in the probe sequence matched the order in the memory list (order condition). In Experiment 2, participants performed a serial order probe recognition task for verbal and musical sequences ensuring sequential maintenance processes, under no-interference or rhythmic interference conditions. For Experiment 1, serial order recognition was not significantly more impacted by interfering tasks than was item recognition, this for both verbal and musical domains. For Experiment 2, we observed selective interference of the rhythmic interference condition on both musical and verbal order STM tasks. Overall, the results suggest a similar and selective sensitivity to time-based interference for serial order STM in verbal and musical domains, but only when the STM tasks ensure sequential maintenance processes.
Tendinous framework of anurans reveals an all-purpose morphology.
Fratani, Jéssica; Ponssa, María Laura; Abdala, Virginia
2018-02-01
Tendons are directly associated with movement, amplifying power and reducing muscular work. Taking into account habitat and locomotor challenges faced by anurans, we identify the more conspicuous superficial tendons of a neotropical anuran group and investigate their relation to the former factors. We show that tendons can be visualized as an anatomical framework connected through muscles and/or fascia, and describe the most superficial tendinous layer of the postcranium of Leptodactylus latinasus. To analyze the relation between tendon morphology and ecological characters, we test the relative length ratio of 10 tendon-muscle (t-m) elements in 45 leptodactylid species while taking phylogeny into account. We identify the evolutionary model that best explains our variables. Additionally, we optimize t-m ratio values, and the shape of the longissimus dorsi insertion, onto a selected phylogeny of the species. Our data show the existence of an all-purpose morphology that seems to have evolved independently of ecology and functional requirements. This is indicated by no significant relation between morphometric data of the analyzed tendons and habitat use or locomotion, a strong phylogenetic component to most of the analyzed variables, and a generalized pattern of intermediate values for ancestral states. Ornstein-Uhlenbeck is the model that best explains most t-m variables, indicating that stabilizing selection or selective optima might be driving shifts in tendon length within Leptodactylidae. Herein, we show the substantial influence that phylogeny has on tendon morphology, demonstrating that a generalized and stable morphological configuration of tendons is adequate to enable versatile locomotor modes and habitat use. This is an attempt to present the tendinous system as a framework for body support in vertebrates, and can be considered a starting point for further ecomorphological research of this anatomical system in anurans. Copyright © 2017 Elsevier GmbH. All rights reserved.
Clavel, Julien; Aristide, Leandro; Morlon, Hélène
2018-06-19
Working with high-dimensional phylogenetic comparative datasets is challenging because likelihood-based multivariate methods suffer from low statistical performance as the number of traits p approaches the number of species n, and because some computational complications occur when p exceeds n. Alternative phylogenetic comparative methods have recently been proposed to deal with the large p, small n scenario, but their use and performance are limited. Here we develop a penalized likelihood framework to deal with high-dimensional comparative datasets. We propose various penalizations and methods for selecting the intensity of the penalties. We apply this general framework to the estimation of parameters (the evolutionary trait covariance matrix and parameters of the evolutionary model) and model comparison for the high-dimensional multivariate Brownian (BM), Early-burst (EB), Ornstein-Uhlenbeck (OU) and Pagel's lambda models. We show using simulations that our penalized likelihood approach dramatically improves the estimation of evolutionary trait covariance matrices and model parameters when p approaches n, and allows for their accurate estimation when p equals or exceeds n. In addition, we show that penalized likelihood models can be efficiently compared using the Generalized Information Criterion (GIC). We implement these methods, as well as the related estimation of ancestral states and the computation of phylogenetic PCA, in the R packages RPANDA and mvMORPH. Finally, we illustrate the utility of the new proposed framework by evaluating evolutionary model fit, analyzing integration patterns, and reconstructing evolutionary trajectories for a high-dimensional 3-D dataset of brain shape in the New World monkeys. We find clear support for an Early-burst model, suggesting an early diversification of brain morphology during the ecological radiation of the clade. Penalized likelihood offers an efficient way to deal with high-dimensional multivariate comparative data.
Phylogenetic correlograms and the evolution of body size in South American owls (Strigiformes)
José Alexandre Felizola Diniz-Filho
2000-06-01
During the last few years, many models have been proposed to link microevolutionary processes to macroevolutionary patterns, defined by comparative data analysis. Among these, Brownian motion and Ornstein-Uhlenbeck (O-U) processes have been used to model, respectively, genetic drift or directional selection and stabilizing selection. These models produce different curves of pairwise variance between species against time since divergence, in such a way that different profiles appear in phylogenetic correlograms. We analyzed variation in body length among 19 species of South American owls by means of phylogenetic correlograms constructed using Moran's I coefficient in four distance classes. Phylogeny among species was based on DNA hybridization. The observed correlogram was then compared with 500 correlograms obtained by simulations of Brownian motion and O-U over the same phylogeny, using discriminant analysis. The observed correlogram indicates a phylogenetic gradient up to 45 mya, when coefficients tend to stabilize, and it is similar to the correlograms produced by the O-U process. This is expected when we consider that the body size of organisms is correlated with many ecological and life-history traits and subjected to many constraints that can be modeled by the O-U process, which has been used to describe evolution under stabilizing selection.
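The contrast between the two models' correlogram profiles follows from their expected pairwise variances: under Brownian motion the expected squared trait difference between two species grows linearly with divergence time, while under a stationary O-U process it saturates, which is why the owl correlogram stabilizes after about 45 mya. A small sketch (σ² and α are illustrative, not estimates from the owl data):

```python
import numpy as np

def pairwise_variance(t, sigma2=1.0, alpha=None):
    """Expected squared trait difference between two species that
    diverged t time units ago.

    Brownian motion (alpha=None): grows linearly as 2*sigma2*t.
    Stationary Ornstein-Uhlenbeck: saturates at sigma2/alpha.
    """
    t = np.asarray(t, dtype=float)
    if alpha is None:                         # drift / directional case
        return 2.0 * sigma2 * t
    return (sigma2 / alpha) * (1.0 - np.exp(-2.0 * alpha * t))

t = np.linspace(0, 90, 10)                    # divergence times (mya)
bm = pairwise_variance(t)                     # keeps increasing
ou = pairwise_variance(t, alpha=0.05)         # levels off, as in the owls
```

Plotting `bm` and `ou` against `t` reproduces the qualitative difference that the discriminant analysis of simulated correlograms exploits.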
Analysis on one type of swing option in the energy market
Mistry, Hetal A.
2005-01-01
In the Nordic electricity market most of the trading takes place in derivatives and options. Describing these products theoretically requires knowledge of stochastic analysis. This thesis derives a price model for one type of swing option in the energy market. The main aim of the thesis is to introduce a coal power plant and show how to approach the pricing problem if such a plant were built in Norway. I start out with a model for the spot prices of electricity and coal, and then derive theoretical option prices, using a Schwartz process and Ornstein-Uhlenbeck processes to model the spot prices. The model also incorporates mean reversion, which is an important aspect of energy prices. Historical data for the spot prices are used to estimate the variables in the Schwartz model. The main objectives of the thesis were to find the price for a tolling contract in the energy market and the production volume, which is the producer's control function. The first chapter gives an overview of the agreement and the formula used to derive the price. The second chapter provides the material needed to derive the price and production volume, such as the dynamics of the spot prices for electricity and coal and their solutions. The third chapter gives a statistical look at these stochastic processes. In the last chapter I tested the price model for the stochastic control problem and found that the swing option can be bounded in two ways: (1) as in Margrabe's solution; (2) as a spread option. The use of the model is discussed. (Author)
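A mean-reverting spot price of the Schwartz one-factor type can be sketched as the exponential of an OU process in the log price. The sketch below uses the exact OU transition; all parameter values are illustrative, not estimates from the thesis:

```python
import numpy as np

def schwartz_spot(s0, kappa, theta, sigma, dt, n, rng=None):
    """One-factor Schwartz model: the log spot price X = ln(S)
    mean-reverts via dX = kappa*(theta - X) dt + sigma dW.
    Uses the exact OU transition, so any step size dt is unbiased.
    """
    rng = np.random.default_rng(rng)
    phi = np.exp(-kappa * dt)                       # per-step decay
    sd = sigma * np.sqrt((1 - phi**2) / (2 * kappa))  # innovation std
    x = np.empty(n)
    x[0] = np.log(s0)
    for i in range(1, n):
        x[i] = theta + phi * (x[i - 1] - theta) + sd * rng.standard_normal()
    return np.exp(x)

# e.g. the electricity leg of an electricity/coal spread, one year of
# daily prices (illustrative parameters):
power = schwartz_spot(30.0, kappa=5.0, theta=np.log(30.0),
                      sigma=0.8, dt=1 / 252, n=252, rng=0)
```

Simulating the coal leg the same way (with its own κ, θ, σ, possibly correlated shocks) gives the two price paths needed to value a spread-type swing payoff by Monte Carlo.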
Siegel, Daniel M; Metzger, Brian D
2017-12-08
The merger of binary neutron stars, or of a neutron star and a stellar-mass black hole, can result in the formation of a massive rotating torus around a spinning black hole. In addition to providing collimating media for γ-ray burst jets, unbound outflows from these disks are an important source of mass ejection and rapid neutron capture (r-process) nucleosynthesis. We present the first three-dimensional general-relativistic magnetohydrodynamic (GRMHD) simulations of neutrino-cooled accretion disks in neutron star mergers, including a realistic equation of state valid at low densities and temperatures, self-consistent evolution of the electron fraction, and neutrino cooling through an approximate leakage scheme. After initial magnetic field amplification by magnetic winding, we witness the vigorous onset of turbulence driven by the magnetorotational instability (MRI). The disk quickly reaches a balance between heating from MRI-driven turbulence and neutrino cooling, which regulates the midplane electron fraction to a low equilibrium value Y_{e}≈0.1. Over the 380-ms duration of the simulation, we find that a fraction ≈20% of the initial torus mass is unbound in powerful outflows with asymptotic velocities v≈0.1c and electron fractions Y_{e}≈0.1-0.25. Postprocessing the outflows through a nuclear reaction network shows the production of a robust second- and third-peak r process. Though broadly consistent with the results of previous axisymmetric hydrodynamical simulations, extrapolation of our results to late times suggests that the total ejecta mass from GRMHD disks is significantly higher. Our results provide strong evidence that postmerger disk outflows are an important site for the r process.
Nespolo, Roberto F; Figueroa, Julio; Solano-Iguaran, Jaiber J
2017-08-01
A fundamental problem in evolutionary biology is the understanding of the factors that promote or constrain adaptive evolution, and assessing the role of natural selection in this process. Here, comparative phylogenetics, that is, using phylogenetic information and traits to infer evolutionary processes, has been a major paradigm. In this study, we discuss Ornstein-Uhlenbeck models (OU) in the context of thermal adaptation in ectotherms. We specifically applied this approach to study the evolution of energy metabolism in amphibians. It has been hypothesized that amphibians exploit adaptive zones characterized by low energy expenditure, which generates specific predictions in terms of the patterns of diversification in standard metabolic rate (SMR). We compiled whole-animal metabolic rates for 122 species of amphibians, and fitted several models of diversification. According to the adaptive zone hypothesis, we expected: (1) to find "accelerated evolution" in SMR (i.e., diversification above Brownian Motion expectations, BM), (2) that a model assuming evolutionary optima (i.e., an OU model) fits better than a white-noise model and (3) that a model assuming multiple optima (according to the three amphibian orders) fits better than a model assuming a single optimum. As predicted, we found that the diversification of SMR occurred, most of the time, above BM expectations. Also, we found that a model assuming an optimum explained the data better than a white-noise model. However, we did not find evidence that an OU model with multiple optima fits the data better, suggesting a single optimum in SMR for Anura, Caudata and Gymnophiona. These results show how comparative phylogenetics can be applied to test adaptive hypotheses regarding history and physiological performance in ectotherms. Copyright © 2016 Elsevier Ltd. All rights reserved.
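The Brownian-motion-versus-OU comparison underlying this kind of model fitting can be illustrated with a minimal trait-evolution simulation. This is a sketch, not the authors' actual fitting procedure, and all parameter values are hypothetical: under BM the trait variance grows without bound, while an OU pull toward an optimum (mimicking stabilizing selection) makes it saturate at sigma^2/(2*alpha).

```python
import numpy as np

def simulate_trait(n_steps, dt, sigma, alpha=0.0, optimum=0.0, x0=0.0, rng=None):
    """Trait evolution: Brownian motion when alpha=0, Ornstein-Uhlenbeck when alpha>0."""
    rng = rng if rng is not None else np.random.default_rng()
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        pull = alpha * (optimum - x[i]) * dt     # stabilizing pull toward the optimum
        x[i + 1] = x[i] + pull + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

rng = np.random.default_rng(3)
bm = simulate_trait(10_000, 0.01, sigma=1.0, alpha=0.0, rng=rng)
ou = simulate_trait(10_000, 0.01, sigma=1.0, alpha=5.0, rng=rng)
# BM variance keeps growing with time; OU variance saturates near sigma^2/(2*alpha) = 0.1
print(bm.var(), ou.var())
```

The same contrast drives model selection in practice: an OU fit with a finite alpha outperforms BM exactly when trait disparity stops accumulating with phylogenetic distance.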
Phylogeny, ecology, and heart position in snakes.
Gartner, Gabriel E A; Hicks, James W; Manzani, Paulo R; Andrade, Denis V; Abe, Augusto S; Wang, Tobias; Secor, Stephen M; Garland, Theodore
2010-01-01
The cardiovascular system of all animals is affected by gravitational pressure gradients, the intensity of which varies according to organismic features, behavior, and habitat occupied. A previous nonphylogenetic analysis of heart position in snakes-which often assume vertical postures-found the heart located 15%-25% of total body length from the head in terrestrial and arboreal species but 25%-45% in aquatic species. It was hypothesized that a more anterior heart in arboreal species served to reduce the hydrostatic blood pressure when these animals adopt vertical postures during climbing, whereas an anterior heart position would not be needed in aquatic habitats, where the effects of gravity are less pronounced. We analyzed a new data set of 155 species from five major families of Alethinophidia (one of the two major branches of snakes, the other being blind snakes, Scolecophidia) using both conventional and phylogenetically based statistical methods. General linear models regressing log(10) snout-heart position on log(10) snout-vent length (SVL), as well as dummy variables coding for habitat and/or clade, were compared using likelihood ratio tests and the Akaike Information Criterion. Heart distance to the tip of the snout scaled isometrically with SVL. In all instances, phylogenetic models that incorporated transformation of the branch lengths under an Ornstein-Uhlenbeck model of evolution (to mimic stabilizing selection) better fit the data as compared with their nonphylogenetic counterparts. The best-fit model predicting snake heart position included aspects of both habitat and clade and indicated that arboreal snakes in our study tend to have hearts placed more posteriorly, opposite the trend identified in previous studies. Phylogenetic signal in relative heart position was apparent both within and among clades. Our results suggest that overcoming gravitational pressure gradients in snakes most likely involves the combined action of several cardiovascular and
Hakanson, Lars; Lindgren, Dan
2009-01-01
In this work a general, process-based mass-balance model for water contaminants in coastal areas at the ecosystem scale (CoastMab) is presented and for the first time tested for radionuclides. The model is dynamic, based on ordinary differential equations, and gives monthly predictions. Connected to the core model there is also a sub-model for contaminant concentrations in fish. CoastMab calculates sedimentation, resuspension, diffusion, mixing, burial and retention of the given contaminant. The model contains both general algorithms, which apply to all contaminants, and substance-specific parts (such as algorithms for the particulate fraction, diffusion, biouptake and biological half-life). CoastMab and the sub-model for fish are simple to apply in practice since all driving variables may be readily accessed from maps or regular monitoring programs. The separation between the surface-water layer and the deep-water layer is done not, as in most traditional models, from water temperature data but from sedimentological criteria. Previous versions of the models for phosphorus and suspended particulate matter (in the Baltic Sea) have been validated and shown to predict well. This work presents modifications of the model and tests using two tracers, radiocesium and radiostrontium (from the Chernobyl fallout), in the Dnieper-Bug estuary (the Black Sea). Good correlations are shown between modeled and empirical data, except for the month directly after the fallout. We have shown, e.g., that: 1. the conditions in the sea outside the bay are important for the concentrations of the substances in water, sediments and fish within the bay; 2. 'biological', 'chemical' and 'water' dilution occur; 3. the water chemical conditions in the bay influence the biouptake and concentrations of the radionuclides in fish; and 4. the feeding behaviour of the coastal fish is very important for their biouptake of the radionuclides.
Pless, Jacquelyn; Arent, Douglas J.; Logan, Jeffrey; Cochran, Jaquelin; Zinaman, Owen
2016-01-01
One energy policy objective in the United States is to promote the adoption of technologies that provide consumers with stable, secure, and clean energy. Recent work provides anecdotal evidence of natural gas (NG) and renewable electricity (RE) synergies in the power sector, however few studies quantify the value of investing in NG and RE systems together as complements. This paper uses discounted cash flow analysis and real options analysis to value hybrid NG-RE systems in distributed applications, focusing on residential and commercial projects assumed to be located in the states of New York and Texas. Technology performance and operational risk profiles are modeled at the hourly level to capture variable RE output and NG prices are modeled stochastically as geometric Ornstein-Uhlenbeck (OU) stochastic processes to capture NG price uncertainty. The findings consistently suggest that NG-RE hybrid distributed systems are more favorable investments in the applications studied relative to their single-technology alternatives when incentives for renewables are available. In some cases, NG-only systems are the favorable investments. Understanding the value of investing in NG-RE hybrid systems provides insights into one avenue towards reducing greenhouse gas emissions, given the important role of NG and RE in the power sector. - Highlights: • Natural gas and renewable electricity can be viewed as complements. • We model hybrid natural gas and renewable electricity systems at the hourly level. • We incorporate variable renewable power output and uncertain natural gas prices. • Hybrid natural gas and renewable electricity systems can be valuable investments.
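The geometric Ornstein-Uhlenbeck dynamics used above for natural gas prices can be sketched by running an exactly discretized OU process on the log-price and exponentiating. This is a hedged illustration only: the parameter values (a mean-reversion rate of 1.5, a $4 long-run level, monthly steps) are hypothetical and not taken from the paper.

```python
import numpy as np

def simulate_geometric_ou(kappa, log_mean, sigma, p0, dt, n_steps, rng):
    """Price P = exp(X), where X mean-reverts: dX = kappa*(log_mean - X) dt + sigma dW."""
    x = np.empty(n_steps + 1)
    x[0] = np.log(p0)
    a = np.exp(-kappa * dt)                          # exact one-step decay factor
    s = sigma * np.sqrt((1.0 - a * a) / (2.0 * kappa))  # matching one-step noise scale
    for i in range(n_steps):
        x[i + 1] = log_mean + (x[i] - log_mean) * a + s * rng.standard_normal()
    return np.exp(x)

rng = np.random.default_rng(42)
# Hypothetical calibration: start at $6, revert toward $4, 20 years of monthly steps
prices = simulate_geometric_ou(kappa=1.5, log_mean=np.log(4.0), sigma=0.4,
                               p0=6.0, dt=1/12, n_steps=240, rng=rng)
print(prices.min(), prices.max())
```

Working on the log-price guarantees strictly positive simulated prices, which is why this form is common for commodity-price uncertainty in real options analysis.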
Ecological and phylogenetic variability in the spinalis muscle of snakes.
Tingle, J L; Gartner, G E A; Jayne, B C; Garland, T
2017-11-01
Understanding the origin and maintenance of functionally important subordinate traits is a major goal of evolutionary physiologists and ecomorphologists. Within the confines of a limbless body plan, snakes are diverse in terms of body size and ecology, but we know little about the functional traits that underlie this diversity. We used a phylogenetically diverse group of 131 snake species to examine associations between habitat use, sidewinding locomotion and constriction behaviour with the number of body vertebrae spanned by a single segment of the spinalis muscle, with total numbers of body vertebrae used as a covariate in statistical analyses. We compared models with combinations of these predictors to determine which best fit the data among all species and for the advanced snakes only (N = 114). We used both ordinary least-squares models and phylogenetic models in which the residuals were modelled as evolving by the Ornstein-Uhlenbeck process. Snakes with greater numbers of vertebrae tended to have spinalis muscles that spanned more vertebrae. Habitat effects dominated models for analyses of all species and advanced snakes only, with the spinalis length spanning more vertebrae in arboreal species and fewer vertebrae in aquatic and burrowing species. Sidewinding specialists had shorter muscle lengths than nonspecialists. The relationship between prey constriction and spinalis length was less clear. Differences among clades were also strong when considering all species, but not for advanced snakes alone. Overall, these results suggest that muscle morphology may have played a key role in the adaptive radiation of snakes. © 2017 European Society For Evolutionary Biology.
Boghosian, Bruce M.; Devitt-Lee, Adrian; Johnson, Merek; Li, Jie; Marcq, Jeremy A.; Wang, Hongyan
2017-06-01
The 'Yard-Sale Model' of asset exchange is known to result in complete inequality-all of the wealth in the hands of a single agent. It is also known that, when this model is modified by introducing a simple model of redistribution based on the Ornstein-Uhlenbeck process, it admits a steady state exhibiting some features similar to the celebrated Pareto Law of wealth distribution. In the present work, we analyze the form of this steady-state distribution in much greater detail, using a combination of analytic and numerical techniques. We find that, while Pareto's Law is approximately valid for low redistribution, it gives way to something more similar to Gibrat's Law when redistribution is higher. Additionally, we prove in this work that, while this Pareto or Gibrat behavior may persist over many orders of magnitude, it ultimately gives way to Gaussian decay at extremely large wealth. Also in this work, we introduce a bias in favor of the wealthier agent-what we call Wealth-Attained Advantage (WAA)-and show that this leads to the phenomenon of 'wealth condensation' when the bias exceeds a certain critical value. In the wealth-condensed state, a finite fraction of the total wealth of the population 'condenses' to the wealthiest agent. We examine this phenomenon in some detail, and derive the corresponding modification to the Fokker-Planck equation. We observe a second-order phase transition to a state of coexistence between an oligarch and a distribution of non-oligarchs. Finally, by studying the asymptotic behavior of the distribution in some detail, we show that the onset of wealth condensation has an abrupt reciprocal effect on the character of the non-oligarchical part of the distribution. Specifically, we show that the above-mentioned Gaussian decay at extremely large wealth is valid both above and below criticality, but degenerates to exponential decay precisely at criticality.
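A minimal Monte Carlo sketch of a yard-sale exchange with linear (OU-type) redistribution helps make the mechanism concrete. The parameters (stake fraction `beta`, redistribution rate `chi`, population size, number of sweeps) are illustrative assumptions, not values from the paper, and the fair unbiased exchange below omits the Wealth-Attained Advantage term.

```python
import numpy as np

def yard_sale_sweep(w, beta, chi, dt, rng):
    """One sweep: random fair pairwise exchanges, then redistribution toward the mean."""
    perm = rng.permutation(len(w))
    for i in range(0, len(w) - 1, 2):
        a, b = perm[i], perm[i + 1]
        stake = beta * min(w[a], w[b])      # wager a fraction of the poorer agent's wealth
        if rng.random() < 0.5:
            w[a], w[b] = w[a] + stake, w[b] - stake
        else:
            w[a], w[b] = w[a] - stake, w[b] + stake
    w += chi * (w.mean() - w) * dt          # OU-style pull toward equal wealth
    return w

rng = np.random.default_rng(1)
w = np.ones(200)                            # everyone starts with unit wealth
for _ in range(500):
    w = yard_sale_sweep(w, beta=0.1, chi=0.05, dt=1.0, rng=rng)
# Gini coefficient: 0 = perfect equality, 1 = one agent holds everything
gini = np.abs(w[:, None] - w[None, :]).mean() / (2 * w.mean())
print(round(gini, 3))
```

Both steps conserve total wealth exactly, so any inequality in the steady state reflects the competition between the multiplicative exchange and the linear restoring force; without the `chi` term, the simulation drifts toward complete inequality.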
Is the number and size of scales in Liolaemus lizards driven by climate?
José Tulli, María; Cruz, Félix B
2018-05-03
Ectothermic vertebrates are sensitive to thermal fluctuations in the environments where they occur. To buffer these fluctuations, ectotherms use different strategies, including the integument, a barrier that minimizes temperature exchange between the inner body and the surrounding air. In lizards, this barrier consists of keratinized scales of variable size, shape and texture, whose main functions are protection, avoidance of water loss and thermoregulation. The size of scales in lizards has been proposed to vary in relation to climatic gradients; however, it has also been observed that in some groups of iguanian lizards it could be related to phylogeny. Thus, here, we studied the area and number of scales (dorsal and ventral) of 61 species of Liolaemus lizards distributed along a broad latitudinal and altitudinal gradient to determine the nature of the variation of the scales with climate, and found that the number and size of scales are related to climatic variables, such as temperature, and to geographical variables, such as altitude. The evolutionary process that best explained how these morphological variables evolved was the Ornstein-Uhlenbeck model. The number of scales seemed to be related to common ancestry, whereas dorsal and ventral scale areas seemed to vary as a consequence of ecological traits. In fact, the ventral area is less exposed to climate conditions such as ultraviolet radiation or wind and is thus under less pressure to change in response to alterations in external conditions. It is possible that scale ornamentation such as keels and granulosity may bring some more information in this regard. This article is protected by copyright. All rights reserved.
Statistics of zero crossings in rough interfaces with fractional elasticity
Zamorategui, Arturo L.; Lecomte, Vivien; Kolton, Alejandro B.
2018-04-01
We study numerically the distribution of zero crossings in one-dimensional elastic interfaces described by an overdamped Langevin dynamics with periodic boundary conditions. We model the elastic forces with a Riesz-Feller fractional Laplacian of order z = 1 + 2ζ, such that the interfaces spontaneously relax, with a dynamical exponent z, to a self-affine geometry with roughness exponent ζ. By continuously increasing ζ from ζ = -1/2 (macroscopically flat interface described by independent Ornstein-Uhlenbeck processes [Phys. Rev. 36, 823 (1930), 10.1103/PhysRev.36.823]) to ζ = 3/2 (super-rough Mullins-Herring interface), three different regimes are identified: (I) -1/2 value in the system size, or decays as a power law towards (II) a subextensive or (III) an intensive value. In the steady state, the distribution of intervals between zeros changes from an exponential decay in (I) to a power-law decay P(ℓ) ∼ ℓ^(-γ) in (II) and (III). While in (II) γ = 1 - θ, with θ = 1 - ζ the steady-state persistence exponent, in (III) we obtain γ = 3 - 2ζ, different from the exponent γ = 1 expected from the prediction θ = 0 for infinite super-rough interfaces with ζ > 1. The effect on P(ℓ) of short-scale smoothening is also analyzed numerically and analytically. A tight relation between the mean interval, the mean width of the interface, and the density of zeros is also reported. The results drawn from our analysis of rough interfaces subject to particular boundary conditions or constraints, along with discretization effects, are relevant for the practical analysis of zeros in interface imaging experiments or in numerical analysis.
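In the flat-interface limit ζ = -1/2, where each degree of freedom is an independent Ornstein-Uhlenbeck process, zero-crossing statistics can be probed directly from a simulated stationary OU signal. The sketch below uses the exact OU update with unit stationary variance and illustrative parameters; as the abstract notes, the finite sampling step biases the crossing counts, so this is a rough numerical probe rather than a reproduction of the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(7)
theta, dt, n = 1.0, 0.01, 200_000
a = np.exp(-theta * dt)           # exact one-step OU decay factor
s = np.sqrt(1.0 - a * a)          # noise scale giving unit stationary variance

x = np.empty(n)
x[0] = rng.standard_normal()      # start in the stationary state
for i in range(1, n):
    x[i] = a * x[i - 1] + s * rng.standard_normal()

# Zeros: indices where the stationary signal changes sign between samples
cross = np.nonzero(x[:-1] * x[1:] < 0)[0]
intervals = np.diff(cross) * dt   # intervals between consecutive zeros
print(len(cross), round(intervals.mean(), 3))
```

Histogramming `intervals` would show the exponential tail expected for regime (I); refining `dt` reveals the short-interval sensitivity to discretization that the authors analyze.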
Ofir Bahar
2014-01-01
Pattern recognition receptors (PRRs) play an important role in detecting invading pathogens and mounting a robust defense response to restrict infection. In rice, one of the best characterized PRRs is XA21, a leucine-rich repeat receptor-like kinase that confers broad-spectrum resistance to multiple strains of the bacterial pathogen Xanthomonas oryzae pv. oryzae (Xoo). In 2009 we reported that an Xoo protein, called Ax21, is secreted by a type I secretion system and that it serves to activate XA21-mediated immunity. This report has recently been retracted. Here we present data that correct our previous model. We first show that Ax21 secretion does not depend on the predicted type I secretion system and that it is processed by the general secretion (Sec) system. We further show that Ax21 is an outer membrane protein, secreted in association with outer membrane vesicles. Finally, we provide data showing that ax21 knockout strains do not overcome XA21-mediated immunity.
1982-09-01
The Fundamental Safety Rules applicable to certain types of nuclear installation are intended to clarify the conditions whose observance, for the type of installation concerned and for the subject that they deal with, is considered as equivalent to compliance with regulatory French technical practice. These Rules should facilitate safety analyses and clear understanding between persons interested in matters related to nuclear safety. They in no way reduce the operator's liability and pose no obstacle to statutory provisions in force. For any installation to which a Fundamental Safety Rule applies according to the foregoing paragraph, the operator may be relieved from application of the Rule if he shows proof that the safety objectives set by the Rule are attained by other means that he proposes within the framework of statutory procedures. Furthermore, the Central Service for the Safety of Nuclear Installations reserves the right at all times to alter any Fundamental Safety Rule, as required, should it deem this necessary, while specifying the applicability conditions. This rule is intended to define the general provisions applicable to the production, inspection, processing, packaging and storage of the different types of wastes resulting from the reprocessing of fuels irradiated in a PWR.
2013-09-13
Event 1.4.4,” August 7, 2012 AAA Attestation Report A-2010-0187- FFM , “General Fund Enterprise Business System - Federal Financial Management...Improvement Act Compliance. Examination of Requirements Through Test Event 1.4.0,” September 14, 2010 AAA Audit Report A-2009-0232- FFM , “General Fund...September 30, 2009 AAA Audit Report A-2009-0231- FFM , “General Fund Enterprise Business System - Federal Financial Management Improvement Act
The simulation of the non-Markovian behaviour of a two-level system
Semina, I.; Petruccione, F.
2016-05-01
Non-Markovian relaxation dynamics of a two-level system is studied with the help of the non-linear stochastic Schrödinger equation with coloured Ornstein-Uhlenbeck noise. This stochastic Schrödinger equation is investigated numerically with an adapted Platen scheme. It is shown that memory effects have a significant impact on the dynamics of the system.
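The coloured Ornstein-Uhlenbeck noise that drives such a stochastic Schrödinger equation can be generated with the exact OU update. The sketch below covers only the noise generator, not the adapted Platen scheme for the full non-linear equation, and the correlation time and strength are illustrative assumptions; the exponential decay of the noise autocorrelation is what encodes the memory.

```python
import numpy as np

def ou_noise(tau, c, dt, n_steps, rng):
    """Coloured noise with <z(t)z(t')> = (c*tau/2) * exp(-|t-t'|/tau), via exact OU update."""
    z = np.empty(n_steps)
    z[0] = np.sqrt(c * tau / 2) * rng.standard_normal()   # start in the stationary state
    a = np.exp(-dt / tau)                                 # one-step memory factor
    s = np.sqrt((c * tau / 2) * (1.0 - a * a))            # matching one-step noise scale
    for i in range(1, n_steps):
        z[i] = a * z[i - 1] + s * rng.standard_normal()
    return z

rng = np.random.default_rng(5)
z = ou_noise(tau=0.5, c=1.0, dt=0.01, n_steps=100_000, rng=rng)

# Empirical autocorrelation at lag tau (50 steps) should sit near exp(-1)
lag = 50
acf = np.mean(z[:-lag] * z[lag:]) / z.var()
print(round(acf, 2))
```

In the white-noise limit tau → 0 (with c fixed) the memory factor vanishes and the Markovian unravelling is recovered, which is why tau controls the strength of the non-Markovian effects.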
Gelfand, I M; Graev, M I; Vilenkin, N Y; Pyatetskii-Shapiro, I I
Volume 1 is devoted to basics of the theory of generalized functions. The first chapter contains main definitions and most important properties of generalized functions as functionals on the space of smooth functions with compact support. The second chapter talks about the Fourier transform of generalized functions. In Chapter 3, definitions and properties of some important classes of generalized functions are discussed; in particular, generalized functions supported on submanifolds of lower dimension, generalized functions associated with quadratic forms, and homogeneous generalized functions are studied in detail. Many simple basic examples make this book an excellent place for a novice to get acquainted with the theory of generalized functions. A long appendix presents basics of generalized functions of complex variables.
Hista, J.C.
1982-01-01
This reactor building includes a containment enclosure for the internal structures, composed of a slab wedged at its periphery against the containment enclosure gusset and resting on the general raft by means of a peripheral bearing ring, a compressible layer being provided between the general raft and the slab.
Anca Abati, R; Lopez Rodriguez, M
1961-07-01
General conditions of the metallothermic reduction in small bombs (250 and 800 g of uranium) have been investigated. Factors such as the kind and granulometry of the magnesium used, the magnesium excess and the preheating temperature, which affect yields and metal quality, have been considered. The magnesium excess increased yields by 15% in the small bomb; as for the preheating temperature, there is a range within which yields and metal quality do not change. All tests have been made with graphite linings. (Author) 18 refs.
General Editorial on Publication Ethics. R Ramaswamy. Resonance – Journal of Science Education, Volume 19, Issue 1, January 2014, pp. 1-2.
Elwyn, G; Edwards, A; Hood, K; Robling, M; Atwell, C; Russell, I; Wensing, M; Grol, R
2004-08-01
A consulting method known as 'shared decision making' (SDM) has been described and operationalized in terms of several 'competences'. One of these competences concerns the discussion of the risks and benefits of treatment or care options-'risk communication'. Few data exist on clinicians' ability to acquire skills and implement the competences of SDM or risk communication in consultations with patients. The aims of this study were to evaluate the effects of skill development workshops for SDM and the use of risk communication aids on the process of consultations. A cluster randomized trial with crossover was carried out with the participation of 20 recently qualified GPs in urban and rural general practices in Gwent, South Wales. A total of 747 patients with known atrial fibrillation, prostatism, menorrhagia or menopausal symptoms were invited to a consultation to review their condition or treatments. Half the consultations were randomly selected for audio-taping, of which 352 patients attended and were audio-taped successfully. After baseline, participating doctors were randomized to receive training in (i) SDM skills or (ii) the use of simple risk communication aids, using simulated patients. The alternative training was then provided for the final study phase. Patients were allocated randomly to a consultation during baseline or intervention 1 (SDM or risk communication aids) or intervention 2 phases. A randomly selected half of the consultations were audio-taped from each phase. Raters (independent, trained and blinded to study phase) assessed the audio-tapes using a validated scale to assess levels of patient involvement (OPTION: observing patient involvement), and to analyse the nature of risk information discussed. Clinicians completed questionnaires after each consultation, assessing perceived clinician-patient agreement and level of patient involvement in decisions. Multilevel modelling was carried out with the OPTION score as the dependent variable, and
Karitskaya, S.G.; Ruzanov, K.A.; Davletov, V.S.
2005-01-01
The results of developing an electronic textbook for the special discipline 'General theory and construction of heat-and-power engineering facilities' are presented. The principles and requirements applicable to literature of this type are outlined. (author)
The Society of Toxicologic Pathology charged a Nervous System Sampling Working Group with devising recommended practices to routinely screen the central and peripheral nervous systems in Good Laboratory Practice-type nonclinical general toxicity studies. Brains should be trimmed ...
Greco, Salvatore; Mesiar, Radko; Rindone, Fabio
2014-01-01
Aggregation functions on [0,1] with annihilator 0 can be seen as a generalized product on [0,1]. We study the generalized product on the bipolar scale [–1,1], stressing the axiomatic point of view. Based on newly introduced bipolar properties, such as the bipolar increasingness, bipolar unit element, bipolar idempotent element, several kinds of generalized bipolar product are introduced and studied. A special stress is put on bipolar semicopulas, bipolar quasi-copulas and bipolar copulas.
Larsen, Christian; Kiesmüller, Gudrun P.
We derive a closed-form cost expression for an (R,s,nQ) inventory control policy where all replenishment orders have a constant lead-time, unfilled demand is backlogged and inter-arrival times of order requests are generalized Erlang distributed.
Larsen, Christian; Kiesmüller, G.P
2007-01-01
We derive a closed-form cost expression for an (R,s,nQ) inventory control policy where all replenishment orders have a constant lead-time, unfilled demand is back-logged and inter-arrival times of order requests are generalized Erlang distributed. For given values of Q and R we show how to compute...
Allen, Johnie J.; Anderson, Craig A.; Bushman, Brad J.
The General Aggression Model (GAM) is a comprehensive, integrative, framework for understanding aggression. It considers the role of social, cognitive, personality, developmental, and biological factors on aggression. Proximate processes of GAM detail how person and situation factors influence
Bezak, P.; Daniska, V.; Ondra, F.; Necas, V.
2012-01-01
Conditional release of steels from NPP decommissioning enables controlled reuse of non-negligible volumes of steel. To propose scenarios for steel reuse, it is necessary to identify and evaluate the partial elementary activities of the whole process, from conditional release of the steel through manufacturing of various elements up to realisation of the scenarios. The scenarios for reuse of conditionally released steel consider steel products such as reinforcements, rails, profiles and sheets for technical constructions such as bridges, tunnels, railways and other constructions which guarantee long-term properties over periods of 50-100 years. The idea also offers the possibility of using this type of steel for particular technical constructions directly usable in nuclear facilities. The paper presents a review of the activities for manufacturing various steel construction elements made from conditionally released steels and their use in general and in the nuclear industry. As the starting material for manufacturing of steel elements, ingots or just fragments of steel after dismantling in the controlled area can be used. These input materials are re-melted in industrial facilities in order to achieve the required physical and chemical characteristics. The most widely used technique for manufacturing the steel construction elements is rolling. The products considered in the scenarios for reuse of conditionally released steels are bars for reinforced concrete, rolled steel sheets and other rolled profiles. For use in the nuclear industry, there is the possibility of casting thick-walled steel containers for long-term storage of high-level radioactive components in integral storage, and also assembly of stainless steel tanks for storing liquid radioactive waste. Lists of the elementary activities needed for manufacturing selected steel elements are elaborated. These elementary activities are then the basis for detailed safety evaluation of external
2018-01-09
As required by Federal Aviation Administration Order 8110.4C, Type Certification Process, the Volpe Center Acoustics Facility (Volpe), in support of the Federal Aviation Administration Office of Environment and Energy (AEE), has completed valid...
2017-08-18
As required by Federal Aviation Administration (FAA) Order 8110.4C: Type Certification Process (most recently revised as Change 5, 20 December, 2011), the Volpe Center Acoustics Facility (Volpe), in support of the FAA Office of Environmen...
Kenyon, I.R.
1990-01-01
General relativity is discussed in this book at a level appropriate to undergraduate students of physics and astronomy. It describes concepts and experimental results, and provides a succinct account of the formalism. A brief review of special relativity is followed by a discussion of the equivalence principle and its implications. Other topics covered include the concepts of curvature and the Schwarzschild metric, tests of the general theory, black holes and their properties, gravitational radiation and methods for its detection, the impact of general relativity on cosmology, and the continuing search for a quantum theory of gravity. (author)
2005-01-01
This article presents general problems such as natural disasters, the consequences of global climate change, public health, the danger of criminal actions, and the availability of information about environmental problems.
Jensen, Christian Skov; Lando, David; Pedersen, Lasse Heje
We characterize when physical probabilities, marginal utilities, and the discount rate can be recovered from observed state prices for several future time periods. Our characterization makes no assumptions of the probability distribution, thus generalizing the time-homogeneous stationary model...
The General Conformity requirements ensure that the actions taken by federal agencies in nonattainment and maintenance areas do not interfere with a state’s plans to meet national standards for air quality.
Komatsu, Nobuyoshi; Kiwata, Takahiro; Kimura, Shigeo
2010-01-01
To clarify the nonequilibrium processes of self-gravitating systems, we examine a system enclosed in a spherical container with reflecting walls, by N-body simulations. To simulate nonequilibrium processes, we consider loss of energy through the reflecting wall, i.e., a particle reflected at a non-adiabatic wall is cooled to mimic energy loss. We also consider quasi-equilibrium structures of stellar polytropes to compare with the nonequilibrium process, where the quasi-equilibrium structure is obtained from an extremum-state of Tsallis' entropy. Consequently, we numerically show that, with increasing cooling rates, the dependence of the temperature on energy, i.e., the ε-T curve, varies from that of microcanonical ensembles (or isothermal spheres) to a common curve. The common curve appearing in the nonequilibrium process agrees well with an ε-T curve for a quasi-equilibrium structure of the stellar polytrope, especially for the polytrope index n ∼ 5. In fact, for n > 5, the stellar polytrope within an adiabatic wall exhibits gravothermal instability [Taruya, Sakagami, Physica A, 322 (2003) 285]. The present study indicates that the stellar polytrope with n ∼ 5 likely plays an important role in quasi-attractors of the nonequilibrium process in self-gravitating systems with non-adiabatic walls.
Morrill, Tuuli H; McAuley, J Devin; Dilley, Laura C; Hambrick, David Z
2015-08-01
Do the same mechanisms underlie processing of music and language? Recent investigations of this question have yielded inconsistent results. Likely factors contributing to discrepant findings are use of small samples and failure to control for individual differences in cognitive ability. We investigated the relationship between music and speech prosody processing, while controlling for cognitive ability. Participants (n = 179) completed a battery of cognitive ability tests, the Montreal Battery of Evaluation of Amusia (MBEA) to assess music perception, and a prosody test of pitch peak timing discrimination (early, as in insight vs. late, incite). Structural equation modeling revealed that only music perception was a significant predictor of prosody test performance. Music perception accounted for 34.5% of variance in prosody test performance; cognitive abilities and music training added only about 8%. These results indicate musical pitch and temporal processing are highly predictive of pitch discrimination in speech processing, even after controlling for other possible predictors of this aspect of language processing. (c) 2015 APA, all rights reserved.
1999-01-01
The document reproduces the text of the letter dated 18 October 1999 sent to the Secretary-General by the Permanent Representative of China to the United Nations in connection with the agenda item 76 (General and complete disarmament) of the 54th session of the General Assembly, First Committee. The letter expresses the position of the Chinese delegation concerning the proposed amendment of the Anti-Ballistic Missile Treaty (ABM Treaty)
Mikhailovskii, A.B.
1986-01-01
Some general problems of the theory of Alfven instabilities of a tokamak with high-energy ions are considered. It is assumed that such ions arise either from ionization of fast neutral atoms injected into the tokamak, or from their production under thermonuclear conditions. Small-oscillation equations are derived for the Alfven-type waves, which allow for both the destabilizing effects associated with the high-energy particles and the stabilizing ones, such as the effects of shear and bulk-plasma dissipation. The contribution of high-energy ions to the growth rate of the Alfven waves is calculated. The author also considers the role of trapped-electron collisional dissipation
Van Maldeghem, Hendrik
1998-01-01
Generalized Polygons is the first book to cover, in a coherent manner, the theory of polygons from scratch. In particular, it fills elementary gaps in the literature and gives an up-to-date account of current research in this area, including most proofs, which are often unified and streamlined in comparison to the versions generally known. Generalized Polygons will be welcomed both by the student seeking an introduction to the subject as well as the researcher who will value the work as a reference. In particular, it will be of great value for specialists working in the field of generalized polygons (which are, incidentally, the rank 2 Tits-buildings) or in fields directly related to Tits-buildings, incidence geometry and finite geometry. The approach taken in the book is of geometric nature, but algebraic results are included and proven (in a geometric way!). A noteworthy feature is that the book unifies and generalizes notions, definitions and results that exist for quadrangles, hexagons, octagons - in the ...
Tubiana, M.
1993-01-01
In conclusion, a general consensus emerged on a number of points, which the author endeavours to summarize in this article: doctors are an excellent channel for passing on information to the public; doctors feel that they do not know enough about the subject, and training in radiobiology and radiation protection is a necessity for them; communication between doctors and the general public is poor in this field; and research should be encouraged in numerous areas, such as the carcinogenic effect of low doses of radiation, pedagogy and risk perception
Ana Teresa Fernández Vidal; José Aurelio Díaz Quiñones; Silvia Enrique Vilaplana
2016-01-01
Cuban educators conceive programs and processes using the cultural-historical approach to human development, since this is the theory that, thanks to its founder Lev Semiónovich Vygotsky, could overcome the approaches that fragmented the analysis and understanding of human development. Such currents of thought overemphasized the different conditioning factors of this development and ignored the dialectical relationship between them in terms of personality formation and development, its proc...
Boothe, W. A.; Corman, J. C.; Johnson, G. G.; Cassel, T. A. V.
1976-01-01
Results are presented of an investigation of gasification and clean fuels from coal. Factors discussed include: coal and coal transportation costs; clean liquid and gas fuel process efficiencies and costs; and cost, performance, and environmental intrusion elements of the integrated low-Btu coal gasification system. Cost estimates for the balance-of-plant requirements associated with advanced energy conversion systems utilizing coal or coal-derived fuels are included.
Stefanie E. Grund
2012-03-01
Full Text Available Drosha is a key enzyme in microRNA biogenesis, generating the precursor miRNA (pre-miRNA by excising the stem-loop embedded in the primary transcripts (pri-miRNA. The specificity for the pri-miRNAs and determination of the cleavage site are provided by its binding partner DGCR8, which is necessary for efficient processing. The crucial Drosha domains for pri-miRNA cleavage are the middle part, the two enzymatic RNase III domains (RIIID, and the dsRNA binding domain (dsRBD in the C-terminus. Here, we identify alternatively spliced transcripts in human melanoma and NT2 cell lines, encoding C-terminally truncated Drosha proteins lacking part of the RIIIDb and the entire dsRBD. Proteins generated from these alternative splice variants fail to bind to DGCR8 but still interact with Ewing sarcoma protein (EWS. In vitro as well as in vivo, the Drosha splice variants are deficient in pri-miRNA processing. However, the aberrant transcripts in melanoma cells do not consistently reduce mature miRNA levels compared with melanoma cell lines lacking those splice variants, possibly owing to their limited abundance. Our findings show that alternative processing-deficient Drosha splice variants exist in melanoma cells. In elevated amounts, these alternatively spliced transcripts could provide one potential mechanism accounting for the deregulation of miRNAs in cancer cells. On the basis of our results, the search for alternative inactive splice variants might be fruitful in different tumor entities to unravel the molecular basis of the previously observed decreased microRNA processing efficiency in cancer.
Supersymmetry. Akshay Kulkarni, P Ramadevi. General Article, Resonance – Journal of Science Education, Volume 8, Issue 2, February 2003, pp. 28-41. Physics Department, Indian Institute of Technology, Mumbai 400 076, India.
2003-01-01
This document summarizes the main 2002 energy indicators for France. A first table lists the evolution of general indicators between 1973 and 2002: energy bill, price of imported crude oil, energy independence, primary and final energy consumption. The main 2002 results are detailed separately for natural gas, petroleum and coal (consumption, imports, exports, production, stocks, prices). (J.S.)
Jensen, Christian Skov; Lando, David; Pedersen, Lasse Heje
We characterize when physical probabilities, marginal utilities, and the discount rate can be recovered from observed state prices for several future time periods. We make no assumptions of the probability distribution, thus generalizing the time-homogeneous stationary model of Ross (2015). Recov...
Department of Surgery, University of Cape Town Health Sciences Faculty, Groote Schuur Hospital, Observatory, Cape Town, South Africa ... included all district, regional and tertiary hospitals in the nine provinces. Clinics and so-called ... large contingent of senior general surgeons from countries such as Cuba, who have ...
effect of fatigue on patient safety, and owing to increasing emphasis on lifestyle issues ... increasing emphasis on an appropriate work-life balance in professional life.10 ... experience, were the most negative about the EWTD in general.3,13 ...
in the endoscopy room. GENERAL SURGERY. T du Toit, O C Buchel, S J A Smit. Department of Surgery, University of the Free State, Bloemfontein, ... The lack of video instrumentation in developing countries: redundant fibre-optic instruments (the old "eye scope") are still being used. This instrument brings endoscopists ...
Staff Association
2016-01-01
5th April, 2016 – Ordinary General Assembly of the Staff Association! In the first semester of each year, the Staff Association (SA) invites its members to attend and participate in the Ordinary General Assembly (OGA). This year the OGA will be held on Tuesday, April 5th 2016 from 11:00 to 12:00 in BE Auditorium, Meyrin (6-2-024). During the Ordinary General Assembly, the activity and financial reports of the SA are presented and submitted for approval to the members. This is the occasion to get a global view on the activities of the SA, its financial management, and an opportunity to express one’s opinion, including taking part in the votes. Other points are listed on the agenda, as proposed by the Staff Council. Who can vote? Only “ordinary” members (MPE) of the SA can vote. Associated members (MPA) of the SA and/or affiliated pensioners have a right to vote on those topics that are of direct interest to them. Who can give his/her opinion? The Ordinary General Asse...
could cripple the global economy. Greater attention ... Africa and 5.7 general surgeons per 100 000 in the US.12 One of the key ... 100 000 insured population working in the private sector, which is comparable with the United States (US).
IAS Admin
A q-ary necklace of length n is an equivalence class of q-coloured strings of length n under rotation. In this article, we study various generalizations and derive analytical expressions to count the number of these generalized necklaces.
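The counting described in this entry can be illustrated for the classical (non-generalized) case. As a minimal sketch, the number of q-ary necklaces of length n follows from Burnside's lemma as (1/n) Σ_{d|n} φ(d) q^{n/d}; the function names below are our own, and the article's generalizations go beyond this basic formula.

```python
def euler_phi(d):
    """Euler's totient function via trial division."""
    result, m, p = d, d, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def necklaces(q, n):
    """Count q-ary necklaces of length n (equivalence classes under rotation),
    using Burnside's lemma: average the number of strings fixed by each rotation."""
    return sum(euler_phi(d) * q ** (n // d) for d in range(1, n + 1) if n % d == 0) // n
```

For example, the six binary necklaces of length 4 are 0000, 0001, 0011, 0101, 0111 and 1111.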
Lando, David; Pedersen, Lasse Heje; Jensen, Christian Skov
We characterize when physical probabilities, marginal utilities, and the discount rate can be recovered from observed state prices for several future time periods. We make no assumptions of the probability distribution, thus generalizing the time-homogeneous stationary model of Ross (2015) ... our model empirically, testing the predictive power of the recovered expected return and other recovered statistics ...
Straumann, Norbert
2013-01-01
This book provides a completely revised and expanded version of the previous classic edition ‘General Relativity and Relativistic Astrophysics’. In Part I the foundations of general relativity are thoroughly developed, while Part II is devoted to tests of general relativity and many of its applications. Binary pulsars – our best laboratories for general relativity – are studied in considerable detail. An introduction to gravitational lensing theory is included as well, so as to make the current literature on the subject accessible to readers. Considerable attention is devoted to the study of compact objects, especially to black holes. This includes a detailed derivation of the Kerr solution, Israel’s proof of his uniqueness theorem, and a derivation of the basic laws of black hole physics. Part II ends with Witten’s proof of the positive energy theorem, which is presented in detail, together with the required tools on spin structures and spinor analysis. In Part III, all of the differential geomet...
Benos, Dale J; Vollmer, Sara H
2010-12-01
Modifying images for scientific publication is now quick and easy due to changes in technology. This has created a need for new image processing guidelines and attitudes, such as those offered to the research community by Doug Cromey (Cromey 2010). We suggest that related changes in technology have simplified the task of detecting misconduct for journal editors as well as researchers, and that this simplification has caused a shift in the responsibility for reporting misconduct. We also argue that the concept of best practices in image processing can serve as a general model for education in best practices in research.
Rodriguez-Iturbe, I.; Porporato, A.; Laio, F.; Ridolfi, L.
This series of four papers studies the complex dynamics of water-controlled ecosystems from the hydro-ecological point of view [e.g., I. Rodriguez-Iturbe, Water Resour. Res. 36 (1) (2000) 3-9]. After this general outline, the role of climate, soil, and vegetation is modeled in Part II [F. Laio, A. Porporato, L. Ridolfi, I. Rodriguez-Iturbe, Adv. Water Res. 24 (7) (2001) 707-723] to investigate the probabilistic structure of soil moisture dynamics and the water balance. Particular attention is given to the impact of timing and amount of rainfall, plant physiology, and soil properties. From the statistical characterization of the crossing properties of arbitrary levels of soil moisture, Part III develops an expression for vegetation water stress [A. Porporato, F. Laio, L. Ridolfi, I. Rodriguez-Iturbe, Adv. Water Res. 24 (7) (2001) 725-744]. This measure of stress is then employed to quantify the response of plants to soil moisture deficit as well as to infer plant suitability to given environmental conditions and understand some of the reasons for possible coexistence of different species. Detailed applications of these concepts are developed in Part IV [F. Laio, A. Porporato, C.P. Fernandez-Illescas, I. Rodriguez-Iturbe, Adv. Water Res. 24 (7) (2001) 745-762], where we investigate the dynamics of three different water-controlled ecosystems.
Nikolaev, A. V.; Alymenko, N. I.; Kamenskikh, A. A.; Alymenko, D. N.; Nikolaev, V. A.; Petrov, A. I.
2017-10-01
The article presents measured data on air parameters and air volume flow in the shafts and on the surface, collected at BKPRU-2 (Berezniki potash plant and mine 2, «Uralkali» PJSC) in normal operation mode, after shutdown of the main mine fan (GVU), and over the several hours that followed. As a result of the test, it has been established that thermal pressure between the mine shafts acts continuously, regardless of the GVU operation mode or other draught sources. It has also been discovered that the depth of the mine shafts has no impact on the thermal pressure value: given the same difference of shaft elevation marks and the same outer-air parameters between the shafts, thermal pressure of the same value will act even for different shaft depths. The value of the general mine natural draught, defined as an algebraic sum of the thermal pressure values between the shafts, depends only on the difference of temperature and pressure between the outer air and the air at the shaft bottoms, on condition of shutdown of the air handling system (unit heaters, air conditioning systems).
Willard, Stephen
2004-01-01
Among the best available reference introductions to general topology, this volume is appropriate for advanced undergraduate and beginning graduate students. Its treatment encompasses two broad areas of topology: "continuous topology," represented by sections on convergence, compactness, metrization and complete metric spaces, uniform spaces, and function spaces; and "geometric topology," covered by nine sections on connectivity properties, topological characterization theorems, and homotopy theory. Many standard spaces are introduced in the related problems that accompany each section (340
Maldeghem, Hendrik
1998-01-01
This book is intended to be an introduction to the fascinating theory of generalized polygons for both the graduate student and the specialized researcher in the field. It gathers together a lot of basic properties (some of which are usually referred to in research papers as belonging to folklore) and very recent and sometimes deep results. I have chosen a fairly strict geometrical approach, which requires some knowledge of basic projective geometry. Yet, it enables one to prove some typically group-theoretical results, such as the determination of the automorphism groups of certain Moufang polygons. As such, some basic group-theoretical knowledge is required of the reader. The notion of a generalized polygon is a relatively recent one. But it is one of the most important concepts in incidence geometry. Generalized polygons are the building bricks of Tits buildings. They are the prototypes and precursors of more general geometries such as partial geometries, partial quadrangles, semi-partial geometries, near...
Dory, A.B.
1982-01-01
This presentation is divided into two main sections. In the first, the author explores the issues of radiation and tailings disposal, and then examines the Canadian nuclear regulatory process from the point of view of jurisdiction, objectives, philosophy and mechanics. The compliance inspection program is outlined, and the author discusses the relationships between the AECB and other regulatory agencies, the public, and uranium mine-mill workers. The section concludes with an examination of the stance of the medical profession on nuclear issues. In part two, the radiological hazards for uranium miners are examined: radon daughters, gamma radiation, thoron daughters and uranium dust. The author touches on new regulations being drafted, the assessment of past exposures in mine atmospheres, and the regulatory approach at the surface exploration stage. The presentation concludes with the author's brief observations on the findings of other uranium mining inquiries and on future requirements in the industry's interests
Li, Yun; Wang, Shengpei; Pan, Chuxiong; Xue, Fushan; Xian, Junfang; Huang, Yaqi; Wang, Xiaoyi; Li, Tianzuo; He, Huiguang
2018-01-01
The mechanism of general anesthesia (GA) has been explored for hundreds of years, but remains unclear. Previous studies indicated a possible correlation between NREM sleep and GA. The purpose of this study is to compare them through in vivo human brain function to probe the neuromechanism of consciousness, so as to find a clue to the GA mechanism. 24 healthy participants were equally assigned to a sleep or propofol sedation group by sleeping ability. EEG and the Ramsay Sedation Scale were applied to determine sleep stage and sedation depth, respectively. Resting-state functional magnetic resonance imaging (RS-fMRI) was acquired at each status. Regional homogeneity (ReHo) and seed-based whole-brain functional connectivity maps (WB-FC maps) were compared. During sleep, ReHo primarily weakened in the frontal lobe (especially the preoptic area), but strengthened in the brainstem. During sedation, ReHo changed in various brain areas, including the cingulate, precuneus, thalamus and cerebellum. The cingulate, fusiform and insula were common to sleep and sedation. Compared to sleep, FCs between the cortex and subcortical centers (centralized in the cerebellum) were significantly attenuated under sedation. As sedation deepened, cerebellum-based FC maps diminished, while thalamus- and brainstem-based FC maps increased. There are large distinctions in human brain function between sleep and GA. Sleep relies mainly on brainstem and frontal lobe function, while sedation tends to affect a widespread functional network. The most significant differences exist in the precuneus and cingulate, which may play important roles in the mechanisms by which anesthetics induce unconsciousness. Institutional Review Board (IRB) ChiCTR-IOC-15007454.
Diphoton generalized distribution amplitudes
El Beiyad, M.; Pire, B.; Szymanowski, L.; Wallon, S.
2008-01-01
We calculate the leading order diphoton generalized distribution amplitudes by calculating the amplitude of the process γ*γ→γγ in the low energy and high photon virtuality region at the Born order and in the leading logarithmic approximation. As in the case of the anomalous photon structure functions, the γγ generalized distribution amplitudes exhibit a characteristic lnQ 2 behavior and obey inhomogeneous QCD evolution equations.
Vaturi, Sylvain
1969-01-01
Computerized edition is essential for data-processing exploitation. When a more or less complex edition program is required for each task, the need for a general edition program becomes obvious. The aim of this study is to create a general edition program. Universal programs are capable of executing numerous and varied tasks. For a more precise processing whose execution is frequently required, the use of a specialized program is preferable because, unlike the universal program, it goes straight to the point [fr
Kwon, Yeong Sik; Lee, Dong Seop; Ryu, Haung Ryong; Jang, Cheol Hyeon; Choi, Bong Jong; Choi, Sang Won
1993-07-01
The book concentrates on the latest general chemistry, which is divided into twenty-three chapters. It deals with basic concepts and stoichiometry, the nature of gases, the structure of atoms, quantum mechanics, the symbols and electronic structure of ions and molecules, chemical thermodynamics, the nature of solids, changes of state and liquids, properties of solutions, chemical equilibrium, solutions and acid-base chemistry, equilibria in aqueous solution, electrochemistry, chemical reaction rates, molecular spectroscopy, hydrogen, oxygen and water, metallic atoms (IA, IIA, IIIA), carbon and group IVA atoms, nonmetal atoms and inert gases, transition metals, lanthanons and actinoids, nuclear properties and radioactivity, and biochemistry and environmental chemistry.
Gourgoulhon, Eric
2013-01-01
The author proposes a course on general relativity. He first presents a geometrical framework by addressing and discussing the following notions: relativistic space-time, the metric tensor, world lines, observers, the principle of equivalence, and geodesics. In the next part, he addresses gravitational fields with spherical symmetry: presentation of the Schwarzschild metric, radial light geodesics, gravitational spectral shift (Einstein effect), orbits of material objects, and photon trajectories. The following parts address the Einstein equation, black holes, gravitational waves, and cosmological solutions. Appendices propose a discussion of the relationship between relativity and GPS, some problems and their solutions, and Sage codes
The Generalized Quantum Statistics
Hwang, WonYoung; Ji, Jeong-Young; Hong, Jongbae
1999-01-01
The concept of wavefunction reduction should be introduced into standard quantum mechanics for any physical process where effective reduction of the wavefunction occurs, as well as in measurement processes. When the overlap is negligible, each particle obeys Maxwell-Boltzmann statistics even if the particles are in principle described by a totally symmetrized wavefunction [P.R. Holland, The Quantum Theory of Motion, Cambridge University Press, 1993, p. 293]. We generalize the conjecture. That is, par...
Generalized Nonlinear Yule Models
Lansky, Petr; Polito, Federico; Sacerdote, Laura
2016-01-01
With the aim of considering models with persistent memory, we propose a fractional nonlinear modification of the classical Yule model, often studied in the context of macroevolution. Here the model is analyzed and interpreted in the framework of the development of networks such as the World Wide Web. Nonlinearity is introduced by replacing the linear birth process governing the growth of the in-links of each specific webpage with a fractional nonlinear birth process with completely general birth...
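The classical linear Yule model that this entry generalizes can be simulated directly: starting from one individual, each of the n current individuals splits at rate λ, so the waiting time to the next split is exponential with rate λn. The sketch below covers only this classical case (the fractional nonlinear variant of the paper is not attempted), and the function name and parameters are illustrative.

```python
import random

def yule_population(birth_rate, t_max, seed=1):
    """Population size of a classical (linear) Yule process at time t_max.

    Starts from a single individual; while the population is n, the time to
    the next birth is Exponential(birth_rate * n). E[N(t)] = exp(birth_rate * t).
    """
    rng = random.Random(seed)
    t, n = 0.0, 1
    while True:
        t += rng.expovariate(birth_rate * n)  # next split event
        if t > t_max:
            return n
        n += 1
```

Averaging over many runs recovers the exponential mean growth exp(λt), e.g. about 7.4 for λ = 1 and t = 2.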
John Cossey
2015-03-01
Full Text Available Quasinormal subgroups have been studied for nearly 80 years. In finite groups, questions concerning them invariably reduce to p-groups, and here they have the added interest of being invariant under projectivities, unlike normal subgroups. However, it has been shown recently that certain groups, constructed by Berger and Gross in 1982, of an important universal nature with regard to the existence of core-free quasinormal subgroups generally, have remarkably few such subgroups. Therefore, in order to overcome this misfortune, a generalization of the concept of quasinormality will be defined. It could be the beginning of a lengthy undertaking. But some of the initial findings are encouraging, in particular the fact that this larger class of subgroups also remains invariant under projectivities of finite p-groups, thus connecting group and subgroup lattice structures.
Nicklisch, F.
1984-01-01
Growing complexity of technical matter has meant that technical expertise is called upon in more and more legal proceedings. The technical expert is, in general terms, the mediator between technology and the law; he is also entrusted with the task of pointing up the differences in approach and in the nature of authority in these two areas, and thus paving the way for mutual understanding. The evaluation of the technical expert's opinion is one of the cardinal problems bound up with the role of the expert in legal procedure. After the presentation of the expert's opinion, the judge is supposed to possess so much specialised knowledge that he can assess the opinion itself in scientific and technical respects and put his finger on any errors the expert may have made. This problem can only be solved via an assessment of the opinion. First of all, the opinion can be assessed indirectly, via evaluation of the credentials and the neutrality and independence of the expert. In direct terms, the opinion can be subjected to a certain - albeit restricted - scrutiny as to whether it is generally convincing, as far as the layman is competent to judge. This interpretation alone makes it possible to legally classify and integrate the technical standards and regulations, which represent expert statements on scientific and technical theorems based on the knowledge and experience gained in a given area. They are designed to reflect prevailing opinion among leading representatives of the profession and can thus themselves be regarded as expert opinions. As a rule, these opinions will have such weight that - other than in exceptional cases - they will not be invalidated in procedure by deviating opinions from individual experts. (orig./HSCH) [de
Rady, E.A.; Kozae, A.M.; Abd El-Monsef, M.M.E.
2004-01-01
The process of analyzing data under uncertainty is a main goal in many real-life problems. Statistical analysis of such data is an area of active research. The aim of this paper is to introduce a new method concerning the generalization and modification of the rough set theory introduced earlier by Pawlak [Int. J. Comput. Inform. Sci. 11 (1982) 314
Stijnen, Mandy M N; Jansen, Maria W J; Duimel-Peeters, Inge G P; Vrijhoef, Hubertus J M
2014-10-25
Population ageing fosters new models of care delivery for older people that are increasingly integrated into existing care systems. In the Netherlands, a primary-care based preventive home visitation programme has been developed for potentially frail community-dwelling older people (aged ≥75 years), consisting of a comprehensive geriatric assessment during a home visit by a practice nurse followed by targeted interdisciplinary care and follow-up over time. A theory-based process evaluation was designed to examine (1) the extent to which the home visitation programme was implemented as planned and (2) the extent to which general practices successfully redesigned their care delivery. Using a mixed-methods approach, the focus was on fidelity (quality of implementation), dose delivered (completeness), dose received (exposure and satisfaction), reach (participation rate), recruitment, and context. Twenty-four general practices participated, of which 13 implemented the home visitation programme and 11 delivered usual care to older people. Data collection consisted of semi-structured interviews with practice nurses (PNs), general practitioners (GPs), and older people; feedback meetings with PNs; structured registration forms filled-out by PNs; and narrative descriptions of the recruitment procedures and registration of inclusion and drop-outs by members of the research team. Fidelity of implementation was acceptable, but time constraints and inadequate reach (i.e., the relatively healthy older people participated) negatively influenced complete delivery of protocol elements, such as interdisciplinary cooperation and follow-up of older people over time. The home visitation programme was judged positively by PNs, GPs, and older people. Useful tools were offered to general practices for organising proactive geriatric care. The home visitation programme did not have major shortcomings in itself, but the delivery offered room for improvement. General practices received
Fokker-Planck modeling of pitting corrosion in underground pipelines
Camacho, Eliana Nogueira [Risco Ambiental Engenharia, Rio de Janeiro, RJ (Brazil); Melo, Paulo F. Frutuoso e [Coordenacao dos Programas de Pos-Graduacao de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Saldanha, Pedro Luiz C. [Comissao Nacional de Energia Nuclear (CGRC/CNEN), Rio de Janeiro, RJ (Brazil). Coordenacao Geral de Reatores e Ciclo do Combustivel; Silva, Edson de Pinho da [Universidade Federal Rural do Rio de Janeiro (UFRRJ), Seropedica, RJ (Brazil). Dept. of Physics
2011-07-01
Full text: The stochastic nature of pitting corrosion has been recognized since the 1930s. It has been learned that this damage retains no memory of its past; instead, the future state is determined only by knowledge of the present state. This Markovian property underlying the stochastic process that governs pitting corrosion has been explored as a discrete Markovian process by many authors since the beginning of the 1990s, for underground pipelines of the oil and gas industries and for nuclear power plants. Corrosion is a genuinely continuous-time, continuous-state-space Markovian process, so modeling it with discrete time and/or a discrete state space is an approximation to the problem. Markov chain approaches with an increasing number of states can involve a large number of parameters - the transition rates between states - to be experimentally determined. Besides, such an increase in the number of states produces matrices with huge dimensions, leading to time-consuming computational solutions. Recent approaches involving discrete Markovian processes have overcome those difficulties but, on the other hand, require a large number of soil and pipe stochastic variables to be known. In this work we propose a continuous-time, continuous-state-space approach to the evolution of pit corrosion depths in underground pipelines. In order to illustrate the application of the model for defect depth growth, a combination of real-life data and Monte Carlo simulation was used. The process is described by a Fokker-Planck equation. The Fokker-Planck equation is completely determined by the knowledge of two functions known as the drift and diffusion coefficients. In this work we also show that those functions can be estimated from corrosion depth data from in-line inspections. Some particular forms of drift and diffusion coefficients lead to particular Fokker-Planck equations for which analytical solutions are known, as is the case for the Wiener process, the Ornstein-Uhlenbeck process and Brownian motion.
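The Ornstein-Uhlenbeck case mentioned at the end of this abstract corresponds to a linear drift θ(μ − x) and constant diffusion σ, i.e. dX = θ(μ − X)dt + σ dW. As a minimal, hedged sketch (the discretization, function name and parameter values are our own illustrations, not the corrosion data or fitting procedure of the paper), such a diffusion can be sample-pathed with the Euler-Maruyama scheme:

```python
import random

def simulate_ou(theta, mu, sigma, x0, dt, n_steps, seed=0):
    """Euler-Maruyama discretization of the Ornstein-Uhlenbeck SDE
    dX = theta*(mu - X)*dt + sigma*dW, returning the sampled path."""
    rng = random.Random(seed)
    x = x0
    path = [x0]
    for _ in range(n_steps):
        # drift pulls toward mu; diffusion adds Gaussian noise scaled by sqrt(dt)
        x += theta * (mu - x) * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path
```

With σ = 0 the path relaxes deterministically toward μ, which is a quick sanity check of the discretization; with σ > 0 it fluctuates around μ, the mean-reverting behaviour that makes the OU process a candidate drift/diffusion pairing for bounded defect-growth dynamics.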
Allen, Johnie J; Anderson, Craig A; Bushman, Brad J
2018-02-01
The General Aggression Model (GAM) is a comprehensive, integrative, framework for understanding aggression. It considers the role of social, cognitive, personality, developmental, and biological factors on aggression. Proximate processes of GAM detail how person and situation factors influence cognitions, feelings, and arousal, which in turn affect appraisal and decision processes, which in turn influence aggressive or nonaggressive behavioral outcomes. Each cycle of the proximate processes serves as a learning trial that affects the development and accessibility of aggressive knowledge structures. Distal processes of GAM detail how biological and persistent environmental factors can influence personality through changes in knowledge structures. GAM has been applied to understand aggression in many contexts including media violence effects, domestic violence, intergroup violence, temperature effects, pain effects, and the effects of global climate change. Copyright © 2017 Elsevier Ltd. All rights reserved.
Categorization = Decision Making + Generalization
Seger, Carol A; Peterson, Erik J.
2013-01-01
We rarely, if ever, repeatedly encounter exactly the same situation. This makes generalization crucial for real world decision making. We argue that categorization, the study of generalizable representations, is a type of decision making, and that categorization learning research would benefit from approaches developed to study the neuroscience of decision making. Similarly, methods developed to examine generalization and learning within the field of categorization may enhance decision making research. We first discuss perceptual information processing and integration, with an emphasis on accumulator models. We then examine learning the value of different decision making choices via experience, emphasizing reinforcement learning modeling approaches. Next we discuss how value is combined with other factors in decision making, emphasizing the effects of uncertainty. Finally, we describe how a final decision is selected via thresholding processes implemented by the basal ganglia and related regions. We also consider how memory related functions in the hippocampus may be integrated with decision making mechanisms and contribute to categorization. PMID:23548891
General relativity and mathematics; Relatividad General y Matematicas
Mars, M.
2015-07-01
General relativity is more than a theory of gravity: since any physical process occupies space and lasts for a time, the theory must be reconciled with the dynamic nature of space-time itself. (Author)
Cancer Investigation in General Practice
Jensen, Jacob Reinholdt; Møller, Henrik; Thomsen, Janus Laust
2014-01-01
Initiation of cancer investigations in general practice. Background: Close to 90% of all cancers are diagnosed because the patient presents symptoms and signs. Of these patients, 85% initiate the diagnostic pathway in general practice. Therefore, the initiation of a diagnostic pathway in general practice becomes extremely important. On average, a general practitioner (GP) is involved in 7500 consultations each year, and in the diagnostic process of 8-10 incident cancers. One half of cancer patients consult their GP with either general symptoms, which are not indicative of cancer, or vague and non-specific symptoms. The other half present with what the GP assesses as alarm symptoms. Three months prior to diagnosis, patients who are later diagnosed with cancer have twice as many GP consultations as a comparable reference population. Thus the complex diagnostic process in general practice requires the GP...
Glauber model and its generalizations
Bialkowski, G.
The physical aspects of the Glauber model problems are studied: the potential model, the profile function, and the Feynman diagram approaches. Different generalizations of the Glauber model are discussed, in particular higher- and lower-energy processes and large angles.
St Clair Gibson, A; Swart, J; Tucker, R
2018-02-01
Either central (brain) or peripheral (body physiological system) control mechanisms, or a combination of these, have been championed over the last few decades in the Exercise Sciences as the way physiological activity and fatigue processes are regulated. In this review, we suggest that 'central' and 'peripheral' mechanisms are both artificial constructs that have 'straitjacketed' research in the field, and that competition between psychological and physiological homeostatic drives is instead central to the regulation of both, with governing principles, rather than distinct physical processes, underpinning all physical system and exercise regulation. As part of the Integrative Governor theory we develop in this review, we suggest that both psychological and physiological drives and requirements are underpinned by homeostatic principles, and that the relative activity of each is regulated by dynamic negative feedback, the fundamental general operational controller. Because of this competitive, dynamic interplay, we propose that activity in all systems will oscillate, that these oscillations create information, and that comparison of this oscillatory information with prior information, current activity, or activity templates creates efferent responses that change the activity in the different systems in a similarly dynamic manner. Changes in a particular system are always the result of perturbations occurring outside the system itself; the behavioural causative 'history' of this external activity will be evident in the pattern of the oscillations; and awareness of change occurs as a result of unexpected, rather than planned, change in physiological activity or psychological state.
Borregaard, Michael K.; Matthews, Thomas J.; Whittaker, Robert James
2016-01-01
Aim: Island biogeography focuses on understanding the processes that underlie a set of well-described patterns on islands, but it lacks a unified theoretical framework for integrating these processes. The recently proposed general dynamic model (GDM) of oceanic island biogeography offers a step towards this goal. Here, we present an analysis of causality within the GDM and investigate its potential for the further development of island biogeographical theory. Further, we extend the GDM to include subduction-based island arcs and continental fragment islands. Location: A conceptual analysis … of evolutionary processes in simulations derived from the mechanistic assumptions of the GDM corresponded broadly to those initially suggested, with the exception of trends in extinction rates. Expanding the model to incorporate different scenarios of island ontogeny and isolation revealed a sensitivity...
Kumar, S.; Gezari, S.; Heinis, S.; Chornock, R.; Berger, E.; Soderberg, A.; Stubbs, C. W.; Kirshner, R. P.; Rest, A.; Huber, M. E.; Narayan, G.; Marion, G. H.; Burgett, W. S.; Foley, R. J.; Scolnic, D.; Riess, A. G.; Lawrence, A.; Smartt, S. J.; Smith, K.; Wood-Vasey, W. M.
2015-01-01
We present a novel method for the light-curve characterization of Pan-STARRS1 Medium Deep Survey (PS1 MDS) extragalactic sources into stochastic variables (SVs) and burst-like (BL) transients, using multi-band image-differencing time-series data. We select detections in difference images associated with galaxy hosts using a star/galaxy catalog extracted from the deep PS1 MDS stacked images, and adopt a maximum a posteriori formulation to model their difference-flux time-series in four Pan-STARRS1 photometric bands g_P1, r_P1, i_P1, and z_P1. We use three deterministic light-curve models to fit BL transients: a Gaussian, a Gamma distribution, and an analytic supernova (SN) model; and one stochastic light-curve model, the Ornstein-Uhlenbeck process, in order to fit variability that is characteristic of active galactic nuclei (AGNs). We assess the quality of fit of the models band-wise and source-wise, using their estimated leave-one-out cross-validation likelihoods and corrected Akaike information criteria. We then apply a K-means clustering algorithm on these statistics, to determine the source classification in each band. The final source classification is derived as a combination of the individual filter classifications, resulting in two measures of classification quality, from the averages across the photometric filters of (1) the classifications determined from the closest K-means cluster centers, and (2) the square distances from the clustering centers in the K-means clustering spaces. For a verification set of AGNs and SNe, we show that SV and BL occupy distinct regions in the plane constituted by these measures. We use our clustering method to characterize 4361 extragalactic image difference detected sources, in the first 2.5 yr of the PS1 MDS, into 1529 BL, and 2262 SV, with a purity of 95.00% for AGNs, and 90.97% for SNe, based on our verification sets. We combine our light-curve classifications with their nuclear or off-nuclear host galaxy offsets, to
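The Ornstein-Uhlenbeck process used above as the stochastic light-curve model can be sampled exactly on a regular time grid, because its transition density is Gaussian. A minimal sketch in Python (the function name and parameters are illustrative, not from the paper):

```python
import numpy as np

def simulate_ou(n, dt, theta, mu, sigma, x0=0.0, rng=None):
    """Sample an Ornstein-Uhlenbeck path dX = theta*(mu - X) dt + sigma dW
    at n points spaced dt apart, using the exact Gaussian transition:
        X_{t+dt} = mu + (X_t - mu) * exp(-theta*dt) + N(0, s^2),
    where s^2 = sigma^2 * (1 - exp(-2*theta*dt)) / (2*theta).
    """
    rng = np.random.default_rng(rng)
    a = np.exp(-theta * dt)                          # per-step decay factor
    s = sigma * np.sqrt((1.0 - a * a) / (2.0 * theta))  # per-step noise scale
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = mu + (x[i - 1] - mu) * a + s * rng.standard_normal()
    return x
```

Because each update uses the exact transition density rather than an Euler step, the discretization is unbiased for any step size dt; the stationary distribution has mean mu and variance sigma^2 / (2*theta).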
Reaney, Ashley M; Saldarriaga-Córdoba, Mónica; Pincheira-Donoso, Daniel
2018-02-06
Life diversifies via adaptive radiation when natural selection drives the evolution of ecologically distinct species mediated by their access to novel niche space, or via non-adaptive radiation when new species diversify while retaining ancestral niches. However, while cases of adaptive radiation are widely documented, examples of non-adaptively radiating lineages remain rarely observed. A prolific cold-climate lizard radiation from South America (Phymaturus), sister to a hyper-diverse adaptive radiation (Liolaemus), has extensively diversified phylogenetically and geographically, but with exceptionally minimal ecological and life-history diversification. This lineage, therefore, may offer unique opportunities to investigate the non-adaptive basis of diversification, and in combination with Liolaemus, to cover the whole spectrum of modes of diversification predicted by theory, from adaptive to non-adaptive. Using phylogenetic macroevolutionary modelling performed on a newly created 58-species molecular tree, we establish the tempo and mode of diversification in the Phymaturus radiation. Lineage accumulation in Phymaturus opposes a density-dependent (or 'niche-filling') process of diversification. Concurrently, we found that body size diversification is better described by an Ornstein-Uhlenbeck evolutionary model, suggesting stabilizing selection as the mechanism underlying niche conservatism (i.e., maintaining two fundamental size peaks), and which has predominantly evolved around two major adaptive peaks on a 'Simpsonian' adaptive landscape. Lineage diversification of the Phymaturus genus does not conform to an adaptive radiation, as it is characterised by a constant rate of species accumulation during the clade's history. Their strict habitat requirements (rocky outcrops), predominantly invariant herbivory, and especially the constant viviparous reproduction across species have likely limited their opportunities for adaptive diversifications throughout novel
Nonlinear dynamics in micromechanical and nanomechanical resonators and oscillators
Dunn, Tyler
In recent years, the study of nonlinear dynamics in microelectromechanical and nanoelectromechanical systems (MEMS and NEMS) has attracted considerable attention, motivated by both fundamental and practical interests. One example is the phenomenon of stochastic resonance. Previous measurements have established the presence of this counterintuitive effect in NEMS, showing that certain amounts of white noise can effectively amplify weak switching signals in nanomechanical memory elements and switches. However, other types of noise, particularly noises with 1/f^α spectra, also bear relevance in these and many other systems. At a more fundamental level, the role which noise color plays in stochastic resonance remains an open question in the field. To these ends, this work presents systematic measurements of stochastic resonance in a nanomechanical resonator using 1/f^α and Ornstein-Uhlenbeck noise types. All of the studied noise spectra induce stochastic resonance, proving that colored noise can also be beneficial; however, stronger noise correlations suppress the effect, decreasing the maximum signal-to-noise ratio and increasing the optimal noise intensity. Evidence suggests that 1/f^α noise spectra with increasing noise color lead to increasingly asymmetric switching, reducing the achievable amplification. Another manifestly nonlinear effect anticipated in these systems is modal coupling. Measurements presented here demonstrate interactions between various mode types on a wide scale, providing the first reported observations of coupling in bulk longitudinal modes of MEMS. As a result of anharmonic elastic effects, each mode shifts in frequency by an amount proportional to the squared displacement (or energy) of a coupled mode. Since all resonator modes couple in this manner, these effects enable nonlinear measurement of energy and mechanical nonlinear signal processing across a wide range of frequencies. Finally, while these experiments address nonlinear
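Colored noise of the 1/f^α family discussed above can be generated by shaping the spectrum of white Gaussian noise. A sketch under the assumption that simple FFT-domain filtering is adequate for the purpose (the function name and unit-variance normalization are illustrative choices):

```python
import numpy as np

def powerlaw_noise(n, alpha, rng=None):
    """Generate n samples whose power spectrum scales as 1/f^alpha.

    White Gaussian noise is transformed to the frequency domain, each
    Fourier amplitude is scaled by f^(-alpha/2) so that power scales as
    f^(-alpha), and the result is transformed back and normalized to
    unit variance. alpha = 0 recovers white noise.
    """
    rng = np.random.default_rng(rng)
    spectrum = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                       # avoid dividing by zero at DC
    spectrum *= f ** (-alpha / 2.0)   # amplitude ~ f^(-alpha/2)
    x = np.fft.irfft(spectrum, n)
    return x / x.std()
```

Ornstein-Uhlenbeck noise, by contrast, has a Lorentzian spectrum: roughly flat below its corner frequency 1/(2*pi*tau) and falling as 1/f^2 above it, so it interpolates between white and strongly correlated noise as the correlation time tau grows.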
Kumar, S.; Gezari, S.; Heinis, S. [Department of Astronomy, University of Maryland, Stadium Drive, College Park, MD 21224 (United States); Chornock, R.; Berger, E.; Soderberg, A.; Stubbs, C. W.; Kirshner, R. P. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Rest, A. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Huber, M. E.; Narayan, G.; Marion, G. H.; Burgett, W. S. [Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822 (United States); Foley, R. J. [Astronomy Department, University of Illinois at Urbana-Champaign, 1002 West Green Street, Urbana, IL 61801 (United States); Scolnic, D.; Riess, A. G. [Department of Physics and Astronomy, Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218 (United States); Lawrence, A. [Institute for Astronomy, University of Edinburgh Scottish Universities Physics Alliance, Royal Observatory, Blackford Hill, Edinburgh EH9 3HJ (United Kingdom); Smartt, S. J.; Smith, K. [Astrophysics Research Centre, School of Mathematics and Physics, Queen' s University Belfast, Belfast BT7 1NN (United Kingdom); Wood-Vasey, W. M. [Pittsburgh Particle Physics, Astrophysics, and Cosmology Center, Department of Physics and Astronomy, University of Pittsburgh, 3941 O' Hara Street, Pittsburgh, PA 15260 (United States); and others
2015-03-20
We present a novel method for the light-curve characterization of Pan-STARRS1 Medium Deep Survey (PS1 MDS) extragalactic sources into stochastic variables (SVs) and burst-like (BL) transients, using multi-band image-differencing time-series data. We select detections in difference images associated with galaxy hosts using a star/galaxy catalog extracted from the deep PS1 MDS stacked images, and adopt a maximum a posteriori formulation to model their difference-flux time-series in four Pan-STARRS1 photometric bands g_P1, r_P1, i_P1, and z_P1. We use three deterministic light-curve models to fit BL transients: a Gaussian, a Gamma distribution, and an analytic supernova (SN) model; and one stochastic light-curve model, the Ornstein-Uhlenbeck process, in order to fit variability that is characteristic of active galactic nuclei (AGNs). We assess the quality of fit of the models band-wise and source-wise, using their estimated leave-one-out cross-validation likelihoods and corrected Akaike information criteria. We then apply a K-means clustering algorithm on these statistics, to determine the source classification in each band. The final source classification is derived as a combination of the individual filter classifications, resulting in two measures of classification quality, from the averages across the photometric filters of (1) the classifications determined from the closest K-means cluster centers, and (2) the square distances from the clustering centers in the K-means clustering spaces. For a verification set of AGNs and SNe, we show that SV and BL occupy distinct regions in the plane constituted by these measures. We use our clustering method to characterize 4361 extragalactic image difference detected sources, in the first 2.5 yr of the PS1 MDS, into 1529 BL, and 2262 SV, with a purity of 95.00% for AGNs, and 90.97% for SNe, based on our verification sets. We combine our light-curve classifications with their nuclear or off-nuclear host
Hamlin, J K
2014-01-01
The ability to distinguish friends from foes allows humans to engage in mutually beneficial cooperative acts while avoiding the costs associated with cooperating with the wrong individuals. One way to do so effectively is to observe how unknown individuals behave toward third parties, and to selectively cooperate with those who help others while avoiding those who harm others. Recent research suggests that a preference for prosocial over antisocial individuals emerges by the time that infants are 3 months of age, and by 8 months, but not before, infants evaluate others' actions in context: they prefer those who harm, rather than help, individuals who have previously harmed others. Currently there are at least two reasons for younger infants' failure to show context-dependent social evaluations. First, this failure may reflect fundamental change in infants' social evaluation system over the first year of life, in which infants first prefer helpers in any situation and only later evaluate prosocial and antisocial actors in context. On the other hand, it is possible that this developmental change actually reflects domain-general limitations of younger infants, such as limited memory and processing capacities. To distinguish between these possibilities, 4.5-month-olds in the current studies were habituated, rather than familiarized as in previous work, to one individual helping and another harming a third party, greatly increasing infants' exposure to the characters' actions. Following habituation, 4.5-month-olds displayed context-dependent social preferences, selectively reaching for helpers of prosocial and hinderers of antisocial others. Such results suggest that younger infants' failure to display global social evaluation in previous work reflected domain-general rather than domain-specific limitations.