Statistical properties of several models of fractional random point processes
Bendjaballah, C.
2011-08-01
Statistical properties of several models of fractional random point processes have been analyzed from the counting and time interval statistics points of view. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.
Scattering analysis of point processes and random measures
International Nuclear Information System (INIS)
Hanisch, K.H.
1984-01-01
In the present paper scattering analysis of point processes and random measures is studied. Known formulae which connect the scattering intensity with the pair distribution function of the studied structures are proved in a rigorous manner with tools of the theory of point processes and random measures. For some special fibre processes the scattering intensity is computed. For a class of random measures, namely for 'grain-germ-models', a new formula is proved which yields the pair distribution function of the 'grain-germ-model' in terms of the pair distribution function of the underlying point process (the 'germs') and of the mean structure factor and the mean squared structure factor of the particles (the 'grains'). (author)
Transforming spatial point processes into Poisson processes using random superposition
DEFF Research Database (Denmark)
Møller, Jesper; Berthelsen, Kasper Klitgaaard
with a complementary spatial point process Y to obtain a Poisson process X∪Y with intensity function β. Underlying this is a bivariate spatial birth-death process (Xt, Yt) which converges towards the distribution of (X, Y). We study the joint distribution of X and Y, and their marginal and conditional distributions. In particular, we introduce a fast and easy simulation procedure for Y conditional on X. This may be used for model checking: given a model for the Papangelou intensity of the original spatial point process, this model is used to generate the complementary process, and the resulting superposition is a Poisson process with intensity function β if and only if the true Papangelou intensity is used. Whether the superposition is actually such a Poisson process can easily be examined using well-known results and fast simulation procedures for Poisson processes. We illustrate this approach to model checking...
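The model-checking idea above can be illustrated in the simplest possible setting: when the original process X is itself Poisson with known intensity λ, superposing an independent Poisson complement Y of intensity β − λ yields a Poisson process of intensity β, recognizable by its count mean equalling its count variance. A minimal Python sketch (all names and parameter values are illustrative, not from the paper):

```python
import math
import random
import statistics

def sample_poisson(lam, rng):
    # Knuth's method; adequate for moderate rates
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def poisson_pattern(intensity, rng):
    # homogeneous Poisson point pattern on the unit square
    n = sample_poisson(intensity, rng)
    return [(rng.random(), rng.random()) for _ in range(n)]

rng = random.Random(42)
beta, lam = 80.0, 50.0  # target intensity beta; intensity of the original process X
counts = []
for _ in range(2000):
    X = poisson_pattern(lam, rng)         # original process (here itself Poisson)
    Y = poisson_pattern(beta - lam, rng)  # complementary process
    counts.append(len(X) + len(Y))
mean = statistics.mean(counts)
fano = statistics.variance(counts) / mean
print(round(mean, 1), round(fano, 2))  # mean ≈ beta = 80, Fano factor ≈ 1
```

In the paper the complementary process must be generated conditionally on X via a birth-death construction; the toy above only shows the Poisson signature one checks for afterwards.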
Investigation of Random Switching Driven by a Poisson Point Process
DEFF Research Database (Denmark)
Simonsen, Maria; Schiøler, Henrik; Leth, John-Josef
2015-01-01
This paper investigates the switching mechanism of a two-dimensional switched system, when the switching events are generated by a Poisson point process. A model, in the shape of a stochastic process, for such a system is derived and the distribution of the trajectory's position is developed together with marginal density functions for the coordinate functions. Furthermore, the joint probability distribution is given explicitly.
ON THE ESTIMATION OF DISTANCE DISTRIBUTION FUNCTIONS FOR POINT PROCESSES AND RANDOM SETS
Directory of Open Access Journals (Sweden)
Dietrich Stoyan
2011-05-01
This paper discusses various estimators for the nearest neighbour distance distribution function D of a stationary point process and for the quadratic contact distribution function Hq of a stationary random closed set. It recommends the use of Hanisch's estimator of D, which is of Horvitz-Thompson type, and the minus-sampling estimator of Hq. This recommendation is based on simulations for Poisson processes and Boolean models.
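As a toy version of the ideas above, one can estimate D(r) for a homogeneous Poisson process by minus-sampling (using only points farther than r from the window boundary, so each retained point's nearest neighbour within distance r is fully observed) and compare with the known Poisson answer D(r) = 1 − exp(−λπr²). A hedged Python sketch, not the Hanisch estimator itself; all parameter values are illustrative:

```python
import math
import random

def sample_poisson(lam, rng):
    # Knuth's method; adequate for moderate rates
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def nn_D_estimate(lam, r, n_patterns, rng):
    # minus-sampling estimate of D(r) on the unit square
    hits, total = 0, 0
    for _ in range(n_patterns):
        n = sample_poisson(lam, rng)
        if n < 2:
            continue
        pts = [(rng.random(), rng.random()) for _ in range(n)]
        for i, (x, y) in enumerate(pts):
            if min(x, y, 1.0 - x, 1.0 - y) < r:
                continue  # minus-sampling: drop points near the boundary
            d2 = min((x - a) ** 2 + (y - b) ** 2
                     for j, (a, b) in enumerate(pts) if j != i)
            total += 1
            hits += d2 <= r * r
    return hits / total

rng = random.Random(1)
lam, r = 300.0, 0.03
est = nn_D_estimate(lam, r, 20, rng)
theory = 1.0 - math.exp(-lam * math.pi * r * r)  # Poisson: D(r) = 1 - exp(-λπr²)
print(round(est, 3), round(theory, 3))
```

The paper's point is precisely that cruder estimators like this one can be improved upon (Hanisch's Horvitz-Thompson-type estimator); the sketch only shows what is being estimated.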
A random point process model for the score in sport matches
Czech Academy of Sciences Publication Activity Database
Volf, Petr
2009-01-01
Vol. 20, No. 2 (2009), pp. 121-131. ISSN 1471-678X. R&D Projects: GA AV ČR(CZ) IAA101120604. Institutional research plan: CEZ:AV0Z10750506. Keywords: sport statistics; scoring intensity; Cox's regression model. Subject RIV: BB - Applied Statistics, Operational Research. http://library.utia.cas.cz/separaty/2009/SI/volf-a random point process model for the score in sport matches.pdf
Application of random-point processes to the detection of radiation sources
International Nuclear Information System (INIS)
Woods, J.W.
1978-01-01
In this report the mathematical theory of random-point processes is reviewed and it is shown how the theory can be used to obtain optimal solutions to the problem of detecting radiation sources. As noted, the theory also applies to image processing in low-light-level or low-count-rate situations. Paralleling Snyder's work, the theory is extended to the multichannel case of a continuous, two-dimensional (2-D), energy-time space. This extension essentially involves showing that the data are doubly stochastic Poisson (DSP) point processes in energy as well as time. Further, a new 2-D recursive formulation is presented for the radiation-detection problem with large computational savings over nonrecursive techniques when the number of channels is large (greater than or equal to 30). Finally, some adaptive strategies for on-line "learning" of unknown, time-varying signal and background-intensity parameters and statistics are presented and discussed. These adaptive procedures apply when a complete statistical description is not available a priori.
Poisson branching point processes
International Nuclear Information System (INIS)
Matsuo, K.; Teich, M.C.; Saleh, B.E.A.
1984-01-01
We investigate the statistical properties of a special branching point process. The initial process is assumed to be a homogeneous Poisson point process (HPP). The initiating events at each branching stage are carried forward to the following stage. In addition, each initiating event independently contributes a nonstationary Poisson point process (whose rate is a specified function) located at that point. The additional contributions from all points of a given stage constitute a doubly stochastic Poisson point process (DSPP) whose rate is a filtered version of the initiating point process at that stage. The process studied is a generalization of a Poisson branching process in which random time delays are permitted in the generation of events. Particular attention is given to the limit in which the number of branching stages is infinite while the average number of added events per event of the previous stage is infinitesimal. In the special case when the branching is instantaneous, this limit of continuous branching corresponds to the well-known Yule-Furry process with an initial Poisson population. The Poisson branching point process provides a useful description for many problems in various scientific disciplines, such as the behavior of electron multipliers, neutron chain reactions, and cosmic ray showers.
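The limiting regime aside, the stage-by-stage mechanism described above is easy to simulate: initiating events are carried forward and each independently contributes a Poisson number of new events at the next stage, so the mean count grows as N₀(1 + μ)ˢ after s stages. A small sketch with time delays omitted (all parameter values are illustrative):

```python
import math
import random
import statistics

def sample_poisson(lam, rng):
    # Knuth's method; adequate for moderate rates
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def branch_count(n0_rate, mu, stages, rng):
    # initial events are Poisson(n0_rate); each event at a stage is carried
    # forward and independently adds Poisson(mu) events at the next stage
    n = sample_poisson(n0_rate, rng)
    for _ in range(stages):
        n += sum(sample_poisson(mu, rng) for _ in range(n))
    return n

rng = random.Random(7)
n0, mu, stages = 10.0, 0.5, 3
mean = statistics.mean(branch_count(n0, mu, stages, rng) for _ in range(3000))
print(round(mean, 1), n0 * (1 + mu) ** stages)  # simulated vs exact mean 33.75
```

The continuous-branching limit of the paper corresponds to stages → ∞ with μ → 0 at fixed total growth.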
Smooth random change point models.
van den Hout, Ardo; Muniz-Terrera, Graciela; Matthews, Fiona E
2011-03-15
Change point models are used to describe processes over time that show a change in direction. An example of such a process is cognitive ability, where a decline a few years before death is sometimes observed. A broken-stick model consists of two linear parts and a breakpoint where the two lines intersect. Alternatively, models can be formulated that imply a smooth change between the two linear parts. Change point models can be extended by adding random effects to account for variability between subjects. A new smooth change point model is introduced, and examples are presented that show how change point models can be estimated using functions in R for mixed-effects models. Bayesian inference using WinBUGS is also discussed. The methods are illustrated using data from a population-based longitudinal study of ageing, the Cambridge City over 75 Cohort Study. The aim is to identify how many years before death individuals experience a change in the rate of decline of their cognitive ability. Copyright © 2010 John Wiley & Sons, Ltd.
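The broken-stick model described above can be fitted, in its simplest fixed-effects form, by profiling: for each candidate change point the model is linear in the remaining coefficients, so ordinary least squares gives the best fit, and the candidate minimizing the residual sum of squares is chosen. A dependency-free Python sketch on simulated data (the paper uses R mixed-effects functions and WinBUGS; everything below is illustrative):

```python
import random

def broken_stick(t, cp, b0, b1, b2):
    # slope b1 before the change point cp, slope b1 + b2 after
    return b0 + b1 * t + b2 * max(0.0, t - cp)

def ols3(X, y):
    # solve the 3-parameter normal equations (X'X) beta = X'y by elimination
    n = len(X)
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(3)] for i in range(3)]
    b = [sum(X[k][i] * y[k] for k in range(n)) for i in range(3)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(A[r][c]))  # partial pivoting
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, 3):
            f = A[r][c] / A[c][c]
            for j in range(c, 3):
                A[r][j] -= f * A[c][j]
            b[r] -= f * b[c]
    beta = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        beta[r] = (b[r] - sum(A[r][j] * beta[j] for j in range(r + 1, 3))) / A[r][r]
    return beta

def fit_change_point(ts, ys):
    # profile least squares over a grid of candidate change points
    best_sse, best_cp = float("inf"), None
    for step in range(81):
        cp = 1.0 + 0.1 * step
        X = [[1.0, t, max(0.0, t - cp)] for t in ts]
        beta = ols3(X, ys)
        sse = sum((ys[k] - sum(X[k][j] * beta[j] for j in range(3))) ** 2
                  for k in range(len(ts)))
        if sse < best_sse:
            best_sse, best_cp = sse, cp
    return best_cp

rng = random.Random(3)
true_cp = 6.0
ts = [0.1 * i for i in range(101)]  # time axis 0..10, e.g. years on study
ys = [broken_stick(t, true_cp, 20.0, -0.2, -1.5) + rng.gauss(0, 0.3) for t in ts]
cp_hat = fit_change_point(ts, ys)
print(cp_hat)  # should land close to the true change point 6.0
```

A smooth change point model replaces max(0, t − cp) with a smooth transition function; the profiling idea carries over.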
Yang, X. I. A.; Marusic, I.; Meneveau, C.
2016-06-01
Townsend [Townsend, The Structure of Turbulent Shear Flow (Cambridge University Press, Cambridge, UK, 1976)] hypothesized that the logarithmic region in high-Reynolds-number wall-bounded flows consists of space-filling, self-similar attached eddies. Invoking this hypothesis, we express streamwise velocity fluctuations in the inertial layer in high-Reynolds-number wall-bounded flows as a hierarchical random additive process (HRAP): u_z^+ = Σ_{i=1}^{N_z} a_i. Here u is the streamwise velocity fluctuation, + indicates normalization in wall units, z is the wall-normal distance, and the a_i are independently, identically distributed random additives, each of which is associated with an attached eddy in the wall-attached hierarchy. The number of random additives is N_z ~ ln(δ/z), where δ is the boundary layer thickness and ln is the natural logarithm. Due to its simplified structure, such a process leads to predictions of the scaling behaviors for various turbulence statistics in the logarithmic layer. Besides reproducing known logarithmic scaling of moments, structure functions, and the two-point correlation function ⟨u_z(x) u_z(x+r)⟩, new logarithmic laws in two-point statistics such as ⟨u_z^q(x) u_z^q(x+r)⟩^{1/q} for q = 2, 3, etc. can be derived using the HRAP formalism. Supporting empirical evidence for the logarithmic scaling in such statistics is found from the Melbourne High Reynolds Number Boundary Layer Wind Tunnel measurements. We also show that, at high Reynolds numbers, the above-mentioned new logarithmic laws can be derived by assuming the arrival of an attached eddy at a generic point in the flow field to be a Poisson process [Woodcock and Marusic, Phys. Fluids 27, 015104 (2015), 10.1063/1.4905301]. Taken together, the results provide new evidence supporting the essential ingredients of the attached eddy hypothesis to describe streamwise velocity fluctuations of large, momentum-transporting eddies in wall-bounded turbulence, while observed deviations suggest the need for further extensions of the...
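The HRAP ansatz above is straightforward to simulate: draw N_z ~ ln(δ/z) i.i.d. zero-mean additives and sum them; the variance of u_z⁺ then grows linearly in ln(δ/z), which is the logarithmic law for the second moment. A minimal sketch with ±1 additives (any zero-mean law works; all values are illustrative):

```python
import math
import random
import statistics

def hrap_sample(n_layers, rng):
    # u^+ = sum of i.i.d. additives, one per level of the attached-eddy hierarchy
    return sum(rng.choice((-1.0, 1.0)) for _ in range(n_layers))

rng = random.Random(0)
delta = 1.0  # boundary layer thickness (arbitrary units)
results = {}
for z in (0.05, 0.01):
    n = max(1, round(math.log(delta / z)))  # N_z ~ ln(delta / z)
    results[z] = statistics.variance(hrap_sample(n, rng) for _ in range(4000))
    print(z, n, round(results[z], 2))  # variance ≈ n: logarithmic growth in delta/z
```

With Var(a_i) = 1 the sample variance should track N_z itself, i.e. decrease toward the wall-distance z = δ and grow as z decreases.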
Díaz Fernández, Ester
2010-01-01
In this thesis, new models and methodologies are introduced for the analysis of dynamic processes characterized by image sequences with spatial-temporal overlapping. Spatial-temporal overlapping exists in many natural phenomena and should be addressed properly in several scientific disciplines such as Microscopy, Material Sciences, Biology, Geostatistics or Communication Networks. This work is related to the Point Process and Random Closed Set theories, within Stochastic Ge...
Thinning spatial point processes into Poisson processes
DEFF Research Database (Denmark)
Møller, Jesper; Schoenberg, Frederic Paik
2010-01-01
In this paper we describe methods for randomly thinning certain classes of spatial point processes. In the case of a Markov point process, the proposed method involves a dependent thinning of a spatial birth-and-death process, where clans of ancestors associated with the original points are identified, and where we simulate backwards and forwards in order to obtain the thinned process. In the case of a Cox process, a simple independent thinning technique is proposed. In both cases, the thinning results in a Poisson process if and only if the true Papangelou conditional intensity is used, and, thus, can be used as a graphical exploratory tool for inspecting the goodness-of-fit of a spatial point process model. Several examples, including clustered and inhibitive point processes, are considered.
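The independent-thinning case is easy to demonstrate when the true intensity is known: retaining each point of a process with intensity λ(u) with probability ρ/λ(u) leaves a homogeneous Poisson process of intensity ρ, showing Poisson count statistics and uniform locations. A Python sketch under these simplifying assumptions (the paper treats the harder Markov and Cox cases; all values are illustrative):

```python
import math
import random
import statistics

def sample_poisson(lam, rng):
    # Knuth's method; adequate for moderate rates
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def inhom_poisson(lam_fn, lam_max, rng):
    # inhomogeneous Poisson on [0, 1] by thinning a dominating Poisson(lam_max)
    pts = [rng.random() for _ in range(sample_poisson(lam_max, rng))]
    return [x for x in pts if rng.random() < lam_fn(x) / lam_max]

lam = lambda x: 100.0 + 100.0 * x  # 'true' intensity of the original process
rho = 80.0                         # target homogeneous intensity
rng = random.Random(5)
counts, xs = [], []
for _ in range(1000):
    X = inhom_poisson(lam, 200.0, rng)
    thinned = [x for x in X if rng.random() < rho / lam(x)]  # retain w.p. rho/lam(x)
    counts.append(len(thinned))
    xs.extend(thinned)
mean = statistics.mean(counts)
print(round(mean, 1),                                # ≈ rho = 80
      round(statistics.variance(counts) / mean, 2),  # Fano factor ≈ 1 (Poisson)
      round(statistics.mean(xs), 3))                 # ≈ 0.5 (uniform locations)
```

Using a wrong intensity in the retention probability would leave residual inhomogeneity, which is exactly what the diagnostic exploits.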
Directory of Open Access Journals (Sweden)
Lotfi Khribi
2017-12-01
In the Bayesian framework, the usual choice of prior in the prediction of homogeneous Poisson processes with random effects is the gamma one. Here, we propose the use of higher order maximum entropy priors. Their advantage is illustrated in a simulation study and the choice of the best order is established by two goodness-of-fit criteria: Kullback–Leibler divergence and a discrepancy measure. This procedure is illustrated on a warranty data set from the automobile industry.
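For reference, the standard gamma choice mentioned above works by conjugacy: a Gamma(a, b) prior on the Poisson rate updates to Gamma(a + Σxᵢ, b + n) after observing n counts, and the posterior predictive mean is (a + Σxᵢ)/(b + n). A minimal sketch of this baseline (all values are illustrative; the paper's maximum entropy priors replace the gamma):

```python
def gamma_poisson_update(a, b, counts):
    # conjugate update of a Gamma(shape=a, rate=b) prior on the Poisson rate
    return a + sum(counts), b + len(counts)

a0, b0 = 2.0, 1.0
counts = [3, 5, 4, 6, 2]  # observed event counts per period
a1, b1 = gamma_poisson_update(a0, b0, counts)
pred_mean = a1 / b1       # posterior predictive mean for the next period
print(a1, b1, round(pred_mean, 3))  # 22.0 6.0 3.667
```

The posterior predictive distribution itself is negative binomial with these parameters, which is what one would score against the goodness-of-fit criteria.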
Lévy based Cox point processes
DEFF Research Database (Denmark)
Hellmund, Gunnar; Prokesová, Michaela; Jensen, Eva Bjørn Vedel
2008-01-01
In this paper we introduce Lévy-driven Cox point processes (LCPs) as Cox point processes with driving intensity function Λ defined by a kernel smoothing of a Lévy basis (an independently scattered, infinitely divisible random measure). We also consider log Lévy-driven Cox point processes (LLCPs) with Λ equal to the exponential of such a kernel smoothing. Special cases are shot noise Cox processes, log Gaussian Cox processes, and log shot noise Cox processes. We study the theoretical properties of Lévy-based Cox processes, including moment properties described by nth-order product densities...
International Nuclear Information System (INIS)
Reuss, J.D.; Misguich, J.H.
1993-02-01
The Campbell process is a stationary random process which can have various correlation functions, according to the choice of an elementary response function. The statistical properties of this process are presented. A numerical algorithm and a subroutine for generating such a process are built up and tested, for the physically interesting case of a Campbell process with Gaussian correlations. The (non-Gaussian) probability distribution appears to be similar to the Gamma distribution.
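The generation scheme described above amounts to shot noise: draw Poisson event times on an interval and sum a response function centred at each. With a Gaussian response the resulting process has Gaussian correlations, and Campbell's theorem gives E[S] = λ∫h. A small sketch (all parameter values are illustrative, not from the report):

```python
import math
import random
import statistics

lam = 5.0    # Poisson event rate
sigma = 0.2  # width of the Gaussian elementary response

def h(t):
    # elementary response function (Gaussian, giving Gaussian correlations)
    return math.exp(-t * t / (2.0 * sigma ** 2))

def campbell_value(t_eval, horizon, rng):
    # S(t) = sum of responses centred at Poisson event times on [0, horizon]
    s, t = 0.0, 0.0
    while True:
        t += rng.expovariate(lam)
        if t > horizon:
            return s
        s += h(t_eval - t)

samples = [campbell_value(10.0, 20.0, random.Random(i)) for i in range(3000)]
mean = statistics.mean(samples)
theory = lam * sigma * math.sqrt(2.0 * math.pi)  # Campbell's theorem: E[S] = λ∫h dt
print(round(mean, 2), round(theory, 2))
```

The skewed, Gamma-like marginal noted in the abstract shows up at low λσ, where individual pulses are resolved.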
Detecting determinism from point processes.
Andrzejak, Ralph G; Mormann, Florian; Kreuz, Thomas
2014-12-01
The detection of a nonrandom structure from experimental data can be crucial for the classification, understanding, and interpretation of the generating process. We here introduce a rank-based nonlinear predictability score to detect determinism from point process data. Thanks to its modular nature, this approach can be adapted to whatever signature in the data one considers indicative of deterministic structure. After validating our approach using point process signals from deterministic and stochastic model dynamics, we show an application to neuronal spike trains recorded in the brain of an epilepsy patient. While we illustrate our approach in the context of temporal point processes, it can be readily applied to spatial point processes as well.
Modern Statistics for Spatial Point Processes
DEFF Research Database (Denmark)
Møller, Jesper; Waagepetersen, Rasmus
2007-01-01
We summarize and discuss the current state of spatial point process theory and directions for future research, making an analogy with generalized linear models and random effect models, and illustrating the theory with various examples of applications. In particular, we consider Poisson, Gibbs...
Topology of random points
Indian Academy of Sciences (India)
Yogeshwaran, D.
Balls grow at unit rate centred at the points of the point cloud/process. ... Idea of persistence: keep track of births and deaths of topological features. ... holes, Betti numbers, etc.; one will be more interested in the distribution of such objects on ...
Extreme values, regular variation and point processes
Resnick, Sidney I
1987-01-01
Extreme Values, Regular Variation and Point Processes is a readable and efficient account of the fundamental mathematical and stochastic process techniques needed to study the behavior of extreme values of phenomena based on independent and identically distributed random variables and vectors. It presents a coherent treatment of the distributional and sample path fundamental properties of extremes and records. It emphasizes the core primacy of three topics necessary for understanding extremes: the analytical theory of regularly varying functions; the probabilistic theory of point processes and random measures; and the link to asymptotic distribution approximations provided by the theory of weak convergence of probability measures in metric spaces. The book is self-contained and requires an introductory measure-theoretic course in probability as a prerequisite. Almost all sections have an extensive list of exercises which extend developments in the text, offer alternate approaches, test mastery and provide for enj...
Padgett, Wayne T
2009-01-01
This book is intended to fill the gap between the "ideal precision" digital signal processing (DSP) that is widely taught and the limited-precision implementation skills that are commonly required in fixed-point processors and field programmable gate arrays (FPGAs). These skills are often neglected at the university level, particularly for undergraduates. We have attempted to create a resource both for a DSP elective course and for the practicing engineer with a need to understand fixed-point implementation. Although we assume a background in DSP, Chapter 2 contains a review of basic theory.
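A first taste of what "limited precision" means in practice: quantizing a signal value to Q15 (a 16-bit signed fraction, common on fixed-point DSPs) introduces a rounding error bounded by half an LSB, i.e. 2⁻¹⁶. A minimal Python sketch of the round-trip (illustrative, not taken from the book):

```python
def to_q15(x):
    # quantize x in [-1, 1) to Q15 (16-bit signed fixed point), with saturation
    v = int(round(x * 32768))
    return max(-32768, min(32767, v))

def from_q15(v):
    # back to a float in [-1, 1)
    return v / 32768.0

x = 0.3
q = to_q15(x)
err = abs(from_q15(q) - x)
print(q, err < 2 ** -16)  # 9830 True
```

Saturation (clamping instead of wrapping on overflow) is itself one of the implementation choices such a course has to cover.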
A logistic regression estimating function for spatial Gibbs point processes
DEFF Research Database (Denmark)
Baddeley, Adrian; Coeurjolly, Jean-François; Rubak, Ege
We propose a computationally efficient logistic regression estimating function for spatial Gibbs point processes. The sample points for the logistic regression consist of the observed point pattern together with a random pattern of dummy points. The estimating function is closely related to the p...
Processing Terrain Point Cloud Data
DeVore, Ronald
2013-01-10
Terrain point cloud data are typically acquired through some form of Light Detection And Ranging sensing. They form a rich resource that is important in a variety of applications including navigation, line of sight, and terrain visualization. Processing terrain data has not received the attention of other forms of surface reconstruction or of image processing. The goal of terrain data processing is to convert the point cloud into a succinct representation system that is amenable to the various application demands. The present paper presents a platform for terrain processing built on the following principles: (i) measuring distortion in the Hausdorff metric, which we argue is a good match for the application demands, (ii) a multiscale representation based on tree approximation using local polynomial fitting. The basic elements held in the nodes of the tree can be efficiently encoded, transmitted, visualized, and utilized for the various target applications. Several challenges emerge because of the variable resolution of the data, missing data, occlusions, and noise. Techniques for identifying and handling these challenges are developed. © 2013 Society for Industrial and Applied Mathematics.
Processing Terrain Point Cloud Data
DeVore, Ronald; Petrova, Guergana; Hielsberg, Matthew; Owens, Luke; Clack, Billy; Sood, Alok
2013-01-01
Terrain point cloud data are typically acquired through some form of Light Detection And Ranging sensing. They form a rich resource that is important in a variety of applications including navigation, line of sight, and terrain visualization
Inhomogeneous Markov point processes by transformation
DEFF Research Database (Denmark)
Jensen, Eva B. Vedel; Nielsen, Linda Stougaard
2000-01-01
We construct parametrized models for point processes, allowing for both inhomogeneity and interaction. The inhomogeneity is obtained by applying parametrized transformations to homogeneous Markov point processes. An interesting model class, which can be constructed by this transformation approach, is that of exponential inhomogeneous Markov point processes. Statistical inference for such processes is discussed in some detail.
Testing Local Independence between Two Point Processes
DEFF Research Database (Denmark)
Allard, Denis; Brix, Anders; Chadæuf, Joël
2001-01-01
Keywords: Independence test, Inhomogeneous point processes, Local test, Monte Carlo, Nonstationary, Rotations, Spatial pattern, Tiger bush.
Self-exciting point process in modeling earthquake occurrences
International Nuclear Information System (INIS)
Pratiwi, H.; Slamet, I.; Respatiwulan; Saputro, D. R. S.
2017-01-01
In this paper, we present a procedure for modeling earthquakes based on a spatial-temporal point process. The magnitude distribution is expressed as truncated exponential and the event frequency is modeled with a spatial-temporal point process that is characterized uniquely by its associated conditional intensity process. The earthquakes can be regarded as point patterns that have a temporal clustering feature, so we use a self-exciting point process for modeling the conditional intensity function. The choice of main shocks is conducted via the window algorithm by Gardner and Knopoff, and the model can be fitted by the maximum likelihood method for three random variables. (paper)
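A purely temporal self-exciting (Hawkes) intensity with an exponential kernel, λ(t) = μ + α Σᵢ exp(−β(t − tᵢ)), can be simulated by Ogata's thinning: between events the intensity only decays, so its current value is a valid dominating rate, and candidates drawn at that rate are accepted with probability λ(candidate)/λ(current). A sketch with illustrative parameters (the paper's spatial-temporal model is richer than this):

```python
import math
import random
import statistics

def intensity(t, events, mu, alpha, beta):
    # Hawkes conditional intensity with exponential excitation kernel
    return mu + alpha * sum(math.exp(-beta * (t - s)) for s in events)

def simulate_hawkes(mu, alpha, beta, horizon, rng):
    # Ogata's thinning: the intensity just after the last event dominates
    # the (decaying) intensity until the next event
    events, t = [], 0.0
    while True:
        lam_bar = intensity(t, events, mu, alpha, beta)
        t += rng.expovariate(lam_bar)
        if t > horizon:
            return events
        if rng.random() * lam_bar <= intensity(t, events, mu, alpha, beta):
            events.append(t)

rng = random.Random(11)
mu, alpha, beta, horizon = 1.0, 0.5, 1.0, 30.0  # branching ratio alpha/beta = 0.5
mean = statistics.mean(len(simulate_hawkes(mu, alpha, beta, horizon, rng))
                       for _ in range(200))
print(round(mean, 1))  # long-run rate is mu / (1 - alpha/beta) = 2 events per unit time
```

The temporal clustering the abstract mentions is visible here as bursts of accepted candidates right after each event.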
Random processes in nuclear reactors
Williams, M M R
1974-01-01
Random Processes in Nuclear Reactors describes the problems that a nuclear engineer may meet which involve random fluctuations, and sets out in detail how they may be interpreted in terms of various models of the reactor system. Chapters discuss the origins of random processes and sources; the general technique applied to zero-power problems, bringing out the basic effect of fission, and of fluctuations in the lifetime of neutrons, on the measured response; the interpretation of power reactor noise; and associated problems connected with mechanical, hydraulic and thermal noise sources.
Residual analysis for spatial point processes
DEFF Research Database (Denmark)
Baddeley, A.; Turner, R.; Møller, Jesper
We define residuals for point process models fitted to spatial point pattern data, and propose diagnostic plots based on these residuals. The techniques apply to any Gibbs point process model, which may exhibit spatial heterogeneity, interpoint interaction and dependence on spatial covariates. ... or covariate effects. Q-Q plots of the residuals are effective in diagnosing interpoint interaction. Some existing ad hoc statistics of point patterns (quadrat counts, scan statistic, kernel smoothed intensity, Berman's diagnostic) are recovered as special cases.
State estimation for temporal point processes
van Lieshout, Maria Nicolette Margaretha
2015-01-01
This paper is concerned with combined inference for point processes on the real line observed in a broken interval. For such processes, the classic history-based approach cannot be used. Instead, we adapt tools from sequential spatial point processes. For a range of models, the marginal and
Bayesian analysis of Markov point processes
DEFF Research Database (Denmark)
Berthelsen, Kasper Klitgaard; Møller, Jesper
2006-01-01
Recently Møller, Pettitt, Berthelsen and Reeves introduced a new MCMC methodology for drawing samples from a posterior distribution when the likelihood function is only specified up to a normalising constant. We illustrate the method in the setting of Bayesian inference for Markov point processes, using a partially ordered Markov point process as the auxiliary variable. As the method requires simulation from the "unknown" likelihood, perfect simulation algorithms for spatial point processes become useful.
Some common random fixed point theorems for contractive type conditions in cone random metric spaces
Directory of Open Access Journals (Sweden)
Saluja Gurucharan S.
2016-08-01
In this paper, we establish some common random fixed point theorems for contractive type conditions in the setting of cone random metric spaces. Our results unify, extend and generalize many known results from the current existing literature.
DEFF Research Database (Denmark)
Møller, Jesper; Ghorbani, Mohammad; Rubak, Ege Holger
We show how a spatial point process, where to each point there is associated a random quantitative mark, can be identified with a spatio-temporal point process specified by a conditional intensity function. For instance, the points can be tree locations, the marks can express the size of trees, and the conditional intensity function can describe the distribution of a tree (i.e., its location and size) conditionally on the larger trees. This enables us to construct parametric statistical models which are easily interpretable and where likelihood-based inference is tractable. In particular, we consider maximum...
Poisson point processes imaging, tracking, and sensing
Streit, Roy L
2010-01-01
This overview of non-homogeneous and multidimensional Poisson point processes and their applications features mathematical tools and applications from emission- and transmission-computed tomography to multiple target tracking and distributed sensor detection.
Statistical aspects of determinantal point processes
DEFF Research Database (Denmark)
Lavancier, Frédéric; Møller, Jesper; Rubak, Ege
The statistical aspects of determinantal point processes (DPPs) seem largely unexplored. We review the appealing properties of DPPs, demonstrate that they are useful models for repulsiveness, detail a simulation procedure, and provide freely available software for simulation and statistical inference...
Modeling fixation locations using spatial point processes.
Barthelmé, Simon; Trukenbrod, Hans; Engbert, Ralf; Wichmann, Felix
2013-10-01
Whenever eye movements are measured, a central part of the analysis has to do with where subjects fixate and why they fixated where they fixated. To a first approximation, a set of fixations can be viewed as a set of points in space; this implies that fixations are spatial data and that the analysis of fixation locations can be beneficially thought of as a spatial statistics problem. We argue that thinking of fixation locations as arising from point processes is a very fruitful framework for eye-movement data, helping turn qualitative questions into quantitative ones. We provide a tutorial introduction to some of the main ideas of the field of spatial statistics, focusing especially on spatial Poisson processes. We show how point processes help relate image properties to fixation locations. In particular we show how point processes naturally express the idea that image features' predictability for fixations may vary from one image to another. We review other methods of analysis used in the literature, show how they relate to point process theory, and argue that thinking in terms of point processes substantially extends the range of analyses that can be performed and clarifies their interpretation.
On tests of randomness for spatial point patterns
International Nuclear Information System (INIS)
Doguwa, S.I.
1990-11-01
New tests of randomness for spatial point patterns are introduced. These test statistics are then compared in a power study with the existing alternatives. The results of the power study suggest that one of the proposed tests is extremely powerful against both aggregated and regular alternatives. (author). 9 refs, 7 figs, 3 tabs
Shape Modelling Using Markov Random Field Restoration of Point Correspondences
DEFF Research Database (Denmark)
Paulsen, Rasmus Reinhold; Hilger, Klaus Baggesen
2003-01-01
A method for building statistical point distribution models is proposed. The novelty in this paper is the adaption of Markov random field regularization of the correspondence field over the set of shapes. The new approach leads to a generative model that produces highly homogeneous polygonized sh...
Fingerprint Analysis with Marked Point Processes
DEFF Research Database (Denmark)
Forbes, Peter G. M.; Lauritzen, Steffen; Møller, Jesper
We present a framework for fingerprint matching based on marked point process models. An efficient Monte Carlo algorithm is developed to calculate the marginal likelihood ratio for the hypothesis that two observed prints originate from the same finger against the hypothesis that they originate from different fingers. Our model achieves good performance on an NIST-FBI fingerprint database of 258 matched fingerprint pairs.
Determinantal point process models on the sphere
DEFF Research Database (Denmark)
Møller, Jesper; Nielsen, Morten; Porcu, Emilio
We consider determinantal point processes on the d-dimensional unit sphere S^d. These are finite point processes exhibiting repulsiveness and with moment properties determined by a certain determinant whose entries are specified by a so-called kernel which we assume is a complex covariance function defined on S^d × S^d. We review the appealing properties of such processes, including their specific moment properties, density expressions and simulation procedures. Particularly, we characterize and construct isotropic DPP models on S^d, where it becomes essential to specify the eigenvalues and eigenfunctions in a spectral representation for the kernel, and we figure out how repulsive isotropic DPPs can be. Moreover, we discuss the shortcomings of adapting existing models for isotropic covariance functions and consider strategies for developing new models, including a useful spectral approach.
Estimating Function Approaches for Spatial Point Processes
Deng, Chong
Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information due to ignoring the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theories, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives that balance the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation and estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach on fitting
Statistics of stationary points of random finite polynomial potentials
International Nuclear Information System (INIS)
Mehta, Dhagash; Niemerg, Matthew; Sun, Chuang
2015-01-01
The stationary points (SPs) of the potential energy landscapes (PELs) of multivariate random potentials (RPs) have found applications in many areas of Physics, Chemistry and Mathematical Biology. However, few reliable methods are available that can find all the SPs accurately. Hence, one has to rely on indirect methods such as Random Matrix theory. With a combination of the numerical polynomial homotopy continuation method and a certification method, we obtain all the certified SPs of the most general polynomial RP for each sample chosen from the Gaussian distribution with mean 0 and variance 1. While obtaining many novel results for the finite size case of the RP, we also discuss the implications of our results on the mathematics of random systems and string theory landscapes. (paper)
Point cloud processing for smart systems
Directory of Open Access Journals (Sweden)
Jaromír Landa
2013-01-01
Full Text Available High population as well as economic tension emphasises the necessity of effective city management – from land use planning to urban green maintenance. The management effectiveness is based on precise knowledge of the city environment. Point clouds generated by mobile and terrestrial laser scanners provide precise data about objects in the scanner vicinity. From these data, the state of the roads, buildings, trees and other objects important for this decision-making process can be obtained. Generally, they can support the idea of “smart” or at least “smarter” cities. Unfortunately, the point clouds do not provide this type of information automatically. It has to be extracted. This extraction is done by expert personnel or by object recognition software. As the point clouds can represent large areas (streets or even cities), usage of expert personnel to identify the required objects can be very time-consuming and therefore cost-ineffective. Object recognition software allows us to detect and identify required objects semi-automatically or automatically. The first part of the article reviews and analyses the current state of the art in point cloud object recognition techniques. The following part presents common formats used for point cloud storage and frequently used software tools for point cloud processing. Further, a method for extraction of geospatial information about detected objects is proposed. Therefore, the method can be used not only to recognize the existence and shape of certain objects, but also to retrieve their geospatial properties. These objects can later be directly used in various GIS systems for further analyses.
Parametric methods for spatial point processes
DEFF Research Database (Denmark)
Møller, Jesper
(This text is submitted for the volume ‘A Handbook of Spatial Statistics' edited by A.E. Gelfand, P. Diggle, M. Fuentes, and P. Guttorp, to be published by Chapman and Hall/CRC Press, and planned to appear as Chapter 4.4 with the title ‘Parametric methods'.) 1 Introduction This chapter considers inference procedures for parametric spatial point process models. The widespread use of sensible but ad hoc methods based on summary statistics of the kind studied in Chapter 4.3 has over the last two decades been supplemented by likelihood-based methods for parametric spatial point process models...... is studied in Section 4, and Bayesian inference in Section 5. On the one hand, as the development in computer technology and computational statistics continues, computationally intensive simulation-based methods for likelihood inference will probably play an increasing role in the statistical analysis of spatial......
Statistical aspects of determinantal point processes
DEFF Research Database (Denmark)
Lavancier, Frédéric; Møller, Jesper; Rubak, Ege Holger
The statistical aspects of determinantal point processes (DPPs) seem largely unexplored. We review the appealing properties of DPPs, demonstrate that they are useful models for repulsiveness, detail a simulation procedure, and provide freely available software for simulation and statistical inference. We pay special attention to stationary DPPs, where we give a simple condition ensuring their existence, construct parametric models, describe how they can be well approximated so that the likelihood can be evaluated and realizations can be simulated, and discuss how statistical inference......
Gradients estimation from random points with volumetric tensor in turbulence
Watanabe, Tomoaki; Nagata, Koji
2017-12-01
We present an estimation method of fully-resolved/coarse-grained gradients from randomly distributed points in turbulence. The method is based on a linear approximation of spatial gradients expressed with the volumetric tensor, which is a 3 × 3 matrix determined by the geometric distribution of the points. The coarse-grained gradient can be considered as a low-pass-filtered gradient, whose cutoff is estimated with the eigenvalues of the volumetric tensor. The present method, the volumetric tensor approximation, is tested for velocity and passive scalar gradients in an incompressible planar jet and a mixing layer. Comparison with a finite difference approximation on a Cartesian grid shows that the volumetric tensor approximation computes the coarse-grained gradients fairly well at a moderate computational cost under various conditions of spatial distributions of points. We also show that imposing the solenoidal condition improves the accuracy of the present method for solenoidal vectors, such as a velocity vector in incompressible flows, especially when the number of points is not large. The volumetric tensor approximation with 4 points poorly estimates the gradient because of the anisotropic distribution of the points. Increasing the number of points from 4 significantly improves the accuracy. Although the coarse-grained gradient changes with the cutoff length, the volumetric tensor approximation yields a coarse-grained gradient whose magnitude is close to that obtained by the finite difference. We also show that the velocity gradient estimated with the present method captures well the turbulence characteristics such as local flow topology, amplification of enstrophy and strain, and energy transfer across scales.
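The least-squares construction behind such a volumetric tensor can be sketched as follows. This is a minimal illustration of the idea rather than the authors' exact formulation, and the function name and interface are invented for the example:

```python
import numpy as np

def gradient_from_points(x0, f0, pts, vals):
    """Least-squares gradient estimate at x0 from scattered neighbours.

    Minimizes sum_i (f_i - f0 - g . dx_i)^2; the normal equations involve
    the 3 x 3 'volumetric tensor' V = sum_i dx_i dx_i^T, built from the
    geometric distribution of the points.
    """
    dx = pts - x0                 # (N, 3) offsets to neighbouring points
    df = vals - f0                # (N,) value differences
    V = dx.T @ dx                 # volumetric tensor
    return np.linalg.solve(V, dx.T @ df)
```

For a linear field the estimate is exact; for turbulence data it behaves as the coarse-grained (low-pass-filtered) gradient described above.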
Saddle-points of a two dimensional random lattice theory
International Nuclear Information System (INIS)
Pertermann, D.
1985-07-01
A two dimensional random lattice theory with a free massless scalar field is considered. We analyse the field theoretic generating functional for any given choice of positions of the lattice sites. Asking for saddle-points of this generating functional with respect to the positions we find the hexagonal lattice and a triangulated version of the hypercubic lattice as candidates. The investigation of the neighbourhood of a single lattice site yields triangulated rectangles and regular polygons extremizing the above generating functional on the local level. (author)
Osada, Hirofumi; Osada, Shota
2018-01-01
We prove tail triviality of determinantal point processes μ on continuous spaces. Tail triviality has been proved for such processes only on discrete spaces, and hence we have generalized the result to continuous spaces. To do this, we construct tree representations, that is, discrete approximations of determinantal point processes enjoying a determinantal structure. There are many interesting examples of determinantal point processes on continuous spaces such as zero points of the hyperbolic Gaussian analytic function with Bergman kernel, and the thermodynamic limit of eigenvalues of Gaussian random matrices for Sine_2 , Airy_2 , Bessel_2 , and Ginibre point processes. Our main theorem proves all these point processes are tail trivial.
Bridging the gap between a stationary point process and its Palm distribution
Nieuwenhuis, G.
1994-01-01
In the context of stationary point processes measurements are usually made from a time point chosen at random or from an occurrence chosen at random. That is, either the stationary distribution P or its Palm distribution P° is the ruling probability measure. In this paper an approach is presented to
A signal theoretic introduction to random processes
Howard, Roy M
2015-01-01
A fresh introduction to random processes utilizing signal theory. By incorporating a signal theory basis, A Signal Theoretic Introduction to Random Processes presents a unique introduction to random processes with an emphasis on the important random phenomena encountered in the electronic and communications engineering field. The strong mathematical and signal theory basis provides clarity and precision in the statement of results. The book also features: A coherent account of the mathematical fundamentals and signal theory that underpin the presented material; Unique, in-depth coverage of
Multiple Monte Carlo Testing with Applications in Spatial Point Processes
DEFF Research Database (Denmark)
Mrkvička, Tomáš; Myllymäki, Mari; Hahn, Ute
with a function as the test statistic, 3) several Monte Carlo tests with functions as test statistics. The rank test has correct (global) type I error in each case and is accompanied by a p-value and a graphical interpretation which shows which subtest or which distances of the used test function(s) lead to the rejection at the prescribed significance level of the test. Examples of null hypotheses from point process and random set statistics are used to demonstrate the strength of the rank envelope test. The examples include a goodness-of-fit test with several test functions, a goodness-of-fit test......
Probability, random variables, and random processes theory and signal processing applications
Shynk, John J
2012-01-01
Probability, Random Variables, and Random Processes is a comprehensive textbook on probability theory for engineers that provides a more rigorous mathematical framework than is usually encountered in undergraduate courses. It is intended for first-year graduate students who have some familiarity with probability and random variables, though not necessarily of random processes and systems that operate on random signals. It is also appropriate for advanced undergraduate students who have a strong mathematical background. The book has the following features: Several app
Pseudo random signal processing theory and application
Zepernick, Hans-Jurgen
2013-01-01
In recent years, pseudo random signal processing has proven to be a critical enabler of modern communication, information, security and measurement systems. The signal's pseudo random, noise-like properties make it vitally important as a tool for protecting against interference, alleviating multipath propagation and allowing the potential of sharing bandwidth with other users. Taking a practical approach to the topic, this text provides a comprehensive and systematic guide to understanding and using pseudo random signals. Covering theoretical principles, design methodologies and applications
Marked point process for modelling seismic activity (case study in Sumatra and Java)
Pratiwi, Hasih; Sulistya Rini, Lia; Wayan Mangku, I.
2018-05-01
Earthquake occurrence is a natural phenomenon that is random and irregular in space and time. To date, forecasting earthquake occurrence at a given location remains difficult, so earthquake forecast methodology is still being developed from both the seismological and the stochastic point of view. To explain such random natural phenomena, both in space and time, a point process approach can be used. There are two types of point processes: temporal point processes and spatial point processes. A temporal point process relates to events observed over time as a sequence of times, whereas a spatial point process describes the locations of objects in two- or three-dimensional space. The points of a point process can be labelled with additional information called marks. A marked point process can be considered as a pair (x, m), where x is the location of a point and m is the mark attached to that point. This study aims to model a marked point process indexed by time for earthquake data from Sumatra Island and Java Island. This model can be used to analyse seismic activity through its intensity function, conditioning on the history of the process up to time t. Based on data obtained from the U.S. Geological Survey from 1973 to 2017 with magnitude threshold 5, we obtained maximum likelihood estimates for the parameters of the intensity function. The estimates of the model parameters show that seismic activity in Sumatra Island is greater than in Java Island.
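The abstract does not state the intensity model explicitly; a common shape for a marked (magnitude-dependent) temporal conditional intensity of ETAS type can be sketched as follows, with all parameter names and default values invented for illustration:

```python
import numpy as np

def conditional_intensity(t, times, mags, mu=0.1, kappa=0.05,
                          alpha=1.0, beta=0.5, m0=5.0):
    """lambda(t | H_t) = mu + sum over past events t_i < t of
    kappa * exp(alpha*(m_i - m0)) * beta * exp(-beta*(t - t_i)),
    i.e. a background rate plus magnitude-boosted exponential aftershock decay."""
    past = times < t
    dt = t - times[past]
    boost = np.exp(alpha * (mags[past] - m0))   # magnitude-dependent productivity
    return mu + np.sum(kappa * boost * beta * np.exp(-beta * dt))
```

Maximum likelihood estimation, as in the study, would then maximize the log-likelihood built from this intensity over the observed catalogue.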
Elements of random walk and diffusion processes
Ibe, Oliver C
2013-01-01
Presents an important and unique introduction to random walk theory Random walk is a stochastic process that has proven to be a useful model in understanding discrete-state discrete-time processes across a wide spectrum of scientific disciplines. Elements of Random Walk and Diffusion Processes provides an interdisciplinary approach by including numerous practical examples and exercises with real-world applications in operations research, economics, engineering, and physics. Featuring an introduction to powerful and general techniques that are used in the application of physical and dynamic
Amorphous topological insulators constructed from random point sets
Mitchell, Noah P.; Nash, Lisa M.; Hexner, Daniel; Turner, Ari M.; Irvine, William T. M.
2018-04-01
The discovery that the band structure of electronic insulators may be topologically non-trivial has revealed distinct phases of electronic matter with novel properties1,2. Recently, mechanical lattices have been found to have similarly rich structure in their phononic excitations3,4, giving rise to protected unidirectional edge modes5-7. In all of these cases, however, as well as in other topological metamaterials3,8, the underlying structure was finely tuned, be it through periodicity, quasi-periodicity or isostaticity. Here we show that amorphous Chern insulators can be readily constructed from arbitrary underlying structures, including hyperuniform, jammed, quasi-crystalline and uniformly random point sets. While our findings apply to mechanical and electronic systems alike, we focus on networks of interacting gyroscopes as a model system. Local decorations control the topology of the vibrational spectrum, endowing amorphous structures with protected edge modes—with a chirality of choice. Using a real-space generalization of the Chern number, we investigate the topology of our structures numerically, analytically and experimentally. The robustness of our approach enables the topological design and self-assembly of non-crystalline topological metamaterials on the micro and macro scale.
Some probabilistic properties of fractional point processes
Garra, Roberto; Orsingher, Enzo; Scavino, Marco
2017-05-16
In this article, the first hitting times of generalized Poisson processes N^f(t), related to Bernstein functions f, are studied. For the space-fractional Poisson processes N^alpha(t), t > 0 (corresponding to f(x) = x^alpha), the hitting probabilities P{T_k^alpha < infinity} are explicitly obtained and analyzed. The processes N^f(t) are time-changed Poisson processes N(H^f(t)) with subordinators H^f(t), and here we study N(sum_{j=1}^n H^{f_j}(t)) and obtain probabilistic features of these extended counting processes. A section of the paper is devoted to processes of the form N(G_{H,nu}(t)), where G_{H,nu}(t) are generalized grey Brownian motions. This involves the theory of time-dependent fractional operators of the McBride form. While the time-fractional Poisson process is a renewal process, we prove that the space-time Poisson process is no longer a renewal process.
On statistical analysis of compound point process
Czech Academy of Sciences Publication Activity Database
Volf, Petr
2006-01-01
Roč. 35, 2-3 (2006), s. 389-396 ISSN 1026-597X R&D Projects: GA ČR(CZ) GA402/04/1294 Institutional research plan: CEZ:AV0Z10750506 Keywords: counting process * compound process * hazard function * Cox model Subject RIV: BB - Applied Statistics, Operational Research
Critical points of multidimensional random Fourier series: variance estimates
Nicolaescu, Liviu I.
2013-01-01
To any positive number $\\varepsilon$ and any nonnegative even Schwartz function $w:\\mathbb{R}\\to\\mathbb{R}$ we associate the random function $u^\\varepsilon$ on the $m$-torus $T^m_\\varepsilon:=\\mathbb{R}^m/(\\varepsilon^{-1}\\mathbb{Z})^m$ defined as the real part of the random Fourier series $$ \\sum_{\
Fundamentals of applied probability and random processes
Ibe, Oliver
2014-01-01
The long-awaited revision of Fundamentals of Applied Probability and Random Processes expands on the central components that made the first edition a classic. The title is based on the premise that engineers use probability as a modeling tool, and that probability can be applied to the solution of engineering problems. Engineers and students studying probability and random processes also need to analyze data, and thus need some knowledge of statistics. This book is designed to provide students with a thorough grounding in probability and stochastic processes, demonstrate their applicability t
Intensity-dependent point spread image processing
International Nuclear Information System (INIS)
Cornsweet, T.N.; Yellott, J.I.
1984-01-01
There is ample anatomical, physiological and psychophysical evidence that the mammalian retina contains networks that mediate interactions among neighboring receptors, resulting in interesting transformations between input images and their corresponding neural output patterns. The almost universally accepted view is that the principal form of interaction involves lateral inhibition, resulting in an output pattern that is the convolution of the input with a ''Mexican hat'' or difference-of-Gaussians spread function, having a positive center and a negative surround. A closely related process is widely applied in digital image processing, and in photography as ''unsharp masking''. The authors show that a simple and fundamentally different process, involving no inhibitory or subtractive terms, can also account for the physiological and psychophysical findings that have been attributed to lateral inhibition. This process also results in a number of fundamental effects that occur in mammalian vision and that would be of considerable significance in robotic vision, but which cannot be explained by lateral inhibitory interaction.
A random matrix approach to VARMA processes
International Nuclear Information System (INIS)
Burda, Zdzislaw; Jarosz, Andrzej; Nowak, Maciej A; Snarska, Malgorzata
2010-01-01
We apply random matrix theory to derive the spectral density of large sample covariance matrices generated by multivariate VMA(q), VAR(q) and VARMA(q1, q2) processes. In particular, we consider a limit where the number of random variables N and the number of consecutive time measurements T are large but the ratio N/T is fixed. In this regime, the underlying random matrices are asymptotically equivalent to free random variables (FRV). We apply the FRV calculus to calculate the eigenvalue density of the sample covariance for several VARMA-type processes. We explicitly solve the VARMA(1, 1) case and demonstrate perfect agreement between the analytical result and the spectra obtained by Monte Carlo simulations. The proposed method is purely algebraic and can be easily generalized to q1 > 1 and q2 > 1.
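A Monte Carlo check of the kind mentioned above can be sketched as follows, using N independent scalar VARMA(1, 1) series (a simplification of the multivariate setting; names and defaults are illustrative only):

```python
import numpy as np

def varma11_sample_spectrum(N=200, T=1000, a=0.3, b=0.2, seed=0):
    """Eigenvalues of the sample covariance C = X X^T / T for N independent
    VARMA(1,1) series of length T: x_t = a*x_{t-1} + e_t + b*e_{t-1}."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(size=(N, T + 1))
    X = np.zeros((N, T))
    x_prev = np.zeros(N)
    for t in range(T):
        x_prev = a * x_prev + eps[:, t + 1] + b * eps[:, t]
        X[:, t] = x_prev
    C = X @ X.T / T                 # sample covariance, N/T fixed at 0.2 here
    return np.linalg.eigvalsh(C)
```

The resulting histogram of eigenvalues can then be compared against an analytical FRV spectral density; the mean eigenvalue should match the stationary ARMA(1,1) variance (1 + 2ab + b^2)/(1 - a^2).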
Microbial profile and critical control points during processing of 'robo ...
African Journals Online (AJOL)
Microbial profile and critical control points during processing of 'robo' snack from ... the relevant critical control points especially in relation to raw materials and ... to the quality of the various raw ingredients used were the roasting using earthen
Point processes and the position distribution of infinite boson systems
International Nuclear Information System (INIS)
Fichtner, K.H.; Freudenberg, W.
1987-01-01
It is shown that to each locally normal state of a boson system one can associate a point process that can be interpreted as the position distribution of the state. The point process contains all information one can get by position measurements and is determined by the latter. On the other hand, to each so-called Σ^c-point process Q they relate a locally normal state with position distribution Q.
Zheng, Guanglou; Fang, Gengfa; Shankaran, Rajan; Orgun, Mehmet A; Zhou, Jie; Qiao, Li; Saleem, Kashif
2017-05-01
Generating random binary sequences (BSes) is a fundamental requirement in cryptography. A BS is a sequence of N bits, and each bit has a value of 0 or 1. For securing sensors within wireless body area networks (WBANs), electrocardiogram (ECG)-based BS generation methods have been widely investigated in which interpulse intervals (IPIs) from each heartbeat cycle are processed to produce BSes. Using these IPI-based methods to generate a 128-bit BS in real time normally takes around half a minute. In order to improve the time efficiency of such methods, this paper presents an ECG multiple fiducial-points based binary sequence generation (MFBSG) algorithm. The technique of discrete wavelet transforms is employed to detect the arrival time of these fiducial points, such as P, Q, R, S, and T peaks. Time intervals between them, including RR, RQ, RS, RP, and RT intervals, are then calculated based on this arrival time, and are used as ECG features to generate random BSes with low latency. According to our analysis on real ECG data, these ECG feature values exhibit the property of randomness and, thus, can be utilized to generate random BSes. Compared with the schemes that solely rely on IPIs to generate BSes, this MFBSG algorithm uses five feature values from one heartbeat cycle, and can be up to five times faster than the solely IPI-based methods. So, it achieves a design goal of low latency. According to our analysis, the complexity of the algorithm is comparable to that of fast Fourier transforms. These randomly generated ECG BSes can be used as security keys for encryption or authentication in a WBAN system.
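The abstract does not spell out how interval values are mapped to bits; one common construction (assumed here for illustration, not taken from the paper) keeps only each interval's least significant bits, which are the least predictable part of a physiological measurement:

```python
import numpy as np

def intervals_to_bits(intervals_ms, bits_per_interval=4):
    """Map each measured interval (in ms) to its low-order bits, MSB first.

    Low-order bits of physiological intervals carry most of the
    beat-to-beat variability, which is the usual rationale behind
    IPI-style key generation.
    """
    out = []
    for v in np.asarray(intervals_ms, dtype=np.int64):
        for k in range(bits_per_interval - 1, -1, -1):
            out.append(int((v >> k) & 1))
    return out
```

With five intervals per heartbeat (RR, RQ, RS, RP, RT) and four bits per interval, each beat contributes 20 bits, which is the source of the roughly five-fold speed-up over single-IPI schemes.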
Non-parametric Bayesian inference for inhomogeneous Markov point processes
DEFF Research Database (Denmark)
Berthelsen, Kasper Klitgaard; Møller, Jesper; Johansen, Per Michael
is a shot noise process, and the interaction function for a pair of points depends only on the distance between the two points and is a piecewise linear function modelled by a marked Poisson process. Simulation of the resulting posterior using a Metropolis-Hastings algorithm in the "conventional" way...
A tutorial on Palm distributions for spatial point processes
DEFF Research Database (Denmark)
Coeurjolly, Jean-Francois; Møller, Jesper; Waagepetersen, Rasmus Plenge
2017-01-01
This tutorial provides an introduction to Palm distributions for spatial point processes. Initially, in the context of finite point processes, we give an explicit definition of Palm distributions in terms of their density functions. Then we review Palm distributions in the general case. Finally, we...
Bassier, M.; Bonduel, M.; Van Genechten, B.; Vergauwen, M.
2017-11-01
Point cloud segmentation is a crucial step in scene understanding and interpretation. The goal is to decompose the initial data into sets of workable clusters with similar properties. Additionally, it is a key aspect in the automated procedure from point cloud data to BIM. Current approaches typically only segment a single type of primitive such as planes or cylinders. Also, current algorithms suffer from oversegmenting the data and are often sensor or scene dependent. In this work, a method is presented to automatically segment large unstructured point clouds of buildings. More specifically, the segmentation is formulated as a graph optimisation problem. First, the data is oversegmented with a greedy octree-based region growing method. The growing is conditioned on the segmentation of planes as well as smooth surfaces. Next, the candidate clusters are represented by a Conditional Random Field, after which the most likely configuration of candidate clusters is computed given a set of local and contextual features. The experiments show that the proposed method is a fast and reliable framework for unstructured point cloud segmentation. Processing speeds up to 40,000 points per second are recorded for the region growing. Additionally, the recall and precision of the graph clustering is approximately 80%. Overall, nearly 22% of oversegmentation is reduced by clustering the data. These clusters will be classified and used as a basis for the reconstruction of BIM models.
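A minimal sketch of greedy, normal-conditioned region growing of the kind used for the oversegmentation step (the interface, threshold, and k-nearest-neighbour connectivity are invented for the example; the paper's octree-based implementation is more elaborate):

```python
import numpy as np
from scipy.spatial import cKDTree

def region_grow(points, normals, k=10, angle_thresh_deg=10.0):
    """Greedy region growing: seed an unlabelled point, then add spatial
    neighbours whose unit normals deviate from the seed normal by less
    than angle_thresh_deg. Returns an integer region label per point."""
    tree = cKDTree(points)
    cos_t = np.cos(np.radians(angle_thresh_deg))
    labels = -np.ones(len(points), dtype=int)
    region = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = region
        while stack:
            i = stack.pop()
            _, nbrs = tree.query(points[i], k=k)
            for j in np.atleast_1d(nbrs):
                if labels[j] == -1 and abs(normals[j] @ normals[seed]) > cos_t:
                    labels[j] = region
                    stack.append(j)
        region += 1
    return labels
```

Comparing each candidate point to the seed normal keeps the clusters planar-ish; smooth-surface growing would instead compare to the nearest already-accepted point.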
Provable quantum advantage in randomness processing
Dale, H; Jennings, D; Rudolph, T
2015-01-01
Quantum advantage is notoriously hard to find and even harder to prove. For example the class of functions computable with classical physics actually exactly coincides with the class computable quantum-mechanically. It is strongly believed, but not proven, that quantum computing provides exponential speed-up for a range of problems, such as factoring. Here we address a computational scenario of "randomness processing" in which quantum theory provably yields, not only resource reduction over c...
Energy Technology Data Exchange (ETDEWEB)
Muecke, E.P.; Saias, I.; Zhu, B.
1996-05-01
This paper studies the point location problem in Delaunay triangulations without preprocessing and additional storage. The proposed procedure finds the query point simply by walking through the triangulation, after selecting a good starting point by random sampling. The analysis generalizes and extends a recent result for d = 2 dimensions by proving that this procedure takes expected time close to O(n^{1/(d+1)}) for point location in Delaunay triangulations of n random points in d = 3 dimensions. Empirical results in both two and three dimensions show that this procedure is efficient in practice.
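The jump-and-walk strategy can be sketched on top of an existing triangulation (here SciPy's Delaunay; the sample size n^{1/(d+1)} mirrors the quoted analysis, everything else is illustrative):

```python
import numpy as np
from scipy.spatial import Delaunay

def jump_and_walk(tri, q):
    """Locate the simplex of `tri` containing query point q: 'jump' to the
    sampled site nearest to q, then 'walk' across neighbouring simplices
    guided by barycentric coordinates. Returns -1 if q is outside the hull."""
    pts = tri.points
    n, d = pts.shape
    n_samples = max(1, int(round(n ** (1.0 / (d + 1)))))   # sample size from the analysis
    rng = np.random.default_rng(0)
    sample = rng.choice(n, size=n_samples, replace=False)
    start = sample[np.argmin(np.sum((pts[sample] - q) ** 2, axis=1))]
    s = tri.vertex_to_simplex[start]
    for _ in range(10 * len(tri.simplices)):               # safety cap
        T = tri.transform[s]
        b = T[:d] @ (q - T[d])
        bary = np.append(b, 1.0 - b.sum())
        i = np.argmin(bary)
        if bary[i] >= -1e-12:
            return s                                       # q inside simplex s
        s = tri.neighbors[s, i]                            # step toward q
        if s == -1:
            return -1                                      # left the convex hull
    return -1
```

SciPy's own `find_simplex` performs a similar walk internally; the point of the sketch is the random-sampling "jump" that shortens the walk to the expected sublinear length.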
SHAPE FROM TEXTURE USING LOCALLY SCALED POINT PROCESSES
Directory of Open Access Journals (Sweden)
Eva-Maria Didden
2015-09-01
Full Text Available Shape from texture refers to the extraction of 3D information from 2D images with irregular texture. This paper introduces a statistical framework to learn shape from texture where convex texture elements in a 2D image are represented through a point process. In a first step, the 2D image is preprocessed to generate a probability map corresponding to an estimate of the unnormalized intensity of the latent point process underlying the texture elements. The latent point process is subsequently inferred from the probability map in a non-parametric, model-free manner. Finally, the 3D information is extracted from the point pattern by applying a locally scaled point process model, where the local scaling function represents the deformation caused by the projection of a 3D surface onto a 2D image.
International Nuclear Information System (INIS)
Setti, Francesco; Bini, Ruggero; Lunardelli, Massimo; Bosetti, Paolo; Bruschi, Stefania; De Cecco, Mariolino
2012-01-01
Many contemporary works show the interest of the scientific community in measuring the shape of artefacts made by single point incremental forming. In this paper, we present an algorithm able to detect feature points with a random pattern, check the compatibility of associations exploiting multi-stereo constraints, reject outliers, and perform a 3D reconstruction by dense random patterns. The algorithm is suitable for real-time application; in fact, it needs just three images and synchronous, relatively fast processing. The proposed method has been tested on a simple geometry and the results have been compared with a coordinate measurement machine acquisition. (paper)
Fundamentals of applied probability and random processes
Ibe, Oliver
2005-01-01
This book is based on the premise that engineers use probability as a modeling tool, and that probability can be applied to the solution of engineering problems. Engineers and students studying probability and random processes also need to analyze data, and thus need some knowledge of statistics. This book is designed to provide students with a thorough grounding in probability and stochastic processes, demonstrate their applicability to real-world problems, and introduce the basics of statistics. The book's clear writing style and homework problems make it ideal for the classroom or for self-study. * Good and solid introduction to probability theory and stochastic processes * Logically organized; writing is presented in a clear manner * Choice of topics is comprehensive within the area of probability * Ample homework problems are organized into chapter sections
A MOSUM procedure for the estimation of multiple random change points
Eichinger, Birte; Kirch, Claudia
2018-01-01
In this work, we investigate statistical properties of change point estimators based on moving sum statistics. We extend results for testing in a classical situation with multiple deterministic change points by allowing for random exogenous change points that arise in Hidden Markov or regime switching models among others. To this end, we consider a multiple mean change model with possible time series errors and prove that the number and location of change points are estimated consistently by ...
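A bare-bones version of the moving-sum statistic and the thresholding step can be sketched as follows (a sketch under Gaussian-noise assumptions with an ad hoc threshold; the paper's estimators use properly calibrated asymptotic critical values):

```python
import numpy as np

def mosum(x, h):
    """MOSUM statistic: difference of moving sums over the h observations
    to the right and left of each time point k, scaled by sqrt(2h)."""
    x = np.asarray(x, dtype=float)
    c = np.concatenate(([0.0], np.cumsum(x)))
    n = len(x)
    k = np.arange(h, n - h + 1)
    right = c[k + h] - c[k]        # sum of x[k], ..., x[k+h-1]
    left = c[k] - c[k - h]         # sum of x[k-h], ..., x[k-1]
    return k, (right - left) / np.sqrt(2 * h)

def estimate_change_points(x, h, threshold):
    """Candidate change points: local maxima of |MOSUM| within each
    contiguous run of exceedances above the threshold."""
    k, stat = mosum(x, h)
    a = np.abs(stat)
    cps = []
    i = 0
    while i < len(a):
        if a[i] > threshold:
            j = i
            while j + 1 < len(a) and a[j + 1] > threshold:
                j += 1
            cps.append(k[i + np.argmax(a[i:j + 1])])
            i = j + 1
        else:
            i += 1
    return cps
```

Each exceedance run contributes one estimated change point, which is how multiple (possibly random) change points are recovered in a single pass.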
About the problem of generating three-dimensional pseudo-random points.
Carpintero, D. D.
The author demonstrates that a popular pseudo-random number generator is not adequate in some circumstances to generate n-dimensional random points, n > 2. This problem is particularly noxious when direction cosines are generated. He proposes several solutions, among them a good generator that satisfies all statistical criteria.
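The classic cautionary example of this failure mode (assumed here for illustration; the abstract does not name the generator) is IBM's RANDU, whose consecutive triples are exactly linearly dependent modulo 2^31 and therefore fall on a small number of parallel planes in 3D:

```python
import numpy as np

def randu(seed, n):
    """RANDU: x_{k+1} = 65539 * x_k mod 2^31 -- a classic LCG whose
    consecutive triples satisfy 9*x_k - 6*x_{k+1} + x_{k+2} = 0 (mod 2^31),
    so all 3D points (x_k, x_{k+1}, x_{k+2}) lie on at most 15 planes."""
    out = np.empty(n, dtype=np.int64)
    x = seed
    for i in range(n):
        x = (65539 * x) % 2**31
        out[i] = x
    return out

xs = randu(1, 3000)
triples = np.stack([xs[:-2], xs[1:-1], xs[2:]], axis=1)
# every consecutive triple satisfies the planar relation exactly:
resid = (9 * triples[:, 0] - 6 * triples[:, 1] + triples[:, 2]) % 2**31
```

Plotting the normalized triples and rotating the view makes the planes visible to the naked eye, which is precisely the defect that ruins direction-cosine sampling.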
Post-Processing in the Material-Point Method
DEFF Research Database (Denmark)
Andersen, Søren; Andersen, Lars Vabbersgaard
The material-point method (MPM) is a numerical method for dynamic or static analysis of solids using a discretization in time and space. The method has proven to be successful in modelling physical problems involving large deformations, which are difficult to model with traditional numerical tools such as the finite element method. In the material-point method, a set of material points is utilized to track the problem in time and space, while a computational background grid is utilized to obtain spatial derivatives relevant to the physical problem. Currently, the research within the material-point method...... The first idea involves associating a volume with each material point and displaying the deformation of this volume. In the discretization process, the physical domain is divided into a number of smaller volumes each represented by a simple shape; here quadrilaterals are chosen for the presented......
Asymptotic theory of weakly dependent random processes
Rio, Emmanuel
2017-01-01
Presenting tools to aid understanding of asymptotic theory and weakly dependent processes, this book is devoted to inequalities and limit theorems for sequences of random variables that are strongly mixing in the sense of Rosenblatt, or absolutely regular. The first chapter introduces covariance inequalities under strong mixing or absolute regularity. These covariance inequalities are applied in Chapters 2, 3 and 4 to moment inequalities, rates of convergence in the strong law, and central limit theorems. Chapter 5 concerns coupling. In Chapter 6 new deviation inequalities and new moment inequalities for partial sums via the coupling lemmas of Chapter 5 are derived and applied to the bounded law of the iterated logarithm. Chapters 7 and 8 deal with the theory of empirical processes under weak dependence. Lastly, Chapter 9 describes links between ergodicity, return times and rates of mixing in the case of irreducible Markov chains. Each chapter ends with a set of exercises. The book is an updated and extended ...
Probability, random processes, and ergodic properties
Gray, Robert M
1988-01-01
This book has been written for several reasons, not all of which are academic. This material was for many years the first half of a book in progress on information and ergodic theory. The intent was and is to provide a reasonably self-contained advanced treatment of measure theory, probability theory, and the theory of discrete time random processes with an emphasis on general alphabets and on ergodic and stationary properties of random processes that might be neither ergodic nor stationary. The intended audience was mathematically inclined engineering graduate students and visiting scholars who had not had formal courses in measure theoretic probability. Much of the material is familiar stuff for mathematicians, but many of the topics and results have not previously appeared in books. The original project grew too large and the first part contained much that would likely bore mathematicians and discourage them from the second part. Hence I finally followed the suggestion to separate the material and split...
Multivariate Product-Shot-noise Cox Point Process Models
DEFF Research Database (Denmark)
Jalilian, Abdollah; Guan, Yongtao; Mateu, Jorge
We introduce a new multivariate product-shot-noise Cox process which is useful for modeling multi-species spatial point patterns with clustering intra-specific interactions and neutral, negative or positive inter-specific interactions. The auto and cross pair correlation functions of the process...... can be obtained in closed analytical forms and approximate simulation of the process is straightforward. We use the proposed process to model interactions within and among five tree species in the Barro Colorado Island plot....
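The abstract above notes that approximate simulation of shot-noise Cox processes is straightforward. As a rough illustration only (a single-type process with Gaussian kernels and invented parameter values, not the authors' multivariate product construction), simulation by thinning a dominating Poisson process might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a single-type shot-noise Cox process on the unit square by
# thinning: parent "shots" are Poisson, each contributes a Gaussian kernel
# to the random intensity. All parameter values here are illustrative.
kappa = 10.0    # parent (shot) intensity
omega = 0.05    # kernel bandwidth
weight = 5.0    # mean number of offspring per shot

n_parents = rng.poisson(kappa)
parents = rng.uniform(0.0, 1.0, size=(n_parents, 2))

def intensity(x):
    """Shot-noise random intensity: sum of Gaussian kernels at the parents."""
    if n_parents == 0:
        return np.zeros(len(x))
    d2 = ((x[:, None, :] - parents[None, :, :]) ** 2).sum(-1)
    return weight * np.exp(-d2 / (2.0 * omega**2)).sum(1) / (2.0 * np.pi * omega**2)

# Thin a dominating homogeneous Poisson process; the sum of kernel peaks
# gives a safe upper bound on the random intensity.
lam_max = weight * n_parents / (2.0 * np.pi * omega**2)
n_cand = rng.poisson(lam_max)
cand = rng.uniform(0.0, 1.0, size=(n_cand, 2))
keep = rng.uniform(0.0, lam_max, size=n_cand) < intensity(cand)
points = cand[keep]
```

The retained `points` form one realization of the process; pair correlation functions would then be estimated from many such realizations.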
PROCESSING UAV AND LIDAR POINT CLOUDS IN GRASS GIS
Directory of Open Access Journals (Sweden)
V. Petras
2016-06-01
Today’s methods of acquiring Earth surface data, namely lidar and unmanned aerial vehicle (UAV) imagery, non-selectively collect or generate large amounts of points. Point clouds from different sources vary in their properties such as number of returns, density, or quality. We present a set of tools with applications for different types of point clouds obtained by a lidar scanner, the structure from motion technique (SfM), and a low-cost 3D scanner. To take advantage of the vertical structure of multiple-return lidar point clouds, we demonstrate tools that process them using 3D raster techniques, which allow, for example, the development of custom vegetation classification methods. Dense point clouds obtained from UAV imagery, often containing redundant points, can be decimated using various techniques before further processing. We implemented and compared several decimation techniques with regard to their performance and the final digital surface model (DSM). Finally, we describe the processing of a point cloud from a low-cost 3D scanner, namely the Microsoft Kinect, and its application for interaction with physical models. All the presented tools are open source and integrated in GRASS GIS, a multi-purpose open source GIS with remote sensing capabilities. The tools integrate with other open source projects, specifically the Point Data Abstraction Library (PDAL), the Point Cloud Library (PCL), and the OpenKinect libfreenect2 library, to benefit from the open source point cloud ecosystem. The implementation in GRASS GIS ensures long-term maintenance and reproducibility not only by the scientific community but also by the original authors themselves.
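Decimation of dense point clouds, mentioned above, can take several forms. A minimal grid-based variant (keep one representative point per cell) can be sketched as follows; this is a hypothetical illustration, not necessarily one of the techniques the paper compares:

```python
import numpy as np

rng = np.random.default_rng(6)

def grid_decimate(points, cell):
    """Keep the first point encountered in each cubic grid cell."""
    keys = np.floor(points / cell).astype(np.int64)      # cell index per point
    _, idx = np.unique(keys, axis=0, return_index=True)  # first hit per cell
    return points[np.sort(idx)]                          # preserve input order

# 5000 random 3-D points in a 10 x 10 x 10 box, thinned to <= 1000 cells
dense = rng.uniform(0.0, 10.0, size=(5000, 3))
thinned = grid_decimate(dense, cell=1.0)
```

Choosing the cell size trades point density against fidelity of the resulting digital surface model.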
Analysis of tree stand horizontal structure using random point field methods
Directory of Open Access Journals (Sweden)
O. P. Sekretenko
2015-06-01
This paper uses a model approach to analyze the horizontal structure of forest stands. The main types of models of random point fields, and the statistical procedures that can be used to analyze spatial patterns of trees in uneven- and even-aged stands, are described. We show how modern methods of spatial statistics can be used to address one of the objectives of forestry: to clarify the laws of natural thinning of forest stands and the corresponding changes in their spatial structure over time. Studying natural forest thinning, we describe the consecutive stages of modeling: selection of an appropriate parametric model, parameter estimation and generation of point patterns in accordance with the selected model, selection of statistical functions to describe the horizontal structure of forest stands, and testing of statistical hypotheses. We show the possibilities of a specialized software package, spatstat, which is designed to meet the challenges of spatial statistics and provides software support for modern methods of analysis of spatial data. We show that a model of stand thinning that does not consider inter-tree interaction can predict the size distribution of the trees properly, but the spatial pattern of the modeled stand is not quite consistent with observed data. Using data from three even-aged pine stands of 25, 55, and 90 years of age, we demonstrate that spatial point process models are useful for combining measurements from forest stands of different ages to study natural forest thinning.
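The spatstat package referenced above is R software. Purely as a language-neutral illustration of the kind of summary statistic involved in such analyses, a naive (edge-uncorrected) estimate of Ripley's K function can be written as:

```python
import numpy as np

rng = np.random.default_rng(1)

def ripley_k(points, radii, area=1.0):
    """Naive, edge-uncorrected estimate of Ripley's K on the unit square."""
    n = len(points)
    d = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)    # exclude self-pairs
    lam = n / area                 # estimated intensity
    return np.array([(d < r).sum() / (n * lam) for r in radii])

# Under complete spatial randomness K(r) is close to pi * r**2;
# clustered tree patterns lie above it, regular (inhibited) ones below.
csr = rng.uniform(0.0, 1.0, size=(400, 2))
k_hat = ripley_k(csr, radii=[0.02, 0.05, 0.10])
```

Production analyses would use spatstat's edge-corrected estimators rather than this toy version.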
DEFF Research Database (Denmark)
Häggström, Olle; Lieshout, Marie-Colette van; Møller, Jesper
1999-01-01
The area-interaction process and the continuum random-cluster model are characterized in terms of certain functional forms of their respective conditional intensities. In certain cases, these two point process models can be derived from a bivariate point process model which in many respects...... is simpler to analyse and simulate. Using this correspondence we devise a two-component Gibbs sampler, which can be used for fast and exact simulation by extending the recent ideas of Propp and Wilson. We further introduce a Swendsen-Wang type algorithm. The relevance of the results within spatial statistics...
MODELLING AND SIMULATION OF A NEUROPHYSIOLOGICAL EXPERIMENT BY SPATIO-TEMPORAL POINT PROCESSES
Directory of Open Access Journals (Sweden)
Viktor Beneš
2011-05-01
We present a stochastic model of an experiment monitoring the spiking activity of a place cell in the hippocampus of an experimental animal moving in an arena. A doubly stochastic spatio-temporal point process is used to model and quantify overdispersion. The stochastic intensity is modelled by a Lévy-based random field, while the animal path is simplified to a discrete random walk. In a simulation study, a previously suggested method is applied first. Then it is shown that a solution of the filtering problem yields the desired inference on the random intensity. Two approaches are suggested and the new one, based on a finite point process density, is applied. Using Markov chain Monte Carlo we obtain numerical results from the simulated model. The methodology is discussed.
Dew point vs bubble point: a misunderstood constraint on gravity drainage processes
Energy Technology Data Exchange (ETDEWEB)
Nenninger, J. [N-Solv Corp., Calgary, AB (Canada); Gunnewiek, L. [Hatch Ltd., Mississauga, ON (Canada)
2009-07-01
This study demonstrated that gravity drainage processes that use blended fluids such as solvents have an inherently unstable material balance due to differences between dew point and bubble point compositions. The instability can lead to the accumulation of volatile components within the chamber, and impair mass and heat transfer processes. Case studies were used to demonstrate the large temperature gradients within the vapour chamber caused by temperature differences between the bubble point and dew point for blended fluids. A review of published data showed that many experiments on in-situ processes do not account for unstable material balances caused by a lack of steam trap control. A study of temperature profiles during steam assisted gravity drainage (SAGD) studies showed significant temperature depressions caused by methane accumulations at the outside perimeter of the steam chamber. It was demonstrated that the condensation of large volumes of purified solvents provided an efficient mechanism for the removal of methane from the chamber. It was concluded that gravity drainage processes can be optimized by using pure propane during the injection process. 22 refs., 1 tab., 18 figs.
A MARKED POINT PROCESS MODEL FOR VEHICLE DETECTION IN AERIAL LIDAR POINT CLOUDS
Directory of Open Access Journals (Sweden)
A. Börcs
2012-07-01
In this paper we present an automated method for vehicle detection in LiDAR point clouds of crowded urban areas collected from an aerial platform. We assume that the input cloud is unordered, but it contains additional intensity and return number information which are jointly exploited by the proposed solution. Firstly, the 3-D point set is segmented into ground, vehicle, building roof, vegetation and clutter classes. Then the points with the corresponding class labels and intensity values are projected to the ground plane, where the optimal vehicle configuration is described by a Marked Point Process (MPP) model of 2-D rectangles. Finally, the Multiple Birth and Death algorithm is utilized to find the configuration with the highest confidence.
Traffic and random processes: an introduction
Mauro, Raffaele
2015-01-01
This book deals in a basic and systematic manner with the fundamentals of random function theory and looks at some aspects related to arrival, vehicle headway and operational speed processes. The work serves as a useful practical and educational tool and aims at providing stimulus and motivation to investigate issues of such strong applicative interest. It has a clearly discursive and concise structure, in which numerical examples are given to clarify the applications of the suggested theoretical models. Some statistical characterizations are fully developed in order to illustrate the peculiarities of specific modeling approaches; finally, there is a useful bibliography for in-depth thematic analysis.
Pointo - a Low Cost Solution to Point Cloud Processing
Houshiar, H.; Winkler, S.
2017-11-01
With advances in technology, access to data, especially 3D point cloud data, becomes more and more an everyday task. 3D point clouds are usually captured with very expensive tools such as 3D laser scanners, or with very time-consuming methods such as photogrammetry. Most of the available software for 3D point cloud processing is designed for experts and specialists in this field, and usually comes as a very large package containing a variety of methods and tools. This results in software that is expensive to acquire and difficult to use, the difficulty being caused by the complicated user interfaces required to accommodate a long list of features. The aim of these complex packages is to provide a powerful tool for a specific group of specialists. However, they are not necessarily required by the majority of upcoming average users of point clouds. In addition to their complexity and high cost, these packages generally rely on expensive, modern hardware and are only compatible with one specific operating system. Many point cloud customers are not point cloud processing experts and are not willing to bear the high acquisition costs of such software and hardware. In this paper we introduce a solution for low-cost point cloud processing. Our approach is designed to accommodate the needs of the average point cloud user. To reduce cost and complexity, our approach focuses on one functionality at a time, in contrast with most available software and tools that aim to solve as many problems as possible at the same time. The simple, user-oriented design improves the user experience and allows us to optimize our methods for the creation of efficient software. In this paper we introduce the Pointo family, a series of connected programs providing easy-to-use tools with a simple design for different point cloud processing requirements. PointoVIEWER and PointoCAD are introduced as the first components of the Pointo family to provide a
On estimation of the intensity function of a point process
Lieshout, van M.N.M.
2010-01-01
Estimation of the intensity function of spatial point processes is a fundamental problem. In this paper, we interpret the Delaunay tessellation field estimator recently introduced by Schaap and Van de Weygaert as an adaptive kernel estimator and give explicit expressions for the mean and
Spatio-temporal point process filtering methods with an application
Czech Academy of Sciences Publication Activity Database
Frcalová, B.; Beneš, V.; Klement, Daniel
2010-01-01
Roč. 21, 3-4 (2010), s. 240-252 ISSN 1180-4009 R&D Projects: GA AV ČR(CZ) IAA101120604 Institutional research plan: CEZ:AV0Z50110509 Keywords: Cox point process * filtering * spatio-temporal modelling * spike Subject RIV: BA - General Mathematics Impact factor: 0.750, year: 2010
A case study on point process modelling in disease mapping
Czech Academy of Sciences Publication Activity Database
Beneš, Viktor; Bodlák, M.; Møller, J.; Waagepetersen, R.
2005-01-01
Roč. 24, č. 3 (2005), s. 159-168 ISSN 1580-3139 R&D Projects: GA MŠk 0021620839; GA ČR GA201/03/0946 Institutional research plan: CEZ:AV0Z10750506 Keywords : log Gaussian Cox point process * Bayesian estimation Subject RIV: BB - Applied Statistics, Operational Research
A J–function for inhomogeneous point processes
M.N.M. van Lieshout (Marie-Colette)
2010-01-01
We propose new summary statistics for intensity-reweighted moment stationary point processes that generalise the well known J-, empty space, and nearest-neighbour distance distribution functions, represent them in terms of generating functionals and conditional intensities, and relate
Polewski, Przemyslaw; Yao, Wei; Heurich, Marco; Krzystek, Peter; Stilla, Uwe
2018-06-01
In this study, we present a method for improving the quality of automatic single fallen tree stem segmentation in ALS data by applying a specialized constrained conditional random field (CRF). The entire processing pipeline is composed of two steps. First, short stem segments of equal length are detected and a subset of them is selected for further processing, while in the second step the chosen segments are merged to form entire trees. The first step is accomplished using the specialized CRF defined on the space of segment labelings, capable of finding segment candidates which are easier to merge subsequently. To achieve this, the CRF considers not only the features of every candidate individually, but incorporates pairwise spatial interactions between adjacent segments into the model. In particular, pairwise interactions include a collinearity/angular deviation probability which is learned from training data as well as the ratio of spatial overlap, whereas unary potentials encode a learned probabilistic model of the laser point distribution around each segment. Each of these components enters the CRF energy with its own balance factor. To process previously unseen data, we first calculate the subset of segments for merging on a grid of balance factors by minimizing the CRF energy. Then, we perform the merging and rank the balance configurations according to the quality of their resulting merged trees, obtained from a learned tree appearance model. The final result is derived from the top-ranked configuration. We tested our approach on 5 plots from the Bavarian Forest National Park using reference data acquired in a field inventory. Compared to our previous segment selection method without pairwise interactions, an increase in detection correctness and completeness of up to 7 and 9 percentage points, respectively, was observed.
ERROR DISTRIBUTION EVALUATION OF THE THIRD VANISHING POINT BASED ON RANDOM STATISTICAL SIMULATION
Directory of Open Access Journals (Sweden)
C. Li
2012-07-01
POS, integrated by GPS/INS (Inertial Navigation Systems), has allowed rapid and accurate determination of the position and attitude of remote sensing equipment for MMS (Mobile Mapping Systems). However, not only does INS have system errors, it is also very expensive. Therefore, in this paper the error distributions of vanishing points are studied and tested in order to substitute for INS in MMS in some special land-based scenes, such as ground façades where usually only two vanishing points can be detected. Thus, the traditional calibration approach based on three orthogonal vanishing points is being challenged. In this article, firstly, the line clusters, which are parallel to each other in object space and correspond to the vanishing points, are detected based on RANSAC (Random Sample Consensus) and a parallelism geometric constraint. Secondly, condition adjustment with parameters is utilized to estimate the nonlinear error equations of two vanishing points (VX, VY), and we present how to set initial weights for the adjustment solution of single-image vanishing points. The vanishing points are solved and their error distributions estimated by an iteration method with variable weights, the co-factor matrix and error ellipse theory. Thirdly, given the error ellipses of the two vanishing points (VX, VY) and the triangle geometric relationship of the three vanishing points, the error distribution of the third vanishing point (VZ) is calculated and evaluated by random statistical simulation, ignoring camera distortion. Moreover, the Monte Carlo methods utilized for the random statistical estimation are presented. Finally, experimental results for the vanishing point coordinates and their error distributions are shown and analyzed.
Random fixed point equations and inverse problems using "collage method" for contraction mappings
Kunze, H. E.; La Torre, D.; Vrscay, E. R.
2007-10-01
In this paper we are interested in the direct and inverse problems for the following class of random fixed point equations: T(w, x(w)) = x(w), where T : Ω × X → X is a given operator, Ω is a probability space and X is a Polish metric space. The inverse problem is solved by recourse to the collage theorem for contractive maps. We then consider two applications: (i) random integral equations, and (ii) random iterated function systems with greyscale maps (RIFSM), for which noise is added to the classical IFSM.
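The collage theorem invoked above rests on the bound |x − x*| ≤ |x − T(x)| / (1 − c) for a contraction T with factor c and fixed point x*. A deterministic toy version on the real line (an invented affine map, not the paper's random-operator setting) makes the idea concrete:

```python
# Collage bound for a contraction on the real line:
#   |x - x_star| <= |x - T(x)| / (1 - c),
# so minimizing the collage distance |x - T(x)| drives x toward the
# fixed point. Illustrative affine contraction with factor c = 0.5:
c = 0.5

def T(x):
    return c * x + 1.0           # fixed point x_star = 2

x = 10.0
for _ in range(60):              # Banach fixed-point iteration
    x = T(x)                     # converges geometrically to 2

trial = 5.0
collage_dist = abs(trial - T(trial))   # collage distance at a trial point
bound = collage_dist / (1.0 - c)       # guaranteed bound on |trial - x_star|
```

In the inverse problem one searches over a parametrized family of such maps for the one minimizing the collage distance to the observed data.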
Some properties of point processes in statistical optics
International Nuclear Information System (INIS)
Picinbono, B.; Bendjaballah, C.
2010-01-01
The analysis of the statistical properties of the point process (PP) of photon detection times can be used to determine whether or not an optical field is classical, in the sense that its statistical description does not require the methods of quantum optics. This determination is, however, more difficult than ordinarily admitted and the first aim of this paper is to illustrate this point by using some results of the PP theory. For example, it is well known that the analysis of the photodetection of classical fields exhibits the so-called bunching effect. But this property alone cannot be used to decide the nature of a given optical field. Indeed, we have presented examples of point processes for which a bunching effect appears and yet they cannot be obtained from a classical field. These examples are illustrated by computer simulations. Similarly, it is often admitted that for fields with very low light intensity the bunching or antibunching can be described by using the statistical properties of the distance between successive events of the point process, which simplifies the experimental procedure. We have shown that, while this property is valid for classical PPs, it has no reason to be true for nonclassical PPs, and we have presented some examples of this situation also illustrated by computer simulations.
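The bunching discussed above is commonly quantified by the reduced variance (Fano factor) F = Var(N)/E[N] of the photocounts. A small simulation with illustrative rates (not the paper's computer experiments) contrasts an ordinary Poisson stream with a doubly stochastic one:

```python
import numpy as np

rng = np.random.default_rng(2)

# Fano factor of photocounts: ordinary Poisson counting gives F = 1, a
# doubly stochastic (mixed Poisson / Cox) stream gives F > 1 ("bunching");
# F < 1 would signal a nonclassical field. All rates are illustrative.
n_trials = 200_000

n_poisson = rng.poisson(5.0, size=n_trials)            # constant intensity

lam = rng.gamma(shape=2.0, scale=2.5, size=n_trials)   # random intensity, mean 5
n_mixed = rng.poisson(lam)                             # mixed Poisson counts

def fano(n):
    return n.var() / n.mean()
```

For the gamma mixture above the theoretical Fano factor is 1 + Var(λ)/E[λ] = 3.5, well into the bunched regime.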
Shot-noise-weighted processes: a new family of spatial point processes
M.N.M. van Lieshout (Marie-Colette); I.S. Molchanov (Ilya)
1995-01-01
The paper suggests a new family of spatial point process distributions. They are defined by means of densities with respect to the Poisson point process within a bounded set. These densities are given in terms of a functional of the shot-noise process with a given influence
International Nuclear Information System (INIS)
Boyer, T.H.
1975-01-01
The theory of classical electrodynamics with classical electromagnetic zero-point radiation is outlined here under the title random electrodynamics. The work represents a reanalysis of the bounds of validity of classical electron theory which should sharpen the understanding of the connections and distinctions between classical and quantum theories. The new theory of random electrodynamics is a classical electron theory involving Newton's equations for particle motion due to the Lorentz force, and Maxwell's equations for the electromagnetic fields with point particles as sources. However, the theory departs from the classical electron theory of Lorentz in that it adopts a new boundary condition on Maxwell's equations. It is assumed that the homogeneous boundary condition involves random classical electromagnetic radiation with a Lorentz-invariant spectrum, classical electromagnetic zero-point radiation. The implications of random electrodynamics for atomic structure, atomic spectra, and particle-interference effects are discussed on an order-of-magnitude or heuristic level. Some detailed mathematical connections and some merely heuristic connections are noted between random electrodynamics and quantum theory. (U.S.)
Hematological clozapine monitoring with a point-of-care device: A randomized cross-over trial
DEFF Research Database (Denmark)
Nielsen, Jimmi; Thode, Dorrit; Stenager, Elsebeth
for several reasons, perhaps most importantly because of the mandatory hematological monitoring. The Chempaq Express Blood Counter (Chempaq XBC) is a point-of-care device providing counts of white blood cells (WBC) and granulocytes based on a capillary blood sampling. A randomized cross-over trial design...
Statistical theory of dislocation configurations in a random array of point obstacles
International Nuclear Information System (INIS)
Labusch, R.
1977-01-01
The stable configurations of a dislocation in an infinite random array of point obstacles are analyzed using the mathematical methods of statistical mechanics. The theory provides exact distribution functions of the forces on pinning points and of the link lengths between points on the line. The expected number of stable configurations is a function of the applied stress. This number drops to zero at the critical stress. Due to a degeneracy problem in the line count, the value of the flow stress cannot be determined rigorously, but we can give a good approximation that is very close to the empirical value
Two-step estimation for inhomogeneous spatial point processes
DEFF Research Database (Denmark)
Waagepetersen, Rasmus; Guan, Yongtao
This paper is concerned with parameter estimation for inhomogeneous spatial point processes with a regression model for the intensity function and tractable second order properties (K-function). Regression parameters are estimated using a Poisson likelihood score estimating function and in a second...... step minimum contrast estimation is applied for the residual clustering parameters. Asymptotic normality of parameter estimates is established under certain mixing conditions and we exemplify how the results may be applied in ecological studies of rain forests....
A case study on point process modelling in disease mapping
DEFF Research Database (Denmark)
Møller, Jesper; Waagepetersen, Rasmus Plenge; Benes, Viktor
2005-01-01
of the risk on the covariates. Instead of using the common areal level approaches we base the analysis on a Bayesian approach for a log Gaussian Cox point process with covariates. Posterior characteristics for a discretized version of the log Gaussian Cox process are computed using Markov chain Monte Carlo...... methods. A particular problem which is thoroughly discussed is to determine a model for the background population density. The risk map shows a clear dependency with the population intensity models and the basic model which is adopted for the population intensity determines what covariates influence...... the risk of TBE. Model validation is based on the posterior predictive distribution of various summary statistics....
DEFF Research Database (Denmark)
Møller, Jesper; Diaz-Avalos, Carlos
Spatio-temporal Cox point process models with a multiplicative structure for the driving random intensity, incorporating covariate information into temporal and spatial components, and with a residual term modelled by a shot-noise process, are considered. Such models are flexible and tractable fo...... dataset consisting of 2796 days and 5834 spatial locations of fires. The model is compared with a spatio-temporal log-Gaussian Cox point process model, and likelihood-based methods are discussed to some extent....
UNDERSTANDING SEVERE WEATHER PROCESSES THROUGH SPATIOTEMPORAL RELATIONAL RANDOM FORESTS
National Aeronautics and Space Administration — UNDERSTANDING SEVERE WEATHER PROCESSES THROUGH SPATIOTEMPORAL RELATIONAL RANDOM FORESTS AMY MCGOVERN, TIMOTHY SUPINIE, DAVID JOHN GAGNE II, NATHANIEL TROUTMAN,...
A Marked Point Process Framework for Extracellular Electrical Potentials
Directory of Open Access Journals (Sweden)
Carlos A. Loza
2017-12-01
Neuromodulations are an important component of extracellular electrical potentials (EEP), such as the Electroencephalogram (EEG), Electrocorticogram (ECoG) and Local Field Potentials (LFP). This spatiotemporally organized multi-frequency transient (phasic) activity reflects the multiscale spatiotemporal synchronization of neuronal populations in response to external stimuli or internal physiological processes. We propose a novel generative statistical model of a single EEP channel, where the collected signal is regarded as the noisy addition of reoccurring, multi-frequency phasic events over time. One of the main advantages of the proposed framework is the exceptional temporal resolution in the time location of the EEP phasic events, e.g., up to the sampling period utilized in the data collection. Therefore, this allows for the first time a description of neuromodulation in EEPs as a Marked Point Process (MPP), represented by amplitude, center frequency, duration, and time of occurrence. The generative model for the multi-frequency phasic events exploits sparseness and involves a shift-invariant implementation of the clustering technique known as k-means. The cost function incorporates a robust estimation component based on correntropy to mitigate the outliers caused by the inherent noise in the EEP. Lastly, the background EEP activity is explicitly modeled as the non-sparse component of the collected signal to further improve the delineation of the multi-frequency phasic events in time. The framework is validated using two publicly available datasets: the DREAMS sleep spindles database and one of the Brain-Computer Interface (BCI) competition datasets. The results achieve benchmark performance and provide novel quantitative descriptions based on power, event rates and timing in order to assess behavioral correlates beyond the classical power spectrum-based analysis. This opens the possibility for a unifying point process framework of
Framework for adaptive multiscale analysis of nonhomogeneous point processes.
Helgason, Hannes; Bartroff, Jay; Abry, Patrice
2011-01-01
We develop the methodology for hypothesis testing and model selection in nonhomogeneous Poisson processes, with an eye toward the application of modeling and variability detection in heart beat data. Modeling the process's non-constant rate function using templates of simple basis functions, we develop the generalized likelihood ratio statistic for a given template and a multiple testing scheme to model-select from a family of templates. A dynamic programming algorithm inspired by network flows is used to compute the maximum likelihood template in a multiscale manner. In a numerical example, the proposed procedure is nearly as powerful as the super-optimal procedures that know the true template size and true partition, respectively. Extensions to general history-dependent point processes are discussed.
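The generalized likelihood ratio statistic for templates of basis functions, as described above, reduces in the simplest case to comparing piecewise-constant maximum-likelihood rates. A two-bin sketch (invented rates and template, not the authors' template family or their dynamic program):

```python
import numpy as np

rng = np.random.default_rng(3)

# GLR sketch for a nonhomogeneous Poisson process on [0, 1]: constant rate
# (null) vs. a two-bin piecewise-constant template (alternative). For a bin
# of width w containing k events the maximized log-likelihood contribution
# is k*log(k/w) - k, since the MLE of the bin rate is k/w.
def glr_two_bin(events, split=0.5):
    n = len(events)
    n1 = int((events < split).sum())
    bins_alt = [(n1, split), (n - n1, 1.0 - split)]
    ll = lambda bins: sum(k * np.log(k / w) - k for k, w in bins if k > 0)
    return 2.0 * (ll(bins_alt) - ll([(n, 1.0)]))

# events from a genuinely inhomogeneous process: rate 50 on [0, 0.5), 150 after
events = np.concatenate([
    rng.uniform(0.0, 0.5, size=rng.poisson(25)),
    rng.uniform(0.5, 1.0, size=rng.poisson(75)),
])
stat = glr_two_bin(events)   # compare against a chi-squared(1) quantile
```

Because the constant-rate model is nested in the two-bin template, the statistic is nonnegative and is referred to a chi-squared distribution for testing.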
Simple computation of reaction–diffusion processes on point clouds
Macdonald, Colin B.; Merriman, Barry; Ruuth, Steven J.
2013-01-01
The study of reaction-diffusion processes is much more complicated on general curved surfaces than on standard Cartesian coordinate spaces. Here we show how to formulate and solve systems of reaction-diffusion equations on surfaces in an extremely simple way, using only the standard Cartesian form of differential operators, and a discrete unorganized point set to represent the surface. Our method decouples surface geometry from the underlying differential operators. As a consequence, it becomes possible to formulate and solve rather general reaction-diffusion equations on general surfaces without having to consider the complexities of differential geometry or sophisticated numerical analysis. To illustrate the generality of the method, computations for surface diffusion, pattern formation, excitable media, and bulk-surface coupling are provided for a variety of complex point cloud surfaces.
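The paper's method formulates surface PDEs with standard Cartesian operators. As a much cruder stand-in (explicitly not the authors' closest-point technique), diffusion on a discrete unorganized point set can also be mimicked with a graph Laplacian, which conveys the flavor of solving u_t = Δu on scattered points:

```python
import numpy as np

rng = np.random.default_rng(5)

# Heat flow on a scattered 2-D point set via an unnormalized graph
# Laplacian. NOTE: a crude illustration, not the closest point method.
pts = rng.uniform(0.0, 1.0, size=(300, 2))
d2 = ((pts[:, None] - pts[None, :]) ** 2).sum(-1)
W = np.exp(-d2 / 0.05**2)              # Gaussian affinities between points
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(1)) - W              # rows of L sum to zero

u0 = (pts[:, 0] > 0.5).astype(float)   # initial step profile
u = u0.copy()
dt = 1.0 / (2.0 * np.abs(L).sum(1).max())   # conservative explicit step
for _ in range(200):
    u = u - dt * (L @ u)               # explicit Euler; smooths the step
```

The step size is chosen so each update is a convex averaging, which preserves the mean of u and keeps it within its initial bounds while diffusion flattens the profile.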
Digital analyzer for point processes based on first-in-first-out memories
Basano, Lorenzo; Ottonello, Pasquale; Schiavi, Enore
1992-06-01
We present an entirely new version of a multipurpose instrument designed for the statistical analysis of point processes, especially those characterized by high bunching. A long sequence of pulses can be recorded in the RAM bank of a personal computer via a suitably designed front end which employs a pair of first-in-first-out (FIFO) memories; these allow one to build an analyzer that, besides being simpler from the electronic point of view, is capable of sustaining much higher intensity fluctuations of the point process. The overflow risk of the device is evaluated by treating the FIFO pair as a queueing system. The apparatus was tested using both a deterministic signal and a sequence of photoelectrons obtained from laser light scattered by random surfaces.
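The overflow analysis of the FIFO pair lends itself to a quick numerical check. The following Monte Carlo sketch (our own illustration; the function name, buffer depths and arrival/service probabilities are assumptions, not values from the paper) estimates the probability that a finite FIFO buffer overflows under bursty arrivals:

```python
import random

def fifo_overflow_prob(depth, arrival_p, service_p, n_steps, trials=200, seed=1):
    """Monte Carlo estimate of the overflow probability of a finite FIFO buffer.

    Each time step a pulse arrives with probability arrival_p and the read-out
    side drains one word with probability service_p.  A trial counts as an
    overflow when the occupancy exceeds `depth` at any point.
    """
    rng = random.Random(seed)
    overflows = 0
    for _ in range(trials):
        occupancy = 0
        for _ in range(n_steps):
            if rng.random() < arrival_p:
                occupancy += 1
            if rng.random() < service_p and occupancy > 0:
                occupancy -= 1
            if occupancy > depth:
                overflows += 1
                break
    return overflows / trials

# A deep buffer drained faster than it fills, versus a shallow buffer
# facing heavily bunched arrivals.
safe = fifo_overflow_prob(depth=64, arrival_p=0.3, service_p=0.6, n_steps=10_000)
risky = fifo_overflow_prob(depth=4, arrival_p=0.8, service_p=0.3, n_steps=10_000)
```

As expected from queueing theory, the negatively drifted buffer practically never overflows, while the shallow buffer under heavy bunching overflows almost surely.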
Random skew plane partitions and the Pearcey process
DEFF Research Database (Denmark)
Reshetikhin, Nicolai; Okounkov, Andrei
2007-01-01
We study random skew 3D partitions weighted by q^vol and, specifically, the q → 1 asymptotics of local correlations near various points of the limit shape. We obtain sine-kernel asymptotics for correlations in the bulk of the disordered region, Airy kernel asymptotics near a general point of the ...
Statistical representation of a spray as a point process
International Nuclear Information System (INIS)
Subramaniam, S.
2000-01-01
The statistical representation of a spray as a finite point process is investigated. One objective is to develop a better understanding of how single-point statistical information contained in descriptions such as the droplet distribution function (ddf), relates to the probability density functions (pdfs) associated with the droplets themselves. Single-point statistical information contained in the droplet distribution function (ddf) is shown to be related to a sequence of single surrogate-droplet pdfs, which are in general different from the physical single-droplet pdfs. It is shown that the ddf contains less information than the fundamental single-point statistical representation of the spray, which is also described. The analysis shows which events associated with the ensemble of spray droplets can be characterized by the ddf, and which cannot. The implications of these findings for the ddf approach to spray modeling are discussed. The results of this study also have important consequences for the initialization and evolution of direct numerical simulations (DNS) of multiphase flows, which are usually initialized on the basis of single-point statistics such as the droplet number density in physical space. If multiphase DNS are initialized in this way, this implies that even the initial representation contains certain implicit assumptions concerning the complete ensemble of realizations, which are invalid for general multiphase flows. Also the evolution of a DNS initialized in this manner is shown to be valid only if an as yet unproven commutation hypothesis holds true. Therefore, it is questionable to what extent DNS that are initialized in this manner constitute a direct simulation of the physical droplets. Implications of these findings for large eddy simulations of multiphase flows are also discussed. (c) 2000 American Institute of Physics
Energy risk management through self-exciting marked point process
International Nuclear Information System (INIS)
Herrera, Rodrigo
2013-01-01
Crude oil is a dynamically traded commodity that affects many economies. We propose a collection of marked self-exciting point processes with dependent arrival rates for extreme events in oil markets and related risk measures. The models treat the time among extreme events in oil markets as a stochastic process. The main advantage of this approach is its capability to capture the short-, medium- and long-term behavior of extremes without involving an arbitrary stochastic volatility model or a prefiltration of the data, as is common in extreme value theory applications. We make use of the proposed model in order to obtain an improved estimate for the Value at Risk in oil markets. Empirical findings suggest that the reliability and stability of Value at Risk estimates improve as a result of the finer modeling approach. This is supported by an empirical application in the representative West Texas Intermediate (WTI) and Brent crude oil markets. - Highlights: • We propose marked self-exciting point processes for extreme events in oil markets. • This approach captures the short- and long-term behavior of extremes. • We improve the estimates for the VaR in the WTI and Brent crude oil markets
Weak convergence of marked point processes generated by crossings of multivariate jump processes
DEFF Research Database (Denmark)
Tamborrino, Massimiliano; Sacerdote, Laura; Jacobsen, Martin
2014-01-01
We consider the multivariate point process determined by the crossing times of the components of a multivariate jump process through a multivariate boundary, assuming to reset each component to an initial value after its boundary crossing. We prove that this point process converges weakly...... process converging to a multivariate Ornstein–Uhlenbeck process is discussed as a guideline for applying diffusion limits for jump processes. We apply our theoretical findings to neural network modeling. The proposed model gives a mathematical foundation to the generalization of the class of Leaky...
Variational approach for spatial point process intensity estimation
DEFF Research Database (Denmark)
Coeurjolly, Jean-Francois; Møller, Jesper
is assumed to be of log-linear form β+θ⊤z(u) where z is a spatial covariate function and the focus is on estimating θ. The variational estimator is very simple to implement and quicker than alternative estimation procedures. We establish its strong consistency and asymptotic normality. We also discuss its...... finite-sample properties in comparison with the maximum first order composite likelihood estimator when considering various inhomogeneous spatial point process models and dimensions as well as settings where z is completely or only partially known....
Two-step estimation for inhomogeneous spatial point processes
DEFF Research Database (Denmark)
Waagepetersen, Rasmus; Guan, Yongtao
2009-01-01
The paper is concerned with parameter estimation for inhomogeneous spatial point processes with a regression model for the intensity function and tractable second-order properties (K-function). Regression parameters are estimated by using a Poisson likelihood score estimating function and in the ...... and in the second step minimum contrast estimation is applied for the residual clustering parameters. Asymptotic normality of parameter estimates is established under certain mixing conditions and we exemplify how the results may be applied in ecological studies of rainforests....
A Computerized Approach to Trickle-Process, Random Assignment.
Braucht, G. Nicholas; Reichardt, Charles S.
1993-01-01
Procedures for implementing random assignment with trickle processing and ways they can be corrupted are described. A computerized method for implementing random assignment with trickle processing is presented as a desirable alternative in many situations and a way of protecting against threats to assignment validity. (SLD)
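A minimal sketch of randomized-block trickle assignment in the spirit described above (the API and block size are our own assumptions): pre-committing shuffled blocks keeps each consecutive block of arrivals balanced, and assignments cannot be anticipated or reordered as participants trickle in one at a time.

```python
import random

def make_trickle_assigner(groups=("treatment", "control"), block_size=4, seed=42):
    """Pre-commit a randomized block sequence so assignments cannot be
    anticipated as participants arrive one at a time (trickle processing)."""
    rng = random.Random(seed)
    sequence = []

    def assign(participant_id):
        nonlocal sequence
        if not sequence:                       # draw a fresh balanced block
            block = list(groups) * (block_size // len(groups))
            rng.shuffle(block)
            sequence = block
        return participant_id, sequence.pop(0)

    return assign

assign = make_trickle_assigner()
log = [assign(pid) for pid in ["P001", "P002", "P003", "P004"]]
```

Because each block of four is balanced by construction, any complete block contains exactly two assignments per group regardless of the shuffle.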
CLINSULF sub-dew-point process for sulphur recovery
Energy Technology Data Exchange (ETDEWEB)
Heisel, M.; Marold, F.
1988-01-01
In a 2-reactor system, the CLINSULF process allows very high sulphur recovery rates. When operated at 100 °C at the outlet, i.e. below the sulphur solidification point, a sulphur recovery rate of more than 99.2% was achieved in a 2-reactor series. Assuming a 70% sulphur recovery in an upstream Claus furnace plus sulphur condenser, an overall sulphur recovery of more than 99.8% results for the 2-reactor system. This is approximately 2% higher than in conventional Claus plus SDP units, which mostly consist of 4 reactors or more. This means that the CLINSULF SSP process promises to be an improvement both in terms of efficiency and investment cost.
Self-Exciting Point Process Modeling of Conversation Event Sequences
Masuda, Naoki; Takaguchi, Taro; Sato, Nobuo; Yano, Kazuo
Self-exciting processes of Hawkes type have been used to model various phenomena including earthquakes, neural activities, and views of online videos. Studies of temporal networks have revealed that sequences of social interevent times for individuals are highly bursty. We examine some basic properties of event sequences generated by the Hawkes self-exciting process to show that it generates bursty interevent times for a wide parameter range. Then, we fit the model to the data of conversation sequences recorded in company offices in Japan. In this way, we can estimate relative magnitudes of the self-excitement, its temporal decay, and the base event rate independent of the self-excitation. These variables depend strongly on the individual. We also point out that the Hawkes model has an important limitation: the correlation in the interevent times and the burstiness cannot be independently modulated.
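The burstiness of Hawkes interevent times is easy to reproduce numerically. The sketch below (an illustrative implementation using Ogata's thinning algorithm; the parameter values are our own choices, not fitted values from the paper) simulates a univariate Hawkes process with an exponential kernel and measures the coefficient of variation of the interevent times, which exceeds 1 for a bursty (super-Poissonian) sequence:

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, t_max, seed=0):
    """Ogata's thinning algorithm for a univariate Hawkes process with
    intensity lambda(t) = mu + alpha * sum_i exp(-beta * (t - t_i))."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while t < t_max:
        # Intensity decays between events, so its current value dominates.
        lam_bar = mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)          # candidate from dominating rate
        if t >= t_max:
            break
        lam_t = mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events)
        if rng.random() <= lam_t / lam_bar:    # accept with prob lambda(t)/lam_bar
            events.append(t)
    return events

def coeff_variation(times):
    gaps = [b - a for a, b in zip(times, times[1:])]
    mean = sum(gaps) / len(gaps)
    var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
    return math.sqrt(var) / mean

# Branching ratio alpha/beta = 0.8 < 1 keeps the process stationary.
events = simulate_hawkes(mu=0.2, alpha=0.8, beta=1.0, t_max=500.0)
cv = coeff_variation(events)   # > 1 signals bursty interevent times
```

For a homogeneous Poisson process the coefficient of variation is exactly 1; the excess here comes entirely from the self-excitation.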
Imitation learning of Non-Linear Point-to-Point Robot Motions using Dirichlet Processes
DEFF Research Database (Denmark)
Krüger, Volker; Tikhanoff, Vadim; Natale, Lorenzo
2012-01-01
In this paper we discuss the use of the infinite Gaussian mixture model and Dirichlet processes for learning robot movements from demonstrations. The starting point of this work is an earlier paper where the authors learn a non-linear dynamic robot movement model from a small number of observations....... The model in that work is learned using a classical finite Gaussian mixture model (FGMM) where the Gaussian mixtures are appropriately constrained. The problem with this approach is that one needs to make a good guess for how many mixtures the FGMM should use. In this work, we generalize this approach...... our algorithm on the same data that was used in [5], where the authors use motion capture devices to record the demonstrations. As further validation we test our approach on novel data acquired on our iCub in a different demonstration scenario in which the robot is physically driven by the human...
Benchmarking of radiological departments. Starting point for successful process optimization
International Nuclear Information System (INIS)
Busch, Hans-Peter
2010-01-01
Continuous optimization of the process of organization and medical treatment is part of the successful management of radiological departments. The focus of this optimization can be cost units such as CT and MRI or the radiological parts of total patient treatment. Key performance indicators for process optimization are cost-effectiveness, service quality and quality of medical treatment. The potential for improvements can be seen by comparison (benchmark) with other hospitals and radiological departments. Clear definitions of key data and criteria are absolutely necessary for comparability. There is currently little information in the literature regarding the methodology and application of benchmarks especially from the perspective of radiological departments and case-based lump sums, even though benchmarking has frequently been applied to radiological departments by hospital management. The aim of this article is to describe and discuss systematic benchmarking as an effective starting point for successful process optimization. This includes the description of the methodology, recommendation of key parameters and discussion of the potential for cost-effectiveness analysis. The main focus of this article is cost-effectiveness (efficiency and effectiveness) with respect to cost units and treatment processes. (orig.)
Random covering of the circle: the configuration-space of the free deposition process
Energy Technology Data Exchange (ETDEWEB)
Huillet, Thierry [Laboratoire de Physique Theorique et Modelisation, CNRS-UMR 8089 et Universite de Cergy-Pontoise, 5 mail Gay-Lussac, 95031, Neuville sur Oise (France)
2003-12-12
Consider a circle of circumference 1. Throw at random n points, sequentially, on this circle and append clockwise an arc (or rod) of length s to each such point. The resulting random set (the free gas of rods) is a collection of a random number of clusters with random sizes. It models a free deposition process on a 1D substrate. For such processes, we shall consider the occurrence times (number of rods) and probabilities, as n grows, of the following configurations: those avoiding rod overlap (the hard-rod gas), those for which the largest gap is smaller than rod length s (the packing gas), those (parking configurations) for which hard rod and packing constraints are both fulfilled and covering configurations. Special attention is paid to the statistical properties of each such (rare) configuration in the asymptotic density domain when ns = ρ, for some finite density ρ of points. Using results from spacings in the random division of the circle, explicit large deviation rate functions can be computed in each case from state equations. Lastly, a process consisting of selecting at random one of these specific equilibrium configurations (called the observable) can be modelled. When particularized to the parking model, this system produces parking configurations differently from Renyi's random sequential adsorption model.
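The free deposition process described above is straightforward to simulate. In the sketch below (our own illustration; the function and parameter names are assumptions), rods of length s are dropped at uniform random positions, and the cluster count and covering indicator follow directly from the circular spacings between consecutive rod starts:

```python
import random

def rod_clusters(n, s, seed=0):
    """Drop n rods of length s at uniform positions on a circle of
    circumference 1; return the number of clusters of overlapping rods
    and whether the whole circle is covered."""
    rng = random.Random(seed)
    starts = sorted(rng.random() for _ in range(n))
    gaps = 0
    # Pair each rod start with the next one, wrapping around the circle.
    for a, b in zip(starts, starts[1:] + [starts[0] + 1.0]):
        if b - a > s:          # rod at a does not reach the next rod
            gaps += 1
    covered = gaps == 0        # no uncovered gap anywhere on the circle
    clusters = 1 if covered else gaps
    return clusters, covered

clusters_sparse, covered_sparse = rod_clusters(n=200, s=0.02)  # density rho = 4
clusters_dense, covered_dense = rod_clusters(n=100, s=0.2)     # density rho = 20
```

At moderate density several clusters separated by gaps typically remain, whereas at high density the circle is covered and the configuration collapses to a single wrapping cluster.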
Linear and quadratic models of point process systems: contributions of patterned input to output.
Lindsay, K A; Rosenberg, J R
2012-08-01
In the 1880's Volterra characterised a nonlinear system using a functional series connecting continuous input and continuous output. Norbert Wiener, in the 1940's, circumvented problems associated with the application of Volterra series to physical problems by deriving from it a new series of terms that are mutually uncorrelated with respect to Gaussian processes. Subsequently, Brillinger, in the 1970's, introduced a point-process analogue of Volterra's series connecting point-process inputs to the instantaneous rate of point-process output. We derive here a new series from this analogue in which its terms are mutually uncorrelated with respect to Poisson processes. This new series expresses how patterned input in a spike train, represented by third-order cross-cumulants, is converted into the instantaneous rate of an output point-process. Given experimental records of suitable duration, the contribution of arbitrary patterned input to an output process can, in principle, be determined. Solutions for linear and quadratic point-process models with one and two inputs and a single output are investigated. Our theoretical results are applied to isolated muscle spindle data in which the spike trains from the primary and secondary endings from the same muscle spindle are recorded in response to stimulation of one and then two static fusimotor axons in the absence and presence of a random length change imposed on the parent muscle. For a fixed mean rate of input spikes, the analysis of the experimental data makes explicit which patterns of two input spikes contribute to an output spike. Copyright © 2012 Elsevier Ltd. All rights reserved.
Neustifter, Benjamin; Rathbun, Stephen L; Shiffman, Saul
2012-01-01
Ecological Momentary Assessment is an emerging method of data collection in behavioral research that may be used to capture the times of repeated behavioral events on electronic devices, and information on subjects' psychological states through the electronic administration of questionnaires at times selected from a probability-based design as well as the event times. A method for fitting a mixed Poisson point process model is proposed for the impact of partially-observed, time-varying covariates on the timing of repeated behavioral events. A random frailty is included in the point-process intensity to describe variation among subjects in baseline rates of event occurrence. Covariate coefficients are estimated using estimating equations constructed by replacing the integrated intensity in the Poisson score equations with a design-unbiased estimator. An estimator is also proposed for the variance of the random frailties. Our estimators are robust in the sense that no model assumptions are made regarding the distribution of the time-varying covariates or the distribution of the random effects. However, subject effects are estimated under gamma frailties using an approximate hierarchical likelihood. The proposed approach is illustrated using smoking data.
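The random-frailty construction in this record can be illustrated with a small simulation (a schematic sketch under simplifying assumptions: constant baseline intensity and no time-varying covariates; all names and values are ours). Each subject's gamma frailty multiplies the baseline rate, which produces overdispersed event counts relative to a plain Poisson process:

```python
import random

def simulate_frailty_events(n_subjects, base_rate, frailty_shape, t_max, seed=3):
    """Each subject gets a gamma frailty Z with mean 1 multiplying a constant
    baseline intensity; events then follow a homogeneous Poisson process of
    rate Z * base_rate on [0, t_max]."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_subjects):
        z = rng.gammavariate(frailty_shape, 1.0 / frailty_shape)  # mean 1
        times, t = [], 0.0
        while True:
            t += rng.expovariate(z * base_rate)
            if t > t_max:
                break
            times.append(t)
        data.append(times)
    return data

data = simulate_frailty_events(n_subjects=500, base_rate=2.0,
                               frailty_shape=4.0, t_max=10.0)
counts = [len(ts) for ts in data]
mean_count = sum(counts) / len(counts)                     # about base_rate * t_max
var_count = sum((c - mean_count) ** 2 for c in counts) / len(counts)
```

The count variance exceeds the mean, the classic signature of between-subject heterogeneity that the frailty term is meant to absorb.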
Probability on graphs random processes on graphs and lattices
Grimmett, Geoffrey
2018-01-01
This introduction to some of the principal models in the theory of disordered systems leads the reader through the basics, to the very edge of contemporary research, with the minimum of technical fuss. Topics covered include random walk, percolation, self-avoiding walk, interacting particle systems, uniform spanning tree, random graphs, as well as the Ising, Potts, and random-cluster models for ferromagnetism, and the Lorentz model for motion in a random medium. This new edition features accounts of major recent progress, including the exact value of the connective constant of the hexagonal lattice, and the critical point of the random-cluster model on the square lattice. The choice of topics is strongly motivated by modern applications, and focuses on areas that merit further research. Accessible to a wide audience of mathematicians and physicists, this book can be used as a graduate course text. Each chapter ends with a range of exercises.
Discrete random signal processing and filtering primer with Matlab
Poularikas, Alexander D
2013-01-01
Engineers in all fields will appreciate a practical guide that combines several new effective MATLAB® problem-solving approaches and the very latest in discrete random signal processing and filtering.Numerous Useful Examples, Problems, and Solutions - An Extensive and Powerful ReviewWritten for practicing engineers seeking to strengthen their practical grasp of random signal processing, Discrete Random Signal Processing and Filtering Primer with MATLAB provides the opportunity to doubly enhance their skills. The author, a leading expert in the field of electrical and computer engineering, offe
Point process analyses of variations in smoking rate by setting, mood, gender, and dependence
Shiffman, Saul; Rathbun, Stephen L.
2010-01-01
The immediate emotional and situational antecedents of ad libitum smoking are still not well understood. We re-analyzed data from Ecological Momentary Assessment using novel point-process analyses, to assess how craving, mood, and social setting influence smoking rate, as well as assessing the moderating effects of gender and nicotine dependence. 304 smokers recorded craving, mood, and social setting using electronic diaries when smoking and at random nonsmoking times over 16 days of smoking. Point-process analysis, which makes use of the known random sampling scheme for momentary variables, examined main effects of setting and interactions with gender and dependence. Increased craving was associated with higher rates of smoking, particularly among women. Negative affect was not associated with smoking rate, even in interaction with arousal, but restlessness was associated with substantially higher smoking rates. Women's smoking tended to be less affected by negative affect. Nicotine dependence had little moderating effect on situational influences. Smoking rates were higher when smokers were alone or with others smoking, and smoking restrictions reduced smoking rates. However, the presence of others smoking undermined the effects of restrictions. The more sensitive point-process analyses confirmed earlier findings, including the surprising conclusion that negative affect by itself was not related to smoking rates. Contrary to hypothesis, men's and not women's smoking was influenced by negative affect. Both smoking restrictions and the presence of others who are not smoking suppress smoking, but others’ smoking undermines the effects of restrictions. Point-process analyses of EMA data can bring out even small influences on smoking rate. PMID:21480683
A CASE STUDY ON POINT PROCESS MODELLING IN DISEASE MAPPING
Directory of Open Access Journals (Sweden)
Viktor Beneš
2011-05-01
We consider a data set of locations where people in Central Bohemia have been infected by tick-borne encephalitis (TBE), and where population census data and covariates concerning vegetation and altitude are available. The aims are to estimate the risk map of the disease and to study the dependence of the risk on the covariates. Instead of using the common area-level approaches, we base the analysis on a Bayesian approach for a log Gaussian Cox point process with covariates. Posterior characteristics for a discretized version of the log Gaussian Cox process are computed using Markov chain Monte Carlo methods. A particular problem which is thoroughly discussed is to determine a model for the background population density. The risk map depends clearly on the population intensity model, and the basic model adopted for the population intensity determines which covariates influence the risk of TBE. Model validation is based on the posterior predictive distribution of various summary statistics.
Mean-field inference of Hawkes point processes
International Nuclear Information System (INIS)
Bacry, Emmanuel; Gaïffas, Stéphane; Mastromatteo, Iacopo; Muzy, Jean-François
2016-01-01
We propose a fast and efficient estimation method that is able to accurately recover the parameters of a d-dimensional Hawkes point-process from a set of observations. We exploit a mean-field approximation that is valid when the fluctuations of the stochastic intensity are small. We show that this is notably the case in situations when interactions are sufficiently weak, when the dimension of the system is high or when the fluctuations are self-averaging due to the large number of past events they involve. In such a regime the estimation of a Hawkes process can be mapped on a least-squares problem for which we provide an analytic solution. Though this estimator is biased, we show that its precision can be comparable to the one of the maximum likelihood estimator while its computation speed is shown to be improved considerably. We give a theoretical control on the accuracy of our new approach and illustrate its efficiency using synthetic datasets, in order to assess the statistical estimation error of the parameters. (paper)
Corner-point criterion for assessing nonlinear image processing imagers
Landeau, Stéphane; Pigois, Laurent; Foing, Jean-Paul; Deshors, Gilles; Swiathy, Greggory
2017-10-01
Range performance modeling of optronics imagers attempts to characterize the ability to resolve details in the image. Today, digital image processing is systematically used in conjunction with the optoelectronic system to correct its defects or to exploit tiny detection signals to increase performance. In order to characterize this processing, which has adaptive and non-linear properties, it becomes necessary to stimulate the imagers with test patterns whose properties are similar to those of the actual scene image, in terms of dynamic range, contours, texture and singular points. This paper presents an approach based on a Corner-Point (CP) resolution criterion, derived from the Probability of Correct Resolution (PCR) of binary fractal patterns. The fundamental principle lies in the correct perception of the direction of a CP, a one-pixel minority value among the majority value of a 2×2 pixel block. The evaluation procedure considers the actual image as its multi-resolution CP transformation, taking the role of Ground Truth (GT). After a spatial registration between the degraded image and the original one, the degradation is statistically measured by comparing the GT with the degraded image CP transformation, in terms of localized PCR at the region of interest. The paper defines this CP criterion and presents the developed evaluation techniques, such as the measurement of the number of CPs resolved on the target, and the CP transformation and its inverse, which make it possible to reconstruct an image of the perceived CPs. Then, this criterion is compared with the standard Johnson criterion in the case of a linear blur and noise degradation. The evaluation of an imaging system integrating an image display and a visual perception is considered, by proposing an analysis scheme combining two methods: a CP measurement for the highly non-linear part (imaging) with a real signature test target, and conventional methods for the more linear part (displaying). The application to
Multiplicative point process as a model of trading activity
Gontis, V.; Kaulakys, B.
2004-11-01
Signals consisting of a sequence of pulses show that the inherent origin of the 1/f noise is a Brownian fluctuation of the average interevent time between subsequent pulses of the pulse sequence. In this paper, we generalize the model of interevent time to reproduce a variety of self-affine time series exhibiting power spectral density S(f) scaling as a power of the frequency f. Furthermore, we analyze the relation between the power-law correlations and the origin of the power-law probability distribution of the signal intensity. We introduce a stochastic multiplicative model for the time intervals between point events and analyze the statistical properties of the signal analytically and numerically. Such a model system exhibits power-law spectral density S(f) ∼ 1/f^β for various values of β, including β = 1/2, 1 and 3/2. Explicit expressions for the power spectra in the low-frequency limit and for the distribution density of the interevent time are obtained. The counting statistics of the events is analyzed analytically and numerically as well. The specific interest of our analysis is related to the financial markets, where long-range correlations of price fluctuations largely depend on the number of transactions. We analyze the spectral density and counting statistics of the number of transactions. The model reproduces spectral properties of the real markets and explains the mechanism of the power-law distribution of trading activity. The study provides evidence that the statistical properties of the financial markets are enclosed in the statistics of the time interval between trades. A multiplicative point process serves as a consistent model generating this statistics.
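A schematic version of a multiplicative interevent-time model can be sketched as follows (a simplified illustration, not the paper's exact stochastic equations; the drift and noise parameters and the reflecting boundaries are our own assumptions). Each interevent interval is perturbed multiplicatively, so slow Brownian-like wandering of the mean interval produces long stretches of dense and sparse events:

```python
import random

def multiplicative_interevent(n_events, gamma=0.01, sigma=0.05, tau0=1.0,
                              tau_min=1e-3, tau_max=1.0, seed=7):
    """Sketch of a multiplicative stochastic model for interevent times:
    each interval is perturbed multiplicatively, clamped to reflecting
    boundaries, and the event times are the cumulative sum of intervals."""
    rng = random.Random(seed)
    tau, t, events = tau0, 0.0, []
    for _ in range(n_events):
        tau += gamma * tau + sigma * tau * rng.gauss(0.0, 1.0)
        tau = min(max(tau, tau_min), tau_max)   # keep the interval bounded
        t += tau
        events.append(t)
    return events

events = multiplicative_interevent(20_000)
```

Counting events in fixed windows over such a sequence exhibits the strongly fluctuating activity that the paper links to power-law spectra and trading-activity distributions.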
Directory of Open Access Journals (Sweden)
Reza Heshmat
2013-06-01
Introduction: Most women have experienced childbirth and its pain, which is inevitable. If this pain is not controlled, it can lead to prolonged labor and injury to the mother and fetus. This study was conducted to identify the effect of acupressure on the sanyinjiao and hugo points on delivery pain in nulliparous women. Methods: This was a randomized controlled clinical trial on 84 nulliparous women in hospitals of Ardebil, Iran. The participants were divided by randomized blocks of 4 and 6 into two groups. The intervention was in the form of applying pressure at the sanyinjiao and hugo points at different dilatations. The intensity of the pain before and after the intervention was recorded on a visual pain assessment scale. To determine the effect of pressure on the intensity of labor pain, descriptive and analytical tests were conducted in SPSS (version 13). Results: There was a significant decrease in mean intensity of pain after each intervention in the experimental group at different dilatations (4, 6, 8, and 10 cm). Moreover, the Student's independent t-test results indicated that the mean intensity of pain in the experimental group after the intervention in all four dilatations was significantly lower than in the control group. A repeated measures ANOVA test indicated that in both experimental and control groups across the four time periods, there was a statistically significant difference. Conclusion: Acupressure on the sanyinjiao and hugo points decreases labor pain. Therefore, this method can be used effectively in the labor process.
Level sets and extrema of random processes and fields
Azais, Jean-Marc
2009-01-01
A timely and comprehensive treatment of random field theory with applications across diverse areas of study Level Sets and Extrema of Random Processes and Fields discusses how to understand the properties of the level sets of paths as well as how to compute the probability distribution of its extremal values, which are two general classes of problems that arise in the study of random processes and fields and in related applications. This book provides a unified and accessible approach to these two topics and their relationship to classical theory and Gaussian processes and fields, and the most modern research findings are also discussed. The authors begin with an introduction to the basic concepts of stochastic processes, including a modern review of Gaussian fields and their classical inequalities. Subsequent chapters are devoted to Rice formulas, regularity properties, and recent results on the tails of the distribution of the maximum. Finally, applications of random fields to various areas of mathematics a...
Reznik, A. L.; Tuzikov, A. V.; Solov'ev, A. A.; Torgov, A. V.
2016-11-01
Original codes and combinatorial-geometrical computational schemes are presented, which are developed and applied for finding exact analytical formulas that describe the probability of errorless readout of random point images recorded by a scanning aperture with a limited number of threshold levels. Combinatorial problems encountered in the course of the study and associated with the new generalization of Catalan numbers are formulated and solved. An attempt is made to find the explicit analytical form of these numbers, which is, on the one hand, a necessary stage of solving the basic research problem and, on the other hand, an independent self-consistent problem.
A heuristic for the distribution of point counts for random curves over a finite field.
Achter, Jeffrey D; Erman, Daniel; Kedlaya, Kiran S; Wood, Melanie Matchett; Zureick-Brown, David
2015-04-28
How many rational points are there on a random algebraic curve of large genus g over a given finite field Fq? We propose a heuristic for this question motivated by a (now proven) conjecture of Mumford on the cohomology of moduli spaces of curves; this heuristic suggests a Poisson distribution with mean q+1+1/(q-1). We prove a weaker version of this statement in which g and q tend to infinity, with q much larger than g. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
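The heuristic's prediction is a concrete Poisson law, which is easy to tabulate (a small sketch; the helper names are ours). For example, over F_3 the predicted mean point count is 3 + 1 + 1/(3-1) = 4.5:

```python
import math

def heuristic_mean(q):
    """Predicted mean number of rational points on a random curve of large
    genus over F_q, per the Poisson heuristic with mean q + 1 + 1/(q-1)."""
    return q + 1 + 1.0 / (q - 1)

def poisson_pmf(k, lam):
    """Poisson probability of observing exactly k points."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

lam = heuristic_mean(q=3)                      # 3 + 1 + 1/2 = 4.5
probs = [poisson_pmf(k, lam) for k in range(60)]
```

Truncating the tabulation at k = 60 loses only a negligible tail mass for a mean this small, so the probabilities sum to 1 and reproduce the predicted mean to high accuracy.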
Multifractal properties of diffusion-limited aggregates and random multiplicative processes
International Nuclear Information System (INIS)
Canessa, E.
1991-04-01
We consider the multifractal properties of irreversible diffusion-limited aggregation (DLA) from the point of view of the self-similarity of fluctuations in random multiplicative processes. In particular we analyse the breakdown of multifractal behaviour and phase transition associated with the negative moments of the growth probabilities in DLA. (author). 20 refs, 5 figs
Seeking a fingerprint: analysis of point processes in actigraphy recording
Gudowska-Nowak, Ewa; Ochab, Jeremi K.; Oleś, Katarzyna; Beldzik, Ewa; Chialvo, Dante R.; Domagalik, Aleksandra; Fąfrowicz, Magdalena; Marek, Tadeusz; Nowak, Maciej A.; Ogińska, Halszka; Szwed, Jerzy; Tyburczyk, Jacek
2016-05-01
Motor activity of humans displays complex temporal fluctuations which can be characterised by scale-invariant statistics, thus demonstrating that structure and fluctuations of such kinetics remain similar over a broad range of time scales. Previous studies on humans regularly deprived of sleep or suffering from sleep disorders predicted a change in the invariant scale parameters with respect to those for healthy subjects. In this study we investigate the signal patterns from actigraphy recordings by means of characteristic measures of fractional point processes. We analyse spontaneous locomotor activity of healthy individuals recorded during a week of regular sleep and a week of chronic partial sleep deprivation. Behavioural symptoms of lack of sleep can be evaluated by analysing statistics of duration times during active and resting states, and alteration of behavioural organisation can be assessed by analysis of power laws detected in the event count distribution, distribution of waiting times between consecutive movements and detrended fluctuation analysis of recorded time series. We claim that among different measures characterising complexity of the actigraphy recordings and their variations implied by chronic sleep distress, the exponents characterising slopes of survival functions in resting states are the most effective biomarkers distinguishing between healthy and sleep-deprived groups.
Kuang, Jiayi; Li, Yuxuan; He, Yufeng; Gan, Lin; Wang, Aiming; Chen, Yanhua; Li, Xiaoting; Guo, Lin; Tang, Rongjun
2016-04-01
To compare the effects of oblique insertion at anatomical points and conventional acupuncture for sacroiliac joint injury. Eighty patients were randomly divided into an observation group and a control group, 40 cases in each one. In the observation group, oblique insertion therapy at anatomical points was used, and the 9 points of equal division (anatomical points), marked by palpating the anatomical landmarks, were treated as the insertion acupoints. In the control group, conventional acupuncture was applied, and perpendicular insertion was adopted at Huantiao (GB 30), Zhibian (BL 54) and Weizhong (BL 40), etc. In the two groups, the treatment was given once a day and 5 times per week; ten treatments made up one course, and two courses were required. The clinical effects and the changes in visual analogue scale (VAS) score and Oswestry disability index (ODI) before and after treatment were observed in the two groups. The total effective rate of the observation group was 90.0% (36/40), which was better than the 72.5% (29/40) of the control group. The clinical effect of oblique insertion at anatomical points for sacroiliac joint injury is superior to that of conventional acupuncture; it can effectively relieve pain and improve dysfunction.
Directory of Open Access Journals (Sweden)
Chao Hsing Yeh
2013-01-01
Objectives. This prospective, randomized clinical trial (RCT) was designed to investigate the feasibility and effects of a 4-week auricular point acupressure (APA) treatment for chronic low back pain (CLBP). Methods. Participants were randomized to either true APA (true acupoints with taped seeds on the designated ear points for CLBP) or sham APA (sham acupoints with taped seeds but on different locations than those designated for CLBP). The duration of treatment was four weeks. Participants were assessed before treatment, weekly during treatment, and 1 month following treatment. Results. Participants in the true APA group who completed the 4-week APA treatment had a 70% reduction in worst pain intensity, a 75% reduction in overall pain intensity, and a 42% improvement in disability due to back pain from the baseline assessment. The reductions of worst pain and overall pain intensity in the true APA group were statistically greater than those in the sham group (P<0.01) at the completion of the 4-week APA treatment and at 1-month follow-up. Discussion. The preliminary findings of this feasibility study showed a reduction in pain intensity and improvement in physical function, suggesting that APA may be a promising treatment for patients with CLBP.
Segre, Lisa S; Brock, Rebecca L; O'Hara, Michael W
2015-04-01
Depression in low-income, ethnic-minority women of childbearing age is prevalent and compromises infant and child development. Yet numerous barriers prevent treatment delivery. Listening Visits (LV), an empirically supported intervention developed for delivery by British home-visiting nurses, could address this unmet mental health need. This randomized controlled trial (RCT) evaluated the effectiveness of LV delivered at a woman's usual point-of-care, including home visits or an ob-gyn office. Listening Visits were delivered to depressed pregnant women or mothers of young children by their point-of-care provider (e.g., home visitor or physician's assistant), all of whom had low levels of prior counseling experience. Three quarters of the study's participants were low-income. Of those who reported ethnicity, all identified themselves as minorities. Participants from 4 study sites (N = 66) were randomized in a 2:1 ratio, to LV or a wait-list control group (WLC). Assessments, conducted at baseline and 8 weeks, evaluated depression, quality of life, and treatment satisfaction. Depressive severity, depressive symptoms, and quality of life significantly improved among LV recipients as compared with women receiving standard social/health services. Women valued LV as evidenced by their high attendance rates and treatment satisfaction ratings. In a stepped model of depression care, LV can provide an accessible, acceptable, and effective first-line treatment option for at-risk women who otherwise are unlikely to receive treatment. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Analysis of multi-species point patterns using multivariate log Gaussian Cox processes
DEFF Research Database (Denmark)
Waagepetersen, Rasmus; Guan, Yongtao; Jalilian, Abdollah
Multivariate log Gaussian Cox processes are flexible models for multivariate point patterns. However, they have so far only been applied in bivariate cases. In this paper we move beyond the bivariate case in order to model multi-species point patterns of tree locations. In particular we address the problems of identifying parsimonious models and of extracting biologically relevant information from the fitted models. The latent multivariate Gaussian field is decomposed into components given in terms of random fields common to all species and components which are species specific. This allows … of the data. The selected number of common latent fields provides an index of complexity of the multivariate covariance structure. Hierarchical clustering is used to identify groups of species with similar patterns of dependence on the common latent fields.
Cura, Rémi; Perret, Julien; Paparoditis, Nicolas
2017-05-01
In addition to more traditional geographical data such as images (rasters) and vectors, point cloud data are becoming increasingly available. Such data are appreciated for their precision and true three-dimensional (3D) nature. However, managing point clouds can be difficult due to scaling problems and the specificities of this data type. Several methods exist but are usually fairly specialised and solve only one aspect of the management problem. In this work, we propose a comprehensive and efficient point cloud management system based on a database server that works on groups of points (patches) rather than individual points. This system is specifically designed to cover the basic needs of point cloud users: fast loading, compressed storage, powerful patch and point filtering, easy data access and exporting, and integrated processing. Moreover, the proposed system fully integrates metadata (like sensor position) and can conjointly use point clouds with other geospatial data, such as images, vectors, topology and other point clouds. Point cloud (parallel) processing can be done in-base with fast prototyping capabilities. Lastly, the system is built on open source technologies and can therefore be easily extended and customised. We test the proposed system with several billion points obtained from Lidar (aerial and terrestrial) and stereo-vision. We demonstrate loading speeds of around 50 million points/h per process, transparent-to-the-user compression at ratios of 2:1 to 4:1 or better, patch filtering in the 0.1 to 1 s range, and output rates of around 0.1 million points/s per process, along with classical processing methods such as object detection.
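The patch idea, storing and filtering groups of points keyed by a coarse spatial grid rather than individual points, can be sketched with a toy grouping function (a hypothetical illustration in Python; the actual system works inside a database server):

```python
from collections import defaultdict

def to_patches(points, cell=1.0):
    """Group (x, y, z) points into patches keyed by a coarse 2D grid cell.
    Storing and filtering whole patches instead of individual points is the
    design choice that keeps billion-point tables manageable."""
    patches = defaultdict(list)
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        patches[key].append((x, y, z))
    return patches

pts = [(0.2, 0.3, 5.0), (0.7, 0.9, 5.1), (1.5, 0.1, 4.8), (2.2, 3.7, 6.0)]
patches = to_patches(pts, cell=1.0)
print(len(patches))          # 3 patches: cells (0, 0), (1, 0) and (2, 3)
print(len(patches[(0, 0)]))  # the first cell holds 2 points
```

A patch-level filter then only needs to inspect the grid keys, touching individual points only inside candidate patches.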
Sunusi, Nurtiti
2018-03-01
The study of the time distribution of occurrences of extreme rain phenomena plays a very important role in analysis and weather forecasting for an area. The timing of extreme rainfall is difficult to predict because its occurrence is random. This paper aims to determine the inter-event time distribution of extreme rain events and the minimum waiting time until the occurrence of the next extreme event through a point process approach. The phenomenon of extreme rain events over a given period of time follows a renewal process in which the time between events is a random variable τ. The distribution of the random variable τ is assumed to be Pareto, Log Normal, or Gamma, and a moment method is used to estimate the model parameters. Let R_t denote the time elapsed since the last extreme rain event at a given location; if no extreme rain event has occurred up to t_0, there is a chance of an extreme rainfall event in (t_0, t_0 + δt_0). From the three models reviewed, the minimum waiting time until the next extreme rainfall is then determined. The results show that the Log Normal model is better than the Pareto and Gamma models for predicting the next extreme rainfall in South Sulawesi, while the Pareto model cannot be used.
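The moment-method fit can be sketched for the Log Normal case; the synthetic inter-event times below stand in for observed waiting times between extreme rain events, and all numbers are illustrative, not from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic inter-event times (in days) standing in for observed waiting
# times between extreme rain events; true parameters mu=3.0, sigma=0.5.
tau = rng.lognormal(mean=3.0, sigma=0.5, size=100_000)

# Method of moments for the Log Normal: match the sample mean m and variance v
# to E[tau] = exp(mu + sigma^2/2) and Var[tau] = (exp(sigma^2) - 1) * E[tau]^2.
m, v = tau.mean(), tau.var()
sigma2_hat = np.log(1.0 + v / m**2)
mu_hat = np.log(m) - 0.5 * sigma2_hat

print(mu_hat, np.sqrt(sigma2_hat))  # close to the true values 3.0 and 0.5
```

The Pareto and Gamma cases follow the same pattern: express the first two theoretical moments in the parameters and solve against the sample moments.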
DEFF Research Database (Denmark)
Møller, Jesper; Diaz-Avalos, Carlos
2010-01-01
Spatio-temporal Cox point process models with a multiplicative structure for the driving random intensity, incorporating covariate information into temporal and spatial components, and with a residual term modelled by a shot-noise process, are considered. Such models are flexible and tractable for … a data set consisting of 2796 days and 5834 spatial locations of fires. The model is compared with a spatio-temporal log-Gaussian Cox point process model, and likelihood-based methods are discussed to some extent.
Renewal theory for perturbed random walks and similar processes
Iksanov, Alexander
2016-01-01
This book offers a detailed review of perturbed random walks, perpetuities, and random processes with immigration. Being of major importance in modern probability theory, both theoretical and applied, these objects have been used to model various phenomena in the natural sciences as well as in insurance and finance. The book also presents the many significant results and efficient techniques and methods that have been worked out in the last decade. The first chapter is devoted to perturbed random walks and discusses their asymptotic behavior and various functionals pertaining to them, including supremum and first-passage time. The second chapter examines perpetuities, presenting results on continuity of their distributions and the existence of moments, as well as weak convergence of divergent perpetuities. Focusing on random processes with immigration, the third chapter investigates the existence of moments, describes long-time behavior and discusses limit theorems, both with and without scaling. Chapters fou...
Compositions, Random Sums and Continued Random Fractions of Poisson and Fractional Poisson Processes
Orsingher, Enzo; Polito, Federico
2012-08-01
In this paper we consider the relation between random sums and compositions of different processes. In particular, for independent Poisson processes $N_\alpha(t)$, $N_\beta(t)$, $t>0$, we have that $N_\alpha(N_\beta(t)) \stackrel{d}{=} \sum_{j=1}^{N_\beta(t)} X_j$, where the $X_j$ are Poisson random variables. We present a series of similar cases, where the outer process is Poisson with different inner processes. We highlight generalisations of these results where the external process is infinitely divisible. A section of the paper concerns compositions of the form $N_\alpha(\tau_k^\nu)$, $\nu\in(0,1]$, where $\tau_k^\nu$ is the inverse of the fractional Poisson process, and we show how these compositions can be represented as random sums. Furthermore we study compositions of the form $\Theta(N(t))$, $t>0$, which can be represented as random products. The last section is devoted to studying continued fractions of Cauchy random variables with a Poisson number of levels. We evaluate the exact distribution and derive the scale parameter in terms of ratios of Fibonacci numbers.
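The distributional identity for the composition of two independent Poisson processes can be checked by simulation; a minimal Monte Carlo sketch with arbitrarily chosen rates:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, t, n_rep = 2.0, 3.0, 1.0, 50_000

# Left-hand side: the outer Poisson process evaluated at the random integer
# time N_beta(t); given N_beta(t) = n, N_alpha(n) is Poisson(alpha * n).
n_inner = rng.poisson(beta * t, size=n_rep)
lhs = rng.poisson(alpha * n_inner)

# Right-hand side: a random sum of N_beta(t) iid Poisson(alpha) variables.
rhs = np.array([rng.poisson(alpha, size=n).sum()
                for n in rng.poisson(beta * t, size=n_rep)])

# Both sides should share mean alpha*beta*t = 6 and variance
# alpha*beta*t + alpha^2*beta*t = 18.
print(lhs.mean(), lhs.var(), rhs.mean(), rhs.var())
```

With 50,000 replications the sample means and variances of the two sides agree to within Monte Carlo error, as the identity predicts.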
Migliorati, Giovanni; Nobile, Fabio; Tempone, Raul
2015-01-01
We study the accuracy of the discrete least-squares approximation on a finite dimensional space of a real-valued target function from noisy pointwise evaluations at independent random points distributed according to a given sampling probability
Money creation process in a random redistribution model
Chen, Siyan; Wang, Yougui; Li, Keqiang; Wu, Jinshan
2014-01-01
In this paper, the dynamical process of money creation in a random exchange model with debt is investigated. The money creation kinetics are analyzed by both the money-transfer matrix method and the diffusion method. From both approaches, we attain the same conclusion: the source of money creation in the case of random exchange is the agents with neither money nor debt. These analytical results are demonstrated by computer simulations.
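A random exchange model with debt of this kind can be sketched in a few lines; the agent count, debt cap, and pairing rule here are illustrative assumptions, not the authors' exact kinetics:

```python
import numpy as np

rng = np.random.default_rng(2)
n_agents, debt_limit, steps = 500, 5, 200_000
money = np.ones(n_agents, dtype=int)  # every agent starts with 1 unit

for _ in range(steps):
    payer, payee = rng.integers(0, n_agents, size=2)
    # A transfer happens only if the payer can still go (further) into debt.
    if payer != payee and money[payer] > -debt_limit:
        money[payer] -= 1
        money[payee] += 1

in_circulation = money[money > 0].sum()  # positive holdings ("created" money)
net_worth = money.sum()                  # conserved by construction
print(in_circulation, net_worth)
```

Net worth stays fixed while the sum of positive holdings grows as agents take on debt, which is the sense in which random exchange with debt "creates" money.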
DEFF Research Database (Denmark)
Jensen, Eva B. Vedel; Kiêu, K
1994-01-01
Unbiased stereological estimators of d-dimensional volume in R(n) are derived, based on information from an isotropic random r-slice through a specified point. The content of the slice can be subsampled by means of a spatial grid. The estimators depend only on spatial distances. As a fundamental ...... lemma, an explicit formula for the probability that an isotropic random r-slice in R(n) through 0 hits a fixed point in R(n) is given....
Scaling behaviour of randomly alternating surface growth processes
International Nuclear Information System (INIS)
Raychaudhuri, Subhadip; Shapir, Yonathan
2002-01-01
The scaling properties of the roughness of surfaces grown by two different processes randomly alternating in time are addressed. The duration of each application of the two primary processes is assumed to be independently drawn from given distribution functions. We analytically address processes in which the two primary processes are linear and extend the conclusions to nonlinear processes as well. The growth scaling exponent of the average roughness with the number of applications is found to be determined by the long time tail of the distribution functions. For processes in which both mean application times are finite, the scaling behaviour follows that of the corresponding cyclical process in which the uniform application time of each primary process is given by its mean. If the distribution functions decay with a small enough power law for the mean application times to diverge, the growth exponent is found to depend continuously on this power-law exponent. In contrast, the roughness exponent does not depend on the timing of the applications. The analytical results are supported by numerical simulations of various pairs of primary processes and with different distribution functions. Self-affine surfaces grown by two randomly alternating processes are common in nature (e.g., due to randomly changing weather conditions) and in man-made devices such as rechargeable batteries.
Statistical mechanics of the $N$-point vortex system with random intensities on $R^2$
Directory of Open Access Journals (Sweden)
Cassio Neri
2005-01-01
The system of $N$-point vortices on $\mathbb{R}^2$ is considered under the hypothesis that vortex intensities are independent and identically distributed random variables with respect to a law $P$ supported on $(0,1]$. It is shown that, in the limit as $N$ approaches $\infty$, the 1-vortex distribution is a minimizer of the free energy functional and is associated to some solutions of the following non-linear Poisson equation: $$-\Delta u(x) = C^{-1}\int_{(0,1]} r\,\mathrm{e}^{-\beta r u(x) - \gamma r|x|^2}\,P(\mathrm{d}r), \quad \forall x\in\mathbb{R}^2,$$ where $C = \int_{(0,1]}\int_{\mathbb{R}^2} \mathrm{e}^{-\beta r u(y) - \gamma r|y|^2}\,\mathrm{d}y\,P(\mathrm{d}r)$.
Imaging atomic-level random walk of a point defect in graphene
Kotakoski, Jani; Mangler, Clemens; Meyer, Jannik C.
2014-05-01
Deviations from the perfect atomic arrangements in crystals play an important role in affecting their properties. Similarly, diffusion of such deviations is behind many microstructural changes in solids. However, observation of point defect diffusion is hindered both by the difficulties related to direct imaging of non-periodic structures and by the timescales involved in the diffusion process. Here, instead of imaging thermal diffusion, we stimulate and follow the migration of a divacancy through graphene lattice using a scanning transmission electron microscope operated at 60 kV. The beam-activated process happens on a timescale that allows us to capture a significant part of the structural transformations and trajectory of the defect. The low voltage combined with ultra-high vacuum conditions ensure that the defect remains stable over long image sequences, which allows us for the first time to directly follow the diffusion of a point defect in a crystalline material.
Melnikov processes and chaos in randomly perturbed dynamical systems
Yagasaki, Kazuyuki
2018-07-01
We consider a wide class of randomly perturbed systems subjected to stationary Gaussian processes and show that chaotic orbits exist almost surely under some nondegenerate condition, no matter how small the random forcing terms are. This result is very contrasting to the deterministic forcing case, in which chaotic orbits exist only if the influence of the forcing terms overcomes that of the other terms in the perturbations. To obtain the result, we extend Melnikov’s method and prove that the corresponding Melnikov functions, which we call the Melnikov processes, have infinitely many zeros, so that infinitely many transverse homoclinic orbits exist. In addition, a theorem on the existence and smoothness of stable and unstable manifolds is given and the Smale–Birkhoff homoclinic theorem is extended in an appropriate form for randomly perturbed systems. We illustrate our theory for the Duffing oscillator subjected to the Ornstein–Uhlenbeck process parametrically.
Continuous state branching processes in random environment: The Brownian case
Palau, Sandra; Pardo, Juan Carlos
2015-01-01
We consider continuous state branching processes that are perturbed by a Brownian motion. These processes are constructed as the unique strong solution of a stochastic differential equation. The long-term extinction and explosion behaviours are studied. In the stable case, the extinction and explosion probabilities are given explicitly. We find three regimes for the asymptotic behaviour of the explosion probability and, as in the case of branching processes in random environment, we find five...
Wolde, Mistire; Tarekegn, Getahun; Kebede, Tedla
2018-05-01
Point-of-care glucometer (PoCG) devices play a significant role in self-monitoring of the blood sugar level, particularly in the follow-up of high blood sugar therapeutic response. The aim of this study was to evaluate blood glucose test results from four randomly selected glucometers against the standard wet chemistry (hexokinase) method in participants with diabetes and controls in Addis Ababa, Ethiopia. A prospective cross-sectional study was conducted on 200 randomly selected study participants (100 participants with diabetes and 100 healthy controls). Four randomly selected PoCG devices (CareSens N, DIAVUE Prudential, On Call Extra, i-QARE DS-W) were evaluated against the hexokinase method and the ISO 15197:2003 and ISO 15197:2013 standards. The minimum and maximum blood sugar values were recorded by CareSens N (21 mg/dl) and the hexokinase method (498.8 mg/dl), respectively. The mean sugar values of all PoCG devices except On Call Extra showed significant differences compared with the reference hexokinase method. Meanwhile, all four PoCG devices had a strong positive relationship (>80%) with the reference method (hexokinase). On the other hand, none of the four PoCG devices fulfilled the minimum accuracy requirements set by the ISO 15197:2003 and ISO 15197:2013 standards. In addition, linear regression analysis revealed that all four selected PoCG devices overestimated the glucose concentrations. Overall, the measurements of the four selected PoCG devices correlated poorly with the standard reference method. Therefore, before introducing PoCG devices to the market, there should be a standardized evaluation platform for validation. Further similar large-scale studies on other PoCG devices also need to be undertaken.
Equivalence of functional limit theorems for stationary point processes and their Palm distributions
Nieuwenhuis, G.
1989-01-01
Let P be the distribution of a stationary point process on the real line and let P0 be its Palm distribution. In this paper we consider two types of functional limit theorems: those in terms of the number of points of the point process in (0, t] and those in terms of the location of the nth point …
Microbial profile and critical control points during processing of 'robo ...
African Journals Online (AJOL)
2009-05-18
… frying, surface fat draining, open-air cooling, and holding/packaging in polyethylene films during sales and distribution. The product was, however, classified under category III with respect to risk and the significance of monitoring and evaluation of quality using the hazard analysis critical control point …
Discussion of "Modern statistics for spatial point processes"
DEFF Research Database (Denmark)
Jensen, Eva Bjørn Vedel; Prokesová, Michaela; Hellmund, Gunnar
2007-01-01
ABSTRACT. The paper ‘Modern statistics for spatial point processes’ by Jesper Møller and Rasmus P. Waagepetersen is based on a special invited lecture given by the authors at the 21st Nordic Conference on Mathematical Statistics, held at Rebild, Denmark, in June 2006. At the conference, Antti...
Geometric anisotropic spatial point pattern analysis and Cox processes
DEFF Research Database (Denmark)
Møller, Jesper; Toftaker, Håkon
… In particular we study Cox process models with an elliptical pair correlation function, including shot noise Cox processes and log Gaussian Cox processes, and we develop estimation procedures using summary statistics and Bayesian methods. Our methodology is illustrated on real and synthetic datasets of spatial …
Designing neural networks that process mean values of random variables
International Nuclear Information System (INIS)
Barber, Michael J.; Clark, John W.
2014-01-01
We develop a class of neural networks derived from probabilistic models posed in the form of Bayesian networks. Making biologically and technically plausible assumptions about the nature of the probabilistic models to be represented in the networks, we derive neural networks exhibiting standard dynamics that require no training to determine the synaptic weights, that perform accurate calculation of the mean values of the relevant random variables, that can pool multiple sources of evidence, and that deal appropriately with ambivalent, inconsistent, or contradictory evidence. - Highlights: • High-level neural computations are specified by Bayesian belief networks of random variables. • Probability densities of random variables are encoded in activities of populations of neurons. • Top-down algorithm generates specific neural network implementation of given computation. • Resulting “neural belief networks” process mean values of random variables. • Such networks pool multiple sources of evidence and deal properly with inconsistent evidence
Process for structural geologic analysis of topography and point data
Eliason, Jay R.; Eliason, Valerie L. C.
1987-01-01
A quantitative method of geologic structural analysis of digital terrain data is described for implementation on a computer. Assuming selected valley segments are controlled by the underlying geologic structure, topographic lows in the terrain data, defining valley bottoms, are detected, filtered and accumulated into a series of line segments defining contiguous valleys. The line segments are then vectorized to produce vector segments, defining valley segments, which may be indicative of the underlying geologic structure. Coplanar analysis is performed on vector segment pairs to determine which vectors produce planes representative of the underlying geologic structure. Point data such as fracture phenomena which can be related to fracture planes in 3-dimensional space can be analyzed to define common plane orientations and locations. The vectors, points, and planes are displayed in various formats for interpretation.
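The coplanar-analysis step reduces to fitting planes to sets of 3D vectors or points; a least-squares plane fit via singular value decomposition is one standard way to sketch it (the data and helper function are illustrative, not the procedure's actual implementation):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value of the
    # centred data is the direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

# Points lying exactly on the plane z = 0.5*x - 0.25*y + 2.
rng = np.random.default_rng(3)
xy = rng.uniform(-10.0, 10.0, size=(200, 2))
z = 0.5 * xy[:, 0] - 0.25 * xy[:, 1] + 2.0
pts = np.column_stack([xy, z])

centroid, normal = fit_plane(pts)
residuals = (pts - centroid) @ normal  # offsets of points from the fitted plane
print(np.abs(residuals).max())
```

Vector segments (or fracture points) whose fitted plane leaves small residuals can then be grouped by similar plane orientation.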
A Bayesian MCMC method for point process models with intractable normalising constants
DEFF Research Database (Denmark)
Berthelsen, Kasper Klitgaard; Møller, Jesper
2004-01-01
… to simulate from the "unknown distribution", perfect simulation algorithms become useful. We illustrate the method in cases where the likelihood is given by a Markov point process model. Particularly, we consider semi-parametric Bayesian inference in connection to both inhomogeneous Markov point process models and pairwise interaction point processes.
On the joint statistics of stable random processes
International Nuclear Information System (INIS)
Hopcraft, K I; Jakeman, E
2011-01-01
A utilitarian continuous bi-variate random process whose first-order probability density function is a stable random variable is constructed. Results paralleling some of those familiar from the theory of Gaussian noise are derived. In addition to the joint-probability density for the process, these include fractional moments and structure functions. Although the correlation functions for stable processes other than Gaussian do not exist, we show that there is coherence between values adopted by the process at different times, which identifies a characteristic evolution with time. The distribution of the derivative of the process, and the joint-density function of the value of the process and its derivative measured at the same time are evaluated. These enable properties to be calculated analytically such as level crossing statistics and those related to the random telegraph wave. When the stable process is fractal, the proportion of time it spends at zero is finite and some properties of this quantity are evaluated, an optical interpretation for which is provided. (paper)
Generation and monitoring of a discrete stable random process
Hopcraft, K I; Matthews, J O
2002-01-01
A discrete stochastic process with stationary power law distribution is obtained from a death-multiple immigration population model. Emigrations from the population form a random series of events which are monitored by a counting process with finite-dynamic range and response time. It is shown that the power law behaviour of the population is manifested in the intermittent behaviour of the series of events. (letter to the editor)
Spatial birth-and-death processes in random environment
Fernandez, Roberto; Ferrari, Pablo A.; Guerberoff, Gustavo R.
2004-01-01
We consider birth-and-death processes of objects (animals) defined in ${\bf Z}^d$ having unit death rates and random birth rates. For animals with uniformly bounded diameter we establish conditions on the rate distribution under which the following holds for almost all realizations of the birth rates: (i) the process is ergodic with at worst power-law time mixing; (ii) the unique invariant measure has exponential decay of (spatial) correlations; (iii) there exists a perfect-simulation algorithm …
International Nuclear Information System (INIS)
Kirsch, W.; Martinelli, F.
1981-01-01
After the derivation of weak conditions under which the potential for the Schroedinger operator is well defined, the authors state an ergodicity assumption on this potential which ensures that the spectrum of this operator is a fixed non-random set. Random point interaction Hamiltonians are then considered in this framework. Finally, the authors consider a model where, for sufficiently small fluctuations around the equilibrium positions, a finite number of gaps appears. (HSI)
INHOMOGENEITY IN SPATIAL COX POINT PROCESSES – LOCATION DEPENDENT THINNING IS NOT THE ONLY OPTION
Directory of Open Access Journals (Sweden)
Michaela Prokešová
2010-11-01
In the literature on point processes, by far the most popular option for introducing inhomogeneity into a point process model is location dependent thinning (resulting in a second-order intensity-reweighted stationary point process). This produces a very tractable model, and several fast estimation procedures are available. Nevertheless, this model dilutes the interaction (or the geometrical structure) of the original homogeneous model in a particular way. For Markov point processes, several alternative inhomogeneous models have been suggested and investigated in the literature, but this is not so for Cox point processes, the canonical models for clustered point patterns. In this contribution we discuss several other options for defining inhomogeneous Cox point process models that result in point patterns with different types of geometric structure. We further investigate possible parameter estimation procedures for such models.
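Location dependent thinning, the baseline approach the contribution discusses, is easy to sketch: retain each point of a homogeneous pattern with a spatially varying probability p(x). The window, intensity, and retention function below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
lam = 2000.0  # intensity of the homogeneous Poisson process on the unit square

# Homogeneous Poisson point pattern on [0, 1]^2.
n = rng.poisson(lam)
pts = rng.uniform(size=(n, 2))

# Location dependent thinning with retention probability p(x, y) = x:
# the result is an inhomogeneous pattern with intensity lam * x.
keep = rng.uniform(size=n) < pts[:, 0]
thinned = pts[keep]

# Expected retained count: lam * E[p] = lam / 2.
print(n, len(thinned))
```

The thinned pattern inherits the (lack of) interaction of the homogeneous parent, which is exactly the dilution of geometric structure that motivates the alternative constructions discussed in the abstract.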
A randomized controlled trial of single point acupuncture in primary dysmenorrhea.
Liu, Cun-Zhi; Xie, Jie-Ping; Wang, Lin-Peng; Liu, Yu-Qi; Song, Jia-Shan; Chen, Yin-Ying; Shi, Guang-Xia; Zhou, Wei; Gao, Shu-Zhong; Li, Shi-Liang; Xing, Jian-Min; Ma, Liang-Xiao; Wang, Yan-Xia; Zhu, Jiang; Liu, Jian-Ping
2014-06-01
Acupuncture is often used for primary dysmenorrhea, but convincing evidence is lacking because of the low methodological quality of previous studies. We aim to assess the immediate effect of acupuncture at a specific acupoint compared with an unrelated acupoint and a nonacupoint on primary dysmenorrhea. The Acupuncture Analgesia Effect in Primary Dysmenorrhoea-II study is a multicenter controlled trial conducted in six large hospitals of China. Patients who met the inclusion criteria were randomly assigned to a classic acupoint (N = 167), unrelated acupoint (N = 167), or nonacupoint (N = 167) group on a 1:1:1 basis. They received three sessions of electro-acupuncture at a classic acupoint (Sanyinjiao, SP6), an unrelated acupoint (Xuanzhong, GB39), or a nonacupoint location, respectively. The primary outcome was subjective pain as measured by a 100-mm visual analog scale (VAS). Measurements were obtained at 0, 5, 10, 30, and 60 minutes following the first intervention. In addition, patients scored changes of general complaints using Cox retrospective symptom scales (RSS-Cox) and a 7-point verbal rating scale (VRS) during three menstrual cycles. Secondary outcomes included VAS score for average pain, pain total time, additional in-bed time, and the proportion of participants using analgesics during three menstrual cycles. Five hundred and one people underwent random assignment. The primary comparison of VAS scores following the first intervention demonstrated that the classic acupoint group was more effective than both the unrelated acupoint (-4.0 mm, 95% CI -7.1 to -0.9, P = 0.010) and nonacupoint (-4.0 mm, 95% CI -7.0 to -0.9, P = 0.012) groups. However, no significant differences were detected among the three acupuncture groups for RSS-Cox or VRS outcomes. The per-protocol analysis showed a similar pattern. No serious adverse events were noted. Specific acupoint acupuncture produced a statistically, but not clinically, significant effect compared with unrelated acupoint and nonacupoint acupuncture in …
Random sampling of evolution time space and Fourier transform processing
International Nuclear Information System (INIS)
Kazimierczuk, Krzysztof; Zawadzka, Anna; Kozminski, Wiktor; Zhukov, Igor
2006-01-01
Application of the Fourier transform for processing 3D NMR spectra with random sampling of the evolution time space is presented. The 2D FT is calculated for pairs of frequencies, instead of the conventional sequence of one-dimensional transforms. Signal-to-noise ratios and linewidths for different random distributions were investigated by simulations and experiments. The experimental examples include 3D HNCA, HNCACB and 15N-edited NOESY-HSQC spectra of a 13C,15N-labeled ubiquitin sample. The obtained results revealed the general applicability of the proposed method and a significant improvement of resolution in comparison with conventional spectra recorded in the same time.
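The core idea of the abstract above can be illustrated with a minimal sketch: when evolution times are sampled at random rather than on a uniform grid, the spectrum can still be recovered by evaluating the Fourier sum directly at each frequency of interest. The signal parameters and grid choices below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative signal: one decaying resonance at 120 Hz, observed at
# random (non-uniform) evolution times instead of a uniform grid.
f_true, sw, n_samples = 120.0, 1000.0, 256
t = np.sort(rng.uniform(0.0, 0.25, n_samples))          # random time points (s)
signal = np.exp(2j * np.pi * f_true * t) * np.exp(-5.0 * t)

# Direct Fourier transform evaluated on an arbitrary frequency grid:
# S(f) = sum_k s(t_k) * exp(-2*pi*i*f*t_k).  With random sampling this
# explicit sum replaces the FFT over uniformly spaced points.
freqs = np.linspace(0.0, sw / 2, 2000)
spectrum = np.abs(np.exp(-2j * np.pi * np.outer(freqs, t)) @ signal)

f_peak = freqs[np.argmax(spectrum)]
print(f"peak detected at {f_peak:.1f} Hz")
```

Despite the irregular sampling, the resonance is located correctly; the price is a raised sampling-noise floor instead of coherent aliasing artifacts.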
Random Matrices for Information Processing – A Democratic Vision
DEFF Research Database (Denmark)
Cakmak, Burak
The thesis studies three important applications of random matrices to information processing. Our main contribution is that we consider probabilistic systems involving more general random matrix ensembles than the classical ensembles with iid entries, i.e. models that account for statistical dependence between the entries. Specifically, the involved matrices are invariant or fulfill a certain asymptotic freeness condition as their dimensions grow to infinity. Informally speaking, all latent variables contribute to the system model in a democratic fashion – there are no preferred latent variables...
Wang, Li-Ren; Cai, Le-Yi; Lin, Ding-Sheng; Cao, Bin; Li, Zhi-Jie
2017-10-01
Random skin flaps are commonly used for wound repair and reconstruction. Electroacupuncture at the Zusanli point could enhance microcirculation and blood perfusion in random skin flaps. To determine whether electroacupuncture at the Zusanli point can improve the survival of random skin flaps in a rat model, thirty-six male Sprague Dawley rats were randomly divided into 3 groups: control group (no electroacupuncture), Group A (electroacupuncture at a nonacupoint near the Zusanli point), and Group B (electroacupuncture at the Zusanli point). McFarlane flaps were established. On postoperative Day 2, malondialdehyde (MDA) and superoxide dismutase were measured. The flap survival rate was evaluated, inflammation was examined in hematoxylin and eosin-stained slices, and the expression of vascular endothelial growth factor (VEGF) was measured immunohistochemically on Day 7. The mean survival area of the flaps in Group B was significantly larger than that in the control group and Group A. Superoxide dismutase activity and VEGF expression level were significantly higher in Group B than those in the control group and Group A, whereas MDA and inflammation levels in Group B were significantly lower than those in the other 2 groups. Electroacupuncture at the Zusanli point can effectively improve random flap survival.
Solution-Processed Carbon Nanotube True Random Number Generator.
Gaviria Rojas, William A; McMorrow, Julian J; Geier, Michael L; Tang, Qianying; Kim, Chris H; Marks, Tobin J; Hersam, Mark C
2017-08-09
With the growing adoption of interconnected electronic devices in consumer and industrial applications, there is an increasing demand for robust security protocols when transmitting and receiving sensitive data. Toward this end, hardware true random number generators (TRNGs), commonly used to create encryption keys, offer significant advantages over software pseudorandom number generators. However, the vast network of devices and sensors envisioned for the "Internet of Things" will require small, low-cost, and mechanically flexible TRNGs with low computational complexity. These rigorous constraints position solution-processed semiconducting single-walled carbon nanotubes (SWCNTs) as leading candidates for next-generation security devices. Here, we demonstrate the first TRNG using static random access memory (SRAM) cells based on solution-processed SWCNTs that digitize thermal noise to generate random bits. This bit generation strategy can be readily implemented in hardware with minimal transistor and computational overhead, resulting in an output stream that passes standardized statistical tests for randomness. By using solution-processed semiconducting SWCNTs in a low-power, complementary architecture to achieve TRNG, we demonstrate a promising approach for improving the security of printable and flexible electronics.
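The bit-generation strategy described above (digitizing thermal noise in a bistable cell) can be mimicked in a few lines of software. The sketch below is a simplified model under stated assumptions, not the authors' SWCNT circuit: a comparator with a small device-mismatch offset thresholds Gaussian "thermal" noise, and von Neumann debiasing removes the offset-induced bias before any statistical testing.

```python
import numpy as np

rng = np.random.default_rng(42)

# Model of a noisy comparator cell: thermal noise plus a small fixed
# offset (device mismatch) is thresholded to produce raw bits.
offset = 0.1
raw = (rng.normal(0.0, 1.0, 200_000) + offset > 0).astype(np.uint8)

# Von Neumann debiasing: examine non-overlapping pairs, map 01 -> 0 and
# 10 -> 1, and discard 00 and 11.  This removes any fixed bias from
# independent raw bits, at the cost of throughput.
pairs = raw[: len(raw) // 2 * 2].reshape(-1, 2)
keep = pairs[:, 0] != pairs[:, 1]
bits = pairs[keep, 0]

raw_bias = raw.mean()
debiased_bias = bits.mean()
print(f"raw bias {raw_bias:.3f}, debiased bias {debiased_bias:.3f}")
```

The raw stream is visibly biased by the offset, while the debiased stream passes a simple monobit (frequency) check; real TRNG validation uses full statistical suites such as NIST SP 800-22, as the abstract notes.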
Lasso and probabilistic inequalities for multivariate point processes
DEFF Research Database (Denmark)
Hansen, Niels Richard; Reynaud-Bouret, Patricia; Rivoirard, Vincent
2015-01-01
Due to its low computational cost, Lasso is an attractive regularization method for high-dimensional statistical settings. In this paper, we consider multivariate counting processes depending on an unknown function parameter to be estimated by linear combinations of a fixed dictionary. To select...... for multivariate Hawkes processes are proven, which allows us to check these assumptions by considering general dictionaries based on histograms, Fourier or wavelet bases. Motivated by problems of neuronal activity inference, we finally carry out a simulation study for multivariate Hawkes processes and compare our...... methodology with the adaptive Lasso procedure proposed by Zou in (J. Amer. Statist. Assoc. 101 (2006) 1418–1429). We observe an excellent behavior of our procedure. We rely on theoretical aspects for the essential question of tuning our methodology. Unlike adaptive Lasso of (J. Amer. Statist. Assoc. 101 (2006...
Optimal redundant systems for works with random processing time
International Nuclear Information System (INIS)
Chen, M.; Nakagawa, T.
2013-01-01
This paper studies the optimal redundant policies for a manufacturing system processing jobs with random working times. The redundant units of the parallel systems and standby systems are subject to stochastic failures during the continuous production process. First, a job consisting of only one work is considered for both redundant systems and the expected cost functions are obtained. Next, each redundant system with a random number of units is assumed for a single work. The expected cost functions and the optimal expected numbers of units are derived for redundant systems. Subsequently, the production processes of N tandem works are introduced for parallel and standby systems, and the expected cost functions are also summarized. Finally, the number of works is estimated by a Poisson distribution for the parallel and standby systems. Numerical examples are given to demonstrate the optimization problems of redundant systems
Multifractal detrended fluctuation analysis of analog random multiplicative processes
Energy Technology Data Exchange (ETDEWEB)
Silva, L.B.M.; Vermelho, M.V.D. [Instituto de Fisica, Universidade Federal de Alagoas, Maceio - AL, 57072-970 (Brazil); Lyra, M.L. [Instituto de Fisica, Universidade Federal de Alagoas, Maceio - AL, 57072-970 (Brazil)], E-mail: marcelo@if.ufal.br; Viswanathan, G.M. [Instituto de Fisica, Universidade Federal de Alagoas, Maceio - AL, 57072-970 (Brazil)
2009-09-15
We investigate non-Gaussian statistical properties of stationary stochastic signals generated by an analog circuit that simulates a random multiplicative process with weak additive noise. The random noises are originated by thermal shot noise and avalanche processes, while the multiplicative process is generated by a fully analog circuit. The resulting signal describes stochastic time series of current interest in several areas such as turbulence, finance, biology and environment, which exhibit power-law distributions. Specifically, we study the correlation properties of the signal by employing a detrended fluctuation analysis and explore its multifractal nature. The singularity spectrum is obtained and analyzed as a function of the control circuit parameter that tunes the asymptotic power-law form of the probability distribution function.
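Detrended fluctuation analysis, the correlation tool used in the abstract above, is simple to sketch. The following is a minimal first-order DFA implementation, an assumption-laden illustration rather than the authors' analysis pipeline: for uncorrelated noise the fluctuation function F(s) should scale with exponent close to 0.5.

```python
import numpy as np

def dfa(x, scales):
    """First-order detrended fluctuation analysis: returns F(s) per scale."""
    profile = np.cumsum(x - np.mean(x))            # integrated signal
    fluctuations = []
    for s in scales:
        n_seg = len(profile) // s
        segs = profile[: n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        rms = []
        for seg in segs:                            # linear detrend per segment
            coef = np.polyfit(t, seg, 1)
            rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluctuations.append(np.sqrt(np.mean(rms)))
    return np.asarray(fluctuations)

rng = np.random.default_rng(1)
x = rng.normal(size=2 ** 14)                        # uncorrelated noise
scales = np.array([16, 32, 64, 128, 256])
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(f"estimated scaling exponent alpha = {alpha:.2f}")
```

Long-range correlated or multiplicative signals such as those studied in the paper yield exponents away from 0.5, and the multifractal extension (MF-DFA) repeats the computation over q-th order moments of the segment variances.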
Detection of bursts in extracellular spike trains using hidden semi-Markov point process models.
Tokdar, Surya; Xi, Peiyi; Kelly, Ryan C; Kass, Robert E
2010-08-01
Neurons in vitro and in vivo have epochs of bursting or "up state" activity during which firing rates are dramatically elevated. Various methods of detecting bursts in extracellular spike trains have appeared in the literature, the most widely used apparently being Poisson Surprise (PS). A natural description of the phenomenon assumes (1) there are two hidden states, which we label "burst" and "non-burst," (2) the neuron evolves stochastically, switching at random between these two states, and (3) within each state the spike train follows a time-homogeneous point process. If in (2) the transitions from non-burst to burst and burst to non-burst states are memoryless, this becomes a hidden Markov model (HMM). For HMMs, the state transitions follow exponential distributions, and are highly irregular. Because observed bursting may in some cases be fairly regular-exhibiting inter-burst intervals with small variation-we relaxed this assumption. When more general probability distributions are used to describe the state transitions the two-state point process model becomes a hidden semi-Markov model (HSMM). We developed an efficient Bayesian computational scheme to fit HSMMs to spike train data. Numerical simulations indicate the method can perform well, sometimes yielding very different results than those based on PS.
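The generative side of the two-state model described above can be sketched directly. The snippet below is a hedged illustration with made-up parameters, not the authors' Bayesian fitting scheme: dwell times in each hidden state are gamma-distributed (the semi-Markov relaxation of exponential HMM transitions), and spikes within a state follow a homogeneous Poisson process at a state-specific rate.

```python
import numpy as np

rng = np.random.default_rng(7)

# State-dependent firing rates (Hz) and gamma dwell-time parameters.
# Gamma shape > 1 gives more regular dwell times than the exponential
# dwell times of a plain HMM.
rates = {"non-burst": 5.0, "burst": 50.0}
dwell = {"non-burst": (4.0, 0.5), "burst": (4.0, 0.05)}   # (shape, scale) in s

def simulate(t_total=60.0):
    t, state, spikes, segments = 0.0, "non-burst", [], []
    while t < t_total:
        shape, scale = dwell[state]
        d = rng.gamma(shape, scale)                  # semi-Markov dwell time
        n = rng.poisson(rates[state] * d)            # homogeneous firing in state
        spikes.extend(np.sort(rng.uniform(t, t + d, n)))
        segments.append((state, t, t + d))
        t += d
        state = "burst" if state == "non-burst" else "non-burst"
    return np.array(spikes), segments

spikes, segments = simulate()
burst_time = sum(b - a for s, a, b in segments if s == "burst")
print(f"{len(spikes)} spikes, {burst_time:.1f} s spent bursting")
```

Inference then runs in the opposite direction: given only `spikes`, recover the hidden segment labels, which is what the paper's Bayesian computational scheme does.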
Modelling financial high frequency data using point processes
DEFF Research Database (Denmark)
Hautsch, Nikolaus; Bauwens, Luc
In this chapter written for a forthcoming Handbook of Financial Time Series to be published by Springer-Verlag, we review the econometric literature on dynamic duration and intensity processes applied to high frequency financial data, which was boosted by the work of Engle and Russell (1997...
Özkan, Behzat; Ünlüer, Erden E; Akyol, Pinar Y; Karagöz, Arif; Bayata, Mehmet S; Akoğlu, Haldun; Oyar, Orhan; Dalli, Ayşe; Topal, Fatih E
2015-12-01
We aimed to determine the accuracies of point-of-care ultrasound (PoCUS) and stethoscopes as part of the physical examinations of patients with dyspnea. Three emergency medicine specialists in each of two groups of ultrasound and stethoscope performers underwent didactic and hands-on training on PoCUS and stethoscope usage. All the patients enrolled were randomized to one of two predetermined PoCUS or stethoscope groups. The diagnostic performance of ultrasonography was higher than that of the stethoscope in the diagnoses of heart failure (90 vs. 86%, 1.00 vs. 0.89, and 5.00 vs. 4.92, respectively) and pneumonia (90 vs. 86.7%, 0.75 vs. 0.73, and 16.50 vs. 13.82, respectively). No significant differences were observed in the utility parameters of these modalities in these diagnoses. Although some authors argue that it is time to abandon the 'archaic tools' of past centuries, we believe that it is too early to discontinue the use of the stethoscope.
Random migration processes between two stochastic epidemic centers.
Sazonov, Igor; Kelbert, Mark; Gravenor, Michael B
2016-04-01
We consider the epidemic dynamics in stochastic interacting population centers coupled by random migration. Both the epidemic and the migration processes are modeled by Markov chains. We derive explicit formulae for the probability distribution of the migration process, and explore the dependence of outbreak patterns on initial parameters, population sizes and coupling parameters, using analytical and numerical methods. We show the importance of considering the movement of resident and visitor individuals separately. The mean field approximation for a general migration process is derived and an approximate method that allows the computation of statistical moments for networks with highly populated centers is proposed and tested numerically. Copyright © 2016 Elsevier Inc. All rights reserved.
Apparent scale correlations in a random multifractal process
DEFF Research Database (Denmark)
Cleve, Jochen; Schmiegel, Jürgen; Greiner, Martin
2008-01-01
We discuss various properties of a homogeneous random multifractal process, which are related to the issue of scale correlations. By design, the process has no built-in scale correlations. However, when it comes to observables like breakdown coefficients, which are based on a coarse-graining of the multifractal field, scale correlations do appear. In the log-normal limit of the model process, the conditional distributions and moments of breakdown coefficients reproduce the observations made in fully developed small-scale turbulence. These findings help to understand several puzzling empirical details...
Lasso and probabilistic inequalities for multivariate point processes
Hansen, Niels Richard; Reynaud-Bouret, Patricia; Rivoirard, Vincent
2012-01-01
Due to its low computational cost, Lasso is an attractive regularization method for high-dimensional statistical settings. In this paper, we consider multivariate counting processes depending on an unknown function parameter to be estimated by linear combinations of a fixed dictionary. To select coefficients, we propose an adaptive $\\ell_{1}$-penalization methodology, where data-driven weights of the penalty are derived from new Bernstein type inequalities for martingales. Oracle inequalities...
Nuclear structure and weak rates of heavy waiting point nuclei under rp-process conditions
Nabi, Jameel-Un; Böyükata, Mahmut
2017-01-01
The structure and the weak interaction mediated rates of the heavy waiting point (WP) nuclei 80Zr, 84Mo, 88Ru, 92Pd and 96Cd along the N = Z line were studied within the interacting boson model-1 (IBM-1) and the proton-neutron quasi-particle random phase approximation (pn-QRPA). The energy levels of the N = Z WP nuclei were calculated by fitting the essential parameters of the IBM-1 Hamiltonian, and their geometric shapes were predicted by plotting potential energy surfaces (PESs). Half-lives, continuum electron capture rates, positron decay rates, electron capture cross sections of WP nuclei, energy rates of β-delayed protons and their emission probabilities were later calculated using the pn-QRPA. The calculated Gamow-Teller strength distributions were compared with previous calculations. We present positron decay and continuum electron capture rates on these WP nuclei under rp-process conditions using the same model. For the rp-process conditions, the calculated total weak rates are twice the Skyrme HF+BCS+QRPA rates for 80Zr. For the remaining nuclei the two calculations compare well. The electron capture rates are significant and compete well with the corresponding positron decay rates under rp-process conditions. The findings of the present study support the view that electron capture rates form an integral part of the weak rates under rp-process conditions and have an important role in nuclear model calculations.
Network formation determined by the diffusion process of random walkers
International Nuclear Information System (INIS)
Ikeda, Nobutoshi
2008-01-01
We studied the diffusion process of random walkers in networks formed by their own traces. This model considers the rise and fall of links determined by the frequency of transports of random walkers. In order to examine the relation between the formed network and the diffusion process, a situation in which multiple random walkers start from the same vertex is investigated. The difference in the diffusion rate of random walkers according to the dimension of the initial lattice is decisive for the time evolution of the networks. For example, complete subgraphs can be formed on a one-dimensional lattice, while a graph with a power-law vertex degree distribution is formed on a two-dimensional lattice. We derived some formulae for predicting network changes in the 1D case, such as the time evolution of the size of nearly complete subgraphs and conditions for their collapse. The networks formed on the 2D lattice are characterized by the existence of clusters of highly connected vertices and their lifetime. As the lifetime of such clusters tends to be small, the exponent of the power-law distribution changes from γ ≅ 1-2 to γ ≅ 3.
The S-Process Branching-Point at 205Pb
Tonchev, Anton; Tsoneva, N.; Bhatia, C.; Arnold, C. W.; Goriely, S.; Hammond, S. L.; Kelley, J. H.; Kwan, E.; Lenske, H.; Piekarewicz, J.; Raut, R.; Rusev, G.; Shizuma, T.; Tornow, W.
2017-09-01
Accurate neutron-capture cross sections for radioactive nuclei near the line of beta stability are crucial for understanding s-process nucleosynthesis. However, neutron-capture cross sections for short-lived radionuclides are difficult to measure because the measurements require both highly radioactive samples and intense neutron sources. We consider photon scattering using monoenergetic and 100% linearly polarized photon beams to obtain the photoabsorption cross section on 206Pb below the neutron separation energy. This observable becomes an essential ingredient in Hauser-Feshbach statistical model calculations of capture cross sections on 205Pb. The newly obtained photoabsorption information is also used to estimate the Maxwellian-averaged radiative cross section of 205Pb(n,γ)206Pb at 30 keV. The astrophysical impact of this measurement on s-process nucleosynthesis is discussed. This work was performed under the auspices of the US DOE by LLNL under Contract DE-AC52-07NA27344.
The Hinkley Point decision: An analysis of the policy process
International Nuclear Information System (INIS)
Thomas, Stephen
2016-01-01
In 2006, the British government launched a policy to build nuclear power reactors based on a claim that the power produced would be competitive with fossil fuel and would require no public subsidy. A decade later, it is not clear how many, if any, orders will be placed, and the claims on costs and subsidies have proved false. Despite this failure to deliver, the policy is still being pursued with undiminished determination. The finance model that is now proposed is seen as a model other European countries can follow, so the success or otherwise of the British nuclear programme will have implications outside the UK. This paper contends that the checks and balances that should weed out misguided policies have failed. It argues that the most serious failure is with the civil service and its inability to provide politicians with high quality advice – truth to power. It concludes that the failure is likely to be due to the unwillingness of politicians to listen to opinions that conflict with their beliefs. Other weaknesses include the lack of energy expertise in the media, the unwillingness of the public to engage in the policy process and the impotence of Parliamentary Committees. - Highlights: •Britain's nuclear power policy is failing due to high costs and problems of finance. •This has implications for European countries who want to use the same financing model. •The continued pursuit of a failing policy is due to poor advice from civil servants. •Lack of expertise in the media and lack of public engagement have contributed. •Parliamentary processes have not provided proper critical scrutiny.
Wang, Kang-Feng; Zhang, Li-Juan; Lu, Feng; Lu, Yong-Hui; Yang, Chuan-Hua
2016-06-01
To provide an evidence-based overview regarding the efficacy of Ashi points stimulation for the treatment of shoulder pain. A comprehensive search [PubMed, Chinese Biomedical Literature Database, China National Knowledge Infrastructure (CNKI), Chongqing Weipu Database for Chinese Technical Periodicals (VIP) and Wanfang Database] was conducted to identify randomized or quasi-randomized controlled trials that evaluated the effectiveness of Ashi points stimulation for shoulder pain compared with conventional treatment. The methodological quality of the included studies was assessed using the Cochrane risk of bias tool. RevMan 5.0 was used for data synthesis. Nine trials were included. Seven studies assessed the effectiveness of Ashi points stimulation on response rate compared with conventional acupuncture. Their results suggested a significant effect in favour of Ashi points stimulation [odds ratio (OR): 5.89, 95% confidence interval (CI): 2.97 to 11.67]. A firm conclusion could not be reached until further studies of high quality are available.
Computer simulation of vortex pinning in type II superconductors. II. Random point pins
International Nuclear Information System (INIS)
Brandt, E.H.
1983-01-01
Pinning of vortices in a type II superconductor by randomly positioned identical point pins is simulated using the two-dimensional method described in a previous paper (Part I). The system is characterized by the vortex and pin numbers (N_v, N_p), the vortex and pin interaction ranges (R_v, R_p), and the amplitude of the pin potential A_p. The computation is performed for many cases: dilute or dense, sharp or soft, attractive or repulsive, weak or strong pins, and ideal or amorphous vortex lattice. The total pinning force F as a function of the mean vortex displacement X increases first linearly (over a distance usually much smaller than the vortex spacing and than R_p) and then saturates, fluctuating about its average F̄. We interpret F̄ as the maximum pinning force j_c·B of a large specimen. For weak pins the prediction of Larkin and Ovchinnikov for two-dimensional collective pinning is confirmed: F̄ = const·W̄/(R_p c_66), where W̄ is the mean square pinning force and c_66 is the shear modulus of the vortex lattice. If the initial vortex lattice is chosen highly defective ("amorphous") the constant is 1.3-3 times larger than for the ideal triangular lattice. This finding may explain the often observed "history effect." The function F̄(A_p) exhibits a jump, which for dilute, sharp, attractive pins occurs close to the "threshold value" predicted for isolated pins by Labusch. This jump reflects the onset of plastic deformation of the vortex lattice, and in some cases of vortex trapping, but is not a genuine threshold.
Matrix product approach for the asymmetric random average process
International Nuclear Information System (INIS)
Zielen, F; Schadschneider, A
2003-01-01
We consider the asymmetric random average process which is a one-dimensional stochastic lattice model with nearest-neighbour interaction but continuous and unbounded state variables. First, the explicit functional representations, so-called beta densities, of all local interactions leading to steady states of product measure form are rigorously derived. This also completes an outstanding proof given in a previous publication. Then we present an alternative solution for the processes with factorized stationary states by using a matrix product ansatz. Due to continuous state variables we obtain a matrix algebra in the form of a functional equation which can be solved exactly
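The asymmetric random average process described above is easy to simulate, which makes its defining features concrete. The sketch below uses an illustrative parallel-update rule on a ring, with uniform transfer fractions: each site ships a random fraction of its continuous mass to its right neighbour, and total mass is conserved by construction. The specific fraction distribution is an assumption for illustration; the paper's beta-density results characterize exactly which choices give product-form stationary states.

```python
import numpy as np

rng = np.random.default_rng(3)

# Asymmetric random average process on a ring: in parallel, every site
# ships a random fraction r_i ~ U(0,1) of its continuous, unbounded mass
# to its right neighbour.  Total mass is conserved exactly.
L, steps = 1000, 5000
mass = np.ones(L)
for _ in range(steps):
    r = rng.uniform(0.0, 1.0, L)
    transfer = r * mass
    mass = (1.0 - r) * mass + np.roll(transfer, 1)

print(f"total mass {mass.sum():.6f}, single-site variance {mass.var():.3f}")
```

After relaxation, the empirical single-site mass distribution can be compared against the beta-density stationary forms derived in the paper.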
Directory of Open Access Journals (Sweden)
Z. Lari
2012-07-01
Over the past few years, LiDAR systems have been established as a leading technology for the acquisition of high density point clouds over physical surfaces. These point clouds are processed for the extraction of geo-spatial information. Local point density is one of the most important properties of the point cloud that highly affects the performance of data processing techniques and the quality of the extracted information. Therefore, it is necessary to define a standard methodology for the estimation of local point density indices to be considered for the precise processing of LiDAR data. Current definitions of local point density indices, which only consider the 2D neighbourhood of individual points, are not appropriate for 3D LiDAR data and cannot be applied to laser scans from different platforms. In order to resolve the drawbacks of these methods, this paper proposes several approaches for the estimation of the local point density index which take into account the 3D relationship among the points and the physical properties of the surfaces they belong to. In the simplest approach, an approximate value of the local point density for each point is defined while considering the 3D relationship among the points. In the other approaches, the local point density is estimated by considering the 3D neighbourhood of the point in question and the physical properties of the surface which encloses this point. The physical properties of the surfaces enclosing the LiDAR points are assessed through eigen-value analysis of the 3D neighbourhood of individual points and adaptive cylinder methods. This paper discusses these approaches and highlights their impact on various LiDAR data processing activities (i.e., neighbourhood definition, region growing, segmentation, boundary detection, and classification). Experimental results from airborne and terrestrial LiDAR data verify the efficacy of considering local point density variation for
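A minimal version of a 3D local point density index can be sketched with a k-nearest-neighbour estimator: the density at a point is k divided by the volume of the sphere reaching its k-th neighbour. This brute-force illustration is an assumption of ours, not the paper's adaptive-cylinder or eigen-analysis methods, but it shows why a 3D (rather than 2D) neighbourhood definition matters.

```python
import numpy as np

def local_density_3d(points, k=10):
    """Local point density via the k-th nearest neighbour in 3D:
    density = k / volume of the sphere reaching the k-th neighbour."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    r_k = np.sort(d, axis=1)[:, k]                # column 0 is the point itself
    return k / ((4.0 / 3.0) * np.pi * r_k ** 3)

rng = np.random.default_rng(5)
dense = rng.uniform(0.0, 1.0, (400, 3))           # ~400 points per unit volume
sparse = rng.uniform(10.0, 14.0, (50, 3))         # ~0.8 points per unit volume
cloud = np.vstack([dense, sparse])                # two well-separated regions
rho = local_density_3d(cloud)
print(f"median density, dense region {np.median(rho[:400]):.0f}, "
      f"sparse region {np.median(rho[400:]):.2f}")
```

For large clouds the O(n²) distance matrix would be replaced by a spatial index (e.g., a k-d tree), and edge effects near the region boundary bias the estimate downward.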
A customizable stochastic state point process filter (SSPPF) for neural spiking activity.
Xin, Yao; Li, Will X Y; Min, Biao; Han, Yan; Cheung, Ray C C
2013-01-01
Stochastic State Point Process Filter (SSPPF) is effective for adaptive signal processing. In particular, it has been successfully applied to neural signal coding/decoding in recent years. Recent work has proven its efficiency in non-parametric coefficient tracking in modeling of the mammalian nervous system. However, existing SSPPF implementations have only been realized on commercial software platforms, which limits their computational capability. In this paper, the first hardware architecture of SSPPF has been designed and successfully implemented on a field-programmable gate array (FPGA), providing a more efficient means for coefficient tracking in a well-established generalized Laguerre-Volterra model for mammalian hippocampal spiking activity research. By exploring the intrinsic parallelism of the FPGA, the proposed architecture is able to process matrices or vectors of arbitrary size, and is efficiently scalable. Experimental results show its superior performance compared to the software implementation, while maintaining numerical precision. This architecture can also be potentially utilized in future hippocampal cognitive neural prosthesis design.
Characterisation of random Gaussian and non-Gaussian stress processes in terms of extreme responses
Directory of Open Access Journals (Sweden)
Colin Bruno
2015-01-01
In the field of military land vehicles, random vibration processes generated by all-terrain wheeled vehicles in motion are not classical stochastic processes with a stationary and Gaussian nature. Non-stationarity of processes induced by the variability of the vehicle speed does not form a major difficulty, because the designer can have good control over the vehicle speed by characterising the histogram of instantaneous speed of the vehicle during an operational situation. Beyond this non-stationarity problem, the hard point clearly lies in the fact that the random processes are not Gaussian and are generated mainly by the non-linear behaviour of the undercarriage and the strong occurrence of shocks generated by roughness of the terrain. This non-Gaussian nature is expressed particularly by very high kurtosis levels that can affect the design of structures under extreme stresses conventionally acquired by spectral approaches, inherent to Gaussian processes and based essentially on spectral moments of stress processes. Due to these technical considerations, techniques for characterisation of random excitation processes generated by this type of carrier need to be changed, by proposing innovative characterisation methods based on time domain approaches, as described in the body of the text, rather than spectral domain approaches.
Order out of Randomness: Self-Organization Processes in Astrophysics
Aschwanden, Markus J.; Scholkmann, Felix; Béthune, William; Schmutz, Werner; Abramenko, Valentina; Cheung, Mark C. M.; Müller, Daniel; Benz, Arnold; Chernov, Guennadi; Kritsuk, Alexei G.; Scargle, Jeffrey D.; Melatos, Andrew; Wagoner, Robert V.; Trimble, Virginia; Green, William H.
2018-03-01
Self-organization is a property of dissipative nonlinear processes that are governed by a global driving force and a local positive feedback mechanism, which creates regular geometric and/or temporal patterns, and decreases the entropy locally, in contrast to random processes. Here we investigate for the first time a comprehensive number of (17) self-organization processes that operate in planetary physics, solar physics, stellar physics, galactic physics, and cosmology. Self-organizing systems create spontaneous " order out of randomness", during the evolution from an initially disordered system to an ordered quasi-stationary system, mostly by quasi-periodic limit-cycle dynamics, but also by harmonic (mechanical or gyromagnetic) resonances. The global driving force can be due to gravity, electromagnetic forces, mechanical forces (e.g., rotation or differential rotation), thermal pressure, or acceleration of nonthermal particles, while the positive feedback mechanism is often an instability, such as the magneto-rotational (Balbus-Hawley) instability, the convective (Rayleigh-Bénard) instability, turbulence, vortex attraction, magnetic reconnection, plasma condensation, or a loss-cone instability. Physical models of astrophysical self-organization processes require hydrodynamic, magneto-hydrodynamic (MHD), plasma, or N-body simulations. Analytical formulations of self-organizing systems generally involve coupled differential equations with limit-cycle solutions of the Lotka-Volterra or Hopf-bifurcation type.
Exact two-point resistance, and the simple random walk on the complete graph minus N edges
Energy Technology Data Exchange (ETDEWEB)
Chair, Noureddine, E-mail: n.chair@ju.edu.jo
2012-12-15
An analytical approach is developed to obtain the exact expressions for the two-point resistance and the total effective resistance of the complete graph minus N edges of the opposite vertices. These expressions are written in terms of certain numbers that we introduce, which we call the Bejaia and the Pisa numbers; these numbers are the natural generalizations of the bisected Fibonacci and Lucas numbers. The correspondence between random walks and the resistor networks is then used to obtain the exact expressions for the first passage and mean first passage times on this graph. - Highlights: • We obtain exact formulas for the two-point resistance of the complete graph minus N edges. • We obtain also the total effective resistance of this graph. • We modified Schwatt's formula on trigonometrical power sums to suit our computations. • We introduced the generalized bisected Fibonacci and Lucas numbers: the Bejaia and the Pisa numbers. • The first passage and mean first passage times of the random walks have exact expressions.
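The resistance quantities above can be checked numerically through the graph Laplacian: the two-point resistance is R_uv = L⁺_uu + L⁺_vv − 2L⁺_uv, where L⁺ is the Moore-Penrose pseudoinverse. The sketch below verifies the standard closed form R_uv = 2/n for the complete graph K_n and the effect of deleting one edge; it is a generic sanity check, not a reproduction of the paper's Bejaia/Pisa-number formulas.

```python
import numpy as np

def two_point_resistance(adj, u, v):
    """Effective resistance between u and v via the pseudoinverse of the
    graph Laplacian: R_uv = L+_uu + L+_vv - 2 L+_uv."""
    L = np.diag(adj.sum(axis=1)) - adj
    Lp = np.linalg.pinv(L)
    return Lp[u, u] + Lp[v, v] - 2.0 * Lp[u, v]

n = 6
K_n = np.ones((n, n)) - np.eye(n)                 # complete graph on n vertices
R = two_point_resistance(K_n, 0, 1)               # known closed form: 2/n

# Deleting the edge (0, 1) leaves the parallel "rest" of the network,
# whose resistance between the endpoints is 2/(n - 2).
K_minus = K_n.copy()
K_minus[0, 1] = K_minus[1, 0] = 0.0
R_minus = two_point_resistance(K_minus, 0, 1)
print(f"R(K_{n}) = {R:.6f}, after removing one edge = {R_minus:.6f}")
```

The 2/(n − 2) value follows from the parallel-resistor identity: the deleted unit edge in parallel with the rest must reproduce 2/n.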
Development and evaluation of spatial point process models for epidermal nerve fibers.
Olsbo, Viktor; Myllymäki, Mari; Waller, Lance A; Särkkä, Aila
2013-06-01
We propose two spatial point process models for the spatial structure of epidermal nerve fibers (ENFs) across human skin. The models derive from two point processes, Φb and Φe, describing the locations of the base and end points of the fibers. Each point of Φe (the end point process) is connected to a unique point in Φb (the base point process). In the first model, both Φe and Φb are Poisson processes, yielding a null model of uniform coverage of the skin by end points and general baseline results and reference values for moments of key physiologic indicators. The second model provides a mechanistic model to generate end points for each base, and we model the branching structure more directly by defining Φe as a cluster process conditioned on the realization of Φb as its parent points. In both cases, we derive distributional properties for observable quantities of direct interest to neurologists such as the number of fibers per base, and the direction and range of fibers on the skin. We contrast both models by fitting them to data from skin blister biopsy images of ENFs and provide inference regarding physiological properties of ENFs. Copyright © 2013 Elsevier Inc. All rights reserved.
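The second model described above (end points clustered around Poisson base points) is essentially a Thomas-type cluster process, which can be simulated in a few lines. The parameters below are illustrative assumptions, not fitted values from the skin-biopsy data: bases follow a homogeneous Poisson process, each base spawns a Poisson number of end points, and offsets are isotropic Gaussian.

```python
import numpy as np

rng = np.random.default_rng(11)

# Base point process: homogeneous Poisson on a W x W observation window.
W, lam_base, mu, sigma = 10.0, 2.0, 5.0, 0.3
n_base = rng.poisson(lam_base * W * W)
bases = rng.uniform(0.0, W, (n_base, 2))

# End point process: each base spawns a Poisson(mu) number of end points,
# displaced by isotropic Gaussian offsets (a Thomas-type cluster process).
counts = rng.poisson(mu, n_base)
ends = np.repeat(bases, counts, axis=0) + rng.normal(0.0, sigma, (counts.sum(), 2))
parent = np.repeat(np.arange(n_base), counts)     # base index of each end point

print(f"{n_base} base points, {len(ends)} end points, "
      f"mean fibers per base {counts.mean():.2f}")
```

Observable summaries of direct interest in the paper, such as the number of fibers per base or the range of offsets, fall out of `counts` and `ends - bases[parent]`.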
Critical Control Points in the Processing of Cassava Tuber for Ighu ...
African Journals Online (AJOL)
Determination of the critical control points in the processing of cassava tuber into Ighu was carried out. The critical control points were determined according to the Codex guidelines for the application of the HACCP system by conducting hazard analysis. Hazard analysis involved proper examination of each processing step ...
Distinguishing different types of inhomogeneity in Neyman-Scott point processes
Czech Academy of Sciences Publication Activity Database
Mrkvička, Tomáš
2014-01-01
Vol. 16, No. 2 (2014), pp. 385-395 ISSN 1387-5841 Institutional support: RVO:60077344 Keywords: clustering * growing clusters * inhomogeneous cluster centers * inhomogeneous point process * location dependent scaling * Neyman-Scott point process Subject RIV: BA - General Mathematics Impact factor: 0.913, year: 2014
The importance of topographically corrected null models for analyzing ecological point processes.
McDowall, Philip; Lynch, Heather J
2017-07-01
Analyses of point process patterns and related techniques (e.g., MaxEnt) make use of the expected number of occurrences per unit area and second-order statistics based on the distance between occurrences. Ecologists working with point process data often assume that points exist on a two-dimensional x-y plane or within a three-dimensional volume, when in fact many observed point patterns are generated on a two-dimensional surface existing within three-dimensional space. For many surfaces, however, such as the topography of landscapes, the projection from the surface to the x-y plane preserves neither area nor distance. As such, when these point patterns are implicitly projected to and analyzed in the x-y plane, our expectations of the point pattern's statistical properties may not be met. When used in hypothesis testing, we find that the failure to account for the topography of the generating surface may bias statistical tests that incorrectly identify clustering and, furthermore, may bias coefficients in inhomogeneous point process models that incorporate slope as a covariate. We demonstrate the circumstances under which this bias is significant, and present simple methods that allow point processes to be simulated with corrections for topography. These point patterns can then be used to generate "topographically corrected" null models against which observed point processes can be compared. © 2017 by the Ecological Society of America.
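One simple way to build such topographically corrected null models is thinning: sample candidate points in the x-y plane and accept each with probability proportional to the local surface-area element √(1 + f_x² + f_y²), so that the retained pattern is homogeneous on the surface rather than in the plane. A hypothetical sketch (the gradient function and the weight bound `wmax` are assumptions for illustration, not the authors' exact method):

```python
import math, random

def simulate_on_surface(n_candidates, grad, wmax, rng):
    """Thin candidate x-y points in the unit square so the retained
    pattern is homogeneous *on the surface* whose gradient is
    grad(x, y) -> (fx, fy).  Acceptance weight is the local area
    element sqrt(1 + fx^2 + fy^2), normalised by its bound wmax."""
    kept = []
    for _ in range(n_candidates):
        x, y = rng.random(), rng.random()
        fx, fy = grad(x, y)
        w = math.sqrt(1.0 + fx * fx + fy * fy)
        if rng.random() < w / wmax:
            kept.append((x, y))
    return kept

rng = random.Random(0)
# flat terrain: the area element is 1 everywhere, so nothing is thinned
flat = simulate_on_surface(100, lambda x, y: (0.0, 0.0), 1.0, rng)
print(len(flat))   # → 100
```

On sloped terrain the steeper parts carry more surface area per unit of projected x-y area, so proportionally more candidates survive there, which is exactly the correction the paragraph argues is missing from planar analyses.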
Zhang, Ning; Luxenhofer, Robert; Jordan, Rainer
2012-01-01
random and block copolymers. Their aqueous solutions displayed a distinct thermoresponsive behavior as a function of the side-chain composition and sequence. The cloud point (CP) of MBs with random copolymer side chains is a linear function
Polymers and Random graphs: Asymptotic equivalence to branching processes
International Nuclear Information System (INIS)
Spouge, J.L.
1985-01-01
In 1974, Falk and Thomas did a computer simulation of Flory's Equireactive RA_f Polymer model, rings forbidden and rings allowed. Asymptotically, the Rings Forbidden model tended to Stockmayer's RA_f distribution (in which the sol distribution "sticks" after gelation), while the Rings Allowed model tended to the Flory version of the RA_f distribution. In 1965, Whittle introduced the Tree and Pseudomultigraph models. We show that these random graphs generalize the Falk and Thomas models by incorporating first-shell substitution effects. Moreover, asymptotically the Tree model displays postgelation "sticking." Hence this phenomenon results from the absence of rings and occurs independently of equireactivity. We also show that the Pseudomultigraph model is asymptotically identical to the Branching Process model introduced by Gordon in 1962. This provides a possible basis for the Branching Process model in standard statistical mechanics.
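The branching-process viewpoint makes quantities like survival past gelation computable: for a Galton-Watson process with Poisson(λ) offspring, the extinction probability q is the smallest fixed point of q = e^{λ(q−1)} and can be found by iteration. A small sketch of that generic branching-process arithmetic (not the Tree/Pseudomultigraph models themselves):

```python
import math

def extinction_probability(lam, iters=200):
    """Smallest fixed point of q = G(q) for Poisson(lam) offspring,
    found by iterating from q = 0 (converges monotonically upward)."""
    q = 0.0
    for _ in range(iters):
        q = math.exp(lam * (q - 1.0))
    return q

# supercritical case lam = 2: a known fixed point near 0.2032
print(round(extinction_probability(2.0), 4))  # → 0.2032
```

For λ ≤ 1 the iteration converges to 1 (extinction is certain), which is the branching-process analogue of being below the gel point.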
Nonstationary random acoustic and electromagnetic fields as wave diffusion processes
International Nuclear Information System (INIS)
Arnaut, L R
2007-01-01
We investigate the effects of relatively rapid variations of the boundaries of an overmoded cavity on the stochastic properties of its interior acoustic or electromagnetic field. For quasi-static variations, this field can be represented as an ideal incoherent and statistically homogeneous isotropic random scalar or vector field, respectively. A physical model is constructed showing that the field dynamics can be characterized as a generalized diffusion process. The Langevin-Itô and Fokker-Planck equations are derived and their associated statistics and distributions for the complex analytic field, its magnitude and energy density are computed. The energy diffusion parameter is found to be proportional to the square of the ratio of the standard deviation of the source field to the characteristic time constant of the dynamic process, but is independent of the initial energy density, to first order. The energy drift vanishes in the asymptotic limit. The time-energy probability distribution is in general not separable, as a result of nonstationarity. A general solution of the Fokker-Planck equation is obtained in integral form, together with explicit closed-form solutions for several asymptotic cases. The findings extend known results on statistics and distributions of quasi-stationary ideal random fields (pure diffusions), which are retrieved as special cases.
Directory of Open Access Journals (Sweden)
Xian-Liang Liu
2015-01-01
The purpose of this study was to evaluate the effectiveness of acupuncture-point stimulation (APS) in postoperative pain control compared with sham/placebo acupuncture or standard treatments (usual care or no treatment). Only randomized controlled trials (RCTs) were included. Meta-analysis results indicated that APS interventions improved VAS scores significantly and also reduced total morphine consumption. No serious APS-related adverse effects (AEs) were reported. There is Level I evidence for the effectiveness of body points plaster therapy, and Level II evidence for body points electroacupuncture (EA), body points acupressure, body points APS for abdominal surgery patients, auricular points seed embedding, manual auricular acupuncture, and auricular EA. We obtained Level III evidence for body points APS in patients who underwent cardiac surgery and cesarean section and for auricular-point stimulation in patients who underwent abdominal surgery. There is insufficient evidence to conclude that APS is an effective postoperative pain therapy in surgical patients, although the evidence does support the conclusion that APS can reduce analgesic requirements without AEs. The best level of evidence was not adequate in most subgroups. Some limitations of this study may have affected the results, possibly leading to an overestimation of APS effects.
DEFF Research Database (Denmark)
Berggreen, S.; Wiik, E.; Lund, Hans
2012-01-01
The aim of this study was to evaluate the efficacy of myofascial trigger point massage in the muscles of the head, neck and shoulders regarding pain in the treatment of females with chronic tension-type headache. They were randomized into either a treatment group (n = 20) (one session of trigger point massage per week for 10 weeks) or a control group receiving no treatment (n = 19). The patients kept a diary to record their pain on a visual analogue scale (VAS) and the daily intake of drugs (mg) during the 4 weeks before and after the treatment period. The McGill Pain Questionnaire …: 8.8 (95% CI 0.1–17.4), p = 0.047. Furthermore, a significant decrease in the number of trigger points was observed in the treatment group compared with the control group. Myofascial trigger point massage has a beneficial effect on pain in female patients with chronic tension-type headache.
International Nuclear Information System (INIS)
Mironowicz, Piotr; Tavakoli, Armin; Hameedi, Alley; Marques, Breno; Bourennane, Mohamed; Pawłowski, Marcin
2016-01-01
Quantum communication with systems of dimension larger than two provides advantages in information processing tasks. Examples include higher rates of key distribution and random number generation. The main disadvantage of using such multi-dimensional quantum systems is the increased complexity of the experimental setup. Here, we analyze a not-so-obvious problem: the relation between randomness certification and computational requirements of the post-processing of experimental data. In particular, we consider semi-device independent randomness certification from an experiment using a four dimensional quantum system to violate the classical bound of a random access code. Using state-of-the-art techniques, a smaller quantum violation requires more computational power to demonstrate randomness, which at some point becomes impossible with today’s computers although the randomness is (probably) still there. We show that by dedicating more input settings of the experiment to randomness certification, then by more computational postprocessing of the experimental data which corresponds to a quantum violation, one may increase the amount of certified randomness. Furthermore, we introduce a method that significantly lowers the computational complexity of randomness certification. Our results show how more randomness can be generated without altering the hardware and indicate a path for future semi-device independent protocols to follow. (paper)
Lambert, Amaury; Stadler, Tanja
2013-12-01
Forward-in-time models of diversification (i.e., speciation and extinction) produce phylogenetic trees that grow "vertically" as time goes by. Pruning the extinct lineages out of such trees leads to natural models for reconstructed trees (i.e., phylogenies of extant species). Alternatively, reconstructed trees can be modelled by coalescent point processes (CPPs), where trees grow "horizontally" by the sequential addition of vertical edges. Each new edge starts at some random speciation time and ends at the present time; speciation times are drawn from the same distribution independently. CPPs lead to extremely fast computation of tree likelihoods and simulation of reconstructed trees. Their topology always follows the uniform distribution on ranked tree shapes (URT). We characterize which forward-in-time models lead to URT reconstructed trees and among these, which lead to CPP reconstructed trees. We show that for any "asymmetric" diversification model in which speciation rates only depend on time and extinction rates only depend on time and on a non-heritable trait (e.g., age), the reconstructed tree is CPP, even if extant species are incompletely sampled. If rates additionally depend on the number of species, the reconstructed tree is (only) URT (but not CPP). We characterize the common distribution of speciation times in the CPP description, and discuss incomplete species sampling as well as three special model cases in detail: (1) the extinction rate does not depend on a trait; (2) rates do not depend on time; (3) mass extinctions may happen additionally at certain points in the past. Copyright © 2013 Elsevier Inc. All rights reserved.
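The "horizontal" CPP construction above is short enough to state in code: vertical edges are added one at a time, each starting at an i.i.d. speciation depth, until a draw exceeds the stem age. A sketch under an assumed toy depth law (the paper characterises the actual common distribution of speciation times for each model class; exponential(1) here is purely illustrative):

```python
import random

def simulate_cpp(T, draw_depth, rng, max_tips=100_000):
    """Coalescent point process sketch: node depths are drawn i.i.d.;
    the tree is complete at the first draw exceeding the stem age T.
    Returns the internal node depths; the reconstructed tree has
    len(depths) + 1 tips."""
    depths = []
    while len(depths) < max_tips:
        h = draw_depth(rng)
        if h > T:
            break
        depths.append(h)
    return depths

rng = random.Random(42)
depths = simulate_cpp(T=2.0, draw_depth=lambda r: r.expovariate(1.0), rng=rng)
```

This also shows why CPP likelihoods are fast: the tree likelihood factorises into a product of identical one-dimensional densities evaluated at the node depths, and the number of tips is geometric with success probability P(H > T).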
Adepeju, M.; Rosser, G.; Cheng, T.
2016-01-01
Many physical and sociological processes are represented as discrete events in time and space. These spatio-temporal point processes are often sparse, meaning that they cannot be aggregated and treated with conventional regression models. Models based on the point process framework may be employed instead for prediction purposes. Evaluating the predictive performance of these models poses a unique challenge, as the same sparseness prevents the use of popular measures such as the root mean squ...
Kruse, Christian; Rottensteiner, Franz; Hoberg, Thorsten; Ziems, Marcel; Rebke, Julia; Heipke, Christian
2018-04-01
The aftermath of wartime attacks is often felt long after the war ended, as numerous unexploded bombs may still exist in the ground. Typically, such areas are documented in so-called impact maps which are based on the detection of bomb craters. This paper proposes a method for the automatic detection of bomb craters in aerial wartime images that were taken during the Second World War. The object model for the bomb craters is represented by ellipses. A probabilistic approach based on marked point processes determines the most likely configuration of objects within the scene. Adding and removing new objects to and from the current configuration, respectively, changing their positions and modifying the ellipse parameters randomly creates new object configurations. Each configuration is evaluated using an energy function. High gradient magnitudes along the border of the ellipse are favored and overlapping ellipses are penalized. Reversible Jump Markov Chain Monte Carlo sampling in combination with simulated annealing provides the global energy optimum, which describes the conformance with a predefined model. For generating the impact map a probability map is defined which is created from the automatic detections via kernel density estimation. By setting a threshold, areas around the detections are classified as contaminated or uncontaminated sites, respectively. Our results show the general potential of the method for the automatic detection of bomb craters and its automated generation of an impact map in a heterogeneous image stock.
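At its core the sampler alternates proposal and probabilistic acceptance under a decreasing temperature. A stripped-down sketch of that accept/reject loop on a one-dimensional toy energy (the real method uses reversible-jump birth/death and modification moves over ellipse configurations; everything here is illustrative):

```python
import math, random

def anneal(energy, propose, state, rng, t0=1.0, cooling=0.995, steps=2000):
    """Minimal simulated-annealing loop: propose a modified configuration,
    always accept if it lowers the energy, otherwise accept with
    Boltzmann probability exp(-dE / T) under a decreasing temperature."""
    t = t0
    e = energy(state)
    best, best_e = state, e
    for _ in range(steps):
        cand = propose(state, rng)
        ce = energy(cand)
        if ce <= e or rng.random() < math.exp((e - ce) / t):
            state, e = cand, ce
            if ce < best_e:
                best, best_e = cand, ce
        t *= cooling
    return best, best_e

rng = random.Random(0)
# toy one-dimensional "configuration" whose energy is minimised at x = 3
best, best_e = anneal(lambda x: (x - 3.0) ** 2,
                      lambda x, r: x + r.uniform(-0.5, 0.5), 0.0, rng)
```

In the crater application the energy would reward high gradient magnitudes along each ellipse border and penalise overlaps, and the proposal set would also add and remove objects, changing the dimension of the state.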
Directory of Open Access Journals (Sweden)
Jingyu Sun
2014-07-01
To survive in the current shipbuilding industry, it is of vital importance for shipyards to have the ship components' accuracy evaluated efficiently during most of the manufacturing steps. Evaluating components' accuracy by comparing each component's point cloud data scanned by laser scanners with the ship's design data formatted in CAD cannot be done efficiently when (1) the components extracted from the point cloud data include irregular obstacles, or when (2) the registration of the two data sets has no clear direction setting. This paper presents reformative point cloud data processing methods to solve these problems. K-d tree construction of the point cloud data speeds up the neighbor search for each point. The region growing method performed on the neighbor points of the seed point extracts the continuous part of the component, while curved surface fitting and B-spline curve fitting at the edge of the continuous part recognize the neighbor domains of the same component divided by obstacles' shadows. The ICP (Iterative Closest Point) algorithm conducts a registration of the two sets of data after the proper registration direction is decided by principal component analysis. In experiments conducted at the shipyard, 200 curved shell plates were extracted from the scanned point cloud data, and registrations were conducted between them and the designed CAD data using the proposed methods for an accuracy evaluation. Results show that the proposed methods support efficient point cloud data processing for accuracy evaluation in practice.
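The region-growing step can be sketched without the k-d tree: starting from a seed, repeatedly absorb any point within a radius of a point already in the region (the k-d tree only accelerates this neighbor query; the data and radius here are illustrative):

```python
from collections import deque

def region_growing(points, seed_index, radius):
    """Grow a region from a seed point: repeatedly absorb any point
    within `radius` of a point already in the region.  A plain-Python
    stand-in for the k-d-tree-accelerated step described in the text."""
    r2 = radius * radius
    in_region = {seed_index}
    queue = deque([seed_index])
    while queue:
        i = queue.popleft()
        xi, yi, zi = points[i]
        for j, (xj, yj, zj) in enumerate(points):
            if j not in in_region:
                d2 = (xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2
                if d2 <= r2:
                    in_region.add(j)
                    queue.append(j)
    return in_region

# two well-separated patches; growing from index 0 keeps only the first
pts = [(0, 0, 0), (0.5, 0, 0), (1.0, 0, 0), (10, 0, 0), (10.5, 0, 0)]
print(sorted(region_growing(pts, 0, 1.0)))  # → [0, 1, 2]
```

This is why the paper needs the extra curve-fitting step: a shadow cast by an obstacle breaks the chain of within-radius neighbors, so the same plate ends up in several disconnected regions.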
Dregan, Alex; van Staa, Tjeerd P; McDermott, Lisa; McCann, Gerard; Ashworth, Mark; Charlton, Judith; Wolfe, Charles D A; Rudd, Anthony; Yardley, Lucy; Gulliford, Martin C
BACKGROUND AND PURPOSE: The aim of this study was to evaluate whether the remote introduction of electronic decision support tools into family practices improves risk factor control after first stroke. This study also aimed to develop methods to implement cluster randomized trials in stroke using
Directory of Open Access Journals (Sweden)
Jennifer Corcoran
2015-04-01
Wetlands are dynamic in space and time, providing varying ecosystem services. Field reference data for both training and assessment of wetland inventories in the State of Minnesota are typically collected as GPS points over wide geographical areas and at infrequent intervals. This status quo makes it difficult to keep updated maps of wetlands with adequate accuracy, efficiency, and consistency to monitor change. Furthermore, point reference data may not be representative of the prevailing land cover type for an area, due to point location or heterogeneity within the ecosystem of interest. In this research, we present techniques for training a land cover classification for two study sites in different ecoregions by implementing the RandomForest classifier in three ways: (1) field and photo-interpreted points; (2) a fixed window surrounding the points; and (3) image objects that intersect the points. Additional assessments are made to identify the key input variables. We conclude that the image object area training method is the most accurate, and the most important variables include: compound topographic index, summer season green and blue bands, and grid statistics from LiDAR point cloud data, especially those that relate to the height of the return.
5th Seminar on Stochastic Processes, Random Fields and Applications
Russo, Francesco; Dozzi, Marco
2008-01-01
This volume contains twenty-eight refereed research or review papers presented at the 5th Seminar on Stochastic Processes, Random Fields and Applications, which took place at the Centro Stefano Franscini (Monte Verità) in Ascona, Switzerland, from May 30 to June 3, 2005. The seminar focused mainly on stochastic partial differential equations, random dynamical systems, infinite-dimensional analysis, approximation problems, and financial engineering. The book will be a valuable resource for researchers in stochastic analysis and professionals interested in stochastic methods in finance. Contributors: Y. Asai, J.-P. Aubin, C. Becker, M. Benaïm, H. Bessaih, S. Biagini, S. Bonaccorsi, N. Bouleau, N. Champagnat, G. Da Prato, R. Ferrière, F. Flandoli, P. Guasoni, V.B. Hallulli, D. Khoshnevisan, T. Komorowski, R. Léandre, P. Lescot, H. Lisei, J.A. López-Mimbela, V. Mandrekar, S. Méléard, A. Millet, H. Nagai, A.D. Neate, V. Orlovius, M. Pratelli, N. Privault, O. Raimond, M. Röckner, B. Rüdiger, W.J. Runggaldi...
Edit distance for marked point processes revisited: An implementation by binary integer programming
Energy Technology Data Exchange (ETDEWEB)
Hirata, Yoshito; Aihara, Kazuyuki [Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505 (Japan)
2015-12-15
We implement the edit distance for marked point processes [Suzuki et al., Int. J. Bifurcation Chaos 20, 3699–3708 (2010)] as a binary integer program. Compared with the previous implementation using minimum cost perfect matching, the proposed implementation has two advantages: first, by using the proposed implementation, we can apply a wide variety of software and hardware, even spin glasses and coherent Ising machines, to calculate the edit distance for marked point processes; second, the proposed implementation runs faster than the previous implementation when the difference between the numbers of events in two time windows for a marked point process is large.
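The flavor of such an edit distance can be seen in a small dynamic program: deleting or inserting an event costs 1, and shifting an event in time costs proportionally to the shift. This is a Victor-Purpura-style simplification without marks; the paper's binary-integer-program formulation handles the general marked case:

```python
def edit_distance(times_a, times_b, q=1.0):
    """Dynamic-programming edit distance between two (unmarked) event
    sequences: deleting or inserting an event costs 1, shifting an
    event by dt costs q*|dt|."""
    na, nb = len(times_a), len(times_b)
    G = [[0.0] * (nb + 1) for _ in range(na + 1)]
    for i in range(1, na + 1):
        G[i][0] = float(i)
    for j in range(1, nb + 1):
        G[0][j] = float(j)
    for i in range(1, na + 1):
        for j in range(1, nb + 1):
            shift = q * abs(times_a[i - 1] - times_b[j - 1])
            G[i][j] = min(G[i - 1][j] + 1.0,        # delete an event
                          G[i][j - 1] + 1.0,        # insert an event
                          G[i - 1][j - 1] + shift)  # shift an event
    return G[na][nb]

print(edit_distance([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # → 0.0
```

Distant events are handled by delete-plus-insert (cost 2) rather than an expensive shift, which is the same economics the matching and integer-program formulations optimise globally.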
Li, Fangling; Bi, Dingyan
2017-08-12
To explore the differences in effect between acupuncture mainly at the zygapophyseal joint and transverse process and conventional acupuncture for third lumbar transverse process syndrome. Eighty cases were randomly assigned into an observation group and a control group, 40 cases in each one. In the observation group, patients were treated with acupuncture at the zygapophyseal joint, the transverse process, the point where the superior gluteus nerve enters the hip, and Weizhong (BL 40); those in the control group were treated with acupuncture at Qihaishu (BL 24), Jiaji (EX-B 2) of L2-L4, the point where the superior gluteus nerve enters the hip, and Weizhong (BL 40). The treatment was given once a day, 6 times a week, for 2 weeks. The visual analogue scale (VAS), Japanese Orthopaedic Association (JOA) low back pain score and simplified Chinese Oswestry disability index (SC-ODI) were observed before and after treatment as well as 6 months after treatment, and the clinical effects were evaluated. The total effective rate in the observation group was 95.0% (38/40), significantly higher than the 82.5% (33/40) in the control group (P < 0.05). Acupuncture mainly at the zygapophyseal joint and transverse process for third lumbar transverse process syndrome achieves a good effect, better than that of conventional acupuncture in relieving pain and improving lumbar function and quality of life.
The cylindrical K-function and Poisson line cluster point processes
DEFF Research Database (Denmark)
Møller, Jesper; Safavimanesh, Farzaneh; Rasmussen, Jakob G.
Poisson line cluster point processes, is also introduced. Parameter estimation based on moment methods or Bayesian inference for this model is discussed when the underlying Poisson line process and the cluster memberships are treated as hidden processes. To illustrate the methodologies, we analyze two...
Hierarchical spatial point process analysis for a plant community with high biodiversity
DEFF Research Database (Denmark)
Illian, Janine B.; Møller, Jesper; Waagepetersen, Rasmus
2009-01-01
A complex multivariate spatial point pattern of a plant community with high biodiversity is modelled using a hierarchical multivariate point process model. In the model, interactions between plants with different post-fire regeneration strategies are of key interest. We consider initially a maxim...
International Nuclear Information System (INIS)
Bhattacharyya, Pratip; Chakrabarti, Bikas K
2008-01-01
We study different ways of determining the mean distance ⟨r_n⟩ between a reference point and its nth neighbour among random points distributed with uniform density in a D-dimensional Euclidean space. First, we present a heuristic method; though this method provides only a crude mathematical result, it shows a simple way of estimating ⟨r_n⟩. Next, we describe two alternative means of deriving the exact expression of ⟨r_n⟩: we review the method using absolute probability and develop an alternative method using conditional probability. Finally, we obtain an approximation to ⟨r_n⟩ from the mean volume between the reference point and its nth neighbour and compare it with the heuristic and exact results.
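The exact expressions are easy to check by simulation; in two dimensions the nearest-neighbour case reduces to the familiar ⟨r_1⟩ = 1/(2√ρ). A Monte Carlo sketch on the unit torus (the wrap-around metric removes edge effects; the 10% tolerance is arbitrary):

```python
import math, random

def mean_nn_distance(n_points, n_reps, rng):
    """Monte Carlo estimate of the mean nearest-neighbour distance for
    n_points uniform points on the unit torus (wrap-around metric)."""
    total, count = 0.0, 0
    for _ in range(n_reps):
        pts = [(rng.random(), rng.random()) for _ in range(n_points)]
        for i, (xi, yi) in enumerate(pts):
            best = float("inf")
            for j, (xj, yj) in enumerate(pts):
                if i == j:
                    continue
                dx = min(abs(xi - xj), 1.0 - abs(xi - xj))  # torus wrap
                dy = min(abs(yi - yj), 1.0 - abs(yi - yj))
                best = min(best, math.hypot(dx, dy))
            total += best
            count += 1
    return total / count

rng = random.Random(0)
est = mean_nn_distance(200, 10, rng)
theory = 1.0 / (2.0 * math.sqrt(200))   # <r_1> = 1/(2*sqrt(rho)) in 2D
print(round(est, 4), round(theory, 4))
```

The brute-force O(n²) scan is fine at this size; the paper's interest is in the general ⟨r_n⟩ for any n and dimension D, for which the same simulation generalises by keeping the nth-smallest distance instead of the minimum.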
Definition of distance for nonlinear time series analysis of marked point process data
Energy Technology Data Exchange (ETDEWEB)
Iwayama, Koji, E-mail: koji@sat.t.u-tokyo.ac.jp [Research Institute for Food and Agriculture, Ryukoku Univeristy, 1-5 Yokotani, Seta Oe-cho, Otsu-Shi, Shiga 520-2194 (Japan); Hirata, Yoshito; Aihara, Kazuyuki [Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505 (Japan)
2017-01-30
Marked point process data are time series of discrete events accompanied with some values, such as economic trades, earthquakes, and lightnings. A distance for marked point process data allows us to apply nonlinear time series analysis to such data. We propose a distance for marked point process data which can be calculated much faster than the existing distance when the number of marks is small. Furthermore, under some assumptions, the Kullback–Leibler divergences between posterior distributions for neighbors defined by this distance are small. We performed some numerical simulations showing that analysis based on the proposed distance is effective. - Highlights: • A new distance for marked point process data is proposed. • The distance can be computed fast enough for a small number of marks. • The method to optimize parameter values of the distance is also proposed. • Numerical simulations indicate that the analysis based on the distance is effective.
Christidis, Nikolaos; Omrani, Shahin; Fredriksson, Lars; Gjelset, Mattias; Louca, Sofia; Hedenberg-Magnusson, Britt; Ernberg, Malin
2015-01-01
Serotonin (5-HT) mediates pain by peripheral 5-HT3 receptors. Results from a few studies indicate that intramuscular injections of 5-HT3 antagonists may reduce musculoskeletal pain. The aim of this study was to investigate whether repeated intramuscular tender-point injections of the 5-HT3 antagonist granisetron alleviate pain in patients with myofascial temporomandibular disorders (M-TMD). This prospective, randomized, controlled, double-blind, parallel-arm trial (RCT) was carried out at two centers in Stockholm, Sweden. The randomization was performed, by a researcher who did not participate in data collection, with an internet-based application (www.randomization.com). 40 patients with a diagnosis of M-TMD according to the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) were randomized to receive repeated injections, one week apart, with either granisetron (GRA; 3 mg) or isotonic saline as control (CTR). The median weekly pain intensities decreased significantly at all follow-ups (1, 2, and 6 months) in the GRA group (Friedman test; P 0.075). The numbers needed to treat (NNT) were 4 at the 1- and 6-month follow-ups, and 3.3 at the 2-month follow-up, in favor of granisetron. Repeated intramuscular tender-point injections with the serotonin type 3 antagonist granisetron provide a new pharmacological treatment possibility for myofascial pain patients, with a clinically relevant pain-reducing effect in the temporomandibular region in both the short and the long term. Registered in the European Clinical Trials Database (2005-006042-41) and at Clinical Trials (NCT02230371).
Zer, Alona; Prince, Rebecca M; Amir, Eitan; Abdul Razak, Albiruni
2016-05-01
Randomized controlled trials (RCTs) in soft tissue sarcoma (STS) have used varying end points. The surrogacy of intermediate end points, such as progression-free survival (PFS), response rate (RR), and 3-month and 6-month PFS (3moPFS and 6moPFS) with overall survival (OS), remains unknown. The quality of efficacy and toxicity reporting in these studies is also uncertain. A systematic review of systemic therapy RCTs in STS was performed. Surrogacy between intermediate end points and OS was explored using weighted linear regression for the hazard ratio for OS with the hazard ratio for PFS or the odds ratio for RR, 3moPFS, and 6moPFS. The quality of reporting for efficacy and toxicity was also evaluated. Fifty-two RCTs published between 1974 and 2014, comprising 9,762 patients, met the inclusion criteria. There were significant correlations between PFS and OS (R = 0.61) and between RR and OS (R = 0.51). Conversely, there were nonsignificant correlations between 3moPFS and 6moPFS with OS. A reduction in the use of RR as the primary end point was observed over time, favoring time-based events (P for trend = .02). In 14% of RCTs, the primary end point was not met, but the study was reported as being positive. Toxicity was comprehensively reported in 47% of RCTs, whereas 14% inadequately reported toxicity. In advanced STS, PFS and RR seem to be appropriate surrogates for OS. There is poor correlation between OS and both 3moPFS and 6moPFS. As such, caution is urged with the use of these as primary end points in randomized STS trials. The quality of toxicity reporting and interpretation of results is suboptimal. © 2016 by American Society of Clinical Oncology.
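The surrogacy analysis described above amounts to weighted linear regression of one trial-level effect measure on another. A self-contained sketch (using log hazard or odds ratios as x and y and trial sizes as weights is the assumed setup; the numbers are toy values, not the paper's data):

```python
def weighted_regression(x, y, w):
    """Weighted least-squares slope, intercept and weighted correlation
    coefficient r -- the trial-level surrogacy analysis sketched in the
    text (x, y: per-trial effect estimates; w: trial weights)."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxy = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    syy = sum(wi * (yi - my) ** 2 for wi, yi in zip(w, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / (sxx * syy) ** 0.5
    return slope, intercept, r

slope, intercept, r = weighted_regression([0.0, 1.0, 2.0], [0.0, 1.0, 2.0],
                                          [1.0, 1.0, 1.0])
print(round(r, 6))  # → 1.0 for perfectly collinear trial effects
```

An R near 1 across trials (as reported for PFS vs OS) supports surrogacy; the weak correlations for 3moPFS and 6moPFS correspond to |r| well below 1 in this kind of fit.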
Markov Random Field Restoration of Point Correspondences for Active Shape Modelling
DEFF Research Database (Denmark)
Hilger, Klaus Baggesen; Paulsen, Rasmus Reinhold; Larsen, Rasmus
2004-01-01
In this paper it is described how to build a statistical shape model using a training set with a sparse set of landmarks. A well defined model mesh is selected and fitted to all shapes in the training set using thin plate spline warping. This is followed by a projection of the points of the warped...
Di Cesare, Annalisa; Giombini, Arrigo; Di Cesare, Mariachiara; Ripani, Maurizio; Vulpiani, Maria Chiara; Saraceni, Vincenzo Maria
2011-02-01
The goal of this study was to compare the effects of trigger point (TRP) mesotherapy and acupuncture (ACP) mesotherapy in the treatment of patients with chronic low back pain. Short-term randomized controlled trial. 62 subjects with chronic low back pain were recruited at the outpatient Physical Medicine and Rehabilitation Clinic of the University of Rome "La Sapienza" in the period between July 2006 and May 2008. Study subjects were assigned to receive 4 weeks of treatment with either trigger point mesotherapy (TRP mesotherapy, n=29) or acupoint mesotherapy (ACP mesotherapy, n=33). Pain intensity was measured with a pain visual analogue scale (VAS) and verbal rating scale (VRS), and pain disability with the McGill Pain Questionnaire Short Form (SFMPQ), Roland Morris Disability Questionnaire (RMQ) and Oswestry Low Back Pain Disability Questionnaire (ODQ). ACP mesotherapy showed more effective results in the VRS and VAS measures at follow-up (p(VRS) = …) compared with the TRP mesotherapy group. Our results suggest that the response to ACP mesotherapy may be greater than the response to TRP mesotherapy in the short-term follow-up. TRP mesotherapy could nevertheless be a viable option as an adjunct treatment in an overall treatment plan for CLBP. Copyright © 2010 Elsevier Ltd. All rights reserved.
Process and results of analytical framework and typology development for POINT
DEFF Research Database (Denmark)
Gudmundsson, Henrik; Lehtonen, Markku; Bauler, Tom
2009-01-01
POINT is a project about how indicators are used in practice; to what extent and in what way indicators actually influence, support, or hinder policy and decision making processes, and what could be done to enhance the positive role of indicators in such processes. The project needs an analytical framework for this work. This report presents the process and results of its development: a set of core concepts and associated typologies, a series of proposed analytic schemes, and a number of research propositions and questions for the subsequent empirical work in POINT.
Random number generation as an index of controlled processing.
Jahanshahi, Marjan; Saleem, T; Ho, Aileen K; Dirnberger, Georg; Fuller, R
2006-07-01
Random number generation (RNG) is a functionally complex process that is highly controlled and therefore dependent on Baddeley's central executive. This study addresses this issue by investigating whether key predictions from this framework are compatible with empirical data. In Experiment 1, the effect of increasing task demands by increasing the rate of the paced generation was comprehensively examined. As expected, faster rates affected performance negatively because central resources were increasingly depleted. Next, the effects of participants' exposure were manipulated in Experiment 2 by providing increasing amounts of practice on the task. There was no improvement over 10 practice trials, suggesting that the high level of strategic control required by the task was constant and not amenable to any automatization gain with repeated exposure. Together, the results demonstrate that RNG performance is a highly controlled and demanding process sensitive to additional demands on central resources (Experiment 1) and is unaffected by repeated performance or practice (Experiment 2). These features render the easily administered RNG task an ideal and robust index of executive function that is highly suitable for repeated clinical use. (© 2006 APA, all rights reserved.)
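Scoring RNG output typically quantifies sequential dependency between successive responses. Digram entropy is one simple proxy for this (the clinical literature uses more specific indices, such as Evans' RNG score; this sketch is deliberately not that exact measure):

```python
import math
from collections import Counter

def digram_entropy(seq):
    """Shannon entropy (bits) of the digram (successive-pair)
    distribution of a response sequence: 0 for a fully stereotyped
    sequence, larger for less predictable output."""
    digrams = list(zip(seq, seq[1:]))
    counts = Counter(digrams)
    n = len(digrams)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

print(digram_entropy([1, 1, 1, 1, 1]))  # → 0.0 (a single repeated digram)
```

A participant cycling "1 2 1 2 1 ..." scores exactly 1 bit (two digrams used equally), far below the entropy of genuinely varied output, which is the kind of stereotypy that loads on executive control in the task.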
Chen, Baojiang; Zhou, Xiao-Hua
2013-05-01
Life history data arising in clusters with prespecified assessment time points for patients often feature incomplete data, since patients may choose to visit the clinic based on their needs. Markov process models provide a useful tool for describing disease progression in life history data. The literature mainly focuses on time-homogeneous processes. In this paper we develop methods to deal with non-homogeneous Markov processes with incomplete clustered life history data. A correlated random effects model is developed to deal with the nonignorable missingness, and a time transformation is employed to address the non-homogeneity in the transition model. Maximum likelihood estimation based on the Monte Carlo EM algorithm is advocated for parameter estimation. Simulation studies demonstrate that the proposed method works well in many situations. We also apply this method to an Alzheimer's disease study.
Spatial Mixture Modelling for Unobserved Point Processes: Examples in Immunofluorescence Histology.
Ji, Chunlin; Merl, Daniel; Kepler, Thomas B; West, Mike
2009-12-04
We discuss Bayesian modelling and computational methods in analysis of indirectly observed spatial point processes. The context involves noisy measurements on an underlying point process that provide indirect and noisy data on locations of point outcomes. We are interested in problems in which the spatial intensity function may be highly heterogeneous, and so is modelled via flexible nonparametric Bayesian mixture models. Analysis aims to estimate the underlying intensity function and the abundance of realized but unobserved points. Our motivating applications involve immunological studies of multiple fluorescent intensity images in sections of lymphatic tissue where the point processes represent geographical configurations of cells. We are interested in estimating intensity functions and cell abundance for each of a series of such data sets to facilitate comparisons of outcomes at different times and with respect to differing experimental conditions. The analysis is heavily computational, utilizing recently introduced MCMC approaches for spatial point process mixtures and extending them to the broader new context here of unobserved outcomes. Further, our example applications are problems in which the individual objects of interest are not simply points, but rather small groups of pixels; this implies a need to work at an aggregate pixel region level and we develop the resulting novel methodology for this. Two examples with immunofluorescence histology data demonstrate the models and computational methodology.
Mota, Bernardo; Pereira, Jose; Campagnolo, Manuel; Killick, Rebeca
2013-04-01
Area burned in tropical savannas of Brazil was mapped using MODIS-AQUA daily 250 m resolution imagery by adapting one of the European Space Agency fire_CCI project burned area algorithms, based on change point detection and Markov random fields. The study area covers 1.44 Mkm2, and the analysis was performed with data from 2005. The daily 1000 m image quality layer was used for cloud and cloud shadow screening. The algorithm addresses each pixel as a time series and detects changes in the statistical properties of NIR reflectance values to identify potential burning dates. The first step of the algorithm is robust filtering, to exclude outlier observations, followed by application of the Pruned Exact Linear Time (PELT) change point detection technique. Near-infrared (NIR) spectral reflectance changes between time segments and post-change NIR reflectance values are combined into a fire likelihood score. Change points corresponding to an increase in reflectance are dismissed as potential burn events, as are those occurring outside a pre-defined fire season. In the last step of the algorithm, monthly burned area probability maps and detection date maps are converted to dichotomous (burned-unburned) maps using Markov random fields, which take into account both spatial and temporal relations in the potential burned area maps. A preliminary assessment of our results is performed by comparison with data from the MODIS 1 km active fires and the 500 m burned area products, taking into account differences in spatial resolution between the products.
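The change point step can be sketched without any remote-sensing stack: below is a minimal penalized segmentation of a per-pixel NIR-like time series under a mean-shift, squared-error cost — the exact optimal-partitioning objective that PELT solves faster by pruning. The series and penalty are synthetic illustrations, not fire_CCI values.

```python
def seg_cost(prefix, prefix2, i, j):
    # squared-error cost of fitting a constant mean to y[i:j]
    n = j - i
    s = prefix[j] - prefix[i]
    s2 = prefix2[j] - prefix2[i]
    return s2 - s * s / n

def change_points(y, penalty):
    # exact optimal partitioning by dynamic programming (O(n^2));
    # PELT adds a pruning rule that discards dominated candidates
    n = len(y)
    prefix = [0.0] * (n + 1)
    prefix2 = [0.0] * (n + 1)
    for i, v in enumerate(y):
        prefix[i + 1] = prefix[i] + v
        prefix2[i + 1] = prefix2[i] + v * v
    # F[j] = min cost of segmenting y[:j]; each segment pays `penalty`
    F = [0.0] + [float("inf")] * n
    last = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j):
            c = F[i] + seg_cost(prefix, prefix2, i, j) + penalty
            if c < F[j]:
                F[j], last[j] = c, i
    # backtrack the optimal segment boundaries
    cps, j = [], n
    while j > 0:
        j = last[j]
        if j > 0:
            cps.append(j)
    return sorted(cps)

# synthetic NIR-like series: reflectance drops after a burn at t = 50
series = [0.30] * 50 + [0.12] * 50
print(change_points(series, penalty=0.05))  # -> [50]
```

In the mapped algorithm such a detected drop, combined with the post-change reflectance level, would feed the fire likelihood score.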
Hong, Peilong; Li, Liming; Liu, Jianji; Zhang, Guoquan
2016-03-29
Young's double-slit or two-beam interference is of fundamental importance for understanding various interference effects, in which the stationary phase difference between two beams plays the key role in first-order coherence. Different from the case of first-order coherence, in high-order optical coherence the statistical behavior of the optical phase plays the key role. In this article, by employing a fundamental interfering configuration with two classical point sources, we show that the high-order optical coherence between two classical point sources can be actively designed by controlling the statistical behavior of the relative phase difference between the two point sources. Synchronous-position Nth-order subwavelength interference with an effective wavelength of λ/M was demonstrated, in which λ is the wavelength of the point sources and M is an integer not larger than N. Interestingly, we found that the synchronous-position Nth-order interference fringe fingerprints the statistical trace of the random phase fluctuation of the two classical point sources; therefore, it provides an effective way to characterize the statistical properties of phase fluctuation for incoherent light sources.
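The mechanism can be checked numerically for the simplest case N = 2. Assuming two ideal unit-amplitude point sources whose relative phase is restricted to {0, π} with equal probability — one possible "actively designed" phase statistic, not the authors' experimental configuration — the first-order pattern is flat while the synchronous-position second-order correlation oscillates at twice the spatial frequency, i.e. an effective wavelength of λ/2:

```python
import math

def moments(delta, phases):
    # detected intensity of two unit-amplitude point sources:
    # I = |1 + exp(i*(phi + delta))|^2 = 2 + 2*cos(phi + delta),
    # where phi is the designed random relative phase and delta the
    # geometric phase difference at the detection point
    Is = [2.0 + 2.0 * math.cos(phi + delta) for phi in phases]
    m1 = sum(Is) / len(Is)            # <I>
    m2 = sum(I * I for I in Is) / len(Is)  # <I^2> at synchronous positions
    return m1, m2

# restrict the relative phase to {0, pi} with equal weights
phases = [0.0, math.pi]
for delta in (0.0, math.pi / 2, math.pi):
    m1, m2 = moments(delta, phases)
    print(round(m1, 6), round(m2, 6))
# first-order fringes vanish (<I> = 2 for every delta), while
# <I^2> = 6 + 2*cos(2*delta): a second-order fringe of period lambda/2
```

Other discrete phase statistics would play the role of the higher-M designs described in the abstract.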
SINGLE TREE DETECTION FROM AIRBORNE LASER SCANNING DATA USING A MARKED POINT PROCESS BASED METHOD
Directory of Open Access Journals (Sweden)
J. Zhang
2013-05-01
Full Text Available Tree detection and reconstruction is of great interest in large-scale city modelling. In this paper, we present a marked point process model to detect single trees from airborne laser scanning (ALS) data. We consider single trees in the canopy height model (CHM) recovered from ALS data as a realization of a point process of circles. Unlike traditional marked point processes, we sample the model in a constrained configuration space by making use of image processing techniques. A Gibbs energy is defined on the model, containing a data term, which judges the fitness of the model with respect to the data, and a prior term, which incorporates prior knowledge of object layouts. We search for the optimal configuration through a steepest descent algorithm. The presented hybrid framework was tested on three forest plots, and experiments show the effectiveness of the proposed method.
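A Gibbs energy of this form — a data term scoring each circle against the CHM plus a prior penalizing implausible layouts — can be illustrated with a toy version. The CHM values, height threshold, and overlap penalty below are invented for illustration, not the paper's parameters.

```python
import math

def data_term(circle, chm, threshold=2.0):
    # reward CHM pixels inside the circle whose canopy height exceeds
    # `threshold`, penalize covered low pixels; lower energy = better fit
    cx, cy, r = circle
    score = 0
    for (x, y), h in chm.items():
        if (x - cx) ** 2 + (y - cy) ** 2 <= r * r:
            score += 1 if h > threshold else -1
    return -score

def prior_term(circles, weight=10.0):
    # layout prior: penalize every pair of overlapping crown circles
    e = 0.0
    for i in range(len(circles)):
        for j in range(i + 1, len(circles)):
            (x1, y1, r1), (x2, y2, r2) = circles[i], circles[j]
            if math.hypot(x1 - x2, y1 - y2) < r1 + r2:
                e += weight
    return e

def gibbs_energy(circles, chm):
    return sum(data_term(c, chm) for c in circles) + prior_term(circles)

# toy CHM: a 3-pixel "tree crown" at 5 m height plus a ground pixel
chm = {(0, 0): 5.0, (1, 0): 5.0, (0, 1): 5.0, (5, 5): 0.0}
print(gibbs_energy([(0.5, 0.5, 1.5)], chm))  # -> -3
```

An optimizer (the paper uses steepest descent over the constrained configuration space) would then move, add, or delete circles to lower this energy.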
Point Estimation Method of Electromagnetic Flowmeters Life Based on Randomly Censored Failure Data
Directory of Open Access Journals (Sweden)
Zhen Zhou
2014-08-01
Full Text Available This paper analyzes the characteristics of enterprise after-sale service records as field failure data and summarizes the types of field data. The maximum likelihood estimation method and the least squares method are presented to address the complexity and difficulty of field failure data processing, and a Monte Carlo simulation method is proposed. Monte Carlo simulation, a relatively simple calculation method, is effective, with results close to those of the other two methods. Through analysis of the after-sale service records of a specific electromagnetic flowmeter enterprise, this paper illustrates the effectiveness of the field failure data processing methods.
Second-order analysis of structured inhomogeneous spatio-temporal point processes
DEFF Research Database (Denmark)
Møller, Jesper; Ghorbani, Mohammad
Statistical methodology for spatio-temporal point processes is in its infancy. We consider second-order analysis based on pair correlation functions and K-functions for, first, general inhomogeneous spatio-temporal point processes and, second, inhomogeneous spatio-temporal Cox processes. Assuming spatio-temporal separability of the intensity function, we clarify different meanings of second-order spatio-temporal separability. One is second-order spatio-temporal independence and relates e.g. to log-Gaussian Cox processes with an additive covariance structure of the underlying spatio-temporal Gaussian process. Another concerns shot-noise Cox processes with a separable spatio-temporal covariance density. We propose diagnostic procedures for checking hypotheses of second-order spatio-temporal separability, which we apply on simulated and real data (the UK 2001 epidemic foot and mouth disease data).
Truccolo, Wilson
2016-11-01
This review presents a perspective on capturing collective dynamics in recorded neuronal ensembles based on multivariate point process models, inference of low-dimensional dynamics and coarse graining of spatiotemporal measurements. A general probabilistic framework for continuous-time point processes is reviewed, with an emphasis on multivariate nonlinear Hawkes processes with exogenous inputs. A point process generalized linear model (PP-GLM) framework for the estimation of discrete-time multivariate nonlinear Hawkes processes is described. The approach is illustrated with the modeling of collective dynamics in neocortical neuronal ensembles recorded in human and non-human primates, and prediction of single-neuron spiking. A complementary approach to capture collective dynamics based on low-dimensional dynamics ("order parameters") inferred via latent state-space models with point process observations is presented. The approach is illustrated by inferring and decoding low-dimensional dynamics in primate motor cortex during naturalistic reach and grasp movements. Finally, we briefly review hypothesis tests based on conditional inference and spatiotemporal coarse graining for assessing collective dynamics in recorded neuronal ensembles. Published by Elsevier Ltd.
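A univariate special case of the Hawkes processes discussed above can be simulated with Ogata's thinning algorithm; the parameter values here are illustrative, not taken from the review.

```python
import math, random

def simulate_hawkes(mu, alpha, beta, t_end, rng):
    # Ogata's thinning for a univariate Hawkes process with conditional
    # intensity lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i)).
    # Between events the intensity only decays, so its value just after
    # the current time is a valid upper bound for the next candidate.
    events, t = [], 0.0
    while t < t_end:
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)        # candidate from the bound
        if t >= t_end:
            break
        lam_t = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        if rng.random() <= lam_t / lam_bar:  # accept w.p. lambda(t)/bound
            events.append(t)
    return events

rng = random.Random(1)
ev = simulate_hawkes(mu=0.5, alpha=0.8, beta=2.0, t_end=200.0, rng=rng)
# branching ratio alpha/beta = 0.4, so the stationary mean rate is
# mu / (1 - alpha/beta) = 0.5 / 0.6 ~ 0.83 events per unit time
print(len(ev) / 200.0)
```

A PP-GLM fit, as described in the review, would go in the opposite direction: discretize time and regress each neuron's spiking on the ensemble's recent history.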
Directory of Open Access Journals (Sweden)
Bartłomiej Kraszewski
2015-06-01
Full Text Available The article presents the results of research on the effect that the radiometric quality of point cloud RGB attributes has on color-based segmentation. In the research, a point cloud with a resolution of 5 mm, obtained with a FARO Photon 120 scanner, described a fragment of an office room, and color images were taken with various digital cameras. The images were acquired with an SLR Nikon D3X, an SLR Canon D200 integrated with the laser scanner, a compact camera Panasonic TZ-30, and a mobile phone digital camera. Color information from the images was spatially related to the point cloud in FARO Scene software. The color-based segmentation of the testing data was performed with the use of a developed application named "RGB Segmentation". The application was based on the public Point Cloud Library (PCL) and allowed subsets of points fulfilling the segmentation criteria to be extracted from the source point cloud using the region growing method. Using the developed application, the segmentation of four tested point clouds containing different RGB attributes from various images was performed. Evaluation of the segmentation process was based on a comparison of segments acquired using the developed application with those extracted manually by an operator. The following items were compared: the number of obtained segments, the number of correctly identified objects, and the correctness of the segmentation process. The best segmentation correctness and the most identified objects were obtained using the data with RGB attributes from the Nikon D3X images. Based on the results, it was found that the quality of the RGB attributes of the point cloud had an impact only on the number of identified objects. In the case of segmentation correctness, as well as its error, no apparent relationship between the quality of color information and the result of the process was found. Keywords: terrestrial laser scanning, color-based segmentation, RGB attribute, region growing method, digital images, point cloud
International Nuclear Information System (INIS)
Holmberg, J.
1997-04-01
The thesis models risk management as an optimal control problem for a stochastic process. The approach classes the decisions made by management into three categories according to the control methods of a point process: (1) planned process lifetime, (2) modification of the design, and (3) operational decisions. The approach is used for optimization of plant shutdown criteria and surveillance test strategies of a hypothetical nuclear power plant
Energy Technology Data Exchange (ETDEWEB)
Holmberg, J [VTT Automation, Espoo (Finland)
1997-04-01
The thesis models risk management as an optimal control problem for a stochastic process. The approach classes the decisions made by management into three categories according to the control methods of a point process: (1) planned process lifetime, (2) modification of the design, and (3) operational decisions. The approach is used for optimization of plant shutdown criteria and surveillance test strategies of a hypothetical nuclear power plant. 62 refs. The thesis also includes five previous publications by the author.
An empirical test of pseudo random number generators by means of an exponential decaying process
International Nuclear Information System (INIS)
Coronel B, H.F.; Hernandez M, A.R.; Jimenez M, M.A.; Mora F, L.E.
2007-01-01
Empirical tests for pseudo random number generators based on the use of processes or physical models have been successfully used and are considered complementary to theoretical tests of randomness. In this work a statistical methodology for evaluating the quality of pseudo random number generators is presented. The method is illustrated in the context of the so-called exponential decay process, using some pseudo random number generators commonly used in physics. (Author)
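A minimal version of such an empirical test can be sketched as follows, assuming inverse-transform sampling of decay times and a Kolmogorov-Smirnov distance as the quality statistic — one reasonable choice, not necessarily the paper's exact methodology.

```python
import math, random

def ks_distance_exponential(samples, lam):
    # sup-distance between the empirical CDF of the samples and the
    # theoretical exponential CDF F(x) = 1 - exp(-lam * x)
    xs = sorted(samples)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = 1.0 - math.exp(-lam * x)
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d

rng = random.Random(42)   # generator under test (Mersenne Twister here)
lam = 0.3                 # decay constant of the simulated process
# inverse-transform sampling of exponential decay times
samples = [-math.log(1.0 - rng.random()) / lam for _ in range(20000)]
d = ks_distance_exponential(samples, lam)
print(d)  # a good generator gives d of order 1/sqrt(n) ~ 0.007 here
```

A defective generator would show a systematically inflated distance; in practice the statistic is compared against the Kolmogorov distribution's critical values.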
Apparatus and method for implementing power saving techniques when processing floating point values
Kim, Young Moon; Park, Sang Phill
2017-10-03
An apparatus and method are described for reducing power when reading and writing graphics data. For example, one embodiment of an apparatus comprises: a graphics processor unit (GPU) to process graphics data including floating point data; a set of registers, at least one of the registers of the set partitioned to store the floating point data; and encode/decode logic to reduce a number of binary 1 values being read from the at least one register by causing a specified set of bit positions within the floating point data to be read out as 0s rather than 1s.
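One generic way to reduce the number of binary 1 values on a register read, in the spirit of the claim, is conditional bit inversion: store the complement plus a one-bit flag whenever more than half the bits are set. The following is a hedged software sketch over 32-bit float payloads, not the patented circuit.

```python
import struct

def encode(value):
    # pack a float32 payload; invert all 32 bits if that lowers the popcount
    bits = struct.unpack("<I", struct.pack("<f", value))[0]
    if bin(bits).count("1") > 16:
        return (~bits) & 0xFFFFFFFF, True   # inverted payload + flag bit
    return bits, False

def decode(bits, inverted):
    # undo the conditional inversion and reinterpret as float32
    if inverted:
        bits = (~bits) & 0xFFFFFFFF
    return struct.unpack("<f", struct.pack("<I", bits))[0]

# a value whose float32 bit pattern is nearly all ones (mantissa of 1s)
v = -1.9999998807907104
payload, flag = encode(v)
assert decode(payload, flag) == v
print(flag, bin(payload).count("1"))  # -> True 1
```

The invariant is that every stored word carries at most 16 one-bits, at the cost of one flag bit per word.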
Dregan, Alex; van Staa, Tjeerd P; McDermott, Lisa; McCann, Gerard; Ashworth, Mark; Charlton, Judith; Wolfe, Charles D A; Rudd, Anthony; Yardley, Lucy; Gulliford, Martin C; Trial Steering Committee
2014-07-01
The aim of this study was to evaluate whether the remote introduction of electronic decision support tools into family practices improves risk factor control after first stroke. This study also aimed to develop methods to implement cluster randomized trials in stroke using electronic health records. Family practices were recruited from the UK Clinical Practice Research Datalink and allocated to intervention and control trial arms by minimization. Remotely installed, electronic decision support tools promoted intensified secondary prevention for 12 months with last measure of systolic blood pressure as the primary outcome. Outcome data from electronic health records were analyzed using marginal models. There were 106 Clinical Practice Research Datalink family practices allocated (intervention, 53; control, 53), with 11 391 (control, 5516; intervention, 5875) participants with acute stroke ever diagnosed. Participants at trial practices had similar characteristics as 47,887 patients with stroke at nontrial practices. During the intervention period, blood pressure values were recorded in the electronic health records for 90% and cholesterol values for 84% of participants. After intervention, the latest mean systolic blood pressure was 131.7 (SD, 16.8) mm Hg in the control trial arm and 131.4 (16.7) mm Hg in the intervention trial arm, and adjusted mean difference was -0.56 mm Hg (95% confidence interval, -1.38 to 0.26; P=0.183). The financial cost of the trial was approximately US $22 per participant, or US $2400 per family practice allocated. Large pragmatic intervention studies may be implemented at low cost by using electronic health records. The intervention used in this trial was not found to be effective, and further research is needed to develop more effective intervention strategies. http://www.controlled-trials.com. Current Controlled Trials identifier: ISRCTN35701810. © 2014 American Heart Association, Inc.
Effect of processing conditions on oil point pressure of moringa oleifera seed.
Aviara, N A; Musa, W B; Owolarafe, O K; Ogunsina, B S; Oluwole, F A
2015-07-01
Seed oil expression is an important economic venture in rural Nigeria. The traditional techniques of carrying out the operation are not only energy-sapping and time-consuming but also wasteful. In order to reduce the tedium involved in the expression of oil from Moringa oleifera seed and develop efficient equipment for carrying out the operation, the oil point pressure of the seed was determined under different processing conditions using a laboratory press. The processing conditions employed were moisture content (4.78, 6.00, 8.00 and 10.00 % wet basis), heating temperature (50, 70, 85 and 100 °C) and heating time (15, 20, 25 and 30 min). Results showed that the oil point pressure increased with increase in seed moisture content, but decreased with increase in heating temperature and heating time within the above ranges. The highest oil point pressure value of 1.1239 MPa was obtained at the processing conditions of 10.00 % moisture content, 50 °C heating temperature and 15 min heating time. The lowest oil point pressure obtained was 0.3164 MPa, and it occurred at the moisture content of 4.78 %, heating temperature of 100 °C and heating time of 30 min. Analysis of Variance (ANOVA) showed that all the processing variables and their interactions had a significant effect on the oil point pressure of Moringa oleifera seed at the 1 % level of significance. This was further demonstrated using Response Surface Methodology (RSM). Tukey's test and Duncan's Multiple Range Analysis successfully separated the means, and a multiple regression equation was used to express the relationship existing between the oil point pressure of Moringa oleifera seed and its moisture content, processing temperature, heating time and their interactions. The model yielded coefficients that enabled the oil point pressure of the seed to be predicted with a very high coefficient of determination.
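The multiple regression step can be sketched with ordinary least squares via the normal equations. The data below are synthetic values that merely follow the reported qualitative trends (pressure rising with moisture, falling with temperature and heating time); they are not the paper's measurements, and the true model also includes interaction terms.

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting for A x = b
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def fit_linear(rows, y):
    # ordinary least squares via the normal equations X'X b = X'y
    X = [[1.0] + list(r) for r in rows]
    p = len(X[0])
    XtX = [[sum(xi[a] * xi[b] for xi in X) for b in range(p)] for a in range(p)]
    Xty = [sum(X[i][a] * y[i] for i in range(len(X))) for a in range(p)]
    return solve(XtX, Xty)

# full factorial design over the paper's factor levels, synthetic response
rows = [(m, T, t) for m in (4.78, 6.0, 8.0, 10.0)
                  for T in (50, 70, 85, 100)
                  for t in (15, 20, 25, 30)]
y = [0.5 + 0.06 * m - 0.004 * T - 0.01 * t for m, T, t in rows]
b0, bm, bT, bt = fit_linear(rows, y)
print(round(b0, 3), round(bm, 3), round(bT, 4), round(bt, 3))
# -> 0.5 0.06 -0.004 -0.01 (the coefficients used to build the data)
```

The signs of the fitted coefficients mirror the reported trends: positive for moisture, negative for temperature and heating time.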
Exact two-point resistance, and the simple random walk on the complete graph minus N edges
International Nuclear Information System (INIS)
Chair, Noureddine
2012-01-01
An analytical approach is developed to obtain the exact expressions for the two-point resistance and the total effective resistance of the complete graph minus N edges of the opposite vertices. These expressions are written in terms of certain numbers that we introduce, which we call the Bejaia and the Pisa numbers; these numbers are the natural generalizations of the bisected Fibonacci and Lucas numbers. The correspondence between random walks and the resistor networks is then used to obtain the exact expressions for the first passage and mean first passage times on this graph. - Highlights: ► We obtain exact formulas for the two-point resistance of the complete graph minus N edges. ► We obtain also the total effective resistance of this graph. ► We modified Schwatt’s formula on trigonometrical power sum to suit our computations. ► We introduced the generalized bisected Fibonacci and Lucas numbers: the Bejaia and the Pisa numbers. ► The first passage and mean first passage times of the random walks have exact expressions.
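For small graphs the correspondence can be checked numerically from the Laplacian pseudoinverse, R_ij = L+_ii + L+_jj - 2 L+_ij. The sketch below verifies the complete-graph baseline R = 2/n and the effect of deleting the edge between the two terminals, which raises the resistance to 2/(n-2); it is a numerical cross-check, not the paper's analytical formulas.

```python
import numpy as np

def two_point_resistance(adj, i, j):
    # R_ij = L+_ii + L+_jj - 2 L+_ij, with L+ the Moore-Penrose
    # pseudoinverse of the graph Laplacian (unit resistors on edges)
    L = np.diag(adj.sum(axis=1)) - adj
    Lp = np.linalg.pinv(L)
    return Lp[i, i] + Lp[j, j] - 2.0 * Lp[i, j]

n = 6
K = np.ones((n, n)) - np.eye(n)          # complete graph K_6
print(two_point_resistance(K, 0, 1))     # 2/n = 1/3 for any pair in K_n

K_minus = K.copy()
K_minus[0, 1] = K_minus[1, 0] = 0.0      # delete the edge between 0 and 1
print(two_point_resistance(K_minus, 0, 1))  # rises to 2/(n-2) = 1/2
```

The second value follows from the first: the direct 1 Ohm edge in parallel with the rest of the network gives 2/n, so the rest alone must have resistance 2/(n-2).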
Directory of Open Access Journals (Sweden)
Eric J Brunner
2008-08-01
Full Text Available Raised C-reactive protein (CRP is a risk factor for type 2 diabetes. According to the Mendelian randomization method, the association is likely to be causal if genetic variants that affect CRP level are associated with markers of diabetes development and diabetes. Our objective was to examine the nature of the association between CRP phenotype and diabetes development using CRP haplotypes as instrumental variables.We genotyped three tagging SNPs (CRP + 2302G > A; CRP + 1444T > C; CRP + 4899T > G in the CRP gene and measured serum CRP in 5,274 men and women at mean ages 49 and 61 y (Whitehall II Study. Homeostasis model assessment-insulin resistance (HOMA-IR and hemoglobin A1c (HbA1c were measured at age 61 y. Diabetes was ascertained by glucose tolerance test and self-report. Common major haplotypes were strongly associated with serum CRP levels, but unrelated to obesity, blood pressure, and socioeconomic position, which may confound the association between CRP and diabetes risk. Serum CRP was associated with these potential confounding factors. After adjustment for age and sex, baseline serum CRP was associated with incident diabetes (hazard ratio = 1.39 [95% confidence interval 1.29-1.51], HOMA-IR, and HbA1c, but the associations were considerably attenuated on adjustment for potential confounding factors. In contrast, CRP haplotypes were not associated with HOMA-IR or HbA1c (p = 0.52-0.92. The associations of CRP with HOMA-IR and HbA1c were all null when examined using instrumental variables analysis, with genetic variants as the instrument for serum CRP. Instrumental variables estimates differed from the directly observed associations (p = 0.007-0.11. Pooled analysis of CRP haplotypes and diabetes in Whitehall II and Northwick Park Heart Study II produced null findings (p = 0.25-0.88. Analyses based on the Wellcome Trust Case Control Consortium (1,923 diabetes cases, 2,932 controls using three SNPs in tight linkage disequilibrium with our
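The instrumental-variables logic can be illustrated with a toy Wald-ratio estimate: a genotype Z shifts the exposure X but is independent of the confounder, so cov(Z,Y)/cov(Z,X) recovers the causal effect even when the direct X-Y regression is biased. All data below are simulated, and this is the generic just-identified IV estimator, not the study's haplotype-based analysis.

```python
import random

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

rng = random.Random(7)
n = 50000
beta_causal = 0.0  # true causal effect of exposure X on outcome Y

Z = [rng.choice((0, 1, 2)) for _ in range(n)]   # genotype instrument
U = [rng.gauss(0, 1) for _ in range(n)]         # unobserved confounder
X = [0.5 * z + 1.0 * u + rng.gauss(0, 1) for z, u in zip(Z, U)]
Y = [beta_causal * x + 1.0 * u + rng.gauss(0, 1) for x, u in zip(X, U)]

beta_ols = cov(X, Y) / cov(X, X)  # confounded: biased away from zero
beta_iv = cov(Z, Y) / cov(Z, X)   # Wald/IV estimate: near the true zero
print(round(beta_ols, 2), round(beta_iv, 2))
```

This mirrors the study's finding: the directly observed CRP associations were attenuated or null under instrumental-variables analysis, consistent with confounding rather than causation.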
Using a Virtual Experiment to Analyze Infiltration Process from Point to Grid-cell Size Scale
Barrios, M. I.
2013-12-01
Hydrological science requires the emergence of a consistent theoretical corpus driving the relationships between dominant physical processes at different spatial and temporal scales. However, the strong spatial heterogeneities and non-linearities of these processes make the development of multiscale conceptualizations difficult. Therefore, understanding scaling is a key issue in advancing this science. This work focuses on the use of virtual experiments to address the scaling of vertical infiltration from a physically based model at the point scale to a simplified, physically meaningful modeling approach at the grid-cell scale. Numerical simulations have the advantage over field experimentation of dealing with a wide range of boundary and initial conditions. The aim of the work was to show the utility of numerical simulations for discovering relationships between the hydrological parameters at both scales, and to use this synthetic experience as a medium for teaching the complex nature of this hydrological process. The Green-Ampt model was used to represent vertical infiltration at the point scale, and a conceptual storage model was employed to simulate the infiltration process at the grid-cell scale. Lognormal and beta probability distribution functions were assumed to represent the heterogeneity of soil hydraulic parameters at the point scale. The linkages between point-scale parameters and grid-cell-scale parameters were established by inverse simulations based on the mass balance equation and the averaging of the flow at the point scale. Results have shown numerical stability issues for particular conditions and have revealed the complex nature of the non-linear relationships between the models' parameters at both scales, indicating that the parameterization of point-scale processes at the coarser scale is governed by the amplification of non-linear effects. The findings of these simulations have been used by students to identify potential research questions on scale issues.
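The point-scale building block here is the Green-Ampt implicit equation for cumulative infiltration F(t), K t = F - psi*dtheta * ln(1 + F/(psi*dtheta)), with infiltration rate f = K (1 + psi*dtheta/F). A fixed-point solver sketch, with illustrative (not the study's) parameter values:

```python
import math

def green_ampt_F(K, psi_dtheta, t, tol=1e-10):
    # solve K*t = F - psi_dtheta * ln(1 + F/psi_dtheta) for F by fixed
    # point iteration; the map is a contraction since its derivative
    # psi_dtheta / (psi_dtheta + F) is < 1 for F > 0
    F = K * t if K * t > 0 else tol
    while True:
        F_new = K * t + psi_dtheta * math.log(1.0 + F / psi_dtheta)
        if abs(F_new - F) < tol:
            return F_new
        F = F_new

# illustrative values: K in cm/h, psi*dtheta in cm, t in h
K, psi_dtheta, t = 1.0, 5.0, 2.0
F = green_ampt_F(K, psi_dtheta, t)
f = K * (1.0 + psi_dtheta / F)  # instantaneous infiltration rate
print(F, f)
```

Averaging such point-scale solutions over a distribution of K and psi*dtheta values is what the virtual experiment compares against the grid-cell storage model.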
Putting to point the production process of iodine-131 by dry distillation (Preoperational tests)
International Nuclear Information System (INIS)
Alanis M, J.
2002-12-01
With the purpose of fine-tuning the production process of iodine-131, one objective of the preoperational tests of the iodine-131 production process was to verify the operation of each of the following components: the heating systems, the vacuum system, the mechanical system, and the peripheral equipment that form part of the production process. Another objective was to establish the optimal parameters to be applied at each stage of obtaining iodine-131. It is necessary to point out that this objective is very important, since the components of the equipment are new and their behavior during the process differs from that of the equipment with which the experimental studies were carried out. (Author)
Coca, Steven G; Zabetian, Azadeh; Ferket, Bart S; Zhou, Jing; Testani, Jeffrey M; Garg, Amit X; Parikh, Chirag R
2016-08-01
Observational studies have shown that acute change in kidney function (specifically, AKI) is a strong risk factor for poor outcomes. Thus, the outcome of acute change in serum creatinine level, regardless of underlying biology or etiology, is frequently used in clinical trials as both efficacy and safety end points. We performed a meta-analysis of clinical trials to quantify the relationship between positive or negative short-term effects of interventions on change in serum creatinine level and more meaningful clinical outcomes. After a thorough literature search, we included 14 randomized trials of interventions that altered risk for an acute increase in serum creatinine level and had reported between-group differences in CKD and/or mortality rate ≥3 months after randomization. Seven trials assessed interventions that, compared with placebo, increased risk of acute elevation in serum creatinine level (pooled relative risk, 1.52; 95% confidence interval, 1.22 to 1.89), and seven trials assessed interventions that, compared with placebo, reduced risk of acute elevation in serum creatinine level (pooled relative risk, 0.57; 95% confidence interval, 0.44 to 0.74). However, pooled risks for CKD and mortality associated with interventions did not differ from those with placebo in either group. In conclusion, several interventions that affect risk of acute, mild to moderate, often temporary elevation in serum creatinine level in placebo-controlled randomized trials showed no appreciable effect on CKD or mortality months later, raising questions about the value of using small to moderate changes in serum creatinine level as end points in clinical trials. Copyright © 2016 by the American Society of Nephrology.
Hazard analysis and critical control point (HACCP) for an ultrasound food processing operation.
Chemat, Farid; Hoarau, Nicolas
2004-05-01
Emerging technologies, such as ultrasound (US), used for food and drink production can pose hazards to product safety. Classical quality control methods are inadequate to control these hazards. Hazard analysis of critical control points (HACCP) is the most secure and cost-effective method for controlling possible product contamination or cross-contamination due to physical or chemical hazards during production. The following case study on the application of HACCP to an ultrasound food-processing operation demonstrates how the hazards at the critical control points of the process are effectively controlled through the implementation of HACCP.
The application of prototype point processes for the summary and description of California wildfires
Nichols, K.; Schoenberg, F.P.; Keeley, J.E.; Bray, A.; Diez, D.
2011-01-01
A method for summarizing repeated realizations of a space-time marked point process, known as prototyping, is discussed and applied to catalogues of wildfires in California. Prototype summaries are constructed for varying time intervals using California wildfire data from 1990 to 2006. Previous work on prototypes for temporal and space-time point processes is extended here to include methods for computing prototypes with marks and the incorporation of prototype summaries into hierarchical clustering algorithms, the latter of which is used to delineate fire seasons in California. Other results include summaries of patterns in the spatial-temporal distribution of wildfires within each wildfire season. © 2011 Blackwell Publishing Ltd.
Mass measurement on the rp-process waiting point {sup 72}Kr
Energy Technology Data Exchange (ETDEWEB)
Rodriguez, D. [Gesellschaft fuer Schwerionenforschung mbH, Darmstadt (Germany)]; Kolhinen, V.S. [Jyvaeskylae Univ. (Finland)]; Audi, G. [CSNSM-IN2P3-Centre National de la Recherche Scientifique (CNRS), 91 - Orsay (FR)]; and others
2004-06-01
The mass of one of the three major waiting points in the astrophysical rp-process {sup 72}Kr was measured for the first time with the Penning trap mass spectrometer ISOLTRAP. The measurement yielded a relative mass uncertainty of {delta}m/m=1.2 x 10{sup -7} ({delta}m=8 keV). Other Kr isotopes, also needed for astrophysical calculations, were measured with more than one order of magnitude improved accuracy. We use the ISOLTRAP masses of{sup 72-74}Kr to reanalyze the role of the {sup 72}Kr waiting point in the rp-process during X-ray bursts. (orig.)
Efficient LIDAR Point Cloud Data Managing and Processing in a Hadoop-Based Distributed Framework
Wang, C.; Hu, F.; Sha, D.; Han, X.
2017-10-01
Light Detection and Ranging (LiDAR) is one of the most promising technologies in surveying and mapping, city management, forestry, object recognition, computer vision engineering, and other fields. However, it is challenging to efficiently store, query, and analyze high-resolution 3D LiDAR data due to its volume and complexity. In order to improve the productivity of LiDAR data processing, this study proposes a Hadoop-based framework to efficiently manage and process LiDAR data in a distributed and parallel manner, which takes advantage of Hadoop's storage and computing ability. At the same time, the Point Cloud Library (PCL), an open-source project for 2D/3D image and point cloud processing, is integrated with HDFS and MapReduce to conduct the LiDAR data analysis algorithms provided by PCL in a parallel fashion. The experiment results show that the proposed framework can efficiently manage and process big LiDAR data.
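The map/shuffle/reduce pattern behind such a framework can be shown in miniature with a pure-Python stand-in that bins LiDAR returns by grid tile and aggregates per tile; no Hadoop or PCL API is implied, and the tile size and aggregates are arbitrary choices.

```python
from collections import defaultdict

def map_phase(points, tile=10.0):
    # mapper: key each (x, y, z) return by its grid tile, so returns
    # from the same tile can be processed together downstream
    for x, y, z in points:
        yield (int(x // tile), int(y // tile)), z

def reduce_phase(pairs):
    # shuffle: group values by key; reduce: aggregate each tile
    # independently (here: point count and maximum elevation)
    groups = defaultdict(list)
    for key, z in pairs:
        groups[key].append(z)
    return {key: (len(zs), max(zs)) for key, zs in groups.items()}

points = [(1.0, 2.0, 5.0), (3.0, 4.0, 7.5), (12.0, 3.0, 2.0)]
print(reduce_phase(map_phase(points)))
# tile (0, 0) holds two returns (max z 7.5); tile (1, 0) holds one
```

In the actual framework each reducer would instead invoke a PCL algorithm on its tile's points, with HDFS providing the distributed storage.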
End point detection in ion milling processes by sputter-induced optical emission spectroscopy
International Nuclear Information System (INIS)
Lu, C.; Dorian, M.; Tabei, M.; Elsea, A.
1984-01-01
The characteristic optical emission from the sputtered material during ion milling processes can provide an unambiguous indication of the presence of the specific etched species. By monitoring the intensity of a representative emission line, the etching process can be precisely terminated at an interface. Enhancement of the etching end point signal is possible by using a dual-channel photodetection system operating in a ratio or difference mode. The installation of the optical detection system on an existing etching chamber has been greatly facilitated by the use of optical fibers. Using a commercial ion milling system, experimental data for a number of etching processes have been obtained. The results demonstrate that sputter-induced optical emission spectroscopy offers many advantages over other techniques in detecting the etching end point of ion milling processes
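In software terms, a ratio-mode scheme of this kind reduces to watching the ratio of a film-material emission line to a substrate line and declaring the end point once the ratio stays past a threshold for a few consecutive samples (a debounce against noise). All intensity values, the threshold, and the dwell count below are synthetic illustrations.

```python
def detect_end_point(film_channel, substrate_channel, threshold=0.5, dwell=3):
    # return the sample index at which the film/substrate intensity
    # ratio has stayed below `threshold` for `dwell` consecutive samples,
    # or None if the end point is never reached
    run = 0
    for i, (f, s) in enumerate(zip(film_channel, substrate_channel)):
        run = run + 1 if f / s < threshold else 0
        if run >= dwell:
            return i
    return None

# synthetic traces: the film line fades as the substrate line grows
film      = [9.0, 8.8, 8.5, 8.0, 4.0, 1.0, 0.4, 0.3, 0.3]
substrate = [1.0, 1.0, 1.1, 1.2, 2.0, 3.5, 4.0, 4.2, 4.1]
print(detect_end_point(film, substrate))  # -> 7
```

Using the ratio rather than a single channel cancels common-mode drifts such as beam-current fluctuations, which is the stated advantage of the dual-channel mode.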
Directory of Open Access Journals (Sweden)
Burton Douglas C
2010-07-01
Full Text Available Abstract Background The use of thoracic pedicle screws in spinal deformity, trauma, and tumor reconstruction is becoming more common. Unsuccessful screw placement may require salvage techniques utilizing transverse process hooks. The effect of different starting point placement techniques on the strength of the transverse process has not previously been reported. The purpose of this paper is to determine the biomechanical properties of the thoracic transverse process following various pedicle screw starting point placement techniques. Methods Forty-seven fresh-frozen human cadaveric thoracic vertebrae from T2 to T9 were disarticulated and matched by bone mineral density (BMD) and transverse process (TP) cross-sectional area. Specimens were randomized to one of four groups: A, control, and three others based on thoracic pedicle screw placement technique; B, straightforward; C, funnel; and D, in-out-in. Initial cortical bone removal for pedicle screw placement was made using a burr at the location on the transverse process or transverse process-laminar junction as published in the original description of each technique. The transverse process was tested by measuring load-to-failure, simulating a hook in compression mode. Analysis of covariance and Pearson correlation coefficients were used to examine the data. Results Technique was a significant predictor of load-to-failure (P = 0.0007). The least squares mean (LS mean) load-to-failure of group A (control) was 377 N, group B (straightforward) 355 N, group C (funnel) 229 N, and group D (in-out-in) 301 N. Significant differences were noted between groups A and C, A and D, B and C, and C and D. BMD (0.925 g/cm2 [range, 0.624-1.301 g/cm2]) was also a significant predictor of load-to-failure, for all specimens grouped together (P P 0.05. Level and side tested were not found to significantly correlate with load-to-failure. Conclusions The residual coronal plane compressive strength of the thoracic transverse process
Analysis of the stochastic channel model by Saleh & Valenzuela via the theory of point processes
DEFF Research Database (Denmark)
Jakobsen, Morten Lomholt; Pedersen, Troels; Fleury, Bernard Henri
2012-01-01
and underlying features, like the intensity function of the component delays and the delay-power intensity. The flexibility and clarity of the mathematical instruments utilized to obtain these results lead us to conjecture that the theory of spatial point processes provides a unifying mathematical framework...
Kaplan-Meier estimators of distance distributions for spatial point processes
Baddeley, A.J.; Gill, R.D.
1997-01-01
When a spatial point process is observed through a bounded window, edge effects hamper the estimation of characteristics such as the empty space function $F$, the nearest neighbour distance distribution $G$, and the reduced second order moment function $K$. Here we propose and study product-limit
Two step estimation for Neyman-Scott point process with inhomogeneous cluster centers
Czech Academy of Sciences Publication Activity Database
Mrkvička, T.; Muška, Milan; Kubečka, Jan
2014-01-01
Roč. 24, č. 1 (2014), s. 91-100 ISSN 0960-3174 R&D Projects: GA ČR(CZ) GA206/07/1392 Institutional support: RVO:60077344 Keywords : bayesian method * clustering * inhomogeneous point process Subject RIV: EH - Ecology, Behaviour Impact factor: 1.623, year: 2014
Dense range images from sparse point clouds using multi-scale processing
Do, Q.L.; Ma, L.; With, de P.H.N.
2013-01-01
Multi-modal data processing based on visual and depth/range images has become relevant in computer vision for 3D reconstruction applications such as city modeling, robot navigation etc. In this paper, we generate high-accuracy dense range images from sparse point clouds to facilitate such
Fast covariance estimation for innovations computed from a spatial Gibbs point process
DEFF Research Database (Denmark)
Coeurjolly, Jean-Francois; Rubak, Ege
In this paper, we derive an exact formula for the covariance of two innovations computed from a spatial Gibbs point process and suggest a fast method for estimating this covariance. We show how this methodology can be used to estimate the asymptotic covariance matrix of the maximum pseudo...
A Systematic Approach to Process Evaluation in the Central Oklahoma Turning Point (COTP) Partnership
Tolma, Eleni L.; Cheney, Marshall K.; Chrislip, David D.; Blankenship, Derek; Troup, Pam; Hann, Neil
2011-01-01
Formation is an important stage of partnership development. Purpose: To describe the systematic approach to process evaluation of a Turning Point initiative in central Oklahoma during the formation stage. The nine-month collaborative effort aimed to develop an action plan to promote health. Methods: A sound planning framework was used in the…
A randomized controlled trial of an electronic informed consent process.
Rothwell, Erin; Wong, Bob; Rose, Nancy C; Anderson, Rebecca; Fedor, Beth; Stark, Louisa A; Botkin, Jeffrey R
2014-12-01
A pilot study assessed an electronic informed consent model within a randomized controlled trial (RCT). Participants who were recruited for the parent RCT project were randomly selected and randomized to either an electronic consent group (n = 32) or a simplified paper-based consent group (n = 30). Results from the electronic consent group reported significantly higher understanding of the purpose of the study, alternatives to participation, and who to contact if they had questions or concerns about the study. However, participants in the paper-based control group reported higher mean scores on some survey items. This research suggests that an electronic informed consent presentation may improve participant understanding for some aspects of a research study. © The Author(s) 2014.
International Nuclear Information System (INIS)
Behringer, K.
1991-02-01
In a recent paper by Behringer et al. (1990), the Wiener-Hermite Functional (WHF) method has been applied to point reactor kinetics excited by Gaussian random reactivity noise under stationary conditions, in order to calculate the neutron steady-state value and the neutron power spectral density (PSD) in a second-order (WHF-2) approximation. For simplicity, delayed neutrons and any feedback effects have been disregarded. The present study is a straightforward continuation of the previous one, treating the problem more generally by including any number of delayed neutron groups. For the case of white reactivity noise, the accuracy of the approach is determined by comparison with the exact solution available from the Fokker-Planck method. In the numerical comparisons, the first-order (WHF-1) approximation of the PSD is also considered. (author) 4 figs., 10 refs
International Nuclear Information System (INIS)
Behringer, K.; Pineyro, J.; Mennig, J.
1990-06-01
The Wiener-Hermite functional (WHF) method has been applied to the point reactor kinetic equation excited by Gaussian random reactivity noise under stationary conditions. Delayed neutrons and any feedback effects are disregarded. The neutron steady-state value and the power spectral density (PSD) of the neutron flux have been calculated in a second order (WHF-2) approximation. Two cases are considered: in the first case, the noise source is low-pass white noise. In both cases the WHF-2 approximation of the neutron PSDs leads to relatively simple analytical expressions. The accuracy of the approach is determined by comparison with exact solutions of the problem. The investigations show that the WHF method is a powerful approximative tool for studying the nonlinear effects in the stochastic differential equation. (author) 5 figs., 29 refs
A Combined Control Chart for Identifying Out–Of–Control Points in Multivariate Processes
Directory of Open Access Journals (Sweden)
Marroquín–Prado E.
2010-10-01
Full Text Available The Hotelling's T2 control chart is widely used to identify out-of-control signals in multivariate processes. However, this chart is not sensitive to small shifts in the process mean vector. In this work we propose a control chart to identify out-of-control signals. The proposed chart is a combination of Hotelling's T2 chart, the M chart proposed by Hayter et al. (1994), and a new chart based on Principal Components. The combination of these charts identifies any type and size of change in the process mean vector. Using simulation and the Average Run Length (ARL), the performance of the proposed control chart is evaluated. The ARL is the average number of points within control before an out-of-control point is detected. The results of the simulation show that the proposed chart is more sensitive than each one of the three charts individually
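The T2 statistic underlying the chart is the squared Mahalanobis distance of each observation vector from the in-control mean. The sketch below is a minimal illustration of flagging out-of-control points with a chi-square control limit for the known-parameters case; it is not the authors' combined chart, and the in-control mean, covariance, and confidence level are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def hotelling_t2(X, mean, cov):
    """T2 (squared Mahalanobis distance) of each row of X from the
    in-control mean, using the in-control covariance."""
    d = X - mean
    inv = np.linalg.inv(cov)
    return np.einsum("ij,jk,ik->i", d, inv, d)

rng = np.random.default_rng(0)
mean, cov = np.zeros(2), np.eye(2)            # assumed in-control parameters
X = rng.multivariate_normal(mean, cov, size=100)
X[50] += 5.0                                  # inject a large mean shift
t2 = hotelling_t2(X, mean, cov)
ucl = stats.chi2.ppf(0.99, df=2)              # control limit, known-parameters case
out_of_control = np.flatnonzero(t2 > ucl)     # index 50 should be flagged
```

As the abstract notes, this baseline chart reacts strongly only to large shifts; small sustained shifts in the mean vector motivate combining it with other charts.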
International Nuclear Information System (INIS)
Verma, K.; MacNeil, C.; Odar, S.
1996-01-01
The secondary sides of all four steam generators at the Point Lepreau Nuclear Generating Station were cleaned during the 1995 annual outage run-down using the Siemens high temperature chemical cleaning process. Traditionally, all secondary side chemical cleaning exercises in CANDU as well as the other nuclear power stations in North America have been conducted using a process developed in conjunction with the Electric Power Research Institute (EPRI). The Siemens high temperature process was applied for the first time in North America at the Point Lepreau Nuclear Generating Station (PLGS). The paper discusses experiences related to the pre- and post-award chemical cleaning activities, chemical cleaning application, post-cleaning inspection results and waste handling activities. (author)
A Randomization Procedure for "Trickle-Process" Evaluations
Goldman, Jerry
1977-01-01
This note suggests a solution to the problem of achieving randomization in experimental settings where units deemed eligible for treatment "trickle in," that is, appear at any time. The solution permits replication of the experiment in order to test for time-dependent effects. (Author/CTM)
Hierarchical random cellular neural networks for system-level brain-like signal processing.
Kozma, Robert; Puljic, Marko
2013-09-01
Sensory information processing and cognition in brains are modeled using dynamic systems theory. The brain's dynamic state is described by a trajectory evolving in a high-dimensional state space. We introduce a hierarchy of random cellular automata as the mathematical tools to describe the spatio-temporal dynamics of the cortex. The corresponding brain model is called neuropercolation which has distinct advantages compared to traditional models using differential equations, especially in describing spatio-temporal discontinuities in the form of phase transitions. Phase transitions demarcate singularities in brain operations at critical conditions, which are viewed as hallmarks of higher cognition and awareness experience. The introduced Monte-Carlo simulations obtained by parallel computing point to the importance of computer implementations using very large-scale integration (VLSI) and analog platforms. Copyright © 2013 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
George A. Kelley
2011-01-01
Full Text Available Fibromyalgia is a major public health problem affecting an estimated 200 to 400 million people worldwide. The purpose of this study was to use the meta-analytic approach to determine the efficacy and effectiveness of randomized controlled exercise intervention trials (aerobic, strength training, or both) on tender points (TPs) in adults with fibromyalgia. Using random-effects models and 95% confidence intervals (CI), a statistically significant reduction in TPs was observed based on per-protocol analyses (8 studies representing 322 participants) but not intention-to-treat analyses (5 studies representing 338 participants) (per-protocol: −0.68, 95% CI: −1.16, −0.20; intention-to-treat: −0.24, 95% CI: −0.62, 0.15). Changes were equivalent to relative reductions of 10.9% and 6.9%, respectively, for per-protocol and intention-to-treat analyses. It was concluded that exercise is efficacious for reducing TPs in women with FM. However, a need exists for additional well-designed and reported studies on this topic.
Directory of Open Access Journals (Sweden)
Chao Hsing Yeh
2014-01-01
Full Text Available This prospective, randomized clinical trial (RCT) pilot study was designed to (1) assess the feasibility and tolerability of an easily administered, auricular point acupressure (APA) intervention and (2) provide an initial assessment of effect size as compared to a sham treatment. Thirty-seven subjects were randomized to receive either the real or sham APA treatment. All participants were treated once a week for 4 weeks. Self-report measures were obtained at baseline, weekly during treatment, at end-of-intervention (EOI), and at a 1-month follow-up. A dropout rate of 26% in the real APA group and 50% in the sham group was observed. The reduction in worst pain from baseline to EOI was 41% for the real and 5% for the sham group, with a Cohen’s effect size of 1.22 P<0.00. Disability scores on the Roland Morris Disability Questionnaire (RMDQ) decreased in the real group by 29% and were unchanged in the sham group (+3% P<0.00. Given the high dropout rate, results must be interpreted with caution; nevertheless, our results suggest that APA may provide an inexpensive and effective complementary approach for the management of back pain in older adults, and further study is warranted.
Bayesian inference for multivariate point processes observed at sparsely distributed times
DEFF Research Database (Denmark)
Rasmussen, Jakob Gulddahl; Møller, Jesper; Aukema, B.H.
We consider statistical and computational aspects of simulation-based Bayesian inference for a multivariate point process which is only observed at sparsely distributed times. For specificity we consider a particular data set which has earlier been analyzed by a discrete time model involving unknown...... normalizing constants. We discuss the advantages and disadvantages of using continuous time processes compared to discrete time processes in the setting of the present paper as well as other spatial-temporal situations. Keywords: Bark beetle, conditional intensity, forest entomology, Markov chain Monte Carlo...
DEFF Research Database (Denmark)
Bey, Niki
2000-01-01
to three essential assessment steps, the method enables rough environmental evaluations and supports in this way material- and process-related decision-making in the early stages of design. In its overall structure, the Oil Point Method is related to Life Cycle Assessment - except for two main differences...... of environmental evaluation and only approximate information about the product and its life cycle. This dissertation addresses this challenge in presenting a method, which is tailored to these requirements of designers - the Oil Point Method (OPM). In providing environmental key information and confining itself...
Spatial point process analysis for a plant community with high biodiversity
DEFF Research Database (Denmark)
Illian, Janine; Møller, Jesper; Waagepetersen, Rasmus Plenge
A complex multivariate spatial point pattern for a plant community with high biodiversity is modelled using a hierarchical multivariate point process model. In the model, interactions between plants with different post-fire regeneration strategies are of key interest. We consider initially...... a maximum likelihood approach to inference where problems arise due to unknown interaction radii for the plants. We next demonstrate that a Bayesian approach provides a flexible framework for incorporating prior information concerning the interaction radii. From an ecological perspective, we are able both...
Liu, Zhangjun; Liu, Zenghui
2018-06-01
This paper develops a hybrid approach of spectral representation and random function for simulating stationary stochastic vector processes. In the proposed approach, the high-dimensional random variables included in the original spectral representation (OSR) formula can be effectively reduced to only two elementary random variables by introducing random functions that serve as random constraints. Based on this, a satisfactory simulation accuracy can be guaranteed by selecting a small representative point set of the elementary random variables. The probability information of the stochastic excitations can be fully captured by just several hundred sample functions generated by the proposed approach. Therefore, combined with the probability density evolution method (PDEM), the approach can be used for dynamic response analysis and reliability assessment of engineering structures. For illustrative purposes, a stochastic turbulence wind velocity field acting on a frame-shear-wall structure is simulated by constructing three types of random functions to demonstrate the accuracy and efficiency of the proposed approach. Careful and in-depth studies concerning the probability density evolution analysis of the wind-induced structural response have been conducted so as to better illustrate the application prospects of the proposed approach. Numerical examples also show that the proposed approach possesses good robustness.
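The original spectral representation that the paper starts from can be sketched as follows. This is the classical OSR formula with independent random phases, not the authors' reduced two-variable random-function variant, and the band-limited white-noise spectrum is an illustrative assumption.

```python
import numpy as np

def spectral_rep_sample(S, w_max, N, t, rng):
    """One sample path of a zero-mean stationary Gaussian process with
    one-sided power spectral density S, via the spectral representation
    X(t) = sum_k sqrt(2 S(w_k) dw) cos(w_k t + phi_k), phi_k ~ U(0, 2 pi)."""
    dw = w_max / N
    w = (np.arange(N) + 0.5) * dw             # midpoint frequencies
    phi = rng.uniform(0.0, 2.0 * np.pi, N)    # independent random phases
    amp = np.sqrt(2.0 * S(w) * dw)
    return amp @ np.cos(np.outer(w, t) + phi[:, None])

# Illustrative band-limited spectrum: S(w) = S0 on [0, w_max], so the
# process variance equals the integral of the one-sided PSD, S0 * w_max.
S0, w_max = 1.0, 4.0 * np.pi
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 501)
x = spectral_rep_sample(lambda w: np.full_like(w, S0), w_max, 512, t, rng)
```

The paper's point is that the N independent phases above constitute a high-dimensional random vector; replacing them with deterministic functions of two elementary random variables is what makes a small representative point set sufficient.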
Directory of Open Access Journals (Sweden)
Hussain Shareef
2017-01-01
Full Text Available Many maximum power point tracking (MPPT) algorithms have been developed in recent years to maximize the produced PV energy. These algorithms are not sufficiently robust because of fast-changing environmental conditions, efficiency, accuracy at steady-state value, and dynamics of the tracking algorithm. Thus, this paper proposes a new random forest (RF) model to improve MPPT performance. The RF model has the ability to capture the nonlinear association of patterns between predictors, such as irradiance and temperature, to determine accurate maximum power point. A RF-based tracker is designed for 25 SolarTIFSTF-120P6 PV modules, with the capacity of 3 kW peak using two high-speed sensors. For this purpose, a complete PV system is modeled using 300,000 data samples and simulated using the MATLAB/SIMULINK package. The proposed RF-based MPPT is then tested under actual environmental conditions for 24 days to validate the accuracy and dynamic response. The response of the RF-based MPPT model is also compared with that of the artificial neural network and adaptive neurofuzzy inference system algorithms for further validation. The results show that the proposed MPPT technique gives significant improvement compared with that of other techniques. In addition, the RF model passes the Bland–Altman test, with more than 95 percent acceptability.
Migliorati, Giovanni
2015-08-28
We study the accuracy of the discrete least-squares approximation on a finite dimensional space of a real-valued target function from noisy pointwise evaluations at independent random points distributed according to a given sampling probability measure. The convergence estimates are given in mean-square sense with respect to the sampling measure. The noise may be correlated with the location of the evaluation and may have nonzero mean (offset). We consider both cases of bounded or square-integrable noise / offset. We prove conditions between the number of sampling points and the dimension of the underlying approximation space that ensure a stable and accurate approximation. Particular focus is on deriving estimates in probability within a given confidence level. We analyze how the best approximation error and the noise terms affect the convergence rate and the overall confidence level achieved by the convergence estimate. The proofs of our convergence estimates in probability use arguments from the theory of large deviations to bound the noise term. Finally we address the particular case of multivariate polynomial approximation spaces with any density in the beta family, including uniform and Chebyshev.
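A minimal sketch of the setting studied above: discrete least-squares approximation of a real-valued target from noisy pointwise evaluations at independent random points drawn from a uniform sampling measure, with the number of samples taken well above the dimension of the polynomial space. The target function, noise level, and polynomial degree are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
f = lambda x: np.cos(np.pi * x)               # illustrative target on [-1, 1]
m, deg = 400, 9                                # m samples, dimension deg + 1
x = rng.uniform(-1.0, 1.0, m)                  # uniform sampling measure
y = f(x) + rng.normal(0.0, 0.01, m)            # noisy pointwise evaluations
V = np.polynomial.legendre.legvander(x, deg)   # Legendre basis, stable on [-1, 1]
coef, *_ = np.linalg.lstsq(V, y, rcond=None)   # discrete least-squares fit

# Mean-square error with respect to the sampling measure, on fresh points
xt = rng.uniform(-1.0, 1.0, 5000)
rmse = np.sqrt(np.mean((np.polynomial.legendre.legval(xt, coef) - f(xt)) ** 2))
```

With m much larger than deg + 1 the random Gram matrix concentrates and the fit is stable; shrinking m toward the space dimension is exactly the regime where the stability conditions analyzed in the paper start to bind.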
Shareef, Hussain; Mutlag, Ammar Hussein; Mohamed, Azah
2017-01-01
Many maximum power point tracking (MPPT) algorithms have been developed in recent years to maximize the produced PV energy. These algorithms are not sufficiently robust because of fast-changing environmental conditions, efficiency, accuracy at steady-state value, and dynamics of the tracking algorithm. Thus, this paper proposes a new random forest (RF) model to improve MPPT performance. The RF model has the ability to capture the nonlinear association of patterns between predictors, such as irradiance and temperature, to determine accurate maximum power point. A RF-based tracker is designed for 25 SolarTIFSTF-120P6 PV modules, with the capacity of 3 kW peak using two high-speed sensors. For this purpose, a complete PV system is modeled using 300,000 data samples and simulated using the MATLAB/SIMULINK package. The proposed RF-based MPPT is then tested under actual environmental conditions for 24 days to validate the accuracy and dynamic response. The response of the RF-based MPPT model is also compared with that of the artificial neural network and adaptive neurofuzzy inference system algorithms for further validation. The results show that the proposed MPPT technique gives significant improvement compared with that of other techniques. In addition, the RF model passes the Bland-Altman test, with more than 95 percent acceptability.
Analysis of residual stress state in sheet metal parts processed by single point incremental forming
Maaß, F.; Gies, S.; Dobecki, M.; Brömmelhoff, K.; Tekkaya, A. E.; Reimers, W.
2018-05-01
The mechanical properties of formed metal components are highly affected by the prevailing residual stress state. A selective induction of residual compressive stresses in the component can improve product properties such as the fatigue strength. By means of single point incremental forming (SPIF), the residual stress state can be influenced by adjusting the process parameters during the manufacturing process. To achieve a fundamental understanding of the residual stress formation caused by the SPIF process, a valid numerical process model is essential. Within the scope of this paper, the significance of kinematic hardening effects on the determined residual stress state is presented based on numerical simulations. The effect of the unclamping step after the manufacturing process is also analyzed. An average deviation of the residual stress amplitudes between the clamped and unclamped condition of 18% reveals that the unclamping step needs to be considered to reach a high numerical prediction quality.
Prospects for direct neutron capture measurements on s-process branching point isotopes
Energy Technology Data Exchange (ETDEWEB)
Guerrero, C.; Lerendegui-Marco, J.; Quesada, J.M. [Universidad de Sevilla, Dept. de Fisica Atomica, Molecular y Nuclear, Sevilla (Spain); Domingo-Pardo, C. [CSIC-Universidad de Valencia, Instituto de Fisica Corpuscular, Valencia (Spain); Kaeppeler, F. [Karlsruhe Institute of Technology, Institut fuer Kernphysik, Karlsruhe (Germany); Palomo, F.R. [Universidad de Sevilla, Dept. de Ingenieria Electronica, Sevilla (Spain); Reifarth, R. [Goethe-Universitaet Frankfurt am Main, Frankfurt am Main (Germany)
2017-05-15
The neutron capture cross sections of several unstable key isotopes acting as branching points in the s-process are crucial for stellar nucleosynthesis studies, but they are very challenging to measure directly due to the difficult production of sufficient sample material, the high activity of the resulting samples, and the actual (n, γ) measurement, where high neutron fluxes and effective background rejection capabilities are required. At present there are about 21 relevant s-process branching point isotopes whose cross section could not be measured yet over the neutron energy range of interest for astrophysics. However, the situation is changing with some very recent developments and upcoming technologies. This work introduces three techniques that will change the current paradigm in the field: the use of γ-ray imaging techniques in (n, γ) experiments, the production of moderated neutron beams using high-power lasers, and double capture experiments in Maxwellian neutron beams. (orig.)
Valenza, Gaetano; Citi, Luca; Barbieri, Riccardo
2013-01-01
We report an exemplary study of instantaneous assessment of cardiovascular dynamics performed using point-process nonlinear models based on Laguerre expansion of the linear and nonlinear Wiener-Volterra kernels. As quantifiers, instantaneous measures such as high-order spectral features and Lyapunov exponents can be estimated from a quadratic and cubic autoregressive formulation of the model first-order moment, respectively. Here, these measures are evaluated on heartbeat series coming from 16 healthy subjects and 14 patients with Congestive Heart Failure (CHF). Data were gathered from the on-line repository PhysioBank, which has been taken as a landmark for testing nonlinear indices. Results show that the proposed nonlinear Laguerre-Volterra point-process methods are able to track the nonlinear and complex cardiovascular dynamics, distinguishing significantly between CHF and healthy heartbeat series.
A business process model as a starting point for tight cooperation among organizations
Directory of Open Access Journals (Sweden)
O. Mysliveček
2006-01-01
Full Text Available Outsourcing and other kinds of tight cooperation among organizations are more and more necessary for success on all markets (markets of high technology products are particularly influenced). Thus it is important for companies to be able to effectively set up all kinds of cooperation. A business process model (BPM) is a suitable starting point for this future cooperation. In this paper the process of setting up such cooperation is outlined, as well as why it is important for business success.
Weak interaction rates for Kr and Sr waiting-point nuclei under rp-process conditions
International Nuclear Information System (INIS)
Sarriguren, P.
2009-01-01
Weak interaction rates are studied in neutron deficient Kr and Sr waiting-point isotopes in ranges of densities and temperatures relevant for the rp process. The nuclear structure is described within a microscopic model (deformed QRPA) that reproduces not only the half-lives but also the Gamow-Teller strength distributions recently measured. The various sensitivities of the decay rates to both density and temperature are discussed. Continuum electron capture is shown to contribute significantly to the weak rates at rp-process conditions.
Do MENA stock market returns follow a random walk process?
Directory of Open Access Journals (Sweden)
Salim Lahmiri
2013-01-01
Full Text Available In this research, three variance ratio tests: the standard variance ratio test, the wild bootstrap multiple variance ratio test, and the non-parametric rank scores test are adopted to test the random walk hypothesis (RWH) of stock markets in the Middle East and North Africa (MENA) region using most recent data from January 2010 to September 2012. The empirical results obtained by all three econometric tests show that the RWH is strongly rejected for Kuwait, Tunisia, and Morocco. However, the standard variance ratio test and the wild bootstrap multiple variance ratio test reject the null hypothesis of a random walk in Jordan and KSA, while the non-parametric rank scores test does not. We may conclude that the Jordan and KSA stock markets are weakly efficient. In sum, the empirical results suggest that return series in Kuwait, Tunisia, and Morocco are predictable. In other words, predictable patterns that can be exploited in these markets still exist. Therefore, investors may make profits in such less efficient markets.
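The statistic at the core of these tests compares the variance of q-period returns with q times the variance of one-period returns; under a random walk it is close to 1, while positively autocorrelated (predictable) returns push it above 1. The sketch below is a minimal homoskedastic version of the Lo-MacKinlay variance ratio, not the wild-bootstrap or rank-score variants used in the paper, and the simulated return series are illustrative.

```python
import numpy as np

def variance_ratio(returns, q):
    """Lo-MacKinlay variance ratio VR(q): variance of overlapping q-period
    returns divided by q times the one-period variance (about 1 under
    the random walk hypothesis)."""
    r = np.asarray(returns, dtype=float)
    n, mu = r.size, np.mean(returns)
    var1 = np.sum((r - mu) ** 2) / n
    rq = np.convolve(r, np.ones(q), mode="valid")   # overlapping q-period sums
    varq = np.sum((rq - q * mu) ** 2) / (n * q)
    return varq / var1

rng = np.random.default_rng(3)
rw = rng.normal(size=20_000)                # i.i.d. increments: random walk holds
ar = np.zeros(20_000)                       # AR(1) returns: predictable
for i in range(1, ar.size):
    ar[i] = 0.4 * ar[i - 1] + rng.normal()
vr_rw = variance_ratio(rw, 5)               # close to 1
vr_ar = variance_ratio(ar, 5)               # well above 1 (about 1.9 in theory)
```

Rejections like those reported for Kuwait, Tunisia, and Morocco correspond to VR(q) estimates that differ from 1 by more than their sampling error allows.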
Directory of Open Access Journals (Sweden)
He C
2017-08-01
Full Text Available Chunhui He,1,* Hua Ma2,* 1Internal Medicine of Traditional Chinese Medicine, 2Medical Image Center, The First Affiliated Hospital of Xinjiang Medical University, Wulumuqi, People’s Republic of China *These authors contributed equally to this work Background: Plantar heel pain can be managed with dry needling of myofascial trigger points (MTrPs); however, whether MTrP needling is effective remains controversial. Thus, we conducted this meta-analysis to evaluate the effect of MTrP needling in patients with plantar heel pain. Materials and methods: PubMed, Embase, Web of Science, SinoMed (Chinese BioMedical Literature Service System, People’s Republic of China), and CNKI (National Knowledge Infrastructure, People’s Republic of China) databases were systematically reviewed for randomized controlled trials (RCTs) that assessed the effects of MTrP needling. Pooled weighted mean differences (WMDs) with 95% CIs were calculated for change in visual analog scale (VAS) score, and pooled risk ratios (RRs) with 95% CIs were calculated for success rate for pain and incidence of adverse events. A fixed-effects model or random-effects model was used to pool the estimates, depending on the heterogeneity among the included studies. Results: An extensive literature search yielded 1,941 articles, of which only seven RCTs met the inclusion criteria and were included in this meta-analysis. The pooled results showed that MTrP needling significantly reduced the VAS score (WMD = –15.50, 95% CI: –19.48, –11.53; P<0.001) compared with control, but it had a success rate for pain similar to control (risk ratio [RR] = 1.15, 95% CI: 0.87, 1.51; P=0.320). Moreover, MTrP needling was associated with an incidence of adverse events similar to control (RR = 1.89, 95% CI: 0.38, 9.39; P=0.438). Conclusion: MTrP needling effectively reduced heel pain due to plantar fasciitis. However, considering the potential limitations in this study, more large-scale, adequately powered, good
Lombardo, Luigi
2018-02-13
We develop a stochastic modeling approach based on spatial point processes of log-Gaussian Cox type for a collection of around 5000 landslide events provoked by a precipitation trigger in Sicily, Italy. Through the embedding into a hierarchical Bayesian estimation framework, we can use the integrated nested Laplace approximation methodology to make inference and obtain the posterior estimates of spatially distributed covariate and random effects. Several mapping units are useful to partition a given study area in landslide prediction studies. These units hierarchically subdivide the geographic space from the highest grid-based resolution to the stronger morphodynamic-oriented slope units. Here we integrate both mapping units into a single hierarchical model, by treating the landslide triggering locations as a random point pattern. This approach diverges fundamentally from the unanimously used presence–absence structure for areal units since we focus on modeling the expected landslide count jointly within the two mapping units. Predicting this landslide intensity provides more detailed and complete information as compared to the classically used susceptibility mapping approach based on relative probabilities. To illustrate the model’s versatility, we compute absolute probability maps of landslide occurrences and check their predictive power over space. While the landslide community typically produces spatial predictive models for landslides only in the sense that covariates are spatially distributed, no actual spatial dependence has been explicitly integrated so far. Our novel approach features a spatial latent effect defined at the slope unit level, allowing us to assess the spatial influence that remains unexplained by the covariates in the model. For rainfall-induced landslides in regions where the raingauge network is not sufficient to capture the spatial distribution of the triggering precipitation event, this latent effect provides valuable imaging support
PARALLEL PROCESSING OF BIG POINT CLOUDS USING Z-ORDER-BASED PARTITIONING
Directory of Open Access Journals (Sweden)
C. Alis
2016-06-01
Full Text Available As laser scanning technology improves and costs are coming down, the amount of point cloud data being generated can be prohibitively difficult and expensive to process on a single machine. This data explosion is not only limited to point cloud data. Voluminous amounts of high-dimensionality and quickly accumulating data, collectively known as Big Data, such as those generated by social media, Internet of Things devices and commercial transactions, are becoming more prevalent as well. New computing paradigms and frameworks are being developed to efficiently handle the processing of Big Data, many of which utilize a compute cluster composed of several commodity grade machines to process chunks of data in parallel. A central concept in many of these frameworks is data locality. By its nature, Big Data is large enough that the entire dataset would not fit on the memory and hard drives of a single node hence replicating the entire dataset to each worker node is impractical. The data must then be partitioned across worker nodes in a manner that minimises data transfer across the network. This is a challenge for point cloud data because there exist different ways to partition data and they may require data transfer. We propose a partitioning based on Z-order which is a form of locality-sensitive hashing. The Z-order or Morton code is computed by dividing each dimension to form a grid then interleaving the binary representation of each dimension. For example, the Z-order code for the grid square with coordinates (x = 1 = 01₂, y = 3 = 11₂) is 1011₂ = 11. The number of points in each partition is controlled by the number of bits per dimension: the more bits, the fewer the points. The number of bits per dimension also controls the level of detail with more bits yielding finer partitioning. We present this partitioning method by implementing it on Apache Spark and investigating how different parameters affect the accuracy and running time of the k nearest
Parallel Processing of Big Point Clouds Using Z-Order Partitioning
Alis, C.; Boehm, J.; Liu, K.
2016-06-01
As laser scanning technology improves and costs are coming down, the amount of point cloud data being generated can be prohibitively difficult and expensive to process on a single machine. This data explosion is not only limited to point cloud data. Voluminous amounts of high-dimensionality and quickly accumulating data, collectively known as Big Data, such as those generated by social media, Internet of Things devices and commercial transactions, are becoming more prevalent as well. New computing paradigms and frameworks are being developed to efficiently handle the processing of Big Data, many of which utilize a compute cluster composed of several commodity grade machines to process chunks of data in parallel. A central concept in many of these frameworks is data locality. By its nature, Big Data is large enough that the entire dataset would not fit on the memory and hard drives of a single node hence replicating the entire dataset to each worker node is impractical. The data must then be partitioned across worker nodes in a manner that minimises data transfer across the network. This is a challenge for point cloud data because there exist different ways to partition data and they may require data transfer. We propose a partitioning based on Z-order which is a form of locality-sensitive hashing. The Z-order or Morton code is computed by dividing each dimension to form a grid then interleaving the binary representation of each dimension. For example, the Z-order code for the grid square with coordinates (x = 1 = 01₂, y = 3 = 11₂) is 1011₂ = 11. The number of points in each partition is controlled by the number of bits per dimension: the more bits, the fewer the points. The number of bits per dimension also controls the level of detail with more bits yielding finer partitioning. We present this partitioning method by implementing it on Apache Spark and investigating how different parameters affect the accuracy and running time of the k nearest neighbour algorithm
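The bit-interleaving step described above can be sketched directly. The first helper reproduces the abstract's worked example; the second is a hypothetical helper (not from the paper) illustrating how dropping low-order bits coarsens the partitioning.

```python
def morton2d(x, y, bits=16):
    """Interleave the bits of grid coordinates x and y into a Z-order code.
    Bit i of y goes to position 2i+1 and bit i of x to position 2i, which
    reproduces the paper's example: x = 1 = 01b, y = 3 = 11b -> 1011b = 11."""
    code = 0
    for i in range(bits):
        code |= ((y >> i) & 1) << (2 * i + 1)
        code |= ((x >> i) & 1) << (2 * i)
    return code

def partition_key(code, bits_per_dim, keep_bits):
    """Hypothetical partition id: keep only the top `keep_bits` bits of each
    dimension, so fewer kept bits yield coarser partitions with more points."""
    return code >> (2 * (bits_per_dim - keep_bits))
```

Here morton2d(1, 3) returns 11, matching the worked example. Nearby grid cells tend to share high-order code bits, which is what makes the Z-order key locality-sensitive and lets contiguous key ranges map to spatially compact partitions.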
Nezhadhaghighi, Mohsen Ghasemi
2017-08-01
Here, we present results of numerical simulations and the scaling characteristics of one-dimensional random fluctuations with heavy-tailed probability distribution functions. Assuming that the distribution function of the random fluctuations obeys Lévy statistics with a power-law scaling exponent, we investigate the fractional diffusion equation in the presence of μ-stable Lévy noise. We study the scaling properties of the global width and two-point correlation functions and then compare the analytical and numerical results for the growth exponent β and the roughness exponent α. We also investigate the fractional Fokker-Planck equation for heavy-tailed random fluctuations. We show that the fractional diffusion processes in the presence of μ-stable Lévy noise display special scaling properties in the probability distribution function (PDF). Finally, we numerically study the scaling properties of the heavy-tailed random fluctuations by using the diffusion entropy analysis. This method is based on the evaluation of the Shannon entropy of the PDF generated by the random fluctuations, rather than on the measurement of the global width of the process. We apply the diffusion entropy analysis to extract the growth exponent β and to confirm the validity of our numerical analysis.
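The diffusion entropy analysis mentioned above admits a compact numerical sketch. The code below is an illustrative NumPy version under simplifying assumptions (Gaussian rather than Lévy increments, a histogram estimate of the differential entropy, overlapping windows); all names are ours, not the authors'. For ordinary Brownian scaling the fitted slope δ should come out close to 0.5:

```python
import numpy as np

def diffusion_entropy(increments, window_sizes, n_bins=50):
    """Shannon (differential) entropy S(t) of the displacement PDF over
    windows of length t; for self-similar diffusion S(t) ~ A + delta*ln(t)."""
    x = np.cumsum(increments)                  # diffusion trajectory
    entropies = []
    for t in window_sizes:
        disp = x[t:] - x[:-t]                  # overlapping-window displacements
        p, edges = np.histogram(disp, bins=n_bins, density=True)
        dx = edges[1] - edges[0]
        p = p[p > 0]
        entropies.append(-np.sum(p * np.log(p)) * dx)
    return np.array(entropies)

rng = np.random.default_rng(0)
steps = rng.standard_normal(200_000)           # Gaussian, not Levy: delta ~ 0.5
windows = np.array([8, 16, 32, 64, 128])
S = diffusion_entropy(steps, windows)
delta = np.polyfit(np.log(windows), S, 1)[0]   # slope of S against ln(t)
print(round(delta, 2))                         # close to 0.5 for Brownian scaling
```

Replacing the Gaussian steps with heavy-tailed increments would probe the Lévy regime the abstract studies, though the histogram estimator then needs care with the diverging tails.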
Developing a Business Intelligence Process for a Training Module in SharePoint 2010
Schmidtchen, Bryce; Solano, Wanda M.; Albasini, Colby
2015-01-01
Prior to this project, training information for the employees of the National Center for Critical Processing and Storage (NCCIPS) was stored in an array of unrelated spreadsheets and SharePoint lists that had to be manually updated. By developing a content management system on SharePoint, a web application platform, the training system is now highly automated and provides a much less labor-intensive method of storing training data and scheduling training courses. This system was developed by using SharePoint Designer and laying out the data structure for the interaction between different lists of data about the employees. The automation of data population inside the lists was accomplished by implementing SharePoint workflows, which essentially lay out the logic for how data is connected and calculated between certain lists. The resulting training system is constructed from a combination of five lists of data, with a single list acting as the user-friendly interface. This interface is populated with the courses required for each employee and includes past and future information about course requirements. The employees of NCCIPS now have the ability to view, log, and schedule their training information and courses with much more ease. This system will eliminate a significant amount of manual input and serve as a powerful informational resource for the employees of NCCIPS in the future.
Fleckenstein, Johannes; Lill, Christian; Lüdtke, Rainer; Gleditsch, Jochen; Rasp, Gerd; Irnich, Dominik
2009-09-01
One out of four patients visiting a general practitioner reports a sore throat associated with pain on swallowing. This study was established to examine the immediate pain-alleviating effect of a single-point acupuncture treatment applied to the large intestine meridian of patients with sore throat. Sixty patients with acute tonsillitis and pharyngitis were enrolled in this randomized placebo-controlled trial. They received either acupuncture or sham laser acupuncture directed to the large intestine meridian section between acupuncture points LI 8 and LI 10. The main outcome measure was the change of pain intensity on swallowing a sip of water, evaluated by a visual analog scale 15 minutes after treatment. A credibility assessment regarding the respective treatment was performed. The pain intensity for the acupuncture group before and immediately after therapy was 5.6 ± 2.8 and 3.0 ± 3.0, and for the sham group 5.6 ± 2.5 and 3.8 ± 2.5, respectively. Although the acupuncture group reported a more pronounced improvement, there was no significant difference between groups (Δ = 0.9, confidence interval: −0.2 to 2.0; P = 0.12; analysis of covariance). Patients' satisfaction was high in both treatment groups. The study was prematurely terminated due to a subsequent lack of suitable patients. A single acupuncture treatment applied to a selected area of the large intestine meridian was no more effective in alleviating the pain associated with clinical sore throat than sham laser acupuncture applied to the same area. Nevertheless, clinically relevant improvement was achieved in both groups. Pain alleviation might partly be due to the intense palpation of the large intestine meridian. The benefit of a comprehensive acupuncture treatment protocol in this condition should be subject to further trials.
EFFICIENT LIDAR POINT CLOUD DATA MANAGING AND PROCESSING IN A HADOOP-BASED DISTRIBUTED FRAMEWORK
Directory of Open Access Journals (Sweden)
C. Wang
2017-10-01
Light Detection and Ranging (LiDAR) is one of the most promising technologies in surveying and mapping, city management, forestry, object recognition, computer vision engineering and other fields. However, it is challenging to efficiently store, query and analyze the high-resolution 3D LiDAR data due to its volume and complexity. In order to improve the productivity of LiDAR data processing, this study proposes a Hadoop-based framework to efficiently manage and process LiDAR data in a distributed and parallel manner, which takes advantage of Hadoop's storage and computing ability. At the same time, the Point Cloud Library (PCL), an open-source project for 2D/3D image and point cloud processing, is integrated with HDFS and MapReduce to run the LiDAR data analysis algorithms provided by PCL in a parallel fashion. The experiment results show that the proposed framework can efficiently manage and process big LiDAR data.
Lucarini, Valerio
2009-01-01
We perturb the simple cubic (SC), body-centered cubic (BCC), and face-centered cubic (FCC) structures with a spatial Gaussian noise whose adimensional strength is controlled by the parameter α, and analyze the statistical properties of the cells of the resulting Voronoi tessellations using an ensemble approach. We concentrate on topological properties of the cells, such as the number of faces, and on metric properties of the cells, such as the area, the volume and the isoperimetric quotient. The topological properties of the Voronoi tessellations of the SC and FCC crystals are unstable with respect to the introduction of noise, because the corresponding polyhedra are geometrically degenerate, whereas the tessellation of the BCC crystal is topologically stable even against noise of small but finite intensity. Whereas the average volume of the cells is fixed by the intensity parameter of the system and does not depend on the noise, the average area of the cells has a rather interesting behavior with respect to noise intensity. For weak noise, the mean area of the Voronoi tessellations corresponding to perturbed BCC and FCC crystals increases quadratically with the noise intensity. In the case of perturbed SC crystals, there is an optimal amount of noise that minimizes the mean area of the cells. Already for a moderate amount of noise (α > 0.5), the statistical properties of the three perturbed tessellations are indistinguishable, and for intense noise (α > 2), the results converge to those of the Poisson-Voronoi tessellation. Notably, two-parameter gamma distributions constitute an excellent model for the empirical pdf of all considered topological and metric properties. By analyzing jointly the statistical properties of the area and of the volume of the cells, we discover that the shape of the cells, as measured by the isoperimetric quotient, also fluctuates.
The Voronoi tessellations of the BCC and of the FCC structures turn out to be local maxima for the isoperimetric quotient among space-filling tessellations, which suggests a weaker form of the recently disproved Kelvin conjecture. Moreover, whereas the size of the isoperimetric quotient fluctuations goes to zero linearly with noise in the SC and BCC cases, the decrease is quadratic in the FCC case. Correspondingly, anomalous scaling relations with exponents larger than 3/2 are observed between the area and the volume of the cells for all cases considered, and, except for the FCC structure, also for infinitesimal noise. In the Poisson-Voronoi limit, the exponent is ≈1.67. The anomaly in the scaling indicates that large cells preferentially feature large isoperimetric quotients. The FCC structure, in spite of being topologically unstable, turns out to be the most stable against noise when the shape of the cells, as measured by the isoperimetric quotient, is considered. These scaling relations apply only over a finite range and should be taken as descriptive of the bulk statistical properties of the cells. As the number of faces is strongly correlated with the sphericity (cells with more faces are bulkier), the anomalous scaling is heavily reduced when we perform power-law fits separately on cells with a specific number of faces.
Lucarini, Valerio
2008-01-01
We perturb the SC, BCC, and FCC crystal structures with a spatial Gaussian noise whose adimensional strength is controlled by the parameter α, and analyze the topological and metrical properties of the resulting Voronoi Tessellations (VT). The topological properties of the VT of the SC and FCC crystals are unstable with respect to the introduction of noise, because the corresponding polyhedra are geometrically degenerate, whereas the tessellation of the BCC crystal is topologically stable eve...
Citraresmi, A. D. P.; Wahyuni, E. E.
2018-03-01
The aim of this study was to examine the implementation of Hazard Analysis and Critical Control Point (HACCP) for identification and prevention of potential hazards in the production process of dried anchovy at PT. Kelola Mina Laut (KML), Lobuk unit, Sumenep. Cold storage is needed in each anchovy processing step in order to maintain the product's physical and chemical condition. In addition, a quality assurance system should be implemented to maintain product quality. The research was conducted using a survey method, following the whole process of making dried anchovy from the receipt of raw materials to the packaging of the final product. The data were analyzed using a descriptive method. Implementation of HACCP at PT. KML, Lobuk unit, Sumenep was conducted by applying Pre-Requisite Programs (PRP) and a preparation stage consisting of the 5 initial steps and 7 principles of HACCP. The results showed that CCPs were found in the boiling step, with the significant hazard of Listeria monocytogenes bacteria, and in the final sorting step, with the significant hazard of foreign material contamination in the product. Actions taken were controlling the boiling temperature at 100-105 °C for 3-5 minutes and training the sorting process employees.
Directory of Open Access Journals (Sweden)
Saidi Badreddine
2016-01-01
The single point incremental forming process is well known to be perfectly suited for prototyping and small series. One of its fields of applicability is the medical area, for the forming of titanium prostheses or titanium medical implants. However, this process is not yet very industrialized, mainly due to its geometrical inaccuracy and its inhomogeneous thickness distribution. Moreover, considerable forces can occur; they must be controlled in order to preserve the tooling. In this paper, a numerical approach is proposed in order to minimize the maximum force reached during the incremental forming of titanium sheets and to maximize the minimal thickness. A response surface methodology is used to find the optimal values of two input parameters of the process, the punch diameter and the vertical step size of the tool path.
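The response-surface step can be sketched generically: fit a second-order polynomial to sampled (punch diameter, step size) response data and solve for the stationary point. The NumPy toy below uses made-up numbers, not the paper's model or data:

```python
import numpy as np

def fit_quadratic_surface(X, y):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    return b

def stationary_point(b):
    """Solve grad = 0 for the fitted quadratic surface."""
    H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
    return np.linalg.solve(H, -b[1:3])

# Synthetic "maximum forming force" response, minimal at a punch diameter of
# 10 mm and a vertical step of 0.5 mm (illustrative numbers only):
rng = np.random.default_rng(4)
X = rng.uniform([5.0, 0.2], [15.0, 1.0], size=(40, 2))
y = (3.0 + 0.05 * (X[:, 0] - 10) ** 2 + 8.0 * (X[:, 1] - 0.5) ** 2
     + rng.normal(0, 0.01, 40))
b = fit_quadratic_surface(X, y)
print(np.round(stationary_point(b), 2))  # close to [10, 0.5]
```

In a real study the response values would come from finite-element simulations or experiments rather than a synthetic formula.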
Marked point process framework for living probabilistic safety assessment and risk follow-up
International Nuclear Information System (INIS)
Arjas, Elja; Holmberg, Jan
1995-01-01
We construct a model for living probabilistic safety assessment (PSA) by applying the general framework of marked point processes. The framework provides a theoretically rigorous approach for considering risk follow-up of posterior hazards. In risk follow-up, the hazard of core damage is evaluated synthetically at time points in the past, by using some observed events as logged history and combining it with re-evaluated potential hazards. There are several alternatives for doing this, of which we consider three here: the initiating event approach, the hazard rate approach, and the safety system approach. In addition, for comparison, we consider a core damage hazard arising in risk monitoring. Each of these four definitions draws attention to a particular aspect of risk assessment, and this is reflected in the behaviour of the consequent risk importance measures. Several alternative measures are again considered. The concepts and definitions are illustrated by a numerical example.
Quality control for electron beam processing of polymeric materials by end-point analysis
International Nuclear Information System (INIS)
DeGraff, E.; McLaughlin, W.L.
1981-01-01
Properties of certain plastics, e.g. polytetrafluoroethylene, polyethylene and ethylene vinyl acetate copolymer, can be modified selectively by ionizing radiation. One of the advantages of this treatment over chemical methods is better control of the process and of the end-product properties. The most convenient method of dosimetry for monitoring quality control is post-irradiation evaluation of the plastic itself, e.g. melt index and melt point determination. It is shown that, with proper calibration in terms of total dose and sufficiently reproducible radiation effects, such product test methods provide convenient and meaningful analyses. Other appropriate standardized analytical methods include stress-crack resistance, stress-strain-to-fracture testing and solubility determination. Standard routine dosimetry over the dose and dose rate ranges of interest confirms that measured product end points can be correlated with calibrated values of absorbed dose in the product within the uncertainty limits of the measurements. (author)
Studies in astronomical time series analysis: Modeling random processes in the time domain
Scargle, J. D.
1979-01-01
Random process models phrased in the time domain are used to analyze astrophysical time series data produced by random processes. A moving average (MA) model represents the data as a sequence of pulses occurring randomly in time, with random amplitudes. An autoregressive (AR) model represents the correlations in the process in terms of a linear function of past values. The best AR model is determined from sampled data and transformed to an MA model for interpretation. The randomness of the pulse amplitudes is maximized by a FORTRAN algorithm which is relatively stable numerically. Results of test cases are given to study the effects of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the optical light curve of the quasar 3C 273 is given.
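The AR-to-MA transformation mentioned above can be illustrated in a few lines. This is a schematic NumPy version (a least-squares AR fit followed by the impulse-response recursion), not Scargle's FORTRAN algorithm:

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p) fit: x[t] ~ a1*x[t-1] + ... + ap*x[t-p]."""
    n = len(x)
    X = np.column_stack([x[p - k - 1 : n - k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return a

def ar_to_ma(a, n_terms):
    """MA (impulse-response) coefficients psi of the fitted AR model:
    psi[0] = 1, psi[j] = sum_k a[k-1] * psi[j-k]."""
    psi = np.zeros(n_terms)
    psi[0] = 1.0
    for j in range(1, n_terms):
        for k in range(1, min(j, len(a)) + 1):
            psi[j] += a[k - 1] * psi[j - k]
    return psi

# Simulate an AR(1) process with coefficient 0.7 and recover its MA form,
# whose coefficients should decay geometrically: 1, 0.7, 0.49, 0.343, ...
rng = np.random.default_rng(1)
e = rng.standard_normal(50_000)
x = np.zeros_like(e)
for t in range(1, len(e)):
    x[t] = 0.7 * x[t - 1] + e[t]
a = fit_ar(x, p=1)
psi = ar_to_ma(a, n_terms=4)
print(np.round(psi, 3))  # approximately [1, 0.7, 0.49, 0.343]
```

The MA coefficients are the pulse shape the abstract refers to: the data are represented as random-amplitude pulses convolved with this response.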
Nuclear binding around the RP-process waiting points $^{68}$Se and $^{72}$Kr
2002-01-01
Encouraged by the success of mass determinations of nuclei close to the Z=N line performed at ISOLTRAP during the year 2000 and of the recent decay spectroscopy studies on neutron-deficient Kr isotopes (IS351 collaboration), we aim to measure masses and proton separation energies of the bottleneck nuclei defining the flow of the astrophysical rp-process beyond A$\sim$70. In detail, the program includes mass measurements of the rp-process waiting point nuclei $^{68}$Se and $^{72}$Kr and determination of proton separation energies of the proton-unbound $^{69}$Br and $^{73}$Rb via $\beta$-decays of $^{69}$Kr and $^{73}$Sr, respectively. The aim of the project is to complete the experimental database for astrophysical network calculations and for the liquid-drop type of mass models typically used in the modelling of the astrophysical rp process in the region. The first beamtime is scheduled for August 2001, and the aim is to measure the absolute mass of the waiting-point nucleus $^{72}$Kr.
Assessment of Peer Mediation Process from Conflicting Students’ Point of Views
Directory of Open Access Journals (Sweden)
Fulya TÜRK
2016-12-01
The purpose of this study was to analyze a peer mediation process applied in a high school from the conflicting students' points of view. The research was carried out in a high school in Denizli. After ten sessions of training in peer mediation, peer mediators mediated their peers' real conflicts. In the research, 41 students (28 girls, 13 boys) who got help at least once were interviewed as a party to a conflict. Through semi-structured interviews with the conflicting students, the mediation process was evaluated from the students' points of view. Eight questions were asked of the conflicting parties. The verbal data obtained from the interviews were analyzed using content analysis. When the conflicting students' opinions of and experiences with peer mediation were analyzed, it was seen that they were satisfied with the process, that they had resolved their conflicts in a constructive and peaceful way, and that their friendships continued as before. All of these results indicate that peer mediation is an effective method of resolving student conflicts constructively.
Clusterless Decoding of Position From Multiunit Activity Using A Marked Point Process Filter
Deng, Xinyi; Liu, Daniel F.; Kay, Kenneth; Frank, Loren M.; Eden, Uri T.
2016-01-01
Point process filters have been applied successfully to decode neural signals and track neural dynamics. Traditionally, these methods assume that multiunit spiking activity has already been correctly spike-sorted. As a result, these methods are not appropriate for situations where sorting cannot be performed with high precision such as real-time decoding for brain-computer interfaces. As the unsupervised spike-sorting problem remains unsolved, we took an alternative approach that takes advantage of recent insights about clusterless decoding. Here we present a new point process decoding algorithm that does not require multiunit signals to be sorted into individual units. We use the theory of marked point processes to construct a function that characterizes the relationship between a covariate of interest (in this case, the location of a rat on a track) and features of the spike waveforms. In our example, we use tetrode recordings, and the marks represent a four-dimensional vector of the maximum amplitudes of the spike waveform on each of the four electrodes. In general, the marks may represent any features of the spike waveform. We then use Bayes’ rule to estimate spatial location from hippocampal neural activity. We validate our approach with a simulation study and with experimental data recorded in the hippocampus of a rat moving through a linear environment. Our decoding algorithm accurately reconstructs the rat’s position from unsorted multiunit spiking activity. We then compare the quality of our decoding algorithm to that of a traditional spike-sorting and decoding algorithm. Our analyses show that the proposed decoding algorithm performs equivalently or better than algorithms based on sorted single-unit activity. These results provide a path toward accurate real-time decoding of spiking patterns that could be used to carry out content-specific manipulations of population activity in hippocampus or elsewhere in the brain. PMID:25973549
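To make the idea concrete, here is a deliberately simplified 1-D sketch of clusterless decoding for a single time bin: Gaussian kernel density estimates stand in for the joint mark intensity λ(x, m), and the posterior over position follows from Bayes' rule with a Poisson observation model. Every name and number below is illustrative; this is not the authors' algorithm or data (they use four-dimensional tetrode amplitude marks):

```python
import numpy as np

def gauss(d, h):
    return np.exp(-0.5 * (d / h) ** 2)

def decode_bin(xs, spike_pos, spike_marks, occ_pos, obs_marks, dt, hx=5.0, hm=0.5):
    """Posterior over position bins xs for one decoding bin, given the
    (unsorted) spike marks observed in that bin."""
    occ = gauss(xs[:, None] - occ_pos[None, :], hx).sum(1) + 1e-12   # occupancy
    lam_tot = gauss(xs[:, None] - spike_pos[None, :], hx).sum(1) / occ
    log_post = -lam_tot * dt                      # Poisson no-spike term
    for m in obs_marks:                           # one term per observed spike
        lam_xm = (gauss(xs[:, None] - spike_pos[None, :], hx)
                  * gauss(m - spike_marks[None, :], hm)).sum(1) / occ
        log_post += np.log(lam_xm + 1e-12)
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

# Toy encoding data: spikes with mark ~1 fire near x = 20, mark ~3 near x = 80.
rng = np.random.default_rng(2)
occ_pos = rng.uniform(0, 100, 5000)               # uniform exploration
spike_pos = np.concatenate([rng.normal(20, 5, 300), rng.normal(80, 5, 300)])
spike_marks = np.concatenate([rng.normal(1.0, 0.1, 300), rng.normal(3.0, 0.1, 300)])
xs = np.linspace(0, 100, 101)
post = decode_bin(xs, spike_pos, spike_marks, occ_pos, [1.02, 0.95], dt=0.02)
print(xs[np.argmax(post)])                        # peaks near x = 20
```

Because the marks near 1.0 were emitted almost exclusively around x = 20 in the training data, the posterior concentrates there without any spike sorting having been performed.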
Art Therapy and Cognitive Processing Therapy for Combat-Related PTSD: A Randomized Controlled Trial
Campbell, Melissa; Decker, Kathleen P.; Kruk, Kerry; Deaver, Sarah P.
2016-01-01
This randomized controlled trial was designed to determine if art therapy in conjunction with Cognitive Processing Therapy (CPT) was more effective for reducing symptoms of combat posttraumatic stress disorder (PTSD) than CPT alone. Veterans (N = 11) were randomized to receive either individual CPT, or individual CPT in conjunction with individual…
On the estimation of the spherical contact distribution Hs(y) for spatial point processes
International Nuclear Information System (INIS)
Doguwa, S.I.
1990-08-01
Ripley (1977, Journal of the Royal Statistical Society B, 39, 172-212) proposed an estimator for the spherical contact distribution Hs(y) of a spatial point process observed in a bounded planar region. However, this estimator is not defined for some distances of interest in this bounded region. A new estimator for Hs(y) is proposed for use with a regular grid of sampling locations. This new estimator is defined for all distances of interest. It also appears to have a smaller bias and a smaller mean squared error than the previously suggested alternative. (author). 11 refs, 4 figs, 1 tab
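For context, a naive grid-based estimate of the spherical contact (empty-space) distribution looks like the sketch below; the paper's point is precisely that a better-behaved estimator is needed, so treat this as the uncorrected baseline, with all names and parameters ours. For a homogeneous Poisson process the target is Hs(y) = 1 - exp(-λπy²):

```python
import numpy as np

def empty_space_hs(points, window, grid_n, ys):
    """Naive grid-based estimate of the spherical contact distribution
    Hs(y): the fraction of regular-grid test locations whose nearest
    pattern point lies within distance y.  No edge correction is applied,
    which is exactly the deficiency better estimators address."""
    (x0, x1), (y0, y1) = window
    gx = np.linspace(x0, x1, grid_n)
    gy = np.linspace(y0, y1, grid_n)
    G = np.array([(a, b) for a in gx for b in gy])         # test locations
    d = np.sqrt(((G[:, None, :] - points[None, :, :]) ** 2).sum(-1))
    nearest = d.min(axis=1)                                # empty-space distances
    return np.array([(nearest <= y).mean() for y in ys])

# Homogeneous Poisson pattern on the unit square, intensity 100:
rng = np.random.default_rng(3)
lam = 100.0
pts = rng.uniform(0, 1, size=(rng.poisson(lam), 2))
ys = np.array([0.02, 0.05, 0.08])
est = empty_space_hs(pts, ((0, 1), (0, 1)), grid_n=60, ys=ys)
theory = 1 - np.exp(-lam * np.pi * ys ** 2)
print(np.round(est, 2), np.round(theory, 2))
```

Test locations near the window boundary cannot see points outside it, so this naive estimate is biased at larger distances, which is the motivation for the corrected estimators discussed in the abstract.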
Analysing the distribution of synaptic vesicles using a spatial point process model
DEFF Research Database (Denmark)
Khanmohammadi, Mahdieh; Waagepetersen, Rasmus; Nava, Nicoletta
2014-01-01
functionality by statistically modelling the distribution of the synaptic vesicles in two groups of rats: a control group subjected to sham stress and a stressed group subjected to a single acute foot-shock (FS)-stress episode. We hypothesize that the synaptic vesicles have different spatial distributions...... in the two groups. The spatial distributions are modelled using spatial point process models with an inhomogeneous conditional intensity and repulsive pairwise interactions. Our results verify the hypothesis that the two groups have different spatial distributions....
Structure and Randomness of Continuous-Time, Discrete-Event Processes
Marzen, Sarah E.; Crutchfield, James P.
2017-10-01
Loosely speaking, the Shannon entropy rate is used to gauge a stochastic process' intrinsic randomness; the statistical complexity gives the cost of predicting the process. We calculate, for the first time, the entropy rate and statistical complexity of stochastic processes generated by finite unifilar hidden semi-Markov models—memoryful, state-dependent versions of renewal processes. Calculating these quantities requires introducing novel mathematical objects (ε-machines of hidden semi-Markov processes) and new information-theoretic methods to stochastic processes.
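As a point of reference for the quantities involved, the entropy rate of an ordinary finite-state Markov chain (a much simpler object than the hidden semi-Markov models treated here) can be computed in closed form as h = -sum_i pi_i sum_j P_ij log P_ij. The snippet below is an illustrative Python version of that standard formula, not the authors' method:

```python
import numpy as np

def entropy_rate(P):
    """Shannon entropy rate (nats/step) of a stationary Markov chain with
    transition matrix P: h = -sum_i pi_i sum_j P_ij log P_ij."""
    # stationary distribution: left eigenvector of P for eigenvalue 1
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    pi = pi / pi.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)   # 0 * log 0 treated as 0
    return -np.sum(pi[:, None] * P * logP)

# A fair coin (i.i.d. uniform on two symbols) has entropy rate ln 2 per step:
P = np.array([[0.5, 0.5], [0.5, 0.5]])
print(entropy_rate(P))  # -> 0.6931... (= ln 2)
```

For the hidden semi-Markov processes of the abstract no such finite formula exists, which is why the authors need the new ε-machine constructions.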
The Impact of the Delivery of Prepared Power Point Presentations on the Learning Process
Directory of Open Access Journals (Sweden)
Auksė Marmienė
2011-04-01
This article describes the process of the preparation and delivery of Power Point presentations and how it can be used by teachers as a resource for classroom teaching. The advantages of this classroom activity are also outlined, covering some of the problems and providing a few suggestions for dealing with those difficulties. The major objective of the present paper is to investigate students' ability to choose the material and content of Power Point presentations on professional topics via the Internet, as well as their ability to prepare and deliver the presentation in front of an audience. The factors which determine the choice of the presentation subject are also analysed in this paper. After the delivery, students were requested to self- and peer-assess the difficulties they faced in the preparation and performance of the presentations by writing reports. Learners' attitudes to the choice of the topic of Power Point presentations were surveyed by administering a self-assessment questionnaire.
Process for quality assurance of welded joints for electrical resistance point welding
International Nuclear Information System (INIS)
Schaefer, R.; Singh, S.
1977-01-01
In order to guarantee reproducible welded joints of even quality (above all in the metal-working industry), it is proposed that, before resistance point welding is started, a preheating current should be allowed to flow at the site of the weld. A given reduction of the total resistance at the site of the weld determines the moment when the preheating current is switched over to the welding current; this value is predetermined empirically. Further possibilities of controlling the welding process are described, in which the measurement of the thermal expansion of the parts is used. A standard welding time is given. The rated course of the electrode movement during the process can be predicted, and a running comparison of nominal and actual values can be carried out. (RW) [de]
Implementation of 5S tools as a starting point in business process reengineering
Directory of Open Access Journals (Sweden)
Vorkapić Miloš
2017-01-01
The paper deals with the analysis of elements which represent a starting point in the implementation of business process reengineering (BPR). In our research, we have used Lean tools through an analysis of the 5S model. On the example of the finalization of a finished transmitter in IHMT-CMT production, 5S tools were implemented with a focus on quality elements, although in theory BPR and TQM are two opposite activities in an enterprise. We wanted to highlight the significance of employees' self-discipline, which helps the process of product finalization to proceed on time and without waste and losses. In addition, the employees keep their workplace clean, tidy and functional.
W.A. Moojen (Wouter); M.P. Arts (Mark); W.C.H. Jacobs (Wilco); E.W. van Zwet (Erik); M.E. van den Akker-van Marle (Elske); B.W. Koes (Bart); C.L.A.M. Vleggeert-Lankamp (Carmen); W.C. Peul (Wilco)
2013-01-01
Abstract Objective To assess whether interspinous process device implantation is more effective in the short term than conventional surgical decompression for patients with intermittent neurogenic claudication due to lumbar spinal stenosis. Design Randomized controlled
Energy Technology Data Exchange (ETDEWEB)
Mugendiran, V.; Gnanavelbabu, A. [Anna University, Chennai, Tamilnadu (India)
2017-06-15
In this study, a surface-based strain measurement was used to determine the formability of the sheet metal. A strain measurement may employ manual calculation of plastic strains based on the reference circle and the deformed circle. The manual calculation method has a greater margin of error in practical applications. In this paper, an attempt has been made to compare the formability obtained by implementing three different approaches: the conventional method, the least squares method and digital image-based strain measurement. As the sheet metal is formed by the single point incremental process, the etched circles deform into approximately elliptical shapes; image acquisition was carried out before and after forming. The plastic strains of the deformed circle grids are calculated with respect to the non-deformed reference. The coordinates of the deformed circles are measured through various image processing steps. Finally, the strains obtained from the deformed circles are used to plot the forming limit diagram. To evaluate the accuracy of the system, the conventional, least squares and digital-based methods of predicting the forming limit diagram were compared. The conventional and least squares methods show marginal error when compared with the digital-based processing method. Measurement of strain based on image processing agrees well and can be used to improve the accuracy and to reduce the measurement error in the prediction of the forming limit diagram.
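The circle-grid calculation described above reduces to a pair of logarithmic strains. The sketch below uses illustrative numbers, not the paper's measurements, and also derives the thickness strain from volume constancy, since thinning is what the forming limit diagram guards against:

```python
import math

def grid_strains(d0, d_major, d_minor):
    """True (logarithmic) principal strains of an etched circle of original
    diameter d0 that deformed into an ellipse with the given axis lengths."""
    e_major = math.log(d_major / d0)
    e_minor = math.log(d_minor / d0)
    e_thickness = -(e_major + e_minor)   # volume constancy: e1 + e2 + e3 = 0
    return e_major, e_minor, e_thickness

# A 2.5 mm circle deformed into a 3.0 mm x 2.2 mm ellipse:
e1, e2, e3 = grid_strains(2.5, 3.0, 2.2)
print(round(e1, 3), round(e2, 3), round(e3, 3))  # major > 0, minor < 0
```

Plotting many such (e2, e1) pairs, with the major strain on the vertical axis, is exactly how the forming limit diagram mentioned in the abstract is assembled.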
Hempel, Dorothea; Haunhorst, Stephanie; Sinnathurai, Sivajini; Seibel, Armin; Recker, Florian; Heringer, Frank; Michels, Guido; Breitkreutz, Raoul
2016-12-01
Point-of-care ultrasound (POC-US) is gaining importance in almost all specialties. E-learning has been used to teach theoretical knowledge and pattern recognition. As social media are universally available, they can be utilized for educational purposes. We wanted to evaluate the utility of the sandwich e-learning approach, defined as pre-course e-learning and a post-course learning activity using Facebook after a one-day point-of-care ultrasound (POC-US) course, and its effect on the retention of knowledge. A total of 62 medical students were recruited for this study and randomly assigned to one of four groups. All groups received identical hands-on training and performed several tests during the study period. The hands-on training was performed in groups of five students per instructor, with the students scanning each other. Group 1 had access to pre-course e-learning but not to post-course e-learning. Instead of pre-course e-learning, group 2 listened to presentations on the day of the course (classroom teaching) and had access to the post-course learning activity using Facebook. Group 3 had access to both pre- and post-course e-learning (sandwich e-learning) activities, while group 4 listened to classroom presentations only (classroom teaching only). Therefore, only groups 2 and 3 had access to post-course learning via Facebook by joining a secured group. Posts containing ultrasound pictures and videos were published to this group. The students were asked to "like" the posts to monitor attendance. Knowledge retention was assessed 6 weeks after the course. After 6 weeks, group 3 achieved results comparable to those of group 2 (82.2 ± 8.2 % vs. 84.3 ± 8.0 %) (p = 0.3). Students who participated in the post-course activity were more satisfied with the overall course than students without post-course learning (5.5 vs. 5.3 on a scale from 1 to 6). In this study, the sandwich e-learning approach led to equal rates of knowledge retention compared to
Neutron capture at the s-process branching points $^{171}$Tm and $^{204}$Tl
Branching points in the s-process are very special isotopes for which there is a competition between neutron capture and the subsequent β-decay chain producing the heavy elements beyond Fe. Typically, knowledge of the associated capture cross sections is very poor, owing to the difficulty of obtaining enough material of these radioactive isotopes and of measuring the cross section of a sample with an intrinsic activity; indeed, only 2 out of the 21 ${s}$-process branching points have ever been measured by the time-of-flight method. In this experiment we aim at measuring for the first time the capture cross sections of $^{171}$Tm and $^{204}$Tl, both of crucial importance for understanding the nucleosynthesis of heavy elements in AGB stars. The combination of both (n,$\gamma$) measurements on $^{171}$Tm and $^{204}$Tl will allow one to accurately constrain the neutron density and the strength of the $^{13}$C(α,n) source in low-mass AGB stars. Additionally, the cross section of $^{204}$Tl is also of cosmo-chrono...
Students’ Algebraic Thinking Process in Context of Point and Line Properties
Nurrahmi, H.; Suryadi, D.; Fatimah, S.
2017-09-01
Learning of school algebra is often limited to symbols and operating procedures, so students are able to work on problems that only require the ability to manipulate symbols but are unable to generalize a pattern, which is one part of algebraic thinking. The purpose of this study is to create a didactic design that facilitates students' algebraic thinking process through the generalization of patterns, especially in the context of the properties of points and lines. This study used a qualitative method, namely Didactical Design Research (DDR). The result is that students are able to make factual, contextual, and symbolic generalizations. This happens because the generalization arises from facts stated in local terms; the generalization then produced an algebraic formula that was described in the context and perspective of each student. After that, the formula replaced symbols drawn from the students' own language with algebraic letter symbols. It can be concluded that the design facilitated students' algebraic thinking process through the generalization of patterns, especially in the context of the properties of points and lines. The impact of this study is that the design can be used as an alternative teaching material in school algebra.
Bron, Carel; Wensing, Michel; Franssen, Jo Lm; Oostendorp, Rob Ab
2007-11-05
Shoulder disorders are a common health problem in western societies. Several treatment protocols have been developed for the clinical management of persons with shoulder pain. However, available evidence does not support any protocol as being superior over others. Systematic reviews provide some evidence that certain physical therapy interventions (i.e. supervised exercises and mobilisation) are effective in particular shoulder disorders (i.e. rotator cuff disorders, mixed shoulder disorders and adhesive capsulitis), but there is an ongoing need for high quality trials of physical therapy interventions. Usually, physical therapy consists of active exercises intended to strengthen the shoulder muscles as stabilizers of the glenohumeral joint, or of mobilisations to improve restricted mobility of the glenohumeral or adjacent joints (shoulder girdle). It is generally accepted that a-traumatic shoulder problems are the result of impingement of the subacromial structures, such as the bursa or rotator cuff tendons. Myofascial trigger points (MTrPs) in shoulder muscles may also lead to a complex of symptoms that are often seen in patients diagnosed with subacromial impingement or rotator cuff tendinopathy. Little is known about the treatment of MTrPs in patients with shoulder disorders. The primary aim of this study is to investigate whether physical therapy modalities to inactivate MTrPs can reduce symptoms and improve shoulder function in daily activities in a population of chronic a-traumatic shoulder patients when compared to a wait-and-see strategy. In addition, we investigate the recurrence rate during a one-year follow-up period. This paper presents the design for a randomized controlled trial to be conducted between September 2007 and September 2008, evaluating the effectiveness of a physical therapy treatment for non-traumatic shoulder complaints. One hundred subjects are included in this study. All subjects have unilateral shoulder pain for at least six months
Directed motion emerging from two coupled random processes
DEFF Research Database (Denmark)
Ambjörnsson, T.; Lomholt, Michael Andersen; Metzler, R.
2005-01-01
detail, we develop a dynamical description of the process in terms of a (2+1)-variable master equation for the probability of having m monomers on the target side of the membrane with n bound chaperones at time t. Emphasis is put on the calculation of the mean first passage time as a function of total...... dynamics ( and ), we perform the adiabatic elimination of the fast variable n, and find that for a very long polymer , but with a smaller prefactor than for ratchet-like dynamics. We solve the general case numerically as a function of the dimensionless parameters λ, κ and γ, and compare to the three...
Topobathymetric LiDAR point cloud processing and landform classification in a tidal environment
Skovgaard Andersen, Mikkel; Al-Hamdani, Zyad; Steinbacher, Frank; Rolighed Larsen, Laurids; Brandbyge Ernstsen, Verner
2017-04-01
Historically it has been difficult to create high resolution Digital Elevation Models (DEMs) in land-water transition zones due to shallow water depth and often challenging environmental conditions. This gap of information has been reflected as a "white ribbon" with no data in the land-water transition zone. In recent years, the technology of airborne topobathymetric Light Detection and Ranging (LiDAR) has proven capable of filling the gap by simultaneously capturing topographic and bathymetric elevation information, using only a single green laser. We collected green LiDAR point cloud data in the Knudedyb tidal inlet system in the Danish Wadden Sea in spring 2014. Creating a DEM from a point cloud requires the general processing steps of data filtering, water surface detection and refraction correction. However, there is no transparent and reproducible method for processing green LiDAR data into a DEM, specifically regarding the procedure of water surface detection and modelling. We developed a step-by-step procedure for creating a DEM from raw green LiDAR point cloud data, including a procedure for making a Digital Water Surface Model (DWSM) (see Andersen et al., 2017). Two different classification analyses were applied to the high resolution DEM: a geomorphometric and a morphological classification, respectively. The classification methods were originally developed for a small test area, but in this work we have used them to classify the complete Knudedyb tidal inlet system. References Andersen MS, Gergely Á, Al-Hamdani Z, Steinbacher F, Larsen LR, Ernstsen VB (2017). Processing and performance of topobathymetric lidar data for geomorphometric and morphological classification in a high-energy tidal environment. Hydrol. Earth Syst. Sci., 21: 43-63, doi:10.5194/hess-21-43-2017. Acknowledgements This work was funded by the Danish Council for Independent Research | Natural Sciences through the project "Process-based understanding and
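The refraction-correction step mentioned in the abstract can be sketched for an idealized flat water surface: bend the ray with Snell's law and rescale the in-water range for the slower speed of light. Real topobathymetric processing also has to model the wavy surface; the function below is only an illustrative simplification with assumed refractive indices.

```python
import math

# Simplified green-lidar refraction correction through a flat water
# surface (illustrative sketch; real pipelines model surface waves).
N_AIR, N_WATER = 1.000, 1.333

def corrected_depth(slant_range_apparent, incidence_deg):
    """slant_range_apparent: in-water range computed as if light kept
    its in-air speed; incidence_deg: angle from vertical at the surface."""
    theta1 = math.radians(incidence_deg)
    theta2 = math.asin(N_AIR * math.sin(theta1) / N_WATER)  # Snell's law
    slant_true = slant_range_apparent * N_AIR / N_WATER     # slower light
    return slant_true * math.cos(theta2)                    # vertical depth

print(round(corrected_depth(2.0, 15.0), 3))
```

At nadir the correction reduces to dividing the apparent range by the refractive index of water.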
Point process modeling and estimation: Advances in the analysis of dynamic neural spiking data
Deng, Xinyi
2016-08-01
A common interest of scientists in many fields is to understand the relationship between the dynamics of a physical system and the occurrences of discrete events within such a physical system. Seismologists study the connection between mechanical vibrations of the Earth and the occurrences of earthquakes so that future earthquakes can be better predicted. Astrophysicists study the association between the oscillating energy of celestial regions and the emission of photons to learn about the Universe's various objects and their interactions. Neuroscientists study the link between behavior and the millisecond-timescale spike patterns of neurons to understand higher brain functions. Such relationships can often be formulated within the framework of state-space models with point process observations. The basic idea is that the dynamics of the physical system are driven by the dynamics of some stochastic state variables, and the discrete events we observe in an interval are noisy observations with distributions determined by the state variables. This thesis proposes several new methodological developments that advance the framework of state-space models with point process observations at the intersection of statistics and neuroscience. In particular, we develop new methods 1) to characterize the rhythmic spiking activity using history-dependent structure, 2) to model population spike activity using marked point process models, 3) to allow for real-time decision making, and 4) to take into account the need for dimensionality reduction for high-dimensional state and observation processes. We applied these methods to a novel problem of tracking rhythmic dynamics in the spiking of neurons in the subthalamic nucleus of Parkinson's patients with the goal of optimizing placement of deep brain stimulation electrodes. We developed a decoding algorithm that can make decisions in real time (for example, to stimulate the neurons or not) based on various sources of information present in
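The basic state-space idea in this abstract can be sketched in a few lines: a latent stochastic state drives the conditional intensity of an observed point process, and spikes are binned Bernoulli observations with probability λ·Δt. The model and parameters below are illustrative, not those of the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent AR(1) state driving the rate (illustrative parameters)
T, dt = 2000, 0.001            # 2 s at 1 ms bins
phi, sigma = 0.999, 0.05       # slow latent dynamics
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + sigma * rng.normal()

# Conditional intensity (spikes/s) and Bernoulli spike observations
lam = np.exp(3.0 + x)          # baseline ~20 Hz, modulated by the state
p = np.clip(lam * dt, 0, 1)    # P(spike in bin) ≈ lambda * dt
spikes = rng.random(T) < p

print(spikes.sum(), "spikes in", T * dt, "s")
```

Decoding then inverts this generative direction: given `spikes`, one infers the hidden `x`, which is the problem the thesis's filtering and decoding algorithms address.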
Directory of Open Access Journals (Sweden)
Misganaw Abebe
2017-11-01
Springback in multi-point dieless forming (MDF) is a common problem because of the small deformation and the blank-holder-free boundary condition. Numerical simulations are widely used in sheet metal forming to predict springback; however, the computational cost of these numerical tools makes finding the optimal process parameter values expensive. This study proposes radial basis functions (RBF) to replace the numerical simulation model, using statistical analyses based on a design of experiments (DOE). Punch holding time, blank thickness, and curvature radius are chosen as the effective process parameters for determining springback. The Latin hypercube DOE method facilitates statistical analyses and the extraction of a prediction model over the experimental process parameter domain. A finite element (FE) simulation model is built in the ABAQUS commercial software to generate the springback responses of the training and testing samples. A genetic algorithm is applied to the developed RBF prediction model to find the optimal parameter values for reducing and compensating the induced springback for the different blank thicknesses. Finally, the RBF numerical result is verified against the FE simulation result at the optimal process parameters, and both results show that the deviation from the target shape due to springback is almost negligible.
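A toy version of the surrogate idea: fit a Gaussian radial-basis-function interpolant to sampled responses and then query it cheaply instead of re-running the expensive simulation. The one-dimensional response y = sin(2πx) merely stands in for FE springback outputs; the shape parameter and sample layout are assumptions, not the paper's setup.

```python
import numpy as np

# Training samples of a hypothetical springback response over one
# normalized process parameter (stand-in for FE simulation results).
x_train = np.linspace(0.0, 1.0, 9)
y_train = np.sin(2 * np.pi * x_train)

def rbf_fit(x, y, eps=5.0):
    """Solve Phi @ w = y for Gaussian-RBF weights."""
    phi = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2)
    return np.linalg.solve(phi, y)

def rbf_predict(x_new, x, w, eps=5.0):
    phi = np.exp(-(eps * (x_new[:, None] - x[None, :])) ** 2)
    return phi @ w

w = rbf_fit(x_train, y_train)
y_hat = rbf_predict(np.array([0.30]), x_train, w)
print(y_hat)   # should be close to sin(0.6*pi) ≈ 0.951
```

A genetic algorithm (or any optimizer) can then search this cheap surrogate for the parameter values minimizing springback, as the paper does over three parameters.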
Gerhard, Felipe; Deger, Moritz; Truccolo, Wilson
2017-02-01
Point process generalized linear models (PP-GLMs) provide an important statistical framework for modeling spiking activity in single neurons and neuronal networks. Stochastic stability is essential when sampling from these models, as done in computational neuroscience to analyze statistical properties of neuronal dynamics and in neuro-engineering to implement closed-loop applications. Here we show, however, that despite passing common goodness-of-fit tests, PP-GLMs estimated from data are often unstable, leading to divergent firing rates. The inclusion of absolute refractory periods is not a satisfactory solution since the activity then typically settles into unphysiological rates. To address these issues, we derive a framework for determining the existence and stability of fixed points of the expected conditional intensity function (CIF) for general PP-GLMs. Specifically, in nonlinear Hawkes PP-GLMs, the CIF is expressed as a function of the previous spike history and exogenous inputs. We use a mean-field quasi-renewal (QR) approximation that decomposes spike history effects into the contribution of the last spike and an average of the CIF over all spike histories prior to the last spike. Fixed points for stationary rates are derived as self-consistent solutions of integral equations. Bifurcation analysis and the number of fixed points predict that the original models can show stable, divergent, and metastable (fragile) dynamics. For fragile models, fluctuations of the single-neuron dynamics predict expected divergence times after which rates approach unphysiologically high values. This metric can be used to estimate the probability of rates to remain physiological for given time periods, e.g., for simulation purposes. We demonstrate the use of the stability framework using simulated single-neuron examples and neurophysiological recordings. Finally, we show how to adapt PP-GLM estimation procedures to guarantee model stability. Overall, our results provide a
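The self-consistency condition on the expected CIF can be illustrated with a scalar caricature of a nonlinear Hawkes model with exponential link: the stationary rate r must satisfy r = exp(b + J·r), where J is the integrated history kernel. The parameters below are invented (J < 0 mimics net inhibitory history effects), and the scalar equation stands in for the paper's integral equations.

```python
import numpy as np

# Self-consistent stationary rate of a toy nonlinear Hawkes model:
# r = exp(b + J*r), with J < 0 (net inhibitory spike-history feedback).
b, J = 1.0, -0.5

def f(r):
    return np.exp(b + J * r)

r = 1.0
for _ in range(200):
    r = 0.5 * r + 0.5 * f(r)     # damped fixed-point iteration

stable = abs(J * f(r)) < 1.0     # |f'(r*)| < 1  =>  locally stable
print(r, stable)
```

With excitatory feedback (J > 0) the same equation can lose its fixed point or gain an unstable one, which is the one-dimensional analogue of the divergent and fragile regimes described in the abstract.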
The Initial Regression Statistical Characteristics of Intervals Between Zeros of Random Processes
Directory of Open Access Journals (Sweden)
V. K. Hohlov
2014-01-01
The article substantiates the initial regression statistical characteristics of intervals between zeros of realized random processes and studies the properties that allow the use of these features in autonomous information systems (AIS) of near location (NL). Coefficients of the initial regression (CIR) that minimize the residual sum of squares of multiple initial regression views are justified on the basis of vector representations associated with a random-vector notion of the analyzed signal parameters. It is shown that particular CIRs, even in the absence of covariance, make it possible to predict one random variable through another with respect to the deterministic components. The paper studies the dependence of the CIR of interval sizes between zeros of a narrowband wide-sense stationary random process on its energy spectrum. Particular CIRs for random processes with Gaussian and rectangular energy spectra are obtained. It is shown that the considered CIRs do not depend on the average frequency of the spectra, are determined by the relative bandwidth of the energy spectra, and depend only weakly on the type of spectrum. These properties enable the CIR's use as an informative parameter when implementing temporal regression methods of signal processing that are invariant to the average rate and variance of the input realizations. We consider estimates of the average energy spectrum frequency of a stationary random process obtained by calculating the length of the time interval corresponding to a specified number of intervals between zeros. It is shown that the relative variance in estimating the average energy spectrum frequency of a stationary random process with increasing relative bandwidth ceases to depend on the particular process realization once more than ten intervals between zeros are processed. The obtained results can be used in AIS NL to solve the tasks of detection and signal recognition, when a decision is made in conditions of unknown mathematical expectations on a limited observation
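The underlying estimator — recovering the average spectral frequency of a narrowband process from the mean interval between its zeros — can be sketched as follows. The synthetic signal and band are illustrative assumptions, not the article's data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Narrowband wide-sense stationary process: sum of sinusoids with
# random phases in a 4 Hz band around f0 (illustrative construction).
fs, f0, T = 1000.0, 50.0, 20.0
t = np.arange(0, T, 1 / fs)
freqs = f0 + np.linspace(-2, 2, 41)
phases = rng.uniform(0, 2 * np.pi, freqs.size)
x = np.cos(2 * np.pi * freqs[:, None] * t + phases[:, None]).sum(axis=0)

# The mean zero-crossing interval of a narrowband process is ~1/(2*f0),
# so the average frequency is estimated without any spectral transform.
zc = np.where(np.diff(np.sign(x)) != 0)[0]
intervals = np.diff(zc) / fs
f_est = 1.0 / (2.0 * intervals.mean())
print(f_est)   # close to 50 Hz
```

This is the sense in which interval-between-zeros statistics are invariant to the signal's amplitude scale: rescaling x leaves every zero crossing, and hence the estimate, unchanged.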
Randomized random walk on a random walk
International Nuclear Information System (INIS)
Lee, P.A.
1983-06-01
This paper discusses generalizations of the model introduced by Kehr and Kunter of the random walk of a particle on a one-dimensional chain which in turn has been constructed by a random walk procedure. The superimposed random walk is randomised in time according to the occurrences of a stochastic point process. The probability of finding the particle in a particular position at a certain instant is obtained explicitly in the transform domain. It is found that the asymptotic behaviour for large time of the mean-square displacement of the particle depends critically on the assumed structure of the basic random walk, giving a diffusion-like term for an asymmetric walk or a square root law if the walk is symmetric. Many results are obtained in closed form for the Poisson process case, and these agree with those given previously by Kehr and Kunter. (author)
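A minimal simulation of the model just described: the chain is itself a frozen random walk, and the superimposed walker steps between chain sites at the event times of a Poisson point process. All sizes and rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Basic chain: a one-dimensional random walk defining the site positions
n_sites = 10_001
steps = rng.choice([-1, 1], size=n_sites - 1)   # symmetric basic walk
sites = np.concatenate([[0], np.cumsum(steps)])

# Particle: a random walk on the chain indices, stepping at the event
# times of a Poisson process (rate 1), i.e. randomised in time.
t_max = 1000.0
n_events = rng.poisson(t_max)                   # number of steps by t_max
idx = n_sites // 2
for _ in range(n_events):
    idx += rng.choice([-1, 1])
pos = sites[idx]                                # particle position on the line
print(n_events, pos)
```

Averaging `pos**2` over many realizations of both walks would reproduce the asymmetric/symmetric distinction in the abstract: the particle's mean-square displacement grows diffusively on an asymmetric chain but only as a square root on a symmetric one.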
Valenza, G; Romigi, A; Citi, L; Placidi, F; Izzi, F; Albanese, M; Scilingo, E P; Marciani, M G; Duggento, A; Guerrisi, M; Toschi, N; Barbieri, R
2016-08-01
Symptoms of temporal lobe epilepsy (TLE) are frequently associated with autonomic dysregulation, whose underlying biological processes are thought to strongly contribute to sudden unexpected death in epilepsy (SUDEP). While abnormal cardiovascular patterns commonly occur during ictal events, putative patterns of autonomic cardiac effects during pre-ictal (PRE) periods (i.e. periods preceding seizures) are still unknown. In this study, we investigated TLE-related heart rate variability (HRV) through instantaneous, nonlinear estimates of cardiovascular oscillations during inter-ictal (INT) and PRE periods. ECG recordings from 12 patients with TLE were processed to extract standard HRV indices, as well as indices of instantaneous HRV complexity (dominant Lyapunov exponent and entropy) and higher-order statistics (bispectra) obtained through definition of inhomogeneous point-process nonlinear models, employing Volterra-Laguerre expansions of linear, quadratic, and cubic kernels. Experimental results demonstrate that the best INT vs. PRE classification performance (balanced accuracy: 73.91%) was achieved only when retaining the time-varying, nonlinear, and non-stationary structure of heartbeat dynamical features. The proposed approach opens novel important avenues in predicting ictal events using information gathered from cardiovascular signals exclusively.
2011-11-01
Using a multidisciplinary team approach, the University of California, San Diego, Health System has been able to significantly reduce average door-to-balloon angioplasty times for patients with the most severe form of heart attacks, beating national recommendations by more than a third. The multidisciplinary team meets monthly to review all cases involving patients with ST-segment-elevation myocardial infarctions (STEMI) to see where process improvements can be made. Using this continuous quality improvement (CQI) process, the health system has reduced average door-to-balloon times from 120 minutes to less than 60 minutes, and administrators are now aiming for further progress. Among the improvements instituted by the multidisciplinary team are the implementation of a "greeter" with enough clinical expertise to quickly pick up on potential STEMI heart attacks as soon as patients walk into the ED, and the purchase of an electrocardiogram (EKG) machine so that evaluations can be done in the triage area. ED staff have prepared "STEMI" packets, including items such as special IV tubing and disposable leads, so that patients headed for the catheterization laboratory are prepared to undergo the procedure soon after arrival. All the clocks and devices used in the ED are synchronized so that analysts can later review how long it took to complete each step of the care process. Points of delay can then be targeted for improvement.
Directory of Open Access Journals (Sweden)
Zhe eChen
2012-02-01
In recent years, time-varying inhomogeneous point process models have been introduced for assessment of instantaneous heartbeat dynamics as well as specific cardiovascular control mechanisms and hemodynamics. Assessment of the model's statistics is established through the Wiener-Volterra theory and a multivariate autoregressive (AR) structure. A variety of instantaneous cardiovascular metrics, such as heart rate (HR), heart rate variability (HRV), respiratory sinus arrhythmia (RSA), and baroreceptor-cardiac reflex (baroreflex) sensitivity (BRS), are derived within a parametric framework and instantaneously updated with adaptive and local maximum likelihood estimation algorithms. Inclusion of second order nonlinearities, with subsequent bispectral quantification in the frequency domain, further allows for definition of instantaneous metrics of nonlinearity. We here organize a comprehensive review of the devised methods as applied to experimental recordings from healthy subjects during propofol anesthesia. Collective results reveal interesting dynamic trends across the different pharmacological interventions operated within each anesthesia session, confirming the ability of the algorithm to track important changes in cardiorespiratory elicited interactions, and pointing at our mathematical approach as a promising monitoring tool for an accurate, noninvasive assessment in clinical practice.
Energy Technology Data Exchange (ETDEWEB)
Tsybulevski, A.M.; Pearson, M. [Alcoa Industrial Chemicals, 16010 Barker's Point Lane, Houston, TX (United States); Morgun, L.V.; Filatova, O.E. [All-Russian Research Institute of Natural Gases and Gas Technologies VNIIGAZ, Moscow (Russian Federation); Sharp, M. [Porocel Corporation, Westheimer, Houston, TX (United States)
1996-10-08
The efficiency of 4 samples of alumina catalyst has been studied experimentally in the course of the Claus 'tail gas' treating process at the sulphur sub-dew point (TGTP). The samples were characterized by the same chemical and crystallographic composition, the same volume of micropores, the same surface area and the same catalytic activity, but differed appreciably in the volume of macropores. An increase in the effective operation time of the catalysts before breakthrough of unrecoverable sulphur-containing compounds with increasing macropore volume has been established. A theoretical model of the TGTP has been considered, and it has been shown that the increase in the sulphur capacity of the catalysts with a larger volume of macropores is due to an increase in the catalysts' efficiency factor and a slower decrease in their diffusive permeability during the filling of micropores by sulphur.
Quantification of annual wildfire risk: A spatio-temporal point process approach.
Directory of Open Access Journals (Sweden)
Paula Pereira
2013-10-01
Policy responses for local and global fire management depend heavily on a proper understanding of the fire extent as well as its spatio-temporal variation across any given study area. Annual fire risk maps are important tools for such policy responses, supporting strategic decisions such as the location-allocation of equipment and human resources. Here, we define risk of fire in the narrow sense as the probability of its occurrence, without addressing the loss component. In this paper, we study the spatio-temporal point patterns of wildfires and model them by a log-Gaussian Cox process. The mean of the predictive distribution of the random intensity function is used, in the narrow sense, as the annual fire risk map for the next year.
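Conditional on a realization of its random intensity, a log-Gaussian Cox process is an inhomogeneous Poisson process, which can be sampled by Lewis-Shedler thinning. In the sketch below a fixed Gaussian bump stands in for one intensity realization; nothing here is from the paper's data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Lewis–Shedler thinning: sample an inhomogeneous Poisson process on
# the unit square for a given (here fixed, illustrative) intensity.
def intensity(x, y):
    return 200.0 * np.exp(-((x - 0.7) ** 2 + (y - 0.3) ** 2) / 0.05)

lam_max = 200.0                       # upper bound of the intensity
n = rng.poisson(lam_max)              # homogeneous candidates on the square
xy = rng.uniform(size=(n, 2))
keep = rng.uniform(size=n) < intensity(xy[:, 0], xy[:, 1]) / lam_max
points = xy[keep]
print(len(points), "fires retained of", n, "candidates")
```

In the full LGCP the log-intensity itself would be drawn from a Gaussian random field, and the risk map in the abstract is the posterior mean of that intensity rather than a single fixed surface.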
The (n,$\alpha$) reaction in the s-process branching point $^{59}$Ni
We propose to measure the $^{59}$Ni(n,$\alpha$)$^{56}$Fe cross section at the neutron time-of-flight (n_TOF) facility with a dedicated chemical vapor deposition (CVD) diamond detector. The (n,$\alpha$) reaction in radioactive $^{59}$Ni is of relevance in nuclear astrophysics, as it can be seen as a first branching point in the astrophysical s-process. Its relevance in nuclear technology is especially related to material embrittlement in stainless steel. There is a strong discrepancy between available experimental data and the evaluated nuclear data files for this isotope. The aim of the measurement is to clarify this disagreement. The clear energy separation of the reaction products of neutron-induced reactions in $^{59}$Ni makes it a very suitable candidate for a first cross section measurement with the CVD diamond detector, which should serve in the future for similar measurements at n_TOF.
Aydin, Orhun; Caers, Jef Karel
2017-08-01
Faults are one of the building blocks for subsurface modeling studies. Incomplete observations of subsurface fault networks lead to uncertainty pertaining to the location, geometry and existence of faults. In practice, gaps in incomplete fault network observations are filled based on tectonic knowledge and the interpreter's intuition pertaining to fault relationships. Modeling fault network uncertainty with realistic models that represent tectonic knowledge is still a challenge. Although methods that address specific sources of fault network uncertainty and complexities of fault modeling exist, a unifying framework is still lacking. In this paper, we propose a rigorous approach to quantify fault network uncertainty. Fault pattern and intensity information are expressed by means of a marked point process, the marked Strauss point process. Fault network information is constrained to fault surface observations (complete or partial) within a Bayesian framework. A structural prior model is defined to quantitatively express fault patterns, geometries and relationships within the Bayesian framework. Structural relationships between faults, in particular fault abutting relations, are represented with a level-set based approach. A Markov chain Monte Carlo sampler is used to sample posterior fault network realizations that reflect tectonic knowledge and honor fault observations. We apply the methodology to a field study from the Nankai Trough & Kumano Basin. The target for uncertainty quantification is a deep site with attenuated seismic data with only partially visible faults and many faults missing from the survey or interpretation. A structural prior model is built from shallow analog sites that are believed to have undergone similar tectonics compared to the site of study. Fault network uncertainty for the field is quantified with fault network realizations that are conditioned to structural rules, tectonic information and partially observed fault surfaces. We show the proposed
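The unmarked Strauss backbone of such a prior can be sampled with a standard birth-death Metropolis-Hastings algorithm driven by the Papangelou conditional intensity β·γ^t(u,x), where t(u,x) counts points within distance r of u. The parameters below are illustrative, and the marks, level sets and data conditioning of the paper are omitted.

```python
import random

random.seed(5)

# Birth-death MH for a Strauss process on the unit square:
# density ∝ beta^n * gamma^{s(x)}, s(x) = number of pairs closer than r.
beta, gamma, r = 100.0, 0.3, 0.05

def n_close(p, pts):
    return sum((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 < r * r for q in pts)

pts = []
for _ in range(20000):
    if random.random() < 0.5 or not pts:            # propose a birth
        u = (random.random(), random.random())
        papangelou = beta * gamma ** n_close(u, pts)
        if random.random() < papangelou / (len(pts) + 1):
            pts.append(u)
    else:                                           # propose a death
        i = random.randrange(len(pts))
        rest = pts[:i] + pts[i + 1:]
        papangelou = beta * gamma ** n_close(pts[i], rest)
        if random.random() < len(pts) / papangelou:
            pts.pop(i)

print(len(pts), "points after burn-in")
```

Because γ < 1 penalizes close pairs, the equilibrium pattern is more regular (inhibited) than a Poisson pattern with the same β, which is how the prior encodes spacing rules between faults.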
Timing of the Crab pulsar III. The slowing down and the nature of the random process
International Nuclear Information System (INIS)
Groth, E.J.
1975-01-01
The Crab pulsar arrival times are analyzed. The data are found to be consistent with a smooth slowing down with a braking index of 2.515 ± 0.005. Superposed on the smooth slowdown is a random process which has the same second moments as a random walk in the frequency. The strength of the random process is R⟨Δν²⟩ = 0.53 (+0.24, −0.12) × 10⁻²² Hz² s⁻¹, where R is the mean rate of steps and ⟨Δν²⟩ is the second moment of the step amplitude distribution. Neither the braking index nor the strength of the random process shows evidence of statistically significant time variations, although small fluctuations in the braking index and rather large fluctuations in the noise strength cannot be ruled out. There is a possibility that the random process contains a small component with the same second moments as a random walk in the phase. If so, a time scale of 3.5 days is indicated
Post-processing Free Quantum Random Number Generator Based on Avalanche Photodiode Array
International Nuclear Information System (INIS)
Li Yang; Liao Sheng-Kai; Liang Fu-Tian; Shen Qi; Liang Hao; Peng Cheng-Zhi
2016-01-01
Quantum random number generators adopting single photon detection have been restricted due to the non-negligible dead time of avalanche photodiodes (APDs). We propose a new approach based on an APD array to improve the generation rate of random numbers significantly. This method compares the detectors' responses to consecutive optical pulses and generates the random sequence. We implement a demonstration experiment to show its simplicity, compactness and scalability. The generated numbers are proved to be unbiased, post-processing free, ready to use, and their randomness is verified by using the National Institute of Standards and Technology (NIST) statistical test suite. The random bit generation efficiency is as high as 32.8% and the potential generation rate adopting the 32 × 32 APD array is up to tens of Gbits/s. (paper)
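The comparison scheme can be caricatured per pixel: even if the click probability per pulse is far from 1/2, keeping only pulse pairs with unequal responses yields unbiased bits (the von Neumann argument). This is a statistical sketch with an assumed click probability, not the paper's hardware model.

```python
import random

random.seed(6)

# Each APD pixel clicks on a pulse with probability p (p need not be 1/2).
# Comparing responses to two consecutive pulses and keeping only unequal
# outcomes yields unbiased bits, with no further post-processing.
p = 0.17                       # biased per-pulse click probability
n_pairs = 200_000

bits = []
for _ in range(n_pairs):
    a = random.random() < p    # response to pulse 1
    b = random.random() < p    # response to pulse 2
    if a != b:
        bits.append(int(a))    # (click, no-click) -> 1, reverse -> 0

ones = sum(bits) / len(bits)
print(len(bits), "bits, fraction of ones =", round(ones, 4))
```

The price of unbiasedness is the discard rate: a fraction 2p(1−p) of pulse pairs yields a bit, which is why an array of many pixels raises the aggregate rate.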
Directory of Open Access Journals (Sweden)
Zhiqiang Yang
2016-05-01
Due to the dynamic process of maximum power point tracking (MPPT) caused by turbulence and large rotor inertia, variable-speed wind turbines (VSWTs) cannot maintain the optimal tip speed ratio (TSR) from cut-in wind speed up to the rated speed. Therefore, in order to increase the total captured wind energy, the existing aerodynamic design for VSWT blades, which only focuses on performance improvement at a single TSR, needs to be improved to a multi-point design. In this paper, based on a closed-loop system of VSWTs, including turbulent wind, rotor, drive train and MPPT controller, the distribution of the operational TSR and its description based on inflow wind energy are investigated. Moreover, a multi-point method considering the MPPT dynamic process for the aerodynamic optimization of VSWT blades is proposed. In the proposed method, the distribution of the operational TSR is obtained through a dynamic simulation of the closed-loop system under a specific turbulent wind, and accordingly the multiple design TSRs and the corresponding weighting coefficients in the objective function are determined. Finally, using the blade of a National Renewable Energy Laboratory (NREL) 1.5 MW wind turbine as the baseline, the proposed method is compared with the conventional single-point optimization method using the commercial software Bladed. Simulation results verify the effectiveness of the proposed method.
Efficient tests for equivalence of hidden Markov processes and quantum random walks
U. Faigle; A. Schönhuth (Alexander)
2011-01-01
While two hidden Markov process (HMP) resp. quantum random walk (QRW) parametrizations can differ from one another, the stochastic processes arising from them can be equivalent. Here a polynomial-time algorithm is presented which can determine the equivalence of two HMP parametrizations.
Insights into mortality patterns and causes of death through a process point of view model.
Anderson, James J; Li, Ting; Sharrow, David J
2017-02-01
Process point of view (POV) models of mortality, such as the Strehler-Mildvan and stochastic vitality models, represent death in terms of the loss of survival capacity through challenges and dissipation. Drawing on hallmarks of aging, we link these concepts to candidate biological mechanisms through a framework that defines death as challenges to vitality, where distal factors define the age evolution of vitality and proximal factors define the probability distribution of challenges. To illustrate the process POV, we hypothesize that the immune system is a mortality nexus, characterized by two vitality streams: increasing vitality representing immune system development and immunosenescence representing vitality dissipation. Proximal challenges define three mortality partitions: juvenile and adult extrinsic mortalities and intrinsic adult mortality. Model parameters, generated from Swedish mortality data (1751-2010), exhibit biologically meaningful correspondences to economic, health and cause-of-death patterns. The model characterizes the twentieth century epidemiological transition mainly as a reduction in extrinsic mortality resulting from a shift from high magnitude disease challenges on individuals at all vitality levels to low magnitude stress challenges on low vitality individuals. Of secondary importance, intrinsic mortality was described by a gradual reduction in the rate of loss of vitality, presumably resulting from a reduction in the rate of immunosenescence. Extensions and limitations of a distal/proximal framework for characterizing more explicit causes of death, e.g. the young adult mortality hump or cancer in old age, are discussed.
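The process POV can be sketched with a toy stochastic-vitality model: vitality declines linearly with age (the distal side), while Poisson-timed challenges with exponentially distributed magnitudes (the proximal side) cause death when they exceed remaining vitality. All parameters are invented for illustration, not fitted to the Swedish series.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy process-POV mortality model: linear vitality loss + random challenges.
n = 10_000
v0, decline = 1.0, 1.0 / 90.0               # vitality exhausted by age 90
challenge_rate, challenge_scale = 0.5, 0.08 # Poisson challenges, exp. sizes

ages = np.zeros(n)
for i in range(n):
    t = 0.0
    while True:
        t += rng.exponential(1.0 / challenge_rate)   # next challenge time
        vitality = v0 - decline * t
        if vitality <= 0 or rng.exponential(challenge_scale) > vitality:
            ages[i] = min(t, v0 / decline)           # age at death
            break

print(ages.mean())
```

Because the challenge sizes are small relative to early-life vitality, the hazard is negligible at young ages and rises steeply late in life, reproducing the Gompertz-like shape that these POV models generate.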
The neutron capture cross section of the ${s}$-process branch point isotope $^{63}$Ni
Neutron capture nucleosynthesis in massive stars plays an important role in Galactic chemical evolution as well as in the analysis of abundance patterns in very old metal-poor halo stars. The so-called weak ${s}$-process component, which is responsible for most of the ${s}$ abundances between Fe and Sr, turned out to be very sensitive to the stellar neutron capture cross sections in this mass region and, in particular, to those of isotopes near the seed distribution around Fe. In this context, the unstable isotope $^{63}$Ni is of particular interest because it represents the first branching point in the reaction path of the ${s}$-process. We propose to measure this cross section at n_TOF from thermal energies up to 500 keV, covering the entire range of astrophysical interest. These data are needed to replace uncertain theoretical predictions by first experimental information, to understand the consequences of the $^{63}$Ni branching for the abundance pattern of the subsequent isotopes, especially for $^{63}$Cu and $^{...
High-Performance Pseudo-Random Number Generation on Graphics Processing Units
Nandapalan, Nimalan; Brent, Richard P.; Murray, Lawrence M.; Rendell, Alistair
2011-01-01
This work considers the deployment of pseudo-random number generators (PRNGs) on graphics processing units (GPUs), developing an approach based on the xorgens generator to rapidly produce pseudo-random numbers of high statistical quality. The chosen algorithm has configurable state size and period, making it ideal for tuning to the GPU architecture. We present a comparison of both speed and statistical quality with other common parallel, GPU-based PRNGs, demonstrating favourable performance o...
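The xorgens family builds on Marsaglia-style xorshift recurrences. As a rough illustration of the underlying idea (not the actual xorgens algorithm, whose state size, shift constants, and Weyl-sequence combination differ), a minimal 64-bit xorshift generator can be sketched as:

```python
def xorshift64_stream(seed):
    """Minimal Marsaglia xorshift64 generator (shift triple 13, 7, 17).

    Illustrative only: the real xorgens generators use larger configurable
    state and combine the xorshift output with a Weyl sequence.
    """
    mask = (1 << 64) - 1
    x = seed & mask
    if x == 0:
        raise ValueError("seed must be nonzero")
    while True:
        x ^= (x << 13) & mask   # left shift, truncated to 64 bits
        x ^= x >> 7             # right shift
        x ^= (x << 17) & mask   # left shift, truncated to 64 bits
        yield x
```

On a GPU, each thread would run an independently seeded copy of such a recurrence; the statistical quality referred to in the abstract is then assessed with batteries such as TestU01.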
Evolution of the vortex phase diagram in YBa2Cu3O7-δ with random point disorder
International Nuclear Information System (INIS)
Paulius, L. M.; Kwok, W.-K.; Olsson, R. J.; Petrean, A. M.; Tobos, V.; Fendrich, J. A.; Crabtree, G. W.; Burns, C. A.; Ferguson, S.
2000-01-01
We demonstrate the gradual evolution of the first-order vortex melting transition into a continuous transition with the systematic addition of point disorder induced by proton irradiation. The evolution occurs via the decrease of the upper critical point and the increase of the lower critical point. The collapse of the first-order melting transition occurs when the two critical points merge. We compare these results with the effects of electron irradiation on the first-order transition. (c) 2000 The American Physical Society
Rare event simulation for processes generated via stochastic fixed point equations
DEFF Research Database (Denmark)
Collamore, Jeffrey F.; Diao, Guoqing; Vidyashankar, Anand N.
2014-01-01
In a number of applications, particularly in financial and actuarial mathematics, it is of interest to characterize the tail distribution of a random variable V satisfying the distributional equation V=_D f(V), for some random function f. This paper is concerned with computational methods for eva...
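The canonical example of the distributional equation V =_D f(V) is the perpetuity V =_D AV + B. As a hedged sketch (with hypothetical distributional choices for A and B, not taken from the paper), a naive forward-iteration Monte Carlo estimate of the tail P(V > u), which is exactly the kind of plain simulation that rare-event methods are designed to improve upon, might look like:

```python
import math
import random

def simulate_perpetuity(n_paths=2000, n_iter=80, seed=1):
    """Naive Monte Carlo for the perpetuity V =_D A*V + B.

    Illustrative assumptions: A is lognormal with E[log A] < 0 (so the
    iteration converges) and B is unit exponential.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n_paths):
        v = 0.0
        for _ in range(n_iter):
            a = math.exp(rng.gauss(-0.1, 0.3))  # multiplicative factor A
            b = rng.expovariate(1.0)            # additive innovation B
            v = a * v + b
        samples.append(v)
    return samples

def tail_probability(samples, u):
    """Crude estimate of P(V > u); inefficient when u is deep in the tail."""
    return sum(v > u for v in samples) / len(samples)
```

Because the tail of V is typically heavy (power-law), the relative error of this crude estimator blows up for large u, motivating the importance-sampling approach of the paper.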
Scargle, Jeffrey D.
1990-01-01
While chaos arises only in nonlinear systems, standard linear time series models are nevertheless useful for analyzing data from chaotic processes. This paper introduces such a model, the chaotic moving average. This time-domain model is based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation. A technique, minimum phase-volume deconvolution, is introduced to estimate the filter and innovation. The algorithm measures the quality of a model using the volume covered by the phase-portrait of the innovation process. Experiments on synthetic data demonstrate that the algorithm accurately recovers the parameters of simple chaotic processes. Though tailored for chaos, the algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present. It can also recover nonminimum-delay pulse shapes in non-Gaussian processes, both random and chaotic.
Reiter, P; Blazhev, A A; Nardelli, S; Voulot, D; Habs, D; Schwerdtfeger, W; Iwanicki, J S
We propose to investigate the nucleus $^{128}$Cd neighbouring the r-process "waiting point" $^{130}$Cd. A possible explanation for the peak in the solar r-abundances at A $\approx$ 130 is a quenching of the N = 82 shell closure for spherical nuclei below $^{132}$Sn. This explanation seems to be in agreement with recent $\beta$-decay measurements performed at ISOLDE. In contrast to this picture, a beyond-mean-field approach would explain the anomaly in the excitation energy observed for $^{128}$Cd rather with a quite large quadrupole collectivity. Therefore, we propose to measure the reduced transition strengths B(E2) between the ground state and the first excited 2$^{+}$ state in $^{128}$Cd applying $\gamma$-spectroscopy with MINIBALL after "safe" Coulomb excitation of a post-accelerated beam obtained from REX-ISOLDE. Such a measurement came within reach only because of the source developments made in 2006 for experiment IS411, in particular the use of a heated quartz transfer line. The result from the proposed measure...
Process-based coastal erosion modeling for Drew Point (North Slope, Alaska)
Ravens, Thomas M.; Jones, Benjamin M.; Zhang, Jinlin; Arp, Christopher D.; Schmutz, Joel A.
2012-01-01
A predictive, coastal erosion/shoreline change model has been developed for a small coastal segment near Drew Point, Beaufort Sea, Alaska. This coastal setting has experienced a dramatic increase in erosion since the early 2000’s. The bluffs at this site are 3-4 m tall and consist of ice-wedge bounded blocks of fine-grained sediments cemented by ice-rich permafrost and capped with a thin organic layer. The bluffs are typically fronted by a narrow (∼ 5 m wide) beach or none at all. During a storm surge, the sea contacts the base of the bluff and a niche is formed through thermal and mechanical erosion. The niche grows both vertically and laterally and eventually undermines the bluff, leading to block failure or collapse. The fallen block is then eroded both thermally and mechanically by waves and currents, which must occur before a new niche forming episode may begin. The erosion model explicitly accounts for and integrates a number of these processes including: (1) storm surge generation resulting from wind and atmospheric forcing, (2) erosional niche growth resulting from wave-induced turbulent heat transfer and sediment transport (using the Kobayashi niche erosion model), and (3) thermal and mechanical erosion of the fallen block. The model was calibrated with historic shoreline change data for one time period (1979-2002), and validated with a later time period (2002-2007).
A marked point process approach for identifying neural correlates of tics in Tourette Syndrome.
Loza, Carlos A; Shute, Jonathan B; Principe, Jose C; Okun, Michael S; Gunduz, Aysegul
2017-07-01
We propose a novel interpretation of local field potentials (LFP) based on a marked point process (MPP) framework that models relevant neuromodulations as shifted weighted versions of prototypical temporal patterns. Particularly, the MPP samples are categorized according to the well-known oscillatory rhythms of the brain in an effort to elucidate spectrally specific behavioral correlates. The result is a transient model for LFP. We exploit data-driven techniques to fully estimate the model parameters with the added feature of exceptional temporal resolution of the resulting events. We utilize the learned features in the alpha and beta bands to assess correlations to tic events in patients with Tourette Syndrome (TS). The final results show stronger coupling between LFP recorded from the centromedian-parafascicular complex of the thalamus and the tic marks, in comparison to electrocorticogram (ECoG) recordings from the hand area of the primary motor cortex (M1), in terms of the area under the curve (AUC) of the receiver operating characteristic (ROC).
Zubair, Mohammad; Nielsen, Eric; Luitjens, Justin; Hammond, Dana
2016-01-01
In the field of computational fluid dynamics, the Navier-Stokes equations are often solved using an unstructured-grid approach to accommodate geometric complexity. Implicit solution methodologies for such spatial discretizations generally require frequent solution of large tightly-coupled systems of block-sparse linear equations. The multicolor point-implicit solver used in the current work typically requires a significant fraction of the overall application run time. In this work, an efficient implementation of the solver for graphics processing units is proposed. Several factors present unique challenges to achieving an efficient implementation in this environment. These include the variable amount of parallelism available in different kernel calls, indirect memory access patterns, low arithmetic intensity, and the requirement to support variable block sizes. In this work, the solver is reformulated to use standard sparse and dense Basic Linear Algebra Subprograms (BLAS) functions. However, numerical experiments show that the performance of the BLAS functions available in existing CUDA libraries is suboptimal for matrices representative of those encountered in actual simulations. Instead, optimized versions of these functions are developed. Depending on block size, the new implementations show performance gains of up to 7x over the existing CUDA library functions.
Plasmon point spread functions: How do we model plasmon-mediated emission processes?
Willets, Katherine A.
2014-02-01
A major challenge with studying plasmon-mediated emission events is the small size of plasmonic nanoparticles relative to the wavelength of light. Objects smaller than roughly half the wavelength of light will appear as diffraction-limited spots in far-field optical images, presenting a significant experimental challenge for studying plasmonic processes on the nanoscale. Super-resolution imaging has recently been applied to plasmonic nanosystems and allows plasmon-mediated emission to be resolved on the order of ˜5 nm. In super-resolution imaging, a diffraction-limited spot is fit to some model function in order to calculate the position of the emission centroid, which represents the location of the emitter. However, the accuracy of the centroid position strongly depends on how well the fitting function describes the data. This Perspective discusses the commonly used two-dimensional Gaussian fitting function applied to super-resolution imaging of plasmon-mediated emission, then introduces an alternative model based on dipole point spread functions. The two fitting models are compared and contrasted for super-resolution imaging of nanoparticle scattering/luminescence, surface-enhanced Raman scattering, and surface-enhanced fluorescence.
ISRIA statement: ten-point guidelines for an effective process of research impact assessment.
Adam, Paula; Ovseiko, Pavel V; Grant, Jonathan; Graham, Kathryn E A; Boukhris, Omar F; Dowd, Anne-Maree; Balling, Gert V; Christensen, Rikke N; Pollitt, Alexandra; Taylor, Mark; Sued, Omar; Hinrichs-Krapels, Saba; Solans-Domènech, Maite; Chorzempa, Heidi
2018-02-08
As governments, funding agencies and research organisations worldwide seek to maximise both the financial and non-financial returns on investment in research, the way the research process is organised and funded is coming under increasing scrutiny. There are growing demands and aspirations to measure research impact (beyond academic publications), to understand how science works, and to optimise its societal and economic impact. In response, a multidisciplinary practice called research impact assessment is rapidly developing. Given that the practice is still in its formative stage, systematised recommendations or accepted standards for practitioners (such as funders and those responsible for managing research projects) across countries or disciplines to guide research impact assessment are not yet available. In this statement, we propose initial guidelines for a rigorous and effective process of research impact assessment applicable to all research disciplines and oriented towards practice. This statement systematises expert knowledge and practitioner experience from designing and delivering the International School on Research Impact Assessment (ISRIA). It brings together insights from over 450 experts and practitioners from 34 countries, who participated in the school during its 5-year run (from 2013 to 2017) and shares a set of core values from the school's learning programme. These insights are distilled into ten-point guidelines, which relate to (1) context, (2) purpose, (3) stakeholders' needs, (4) stakeholder engagement, (5) conceptual frameworks, (6) methods and data sources, (7) indicators and metrics, (8) ethics and conflicts of interest, (9) communication, and (10) community of practice. The guidelines can help practitioners improve and standardise the process of research impact assessment, but they are by no means exhaustive and require evaluation and continuous improvement. The prima facie effectiveness of the guidelines is based on the systematised
International Nuclear Information System (INIS)
Williams, M.M.R.
2007-01-01
Description: Prof. M.M.R. Williams has now released three of his legacy books for free distribution: 1 - M.M.R. Williams: The Slowing Down and Thermalization of Neutrons, North-Holland Publishing Company - Amsterdam, 582 pages, 1966. Content: Part I - The Thermal Energy Region: 1. Introduction and Historical Review, 2. The Scattering Kernel, 3. Neutron Thermalization in an Infinite Homogeneous Medium, 4. Neutron Thermalization in Finite Media, 5. The Spatial Dependence of the Energy Spectrum, 6. Reactor Cell Calculations, 7. Synthetic Scattering Kernels. Part II - The Slowing Down Region: 8. Scattering Kernels in the Slowing Down Region, 9. Neutron Slowing Down in an Infinite Homogeneous Medium, 10. Neutron Slowing Down and Diffusion. 2 - M.M.R. Williams: Mathematical Methods in Particle Transport Theory, Butterworths, London, 430 pages, 1971. Content: 1 The General Problem of Particle Transport, 2 The Boltzmann Equation for Gas Atoms and Neutrons, 3 Boundary Conditions, 4 Scattering Kernels, 5 Some Basic Problems in Neutron Transport and Rarefied Gas Dynamics, 6 The Integral Form of the Transport Equation in Plane, Spherical and Cylindrical Geometries, 7 Exact Solutions of Model Problems, 8 Eigenvalue Problems in Transport Theory, 9 Collision Probability Methods, 10 Variational Methods, 11 Polynomial Approximations. 3 - M.M.R. Williams: Random Processes in Nuclear Reactors, Pergamon Press, Oxford/New York/Toronto/Sydney, 243 pages, 1974. Content: 1. Historical Survey and General Discussion, 2. Introductory Mathematical Treatment, 3. Applications of the General Theory, 4. Practical Applications of the Probability Distribution, 5. The Langevin Technique, 6. Point Model Power Reactor Noise, 7. The Spatial Variation of Reactor Noise, 8. Random Phenomena in Heterogeneous Reactor Systems, 9. Associated Fluctuation Problems, Appendix: Noise Equivalent Sources. Note to the user: Prof. M.M.R. Williams owns the copyright of these books and he authorises the OECD/NEA Data Bank
Point process models for localization and interdependence of punctate cellular structures.
Li, Ying; Majarian, Timothy D; Naik, Armaghan W; Johnson, Gregory R; Murphy, Robert F
2016-07-01
Accurate representations of cellular organization for multiple eukaryotic cell types are required for creating predictive models of dynamic cellular function. To this end, we have previously developed the CellOrganizer platform, an open source system for generative modeling of cellular components from microscopy images. CellOrganizer models capture the inherent heterogeneity in the spatial distribution, size, and quantity of different components among a cell population. Furthermore, CellOrganizer can generate quantitatively realistic synthetic images that reflect the underlying cell population. A current focus of the project is to model the complex, interdependent nature of organelle localization. We built upon previous work on developing multiple non-parametric models of organelles or structures that show punctate patterns. The previous models described the relationships between the subcellular localization of puncta and the positions of cell and nuclear membranes and microtubules. We extend these models to consider the relationship to the endoplasmic reticulum (ER), and to consider the relationship between the positions of different puncta of the same type. Our results do not suggest that the punctate patterns we examined are dependent on ER position or inter- and intra-class proximity. With these results, we built classifiers to update previous assignments of proteins to one of 11 patterns in three distinct cell lines. Our generative models demonstrate the ability to construct statistically accurate representations of puncta localization from simple cellular markers in distinct cell types, capturing the complex phenomena of cellular structure interaction with little human input. This protocol represents a novel approach to vesicular protein annotation, a field that is often neglected in high-throughput microscopy. These results suggest that spatial point process models provide useful insight with respect to the spatial dependence between cellular structures.
Smith, Toni M.; Hjalmarson, Margret A.
2013-01-01
The purpose of this study is to examine prospective mathematics specialists' engagement in an instructional sequence designed to elicit and develop their understandings of random processes. The study was conducted with two different sections of a probability and statistics course for K-8 teachers. Thirty-two teachers participated. Video analyses…
Setting up a randomized clinical trial in the UK: approvals and process.
Greene, Louise Eleanor; Bearn, David R
2013-06-01
Randomized clinical trials are considered the 'gold standard' in primary research for healthcare interventions. However, they can be expensive and time-consuming to set up and require many approvals to be in place before they can begin. This paper outlines how to determine what approvals are required for a trial, the background of each approval and the process for obtaining them.
This randomized, double-blinded, clinical trial assessed the effect of high hydrostatic pressure processing (HPP) on genogroup I.1 human norovirus (HuNoV) inactivation in virus-seeded oysters when ingested by subjects. The safety and efficacy of HPP treatments were assessed in three study phases wi...
International Nuclear Information System (INIS)
Tsallis, C.; Santos, R.J.V. dos
1983-01-01
On conjectural grounds, an equation that provides a very good approximation for the critical temperature of the fully-anisotropic homogeneous quenched bond-random q-state Potts ferromagnet in triangular and honeycomb lattices is presented. Almost all the exact particular results presently known for the square, triangular and honeycomb lattices are recovered; the numerical discrepancy is quite small for the few exceptions. Some predictions that we believe to be exact are made explicit as well. (Author) [pt
Bubble point pressures of the selected model system for CatLiq® bio-oil process
DEFF Research Database (Denmark)
Toor, Saqib Sohail; Rosendahl, Lasse; Baig, Muhammad Noman
2010-01-01
The CatLiq® process is a second generation catalytic liquefaction process for the production of bio-oil from WDGS (Wet Distillers Grains with Solubles) at subcritical conditions (280-350 °C and 225-250 bar) in the presence of a homogeneous alkaline and a heterogeneous zirconia catalyst. In this work, the bubble point pressures of a selected model mixture (CO2 + H2O + ethanol + acetic acid + octanoic acid) were measured to investigate the phase boundaries of the CatLiq® process. The bubble points were measured in the JEFRI-DBR high-pressure PVT phase behavior system. The experimental results...
Time at which the maximum of a random acceleration process is reached
International Nuclear Information System (INIS)
Majumdar, Satya N; Rosso, Alberto; Zoia, Andrea
2010-01-01
We study the random acceleration model, which is perhaps one of the simplest, yet nontrivial, non-Markov stochastic processes, and is key to many applications. For this non-Markov process, we present exact analytical results for the probability density p(t_m|T) of the time t_m at which the process reaches its maximum, within a fixed time interval [0, T]. We study two different boundary conditions, which correspond to the process representing respectively (i) the integral of a Brownian bridge and (ii) the integral of a free Brownian motion. Our analytical results are also verified by numerical simulations.
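The density p(t_m|T) can be explored numerically by integrating the process on a grid. A minimal sketch for case (ii), where the velocity is a free Brownian motion, under a simple Euler discretization:

```python
import math
import random

def argmax_time(T=1.0, n_steps=500, seed=0):
    """Simulate the random acceleration process x(t) = integral of Brownian
    motion (case (ii): free Brownian velocity) and return the time t_m at
    which x attains its maximum on [0, T]. Euler scheme; illustrative only."""
    rng = random.Random(seed)
    dt = T / n_steps
    v = x = 0.0
    best_x, best_t = 0.0, 0.0   # x(0) = 0 is the initial candidate maximum
    for i in range(1, n_steps + 1):
        v += rng.gauss(0.0, math.sqrt(dt))  # velocity is Brownian motion
        x += v * dt                          # position integrates the velocity
        if x > best_x:
            best_x, best_t = x, i * dt
    return best_t
```

Histogramming `argmax_time` over many independent seeds approximates p(t_m|T) and can be compared against the exact formulas of the paper.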
Adhikari, S; Biswas, A; Bandyopadhyay, T K; Ghosh, P D
2014-06-01
Pointed gourd (Trichosanthes dioica Roxb.) is an economically important cucurbit and is extensively propagated through vegetative means, viz. vine and root cuttings. As the accessions are poorly characterized, it is important at the beginning of a breeding programme to discriminate among available genotypes to establish the level of genetic diversity. The genetic diversity of 10 pointed gourd races, referred to as accessions, was evaluated. DNA profiles were generated using 10 sequence-independent RAPD markers. A total of 58 scorable loci were observed, of which 18 (31.03%) were polymorphic. Genetic diversity parameters [average and effective number of alleles, Shannon's index, percent polymorphism, Nei's gene diversity, polymorphic information content (PIC)] for RAPD, along with UPGMA clustering based on Jaccard's coefficient, were estimated. The UPGMA dendrogram constructed from the RAPD analysis grouped the 10 pointed gourd accessions in a single cluster, and they may represent members of one heterotic group. RAPD analysis showed promise as an effective tool for estimating genetic polymorphism in different accessions of pointed gourd.
Daniels, Marcus G.; Farmer, J. Doyne; Gillemot, László; Iori, Giulia; Smith, Eric
2003-03-01
We model trading and price formation in a market under the assumption that order arrival and cancellations are Poisson random processes. This model makes testable predictions for the most basic properties of markets, such as the diffusion rate of prices (which is the standard measure of financial risk) and the spread and price impact functions (which are the main determinants of transaction cost). Guided by dimensional analysis, simulation, and mean-field theory, we find scaling relations in terms of order flow rates. We show that even under completely random order flow the need to store supply and demand to facilitate trading induces anomalous diffusion and temporal structure in prices.
Pseudo-random number generators for Monte Carlo simulations on ATI Graphics Processing Units
Demchik, Vadim
2011-03-01
Basic uniform pseudo-random number generators are implemented on ATI Graphics Processing Units (GPU). The performance results of the realized generators (multiplicative linear congruential (GGL), XOR-shift (XOR128), RANECU, RANMAR, RANLUX and Mersenne Twister (MT19937)) on CPU and GPU are discussed. The obtained speed-up factor is hundreds of times in comparison with the CPU. The RANLUX generator is found to be the most appropriate for use on GPU in Monte Carlo simulations. A brief review of the pseudo-random number generators used in modern software packages for Monte Carlo simulations in high-energy physics is presented.
Generalized random walk algorithm for the numerical modeling of complex diffusion processes
Vamos, C; Vereecken, H
2003-01-01
A generalized form of the random walk algorithm to simulate diffusion processes is introduced. Unlike the usual approach, at a given time all the particles from a grid node are simultaneously scattered using the Bernoulli repartition. This procedure saves memory and computing time, and no restrictions are imposed on the maximum number of particles to be used in simulations. We prove that for simple diffusion the method generalizes the finite difference scheme and gives the same precision for a sufficiently large number of particles. As an example, simulations of diffusion in a random velocity field are performed and the main features of the stochastic mathematical model are numerically tested.
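A minimal sketch of the node-wise scattering step in one dimension (periodic boundary): all m particles at a node are redistributed at once by binomial draws rather than moved one by one. This is an illustrative reconstruction, not the authors' code; the naive Bernoulli-sum binomial sampler below would be replaced by a proper binomial sampler for large particle counts.

```python
import random

def _binomial(n, p, rng):
    # Naive Binomial(n, p) sampler via Bernoulli summation; replace with an
    # efficient binomial sampler for large n.
    return sum(rng.random() < p for _ in range(n))

def grw_step(counts, r=0.5, rng=None):
    """One global-random-walk step on a periodic 1-D grid.

    All m particles at node i are scattered simultaneously: a Binomial(m, 1-r)
    draw decides how many move, and a second binomial splits the movers
    between the left and right neighbours.
    """
    rng = rng or random.Random(0)
    n = len(counts)
    new = [0] * n
    for i, m in enumerate(counts):
        moved = _binomial(m, 1.0 - r, rng)   # particles leaving node i
        left = _binomial(moved, 0.5, rng)    # movers going to the left
        new[i] += m - moved
        new[(i - 1) % n] += left
        new[(i + 1) % n] += moved - left
    return new
```

In expectation the counts obey c_i' = r c_i + (1-r)(c_{i-1} + c_{i+1})/2, the explicit finite-difference diffusion scheme the abstract refers to.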
Generalized random walk algorithm for the numerical modeling of complex diffusion processes
International Nuclear Information System (INIS)
Vamos, Calin; Suciu, Nicolae; Vereecken, Harry
2003-01-01
A generalized form of the random walk algorithm to simulate diffusion processes is introduced. Unlike the usual approach, at a given time all the particles from a grid node are simultaneously scattered using the Bernoulli repartition. This procedure saves memory and computing time, and no restrictions are imposed on the maximum number of particles to be used in simulations. We prove that for simple diffusion the method generalizes the finite difference scheme and gives the same precision for a sufficiently large number of particles. As an example, simulations of diffusion in a random velocity field are performed and the main features of the stochastic mathematical model are numerically tested.
Minimal-post-processing 320-Gbps true random bit generation using physical white chaos.
Wang, Anbang; Wang, Longsheng; Li, Pu; Wang, Yuncai
2017-02-20
Chaotic external-cavity semiconductor laser (ECL) is a promising entropy source for generation of high-speed physical random bits or digital keys. The rate and randomness are unfortunately limited by laser relaxation oscillation and external-cavity resonance, and are usually improved by complicated post-processing. Here, we propose using a physical broadband white chaos generated by optical heterodyning of two ECLs as entropy source to construct high-speed random bit generation (RBG) with minimal post-processing. The optical heterodyne chaos not only has a white spectrum without signature of relaxation oscillation and external-cavity resonance but also has a symmetric amplitude distribution. Thus, after quantization with a multi-bit analog-to-digital converter (ADC), random bits can be obtained by extracting several least significant bits (LSBs) without any other processing. In experiments, a white chaos with a 3-dB bandwidth of 16.7 GHz is generated. Its entropy rate is estimated as 16 Gbps by single-bit quantization, which means a spectrum efficiency of 96%. With quantization using an 8-bit ADC, 320-Gbps physical RBG is achieved by directly extracting 4 LSBs at an 80-GHz sampling rate.
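The LSB-extraction step described above is simple to sketch. Assuming 8-bit integer ADC samples, keeping the 4 least significant bits of each sample at 80 GS/s gives the quoted 320 Gbps:

```python
def extract_lsbs(samples, k=4):
    """Keep the k least significant bits of each ADC sample, MSB-first.

    With 8-bit samples and k = 4, this is the 4-bits-per-sample extraction
    described in the abstract (320 Gbps at an 80-GHz sampling rate).
    """
    bits = []
    for s in samples:
        for j in range(k - 1, -1, -1):   # emit bit k-1 down to bit 0
            bits.append((s >> j) & 1)
    return bits
```

For example, `extract_lsbs([0b10110101], 4)` keeps only the low nibble `0101`, discarding the upper bits that carry residual deterministic structure.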
Directory of Open Access Journals (Sweden)
Flocke Susan A
2012-05-01
Background: Effective clinician-patient communication about health behavior change is one of the most important and most overlooked strategies to promote health and prevent disease. Existing guidelines for specific health behavior counseling have been created and promulgated, but not successfully adopted in primary care practice. Building on work focused on creating effective clinician strategies for prompting health behavior change in the primary care setting, we developed an intervention intended to enhance clinician communication skills to create and act on teachable moments for smoking cessation. In this manuscript, we describe the development and implementation of the Teachable Moment Communication Process (TMCP) intervention and the baseline characteristics of a group randomized trial designed to evaluate its effectiveness. Methods/Design: This group randomized trial includes thirty-one community-based primary care clinicians practicing in Northeast Ohio and 840 of their adult patients. Clinicians were randomly assigned to receive either the Teachable Moment Communication Process (TMCP) intervention for smoking cessation, or the delayed intervention. The TMCP intervention consisted of two 3-hour educational training sessions including didactic presentation, skill demonstration through video examples, skills practices with standardized patients, and feedback from peers and the trainers. For each clinician enrolled, 12 patients were recruited for two time points. Pre- and post-intervention data from the clinicians, patients and audio-recorded clinician-patient interactions were collected. At baseline, the two groups of clinicians and their patients were similar with regard to all demographic and practice characteristics examined. Both physician and patient recruitment goals were met, and retention was 96% and 94% respectively. Discussion: Findings support the feasibility of training clinicians to use the Teachable Moments
Strong approximations and sequential change-point analysis for diffusion processes
DEFF Research Database (Denmark)
Mihalache, Stefan-Radu
2012-01-01
In this paper ergodic diffusion processes depending on a parameter in the drift are considered under the assumption that the processes can be observed continuously. Strong approximations by Wiener processes for a stochastic integral and for the estimator process constructed by the one...
DEFF Research Database (Denmark)
Todsen, Tobias; Jensen, Morten Lind; Tolsgaard, Martin Grønnebæk
2016-01-01
BACKGROUND: Clinicians are increasingly using point-of-care ultrasonography for bedside examinations of patients. However, proper training is needed in this technique, and it is unknown whether the skills learned from focused ultrasonography courses are being transferred to diagnostic performance...... test and binary logistic regression, respectively. RESULTS: There was a significant difference in the performance score between the intervention group (27.4%) and the control group (18.0%, P = .004) and the diagnostic accuracy between the intervention group (65%) and the control group (39%, P = .014......). CONCLUSIONS: Clinicians could successfully transfer learning from an ultrasonography course to improve diagnostic performance on patients. However, our results also indicate a need for more training when new technologies such as point-of-care ultrasonography are introduced....
To be and not to be: scale correlations in random multifractal processes
DEFF Research Database (Denmark)
Cleve, Jochen; Schmiegel, Jürgen; Greiner, Martin
We discuss various properties of a random multifractal process, which are related to the issue of scale correlations. By design, the process is homogeneous, non-conservative and has no built-in scale correlations. However, when it comes to observables like breakdown coefficients, which are based...... on a coarse-graining of the multifractal field, scale correlations do appear. In the log-normal limit of the model process, the conditional distributions and moments of breakdown coefficients reproduce the observations made in fully developed small-scale turbulence. These findings help to understand several...
Gaussian random-matrix process and universal parametric correlations in complex systems
International Nuclear Information System (INIS)
Attias, H.; Alhassid, Y.
1995-01-01
We introduce the framework of the Gaussian random-matrix process as an extension of Dyson's Gaussian ensembles and use it to discuss the statistical properties of complex quantum systems that depend on an external parameter. We classify the Gaussian processes according to the short-distance diffusive behavior of their energy levels and demonstrate that all parametric correlation functions become universal upon the appropriate scaling of the parameter. The class of differentiable Gaussian processes is identified as the relevant one for most physical systems. We reproduce the known spectral correlators and compute eigenfunction correlators in their universal form. Numerical evidence from both a chaotic model and a weakly disordered model confirms our predictions
Khlyupin, Aleksey; Aslyamov, Timur
2017-06-01
Realistic fluid-solid interaction potentials are essential in the description of confined fluids, especially in the case of geometrically heterogeneous surfaces. A correlated random field is considered as a model of a random surface with high geometric roughness. We provide the general theory of an effective coarse-grained fluid-solid potential by proper averaging of the free energy of fluid molecules which interact with the solid media. This procedure is largely based on the theory of random processes. We apply the first-passage-time probability problem and assume local Markov properties of the random surfaces. A general expression for the effective fluid-solid potential is obtained. In the case of small surface irregularities, an analytical approximation for the effective potential is proposed. Both amorphous materials with large surface roughness and crystalline solids with several types of fcc lattices are considered. It is shown that the wider the lattice spacing in terms of the molecular diameter of the fluid, the more the obtained potentials differ from classical ones. A comparison with published Monte Carlo simulations is discussed. The work provides a promising approach to explore how random geometric heterogeneity affects the thermodynamic properties of fluids.
Directory of Open Access Journals (Sweden)
Emrullah Hayta
2016-10-01
Full Text Available Background: Management of myofascial pain syndrome (MPS) is a current research subject, since there are few randomized studies comparing different management techniques. Multiple studies have attempted to assess various treatment options, including trigger point dry needling and kinesiotaping. We compared the effects of trigger point dry needling and kinesiotaping in the management of myofascial pain syndrome during a 3-month follow-up period. Methods: In this prospective randomized study in MPS patients with upper trapezius muscle trigger points, the effects of dry needling (n=28) and kinesiotaping (n=27) were compared with regard to the visual analog scale (VAS), neck disability index (NDI), and Nottingham health profile (NHP) scores measured at weeks 0, 4, and 12. Results: Both dry needling and kinesiotaping comparably reduced VAS scores measured at weeks 4 and 12, and their efficacies were more remarkable at week 12 (p<0.05). These interventions significantly reduced the NDI and NHP scores, and their effects were also more remarkable at week 12; however, dry needling was found to be more effective (p<0.05). Conclusion: Overall, in current clinical settings, pain during the management of MPS can be reduced comparably by both dry needling and kinesiotaping; however, restriction of the range of motion in the neck region and quality of life are more remarkably improved by dry needling. Both dry needling and kinesiotaping can provide increasing effectiveness for up to 12 weeks.
On the Coupling Time of the Heat-Bath Process for the Fortuin-Kasteleyn Random-Cluster Model
Collevecchio, Andrea; Elçi, Eren Metin; Garoni, Timothy M.; Weigel, Martin
2018-01-01
We consider the coupling from the past implementation of the random-cluster heat-bath process, and study its random running time, or coupling time. We focus on hypercubic lattices embedded on tori, in dimensions one to three, with cluster fugacity at least one. We make a number of conjectures regarding the asymptotic behaviour of the coupling time, motivated by rigorous results in one dimension and Monte Carlo simulations in dimensions two and three. Amongst our findings, we observe that, for generic parameter values, the distribution of the appropriately standardized coupling time converges to a Gumbel distribution, and that the standard deviation of the coupling time is asymptotic to an explicit universal constant multiple of the relaxation time. Perhaps surprisingly, we observe these results to hold both off criticality, where the coupling time closely mimics the coupon collector's problem, and also at the critical point, provided the cluster fugacity is below the value at which the transition becomes discontinuous. Finally, we consider analogous questions for the single-spin Ising heat-bath process.
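The off-critical link to the coupon collector's problem can be checked numerically. The sketch below is not the authors' code: the lattice dynamics are replaced by the plain coupon-collector process, whose mean collection time for n coupons is exactly n·H_n (H_n the n-th harmonic number), and whose standardized fluctuations are Gumbel.

```python
import math
import random

def coupon_collector_time(n, rng):
    # Number of draws (with replacement) needed to observe all n coupon types.
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        draws += 1
    return draws

rng = random.Random(0)
n, trials = 50, 4000
times = [coupon_collector_time(n, rng) for _ in range(trials)]
mean_t = sum(times) / trials
exact = n * sum(1.0 / k for k in range(1, n + 1))  # E[T_n] = n * H_n
print(round(mean_t, 1), round(exact, 1))
```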
Concerning the acid dew point in waste gases from combustion processes
Energy Technology Data Exchange (ETDEWEB)
Knoche, K.F.; Deutz, W.; Hein, K.; Derichs, W.
1986-09-01
The paper discusses the problems associated with the measurement of the acid dew point and of sulphuric acid (i.e., SO₃) concentrations in the flue gas from brown coal-fired boiler plants. The sulphuric acid content in brown coal flue gas has been measured at 0.5 to 3 vpm, at SO₂ concentrations of 200 to 800 vpm. Using a conditional equation, whose derivation from new formulae for phase stability is described in the paper, an acid dew point temperature of 115 to 125 °C is obtained.
Comparison of Clothing Cultures from the View Point of Funeral Procession
増田, 美子; 大枝, 近子; 梅谷, 知世; 杉本, 浄; 内村, 理奈
2011-01-01
The object of this study was to investigate attire worn at funeral ceremonies and to clarify the differences and commonalities between the respective cultural spheres of Buddhism, Hinduism, Islam and Christianity. In the year 21, we tried to grasp the reality of funeral costume customs in modern and present-day times. As a result, it became clear that in Japan, within the Buddhist cultural sphere, and in China and Taiwan, Buddhism, Confucianism and Taoism intermingled cultura...
Fixed-point Characterization of Compositionality Properties of Probabilistic Processes Combinators
Directory of Open Access Journals (Sweden)
Daniel Gebler
2014-08-01
Full Text Available Bisimulation metric is a robust behavioural semantics for probabilistic processes. Given any SOS specification of probabilistic processes, we provide a method to compute for each operator of the language its respective metric compositionality property. The compositionality property of an operator is defined as its modulus of continuity, which gives the relative increase of the distance between processes when they are combined by that operator. The compositionality property of an operator is computed by recursively counting how many times the combined processes are copied along their evolution. The compositionality properties make it possible to derive an upper bound on the distance between processes purely by inspecting the operators used to specify those processes.
Auditory detection of an increment in the rate of a random process
International Nuclear Information System (INIS)
Brown, W.S.; Emmerich, D.S.
1994-01-01
Recent experiments have presented listeners with complex tonal stimuli consisting of components with values (i.e., intensities or frequencies) randomly sampled from probability distributions [e.g., R. A. Lutfi, J. Acoust. Soc. Am. 86, 934--944 (1989)]. In the present experiment, brief tones were presented at intervals corresponding to the intensity of a random process. Specifically, the intervals between tones were randomly selected from exponential probability functions. Listeners were asked to decide whether tones presented during a defined observation interval represented a ''noise'' process alone or the ''noise'' with a ''signal'' process added to it. The number of tones occurring in any observation interval is a Poisson variable; receiver operating characteristics (ROCs) arising from Poisson processes have been considered by Egan [Signal Detection Theory and ROC Analysis (Academic, New York, 1975)]. Several sets of noise and signal intensities and observation interval durations were selected which were expected to yield equivalent performance. Rating ROCs were generated based on subjects' responses in a single-interval, yes--no task. The performance levels achieved by listeners and the effects of intensity and duration are compared to those predicted for an ideal observer.
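Egan's Poisson ROC construction is easy to reproduce: a count-based ideal observer responds "signal" when the observed count reaches an integer criterion k, so the false-alarm and hit rates are upper tails of Poisson distributions. A minimal sketch with illustrative mean counts (the experiment's actual noise and signal intensities are not assumed):

```python
from math import exp, factorial

def poisson_tail(lam, k):
    # P(N >= k) for N ~ Poisson(lam): probability the count reaches criterion k.
    return 1.0 - sum(exp(-lam) * lam ** i / factorial(i) for i in range(k))

lam_noise, lam_signal = 4.0, 8.0   # illustrative mean counts per interval
roc = [(poisson_tail(lam_noise, k), poisson_tail(lam_signal, k))
       for k in range(16)]         # (false-alarm, hit) pairs, one per criterion
print(roc[0], roc[8])
```

Sweeping k from lax to strict traces the rating ROC from (1, 1) down to the origin.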
Zhang, Ning
2012-08-10
Molecular brushes (MBs) of poly(2-oxazoline)s were prepared by living anionic polymerization of 2-isopropenyl-2-oxazoline to form the backbone and living cationic ring-opening polymerization of 2-n-propyl-2-oxazoline and 2-methyl-2-oxazoline to form random and block copolymers. Their aqueous solutions displayed a distinct thermoresponsive behavior as a function of the side-chain composition and sequence. The cloud point (CP) of MBs with random copolymer side chains is a linear function of the hydrophilic monomer content and can be modulated in a wide range. For MBs with block copolymer side chains, it was found that the block sequence had a strong and surprising effect on the CP. While MBs with a distal hydrophobic block had a CP at 70 °C, MBs with hydrophilic outer blocks already precipitated at 32 °C. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
van Staa, T-P; Klungel, O; Smeeth, L
2014-06-01
A solid foundation of evidence of the effects of an intervention is a prerequisite of evidence-based medicine. The best source of such evidence is considered to be randomized trials, which are able to avoid confounding. However, they may not always estimate effectiveness in clinical practice. Databases that collate anonymized electronic health records (EHRs) from different clinical centres have been widely used for many years in observational studies. Randomized point-of-care trials have been initiated recently to recruit and follow patients using the data from EHR databases. In this review, we describe how EHR databases can be used for conducting large-scale simple trials and discuss the advantages and disadvantages of their use. © 2014 The Association for the Publication of the Journal of Internal Medicine.
Energy Technology Data Exchange (ETDEWEB)
Lange, Adrian; Stinchcombe, Robin [Theoretical Physics, University of Oxford, Oxford (United Kingdom)
1996-07-07
We study the general behaviour of the correlation length ζ(kT; h) for the two-point correlation function of the local fields in an Ising chain with binary distributed fields. At zero field it is shown that ζ is the same as the zero-field correlation length for the spin-spin correlation function. For the field-dominated behaviour of ζ we find an exponent for the power-law divergence which is smaller than the exponent for the spin-spin correlation length. The entire behaviour of the correlation length can be described by a single crossover scaling function involving the new critical exponent. (author)
Focal Points, Endogenous Processes, and Exogenous Shocks in the Autism Epidemic
Liu, Kayuet; Bearman, Peter S.
2015-01-01
Autism prevalence has increased rapidly in the United States during the past two decades. We have previously shown that the diffusion of information about autism through spatially proximate social relations has contributed significantly to the epidemic. This study expands on this finding by identifying the focal points for interaction that drive…
Jansen, M.H.; Di Bucchianico, A.; Mattheij, R.M.M.; Peletier, M.A.
2006-01-01
We present a continuous wavelet analysis of count data with time-varying intensities. The objective is to extract intervals with significant intensities from background intervals. This includes the precise starting point of the significant interval, its exact duration and the (average) level of
Duan, Haoran
1997-12-01
This dissertation presents the concepts, principles, performance, and implementation of input queuing and cell-scheduling modules for the Illinois Pulsar-based Optical INTerconnect (iPOINT) input-buffered Asynchronous Transfer Mode (ATM) testbed. Input queuing (IQ) ATM switches are well suited to meet the requirements of current and future ultra-broadband ATM networks. The IQ structure imposes minimum memory bandwidth requirements for cell buffering, tolerates bursty traffic, and utilizes memory efficiently for multicast traffic. The lack of efficient cell queuing and scheduling solutions has been a major barrier to building high-performance, scalable IQ-based ATM switches. This dissertation proposes a new Three-Dimensional Queue (3DQ) and a novel Matrix Unit Cell Scheduler (MUCS) to remove this barrier. 3DQ uses a linked-list architecture based on Synchronous Random Access Memory (SRAM) to combine the individual advantages of per-virtual-circuit (per-VC) queuing, priority queuing, and N-destination queuing. It avoids Head of Line (HOL) blocking and provides per-VC Quality of Service (QoS) enforcement mechanisms. Computer simulation results verify the QoS capabilities of 3DQ. For multicast traffic, 3DQ provides efficient usage of cell buffering memory by storing multicast cells only once. Further, the multicast mechanism of 3DQ prevents a congested destination port from blocking other less-loaded ports. The 3DQ principle has been prototyped in the Illinois Input Queue (iiQueue) module. Using Field Programmable Gate Array (FPGA) devices, SRAM modules, and integrated on a Printed Circuit Board (PCB), iiQueue can process incoming traffic at 800 Mb/s. Using faster circuit technology, the same design is expected to operate at the OC-48 rate (2.5 Gb/s). MUCS resolves the output contention by evaluating the weight index of each candidate and selecting the heaviest. It achieves near-optimal scheduling and has a very short response time. The algorithm originates from a
Art Therapy and Cognitive Processing Therapy for Combat-Related PTSD: A Randomized Controlled Trial
Campbell, Melissa; Decker, Kathleen P.; Kruk, Kerry; Deaver, Sarah P.
2018-01-01
This randomized controlled trial was designed to determine if art therapy in conjunction with Cognitive Processing Therapy (CPT) was more effective for reducing symptoms of combat posttraumatic stress disorder (PTSD) than CPT alone. Veterans (N = 11) were randomized to receive either individual CPT, or individual CPT in conjunction with individual art therapy. PTSD Checklist–Military Version and Beck Depression Inventory–II scores improved with treatment in both groups with no significant difference in improvement between the experimental and control groups. Art therapy in conjunction with CPT was found to improve trauma processing and veterans considered it to be an important part of their treatment as it provided healthy distancing, enhanced trauma recall, and increased access to emotions. PMID:29332989
An Artificial Bee Colony Algorithm for the Job Shop Scheduling Problem with Random Processing Times
Directory of Open Access Journals (Sweden)
Rui Zhang
2011-09-01
Full Text Available Due to the influence of unpredictable random events, the processing time of each operation should be treated as a random variable if we aim at a robust production schedule. However, compared with the extensive research on the deterministic model, the stochastic job shop scheduling problem (SJSSP) has not received sufficient attention. In this paper, we propose an artificial bee colony (ABC) algorithm for SJSSP with the objective of minimizing the maximum lateness (which is an index of service quality). First, we propose a performance estimate for preliminary screening of the candidate solutions. Then, the K-armed bandit model is utilized for reducing the computational burden in the exact evaluation (through Monte Carlo simulation) process. Finally, the computational results on different-scale test problems validate the effectiveness and efficiency of the proposed approach.
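The Monte Carlo evaluation step can be illustrated on a deliberately simplified case: a single machine with exponentially distributed processing times (the paper treats a full job shop; the distribution and the instance below are assumptions for illustration). The expected maximum lateness of a candidate sequence is estimated by averaging over simulated realizations:

```python
import random

def expected_max_lateness(seq, mean_p, due, sims=2000, seed=1):
    # Monte Carlo estimate of E[max lateness] for one job sequence on a single
    # machine; processing times ~ Exponential with the given means (an
    # illustrative assumption -- the paper's SJSSP model is a full job shop).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(sims):
        t, worst = 0.0, float("-inf")
        for j in seq:
            t += rng.expovariate(1.0 / mean_p[j])  # draw a processing time
            worst = max(worst, t - due[j])         # lateness of job j
        total += worst
    return total / sims

mean_p = [2.0, 3.0, 1.0]   # mean processing times (hypothetical instance)
due = [3.0, 8.0, 9.0]      # due dates
edd = expected_max_lateness([0, 1, 2], mean_p, due)   # earliest-due-date order
rev = expected_max_lateness([2, 1, 0], mean_p, due)   # reversed order
print(round(edd, 2), round(rev, 2))
```

An algorithm such as ABC would rank candidate sequences by this kind of simulated objective; the bandit model in the paper reduces how many simulations each candidate receives.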
Energy Technology Data Exchange (ETDEWEB)
Prausnitz, J.M.
1980-05-01
This research is concerned with the fundamental physical chemistry and thermodynamics of condensation of tars (dew points) from the vapor phase at advanced temperatures and pressures. Fundamental quantitative understanding of dew points is important for rational design of heat exchangers to recover sensible heat from hot, tar-containing gases that are produced in coal gasification. This report includes essentially six contributions toward establishing the desired understanding: (1) Characterization of Coal Tars for Dew-Point Calculations; (2) Fugacity Coefficients for Dew-Point Calculations in Coal-Gasification Process Design; (3) Vapor Pressures of High-Molecular-Weight Hydrocarbons; (4) Estimation of Vapor Pressures of High-Boiling Fractions in Liquefied Fossil Fuels Containing Heteroatoms Nitrogen or Sulfur; and (5) Vapor Pressures of Heavy Liquid Hydrocarbons by a Group-Contribution Method.
The McMillan Theorem for Colored Branching Processes and Dimensions of Random Fractals
Directory of Open Access Journals (Sweden)
Victor Bakhtin
2014-12-01
Full Text Available For the simplest colored branching process, we prove an analog to the McMillan theorem and calculate the Hausdorff dimensions of random fractals defined in terms of the limit behavior of empirical measures generated by finite genetic lines. In this setting, the role of Shannon’s entropy is played by the Kullback–Leibler divergence, and the Hausdorff dimensions are computed by means of the so-called Billingsley–Kullback entropy, defined in the paper.
Distributed Random Process for a Large-Scale Peer-to-Peer Lottery
Grumbach, Stéphane; Riemann, Robert
2017-01-01
Most online lotteries today fail to ensure the verifiability of the random process and rely on a trusted third party. This issue has received little attention since the emergence of distributed protocols like Bitcoin that demonstrated the potential of protocols with no trusted third party. We argue that the security requirements of online lotteries are similar to those of online voting, and propose a novel distributed online lottery protocol that applies techniques dev...
Directory of Open Access Journals (Sweden)
Yong He
Full Text Available Characterizing cytoarchitecture is crucial for understanding brain functions and neural diseases. In neuroanatomy, it is an important task to accurately extract cell populations' centroids and contours. Recent advances have permitted imaging at single cell resolution for an entire mouse brain using the Nissl staining method. However, it is difficult to precisely segment numerous cells, especially those cells touching each other. As presented herein, we have developed an automated three-dimensional detection and segmentation method applied to the Nissl staining data, with the following two key steps: (1) concave-point clustering to determine the seed points of touching cells; and (2) random walker segmentation to obtain cell contours. Also, we have evaluated the performance of our proposed method with several mouse brain datasets, which were captured with the micro-optical sectioning tomography imaging system, and the datasets include closely touching cells. Compared with traditional detection and segmentation methods, our approach shows promising detection accuracy and high robustness.
Yi, Faliu; Lee, Jieun; Moon, Inkyu
2014-05-01
The reconstruction of multiple depth images with a ray back-propagation algorithm in three-dimensional (3D) computational integral imaging is computationally burdensome. Further, a reconstructed depth image consists of a focus and an off-focus area. Focus areas are 3D points on the surface of an object that are located at the reconstructed depth, while off-focus areas include 3D points in free-space that do not belong to any object surface in 3D space. Generally, without being removed, the presence of an off-focus area would adversely affect the high-level analysis of a 3D object, including its classification, recognition, and tracking. Here, we use a graphics processing unit (GPU) that supports parallel processing with multiple processors to simultaneously reconstruct multiple depth images using a lookup table containing the shifted values along the x and y directions for each elemental image in a given depth range. Moreover, each 3D point on a depth image can be measured by analyzing its statistical variance with its corresponding samples, which are captured by the two-dimensional (2D) elemental images. These statistical variances can be used to classify depth image pixels as either focus or off-focus points. At this stage, the measurement of focus and off-focus points in multiple depth images is also implemented in parallel on a GPU. Our proposed method is conducted based on the assumption that there is no occlusion of the 3D object during the capture stage of the integral imaging process. Experimental results have demonstrated that this method is capable of removing off-focus points in the reconstructed depth image. The results also showed that using a GPU to remove the off-focus points could greatly improve the overall computational speed compared with using a CPU.
Second-order analysis of inhomogeneous spatial point processes with proportional intensity functions
DEFF Research Database (Denmark)
Guan, Yongtao; Waagepetersen, Rasmus; Beale, Colin M.
2008-01-01
of the intensity functions. The first approach is based on nonparametric kernel-smoothing, whereas the second approach uses a conditional likelihood estimation approach to fit a parametric model for the pair correlation function. A great advantage of the proposed methods is that they do not require the often...... to two spatial point patterns regarding the spatial distributions of birds in the U.K.'s Peak District in 1990 and 2004....
Fractal Point Process and Queueing Theory and Application to Communication Networks
National Research Council Canada - National Science Library
Wornel, Gregory
1999-01-01
.... A unifying theme in the approaches to these problems has been an integration of interrelated perspectives from communication theory, information theory, signal processing theory, and control theory...
Energy Technology Data Exchange (ETDEWEB)
Bergfeld, K
1935-03-09
A process of extracting oil from stones or sands bearing oils is characterized by the stones and sands being heated in a suitable furnace to a temperature below that of cracking and preferably slightly higher than the boiling-point of the oils. The oily vapors are removed from the treating chamber by means of flushing gas.
DEFF Research Database (Denmark)
Grell, Kathrine; Diggle, Peter J; Frederiksen, Kirsten
2015-01-01
We study methods for how to include the spatial distribution of tumours when investigating the relation between brain tumours and the exposure from radio frequency electromagnetic fields caused by mobile phone use. Our suggested point process model is adapted from studies investigating spatial...... the Interphone Study, a large multinational case-control study on the association between brain tumours and mobile phone use....
MINIMUM ENTROPY DECONVOLUTION OF ONE-AND MULTI-DIMENSIONAL NON-GAUSSIAN LINEAR RANDOM PROCESSES
Institute of Scientific and Technical Information of China (English)
程乾生
1990-01-01
The minimum entropy deconvolution is considered as one of the methods for decomposing non-Gaussian linear processes. The concept of peakedness of a system response sequence is presented and its properties are studied. With the aid of the peakedness, the convergence theory of the minimum entropy deconvolution is established. The problem of the minimum entropy deconvolution of multi-dimensional non-Gaussian linear random processes is first investigated and the corresponding theory is given. In addition, the relation between the minimum entropy deconvolution and parameter method is discussed.
Is neutron evaporation from highly excited nuclei a poisson random process
International Nuclear Information System (INIS)
Simbel, M.H.
1982-01-01
It is suggested that neutron emission from highly excited nuclei follows a Poisson random process. The continuous variable of the process is the excitation energy excess over the binding energy of the emitted neutrons, and the discrete variable is the number of emitted neutrons. Cross sections for (HI,xn) reactions are analyzed using a formula containing a Poisson distribution function. The post- and pre-equilibrium components of the cross section are treated separately. The agreement between the predictions of this formula and the experimental results is very good. (orig.)
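The Poisson assumption can be made concrete: if the mean number of evaporated neutrons is taken proportional to the excitation-energy excess, the probability of the xn channel is a Poisson weight. A sketch with an illustrative proportionality constant (not the paper's fitted value):

```python
from math import exp, factorial

def p_xn(x, excess, c=0.9):
    # Poisson probability of emitting x neutrons when the mean number of
    # evaporated neutrons is c * (energy excess); c = 0.9 per unit energy is
    # purely illustrative, not a value from the paper.
    lam = c * excess
    return exp(-lam) * lam ** x / factorial(x)

probs = [p_xn(x, excess=5.0) for x in range(20)]
most_likely = probs.index(max(probs))
print(most_likely, round(sum(probs), 6))
```

The channel probabilities sum to one over x = 0, 1, 2, ..., and the dominant xn channel tracks the mean, which is how such a formula partitions the (HI,xn) cross section among channels.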
Estimating functions for inhomogeneous spatial point processes with incomplete covariate data
DEFF Research Database (Denmark)
Waagepetersen, Rasmus
2008-01-01
and this leads to parameter estimation error which is difficult to quantify. In this paper, we introduce a Monte Carlo version of the estimating function used in spatstat for fitting inhomogeneous Poisson processes and certain inhomogeneous cluster processes. For this modified estimating function, it is feasible...
Hazard rate model and statistical analysis of a compound point process
Czech Academy of Sciences Publication Activity Database
Volf, Petr
2005-01-01
Roč. 41, č. 6 (2005), s. 773-786 ISSN 0023-5954 R&D Projects: GA ČR(CZ) GA402/04/1294 Institutional research plan: CEZ:AV0Z10750506 Keywords : counting process * compound process * Cox regression model * intensity Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.343, year: 2005
Congruence from the operator's point of view: compositionality requirements on process semantics
Gazda, M.; Fokkink, W.J.
2010-01-01
One of the basic sanity properties of a behavioural semantics is that it constitutes a congruence with respect to standard process operators. This issue has been traditionally addressed by the development of rule formats for transition system specifications that define process algebras. In this
Congruence from the operator's point of view : compositionality requirements on process semantics
Gazda, M.W.; Fokkink, W.J.; Aceto, L.; Sobocinski, P.
2010-01-01
One of the basic sanity properties of a behavioural semantics is that it constitutes a congruence with respect to standard process operators. This issue has been traditionally addressed by the development of rule formats for transition system specifications that define process algebras. In this
Peets, Adam D; Cooke, Lara; Wright, Bruce; Coderre, Sylvain; McLaughlin, Kevin
2010-10-14
Effective teaching requires an understanding of both what (content knowledge) and how (process knowledge) to teach. While previous studies involving medical students have compared preceptors with greater or lesser content knowledge, it is unclear whether process expertise can compensate for deficient content expertise. Therefore, the objective of our study was to compare the effect of preceptors with process expertise to those with content expertise on medical students' learning outcomes in a structured small group environment. One hundred and fifty-one first year medical students were randomized to 11 groups for the small group component of the Cardiovascular-Respiratory course at the University of Calgary. Each group was then block randomized to one of three streams for the entire course: tutoring exclusively by physicians with content expertise (n = 5), tutoring exclusively by physicians with process expertise (n = 3), and tutoring by content experts for 11 sessions and process experts for 10 sessions (n = 3). After each of the 21 small group sessions, students evaluated their preceptors' teaching with a standardized instrument. Students' knowledge acquisition was assessed by an end-of-course multiple choice (EOC-MCQ) examination. Students rated the process experts significantly higher on each of the instrument's 15 items, including the overall rating. Students' mean score (±SD) on the EOC-MCQ exam was 76.1% (8.1) for groups taught by content experts, 78.2% (7.8) for the combination group and 79.5% (9.2) for process expert groups (p = 0.11). By linear regression, student performance was higher if they had been taught by process experts (regression coefficient 2.7 [0.1, 5.4]). Preceptors with process expertise can teach first year medical students within a structured small group environment, with at least equivalent, if not superior, student outcomes in this setting.
DEFF Research Database (Denmark)
Østergaard, Jacob; Kramer, Mark A.; Eden, Uri T.
2018-01-01
... are separately applied; understanding the relationships between these modeling approaches remains an area of active research. In this letter, we examine this relationship using simulation. To do so, we first generate spike train data from a well-known dynamical model, the Izhikevich neuron, with a noisy input current. We then fit these spike train data with a statistical model (a generalized linear model, GLM, with multiplicative influences of past spiking). For different levels of noise, we show how the GLM captures both the deterministic features of the Izhikevich neuron and the variability driven by the noise. We conclude that the GLM captures essential features of the simulated spike trains, but for near-deterministic spike trains, goodness-of-fit analyses reveal that the model does not fit very well in a statistical sense; the essential random part of the GLM is not captured.
Directory of Open Access Journals (Sweden)
Ding Yulong
2011-09-01
Full Text Available Background: Acu-point specificity is a key issue in acupuncture. To date there has not been any satisfactory trial which can ratify the specific effect of acupuncture. This trial will evaluate the specific effect of BL33 for mild and moderate benign prostatic hyperplasia (BPH) on the basis of its effectiveness. The non-specific effect will be excluded and the therapeutic effect will be evaluated. Method: This is a double-blinded randomized controlled trial. 100 patients will be randomly allocated into the treatment group (n = 50) and the control group (n = 50). The treatment group receives needling at BL33 and the control group receives needling at a non-point. The needling depth, angle, direction, achievement of De Qi and parameters of electroacupuncture are exactly the same in both groups. The primary outcome measure is the reduction of the international prostate symptom score (IPSS) at the 6th week, and the secondary outcome measures are the reduction of bladder residual urine, the increase in maximum urinary flow rate at the 6th week, and the reduction of IPSS at the 18th week. Discussion: This trial will assess the specific therapeutic effect of electroacupuncture at BL33 for mild and moderate BPH. Trial registration: Protocol Registration System of ClinicalTrials.gov NCT01218243
Mohamadi, Marzieh; Piroozi, Soraya; Rashidi, Iman; Hosseinifard, Saeed
2017-01-01
Latent trigger points in the upper trapezius muscle may disrupt muscle movement patterns and cause problems such as cramping and decreased muscle strength. Because latent trigger points may spontaneously become active trigger points, they should be addressed and treated to prevent further problems. In this study we compared the short-term effect of kinesiotaping versus friction massage on latent trigger points in the upper trapezius muscle. Fifty-eight male students enrolled with a stratified sampling method participated in this single-blind randomized clinical trial (Registration ID: IRCT2016080126674N3) in 2016. Pressure pain threshold was recorded with a pressure algometer and grip strength was recorded with a Collin dynamometer. The participants were randomly assigned to two different treatment groups: kinesiotape or friction massage. Friction massage was performed daily for 3 sessions and kinesiotape was used for 72 h. One hour after the last session of friction massage or removal of the kinesiotape, pressure pain threshold and grip strength were evaluated again. Pressure pain threshold decreased significantly after both friction massage (2.66 ± 0.89 to 2.25 ± 0.76; P = 0.02) and kinesiotaping (2.00 ± 0.74 to 1.71 ± 0.65; P = 0.01). Grip strength increased significantly after friction massage (40.78 ± 9.55 to 42.17 ± 10.68; P = 0.03); however there was no significant change in the kinesiotape group (39.72 ± 6.42 to 40.65 ± 7.3; P = 0.197). There were no significant differences in pressure pain threshold (2.10 ± 0.11 & 1.87 ± 0.11; P = 0.66) or grip strength (42.17 ± 10.68 & 40.65 ± 7.3; P = 0.53) between the two study groups. Friction massage and kinesiotaping had identical short-term effects on latent trigger points in the upper trapezius. Three sessions of either of these two interventions did not improve latent trigger points. Registration ID in IRCT: IRCT2016080126674N3.
Anhøj, Jacob; Olesen, Anne Vingaard
2014-01-01
A run chart is a line graph of a measure plotted over time with the median as a horizontal line. The main purpose of the run chart is to identify process improvement or degradation, which may be detected by statistical tests for non-random patterns in the data sequence. We studied the sensitivity to shifts and linear drifts in simulated processes using the shift, crossings and trend rules for detecting non-random variation in run charts. The shift and crossings rules are effective in detecting shifts and drifts in process centre over time while keeping the false signal rate constant around 5% and independent of the number of data points in the chart. The trend rule is virtually useless for detection of linear drift over time, the purpose it was intended for.
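The shift and crossings rules can be sketched in code. This is a simplified reading of the published rules (a shift signal is a run longer than round(log2(n)+3) points on the same side of the median; a crossings signal is fewer median crossings than the lower 5th percentile of a binomial(n−1, 1/2) distribution; points on the median are ignored), with made-up data for illustration:

```python
import math

def run_chart_signals(data):
    # Identify non-random variation in a run chart via the shift and
    # crossings rules; points equal to the median are ignored.
    s = sorted(data)
    m = len(data)
    med = s[m // 2] if m % 2 else (s[m // 2 - 1] + s[m // 2]) / 2
    signs = [x > med for x in data if x != med]
    n = len(signs)
    longest = cur = 1
    crossings = 0
    for a, b in zip(signs, signs[1:]):
        if a == b:
            cur += 1
            longest = max(longest, cur)
        else:
            crossings += 1
            cur = 1
    shift_limit = round(math.log2(n) + 3)
    cdf, k = 0.0, -1          # lower 5th percentile of Binomial(n-1, 1/2)
    while cdf < 0.05:
        k += 1
        cdf += math.comb(n - 1, k) * 0.5 ** (n - 1)
    return {"shift": longest > shift_limit, "crossings": crossings < k}

stable = [3, 5, 4, 6, 2, 7, 4, 5, 3, 6, 4, 5, 2, 6, 3, 5]    # random variation
shifted = [3, 4, 3, 4, 3, 4, 5, 6, 7, 8, 7, 8, 7, 8, 7, 8]   # process shift
print(run_chart_signals(stable), run_chart_signals(shifted))
```

On the stable series neither rule signals; on the shifted series the long run and the few crossings both signal, as the abstract describes.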
Directory of Open Access Journals (Sweden)
Alex Donaldson
2016-09-01
Conclusion: This systematic yet pragmatic and iterative intervention development process is potentially applicable to any injury prevention topic across all sports settings and levels. It will guide researchers wishing to undertake intervention development.
Main points of research in crude oil processing and petrochemistry. [German Democratic Republic]
Energy Technology Data Exchange (ETDEWEB)
Keil, G.; Nowak, S.; Fiedrich, G.; Klare, H.; Apelt, E.
1982-04-01
This article analyzes general aspects of the development of petrochemistry and carbochemistry on a global scale and for industry in the German Democratic Republic. Diagrams are given for liquid and solid carbon resources and their natural hydrogen content, showing the increasing hydrogen demand of chemical fuel conversion processes. The petrochemical and carbochemical industry must take a growing hydrogen demand into account, currently 25 Mt/a worldwide and increasing by 7% annually. Various methods for the chemical processing of crude oil and crude oil residues are outlined. Advanced coal conversion processes with prospects for future application in the GDR are also explained, including the methanol carbonylation process, which achieves 90% selectivity and is based on carbon monoxide hydrogenation, and the Transcat process, which uses ethane for vinyl chloride production. Acetylene and carbide carbochemistry in the GDR is a further major line of research and development. Technological processes for the pyrolysis of vacuum gas oil are also evaluated. (27 refs.)
Energy Technology Data Exchange (ETDEWEB)
Matthews, J O; Hopcraft, K I; Jakeman, E [Applied Mathematics Division, School of Mathematical Sciences, University of Nottingham, Nottingham, NG7 2RD (United Kingdom)
2003-11-21
Some properties of classical population processes that comprise births, deaths and multiple immigrations are investigated. The rates at which the immigrants arrive can be tailored to produce a population whose steady state fluctuations are described by a pre-selected distribution. Attention is focused on the class of distributions with a discrete stable law, which have power-law tails and whose moments and autocorrelation function do not exist. The separate problem of monitoring and characterizing the fluctuations is studied, analysing the statistics of individuals that leave the population. The fluctuations in the size of the population are transferred to the times between emigrants that form an intermittent time series of events. The emigrants are counted with a detector of finite dynamic range and response time. This is modelled through clipping the time series or saturating it at an arbitrary but finite level, whereupon its moments and correlation properties become finite. Distributions for the time to the first counted event and for the time between events exhibit power-law regimes that are characteristic of the fluctuations in population size. The processes provide analytical models with which properties of complex discrete random phenomena can be explored, and in addition provide generic means by which random time series encompassing a wide range of intermittent and other discrete random behaviour may be generated.
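A Gillespie-style simulation sketches the kind of population process described above. The death rate, immigration rate, and batch-size law below are illustrative assumptions; producing a genuinely discrete-stable steady state requires a specifically tailored family of multiple-immigration rates, which is not reproduced here.

```python
import random

def simulate_dmi(t_max, death_rate=1.0, imm_rate=0.5,
                 batch_sampler=lambda rng: rng.randint(1, 5),
                 seed=0):
    """Gillespie simulation of a death/multiple-immigration process.

    Deaths occur at total rate death_rate * N; immigration events occur
    at rate imm_rate and each brings a batch of immigrants whose size
    is drawn from batch_sampler. Returns the (time, population) history.
    All rates and the batch law are illustrative, not from the paper.
    """
    rng = random.Random(seed)
    t, n = 0.0, 0
    history = [(t, n)]
    while t < t_max:
        total = death_rate * n + imm_rate  # total event rate
        t += rng.expovariate(total)        # time to next event
        if rng.random() < imm_rate / total:
            n += batch_sampler(rng)        # batch immigration
        else:
            n -= 1                         # single death
        history.append((t, n))
    return history
```

The emigrant (death-event) times in such a run form the intermittent event series whose clipped and saturated statistics the paper analyses.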
International Nuclear Information System (INIS)
Matthews, J O; Hopcraft, K I; Jakeman, E
2003-01-01
Some properties of classical population processes that comprise births, deaths and multiple immigrations are investigated. The rates at which the immigrants arrive can be tailored to produce a population whose steady state fluctuations are described by a pre-selected distribution. Attention is focused on the class of distributions with a discrete stable law, which have power-law tails and whose moments and autocorrelation function do not exist. The separate problem of monitoring and characterizing the fluctuations is studied, analysing the statistics of individuals that leave the population. The fluctuations in the size of the population are transferred to the times between emigrants that form an intermittent time series of events. The emigrants are counted with a detector of finite dynamic range and response time. This is modelled through clipping the time series or saturating it at an arbitrary but finite level, whereupon its moments and correlation properties become finite. Distributions for the time to the first counted event and for the time between events exhibit power-law regimes that are characteristic of the fluctuations in population size. The processes provide analytical models with which properties of complex discrete random phenomena can be explored, and in addition provide generic means by which random time series encompassing a wide range of intermittent and other discrete random behaviour may be generated.
Zhao, Shi-Bo; Liu, Ming-Zhe; Yang, Lan-Ying
2015-04-01
In this paper we investigate the dynamics of an asymmetric exclusion process on a one-dimensional lattice with long-range hopping and random update, via Monte Carlo simulations and theoretical analysis. Particles in the model first try to hop over successive unoccupied sites with a probability q, which differs from previous exclusion process models. The probability q may represent the random access of particles. Numerical simulations for stationary particle currents, density profiles, and phase diagrams are obtained. There are three possible stationary phases: the low density (LD) phase, the high density (HD) phase, and the maximal current (MC) phase. Interestingly, the bulk density in the LD phase tends to zero, while the MC phase is governed by α, β, and q. The HD phase is nearly the same as in the normal TASEP, determined by the exit rate β. Theoretical analysis is in good agreement with the simulation results. The proposed model may provide a better understanding of random interaction dynamics in complex systems. Project supported by the National Natural Science Foundation of China (Grant Nos. 41274109 and 11104022), the Fund for Sichuan Youth Science and Technology Innovation Research Team (Grant No. 2011JTD0013), and the Creative Team Program of Chengdu University of Technology.
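A Monte Carlo sketch of such a model follows. The precise long-range rule here is an assumption for illustration: with probability q a particle jumps past the entire empty stretch ahead of it, otherwise it makes the usual nearest-neighbour step; α and β are the entry and exit rates at the open boundaries.

```python
import random

def tasep_long_range(L=100, alpha=0.3, beta=0.7, q=0.5,
                     sweeps=2000, seed=1):
    """Random-sequential-update TASEP with an assumed long-range move.

    Returns the stationary bulk density estimate (occupied fraction)
    after the given number of sweeps. This is an illustrative reading
    of the model, not the paper's exact update rule.
    """
    rng = random.Random(seed)
    lattice = [0] * L
    for _ in range(sweeps * L):
        i = rng.randrange(-1, L)          # -1 encodes the entry move
        if i == -1:
            if lattice[0] == 0 and rng.random() < alpha:
                lattice[0] = 1            # particle enters at the left
        elif lattice[i] == 1:
            if i == L - 1:
                if rng.random() < beta:
                    lattice[i] = 0        # particle exits at the right
            elif lattice[i + 1] == 0:
                j = i + 1
                if rng.random() < q:      # long-range: skip the empty stretch
                    while j + 1 < L and lattice[j + 1] == 0:
                        j += 1
                lattice[i], lattice[j] = 0, 1
    return sum(lattice) / L
```

With α = 0 no particle ever enters, so the density is exactly zero; in general the returned density stays in [0, 1].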
Directory of Open Access Journals (Sweden)
Huibing Hao
2015-01-01
Light emitting diode (LED) lamps have attracted increasing interest in the field of lighting systems due to their low energy consumption and long lifetime. For its different functions (i.e., illumination and color), an LED lamp may have two or more performance characteristics. When the multiple performance characteristics are dependent, accurately analyzing the system reliability becomes a challenging problem. In this paper, we assume that the system has two performance characteristics and that each performance characteristic is governed by a random-effects Gamma process, where the random effects capture unit-to-unit differences. The dependency of the performance characteristics is described by a Frank copula function, and a reliability assessment model is proposed via this copula. Because the model is complicated and analytically intractable, the Markov chain Monte Carlo (MCMC) method is used to estimate the unknown parameters. A numerical example based on actual LED lamp data demonstrates the usefulness and validity of the proposed model and method.
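The copula step can be illustrated numerically. In this hedged sketch, `frank_copula` and `system_reliability` are hypothetical helper names; the marginal failure probabilities f1, f2 are assumed to come from the random-effects Gamma degradation model (not reproduced here), and joint survival is obtained by inclusion-exclusion, R = 1 - f1 - f2 + C(f1, f2).

```python
import math

def frank_copula(u, v, theta):
    """Frank copula C(u, v; theta), theta != 0."""
    num = (math.exp(-theta * u) - 1.0) * (math.exp(-theta * v) - 1.0)
    return -math.log(1.0 + num / (math.exp(-theta) - 1.0)) / theta

def system_reliability(f1, f2, theta):
    """P(both performance characteristics survive), given marginal
    failure probabilities f1, f2 whose dependence is modelled by a
    Frank copula: R = 1 - f1 - f2 + C(f1, f2) by inclusion-exclusion."""
    return 1.0 - f1 - f2 + frank_copula(f1, f2, theta)
```

As theta approaches 0 the copula tends to the independence copula u·v, so the reliability approaches (1 - f1)(1 - f2); positive theta models positive dependence between the two degradation paths.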
International Nuclear Information System (INIS)
Verma, K.; MacNeil, C.; Odar, S.; Kuhnke, K.
1997-01-01
This paper describes the chemical cleaning of the four steam generators at the Point Lepreau facility, which was accomplished as part of a normal service outage. The steam generators had been in service for twelve years. Sludge samples showed the main elements were Fe, P and Na, with minor amounts of Ca, Mg, Mn, Cr, Zn, Cl, Cu, Ni, Ti, Si, and Pb; 90% was in the form of magnetite, with substantial phosphate and trace amounts of silicates. The steam generators were experiencing partial blockage of broached holes in the tube support plates (TSPs), and corrosion on tube ODs in the form of pitting and wastage. In addition, heat transfer was clearly deteriorating. More than 1000 kg of magnetite and 124 kg of salts were removed from the four steam generators.
Energy Technology Data Exchange (ETDEWEB)
Sopori, B.; Tan, T.Y.
1994-08-01
This report is a summary of a workshop held on August 24-26, 1992. Session 1 of the conference discussed characteristics of various commercial photovoltaic silicon substrates, the nature of impurities and defects in them, and how they are related to the material growth. Session 2, on point defects, reviewed the capabilities of theoretical approaches to determine the equilibrium structure of defects in the silicon lattice arising from transition metal impurities and hydrogen. Session 3 was devoted to a discussion of the surface photovoltaic method for characterizing bulk wafer lifetimes, and to detailed studies on the effectiveness of various gettering operations in reducing the deleterious effects of transition metals. Papers presented at the conference are also included in this summary report.
Congruence from the Operator's Point of View: Compositionality Requirements on Process Semantics
Directory of Open Access Journals (Sweden)
Maciej Gazda
2010-08-01
One of the basic sanity properties of a behavioural semantics is that it constitutes a congruence with respect to standard process operators. This issue has been traditionally addressed by the development of rule formats for transition system specifications that define process algebras. In this paper we suggest a novel, orthogonal approach. Namely, we focus on a number of process operators, and for each of them attempt to find the widest possible class of congruences. To this end, we impose restrictions on sublanguages of Hennessy-Milner logic, so that a semantics whose modal characterization satisfies a given criterion is guaranteed to be a congruence with respect to the operator in question. We investigate action prefix, alternative composition, two restriction operators, and parallel composition.
Nan, Zhufen; Chi, Xuefen
2016-12-20
The IEEE 802.15.7 protocol suggests that it could coordinate the channel access process based on the competitive method of carrier sensing. However, the directionality of light and randomness of diffuse reflection would give rise to a serious imperfect carrier sense (ICS) problem [e.g., hidden node (HN) problem and exposed node (EN) problem], which brings great challenges in realizing the optical carrier sense multiple access (CSMA) mechanism. In this paper, the carrier sense process implemented by diffuse reflection light is modeled as the choice of independent sets. We establish an ICS model with the presence of ENs and HNs for the multi-point to multi-point visible light communication (VLC) uplink communications system. Considering the severe optical ICS problem, an optical hard core point process (OHCPP) is developed, which characterizes the optical CSMA for the indoor VLC uplink communications system. Due to the limited coverage of the transmitted optical signal, in our OHCPP, the ENs within the transmitters' carrier sense region could be retained provided that they could not corrupt the ongoing communications. Moreover, because of the directionality of both light emitting diode (LED) transmitters and receivers, theoretical analysis of the HN problem becomes difficult. In this paper, we derive the closed-form expression for approximating the outage probability and transmission capacity of VLC networks with the presence of HNs and ENs. Simulation results validate the analysis and also show the existence of an optimal physical carrier-sensing threshold that maximizes the transmission capacity for a given emission angle of LED.
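The OHCPP above modifies a classical hard-core construction, so a Matérn type-II thinning conveys the basic idea: of any two points within the sensing range, only the one with the smaller mark (the "earlier" transmitter) is retained. This sketch deliberately omits the paper's refinement of retaining harmless exposed nodes.

```python
import math
import random

def _poisson(mean, rng):
    """Knuth's method for a Poisson variate with the given mean."""
    if mean <= 0.0:
        return 0
    limit, k, p = math.exp(-mean), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def matern_type_ii(lam, r, width, height, seed=0):
    """Matern type-II hard-core thinning of a homogeneous Poisson
    process of intensity lam on a width x height window: every point
    gets a uniform random mark, and a point is kept only if no other
    point within distance r carries a smaller mark. This mimics carrier
    sensing: of two nodes in range, only one is allowed to transmit.
    """
    rng = random.Random(seed)
    n = _poisson(lam * width * height, rng)
    pts = [(rng.uniform(0.0, width), rng.uniform(0.0, height), rng.random())
           for _ in range(n)]
    kept = []
    for x, y, m in pts:
        wins = all(m < m2 or (x - x2) ** 2 + (y - y2) ** 2 > r * r
                   for x2, y2, m2 in pts if (x2, y2, m2) != (x, y, m))
        if wins:
            kept.append((x, y))
    return kept
```

By construction, no two retained transmitters are within the hard-core distance r of each other.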
Ferrer-Mileo, V; Guede-Fernandez, F; Fernandez-Chimeno, M; Ramos-Castro, J; Garcia-Gonzalez, M A
2015-08-01
This work compares several fiducial points for detecting the arrival of a new pulse in a photoplethysmographic signal acquired with the built-in camera of smartphones or with a photoplethysmograph. An optimization process for the signal preprocessing stage was also carried out. Finally, we characterize the error produced when using the best cutoff frequencies and fiducial point for smartphones and the photoplethysmograph, and examine whether the error of smartphones can reasonably be explained by variations in pulse transit time. The results reveal that the peak of the first derivative and the minimum of the second derivative of the pulse wave have the lowest error. Moreover, for these points, high-pass filtering the signal between 0.1 and 0.8 Hz and low-pass filtering around 2.7 Hz or 3.5 Hz are the best cutoff frequencies. Finally, the error in smartphones is slightly higher than in a photoplethysmograph.
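The two winning fiducial points can be located with simple finite differences. The pulse below is a synthetic Gaussian stand-in for a PPG pulse, not real data; on it, the first-derivative maximum sits on the rising edge and the second-derivative minimum sits near the peak.

```python
import math

def fiducial_points(signal):
    """Indices of the first-derivative maximum and the
    second-derivative minimum of one pulse (finite differences)."""
    d1 = [b - a for a, b in zip(signal, signal[1:])]
    d2 = [b - a for a, b in zip(d1, d1[1:])]
    i_d1max = max(range(len(d1)), key=d1.__getitem__)
    i_d2min = min(range(len(d2)), key=d2.__getitem__)
    return i_d1max, i_d2min

# synthetic, assumed pulse shape: Gaussian bump centred at sample 30
pulse = [math.exp(-((t - 30) / 8.0) ** 2) for t in range(100)]
```

For this Gaussian, the steepest rise is near sample 24 (centre minus width/sqrt(2)) and the most negative curvature is near the peak at sample 30, so the detected indices land close to those values.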
A Traffic Model for Machine-Type Communications Using Spatial Point Processes
DEFF Research Database (Denmark)
Thomsen, Henning; Manchón, Carles Navarro; Fleury, Bernard Henri
2018-01-01
, where the generated traffic by a given device depends on its position and event positions. We first consider the case where devices and events are static and devices generate traffic according to a Bernoulli process, where we derive the total rate from the devices at the base station. We then extend...
Bayesian analysis of spatial point processes in the neighbourhood of Voronoi networks
DEFF Research Database (Denmark)
Skare, Øivind; Møller, Jesper; Jensen, Eva Bjørn Vedel
2007-01-01
A model for an inhomogeneous Poisson process with high intensity near the edges of a Voronoi tessellation in 2D or 3D is proposed. The model is analysed in a Bayesian setting with priors on nuclei of the Voronoi tessellation and other model parameters. An MCMC algorithm is constructed to sample...
Optimal estimation of the intensity function of a spatial point process
DEFF Research Database (Denmark)
Guan, Yongtao; Jalilian, Abdollah; Waagepetersen, Rasmus
easily computable estimating functions. We derive the optimal estimating function in a class of first-order estimating functions. The optimal estimating function depends on the solution of a certain Fredholm integral equation and reduces to the likelihood score in case of a Poisson process. We discuss...
Bayesian analysis of spatial point processes in the neighbourhood of Voronoi networks
DEFF Research Database (Denmark)
Skare, Øivind; Møller, Jesper; Vedel Jensen, Eva B.
A model for an inhomogeneous Poisson process with high intensity near the edges of a Voronoi tessellation in 2D or 3D is proposed. The model is analysed in a Bayesian setting with priors on nuclei of the Voronoi tessellation and other model parameters. An MCMC algorithm is constructed to sample...
Entry points to stimulation of expansion in hides and skins processing
African Journals Online (AJOL)
Only 3.4% of respondents add value to hides and skins by processing. ... For this status of the chain, it was proposed that a workable intervention model has to encompass placement of tanneries and slaughter slabs in the chain as new actors, linking chain actors, improving livestock services especially dipping, and ...
Mentoring Novice Teachers: Motives, Process, and Outcomes from the Mentor's Point of View
Iancu-Haddad, Debbie; Oplatka, Izhar
2009-01-01
The purpose of this paper is to present the major motives leading senior teachers to be involved in a mentoring process of newly appointed teachers and its benefits for the mentor teacher. Based on semi-structured interviews with 12 experienced teachers who participated in a university-based mentoring program in Israel, the current study found a…
Stressors and Turning Points in High School and Dropout: A Stress Process, Life Course Framework
Dupéré, Véronique; Leventhal, Tama; Dion, Eric; Crosnoe, Robert; Archambault, Isabelle; Janosz, Michel
2015-01-01
High school dropout is commonly seen as the result of a long-term process of failure and disengagement. As useful as it is, this view has obscured the heterogeneity of pathways leading to dropout. Research suggests, for instance, that some students leave school not as a result of protracted difficulties but in response to situations that emerge…
Dibai-Filho, Almir Vieira; de Oliveira, Alessandra Kelly; Girasol, Carlos Eduardo; Dias, Fabiana Rodrigues Cancio; Guirro, Rinaldo Roberto de Jesus
2017-04-01
To assess the additional effect of static ultrasound and diadynamic currents on myofascial trigger points in a manual therapy program to treat individuals with chronic neck pain. A single-blind randomized trial was conducted. Both men and women, between ages 18 and 45, with chronic neck pain and active myofascial trigger points in the upper trapezius were included in the study. Subjects were assigned to 3 different groups: group 1 (n = 20) was treated with manual therapy; group 2 (n = 20) was treated with manual therapy and static ultrasound; group 3 (n = 20) was treated with manual therapy and diadynamic currents. Individuals were assessed before the first treatment session, 48 hours after the first treatment session, 48 hours after the tenth treatment session, and 4 weeks after the last session. There was no group-versus-time interaction for Numeric Rating Scale, Neck Disability Index, Pain-Related Self-Statement Scale, pressure pain threshold, cervical range of motion, and skin temperature (F-value range, 0.089-1.961; P-value range, 0.106-0.977). Moreover, we found no differences between groups regarding electromyographic activity (P > 0.05). The use of static ultrasound or diadynamic currents on myofascial trigger points in upper trapezius associated with a manual therapy program did not generate greater benefits than manual therapy alone.
Sankar Sana, Shib
2016-01-01
The paper develops a production-inventory model of a two-stage supply chain consisting of one manufacturer and one retailer to study production lot size/order quantity, reorder point and sales teams' initiatives, where demand of the end customers depends simultaneously on a random variable and on the sales teams' initiatives. The manufacturer produces the retailer's order quantity in one lot, in which the procurement cost per unit quantity follows a realistic convex function of the production lot size. In the chain, the cost of the sales team's initiatives/promotion efforts and the wholesale price of the manufacturer are negotiated at points such that their optimal profits come close to their target profits. This study helps the management of firms determine the optimal order quantity/production quantity, reorder point and sales teams' initiatives/promotional effort in order to achieve their maximum profits. An analytical method is applied to determine the optimal values of the decision variables. Finally, numerical examples with graphical presentation and a sensitivity analysis of the key parameters are presented to provide more insights into the model.
Producing a functional eukaryotic messenger RNA (mRNA) requires the coordinated activity of several large protein complexes to initiate transcription, elongate nascent transcripts, splice together exons, and cleave and polyadenylate the 3’ end. Kinetic competition between these various processes has been proposed to regulate mRNA maturation, but this model could lead to multiple, randomly determined, or stochastic, pathways or outcomes. Regulatory checkpoints have been suggested as a means of ensuring quality control. However, current methods have been unable to tease apart the contributions of these processes at a single gene or on a time scale that could provide mechanistic insight. To begin to investigate the kinetic relationship between transcription and splicing, Daniel Larson, Ph.D., of CCR’s Laboratory of Receptor Biology and Gene Expression, and his colleagues employed a single-molecule RNA imaging approach to monitor production and processing of a human β-globin reporter gene in living cells.
Meeting points in the VPL process - a key challenge for VPL activities
DEFF Research Database (Denmark)
Aagaard, Kirsten; Enggaard, Ellen
2014-01-01
, a step up the career ladder, personal development or threat of losing his job and the work place’s demand for new competences? There are three main players on this scene: the individual, the (HE) educational institution and the work place. There may be more players involved in the process......The right to have your competences recognized and validated as a mean to gain access to or exemptions of a higher education has existed since 2007, but the knowledge of this opportunity is still not very well spread and the potentials of the law are not exploited. This goes for individuals as well...... the individual in his or her individual career strategies benefit from the option of VPL in the process of managing his or her career strategy? What are the main barriers and obstacles the individual might meet in his or her attempt to move on in his career whether the motivation is change of career direction...
Chosen Aspects of Modernization Processes in EU Countries and in Poland - Classical Point of View
Dworak Edyta; Malarska Anna
2010-01-01
The aim of this paper is an evaluation of changes over time in the sectoral structure of employment in EU countries. Against this background, changes in the Polish economy in the period 1997-2008 are highlighted. Classical tools of statistical analysis were used to illustrate and initially verify the three-sector theory of A. Fisher, C. Clark and J. Fourastié, oriented towards the evaluation of the modernization process of EU economies.
The Development of Point Doppler Velocimeter Data Acquisition and Processing Software
Cavone, Angelo A.
2008-01-01
In order to develop efficient and quiet aircraft and validate Computational Fluid Dynamics predictions, aerodynamic researchers require flow parameter measurements to characterize flow fields about wind tunnel models and jet flows. A one-component Point Doppler Velocimeter (pDv), a non-intrusive, laser-based instrument, was constructed using a design/develop/test/validate/deploy approach. A primary component of the instrument is the software required for system control/management and data collection/reduction. This software, along with evaluation algorithms, advanced pDv from a laboratory curiosity to a production-level instrument. Simultaneous pDv and pitot-probe velocity measurements obtained at the centerline of a flow exiting a two-inch jet matched within 0.4%. Flow turbulence spectra obtained with pDv and a hot-wire detected, with equal dynamic range, the primary and secondary harmonics produced by the fan driving the flow. Novel hardware and software methods were developed, tested, and incorporated into the system to eliminate and/or minimize error sources and improve system reliability.
Bamberger, Charlotte; Rossmeier, Andreas; Lechner, Katharina; Wu, Liya; Waldmann, Elisa; Stark, Renée G; Altenhofer, Julia; Henze, Kerstin; Parhofer, Klaus G
2017-10-06
Studies indicate a positive association between walnut intake and improvements in plasma lipids. We evaluated the effect of an isocaloric replacement of macronutrients with walnuts and the time point of consumption on plasma lipids. We included 194 healthy subjects (134 females, age 63 ± 7 years, BMI 25.1 ± 4.0 kg/m²) in a randomized, controlled, prospective, cross-over study. Following a nut-free run-in period, subjects were randomized to two diet phases (8 weeks each). Ninety-six subjects first followed a walnut-enriched diet (43 g walnuts/day) and then switched to a nut-free diet. Ninety-eight subjects followed the diets in reverse order. Subjects were also randomized to either reduce carbohydrates (n = 62), fat (n = 65), or both (n = 67) during the walnut diet, and instructed to consume walnuts either as a meal or as a snack. The walnut diet resulted in a significant reduction in fasting cholesterol (walnut vs. control diet: -8.5 ± 37.2 vs. -1.1 ± 35.4 mg/dL; p = 0.002), non-HDL cholesterol (-10.3 ± 35.5 vs. -1.4 ± 33.1 mg/dL; p ≤ 0.001), LDL-cholesterol (-7.4 ± 32.4 vs. -1.7 ± 29.7 mg/dL; p = 0.029), triglycerides (-5.0 ± 47.5 vs. 3.7 ± 48.5 mg/dL; p = 0.015) and apoB (-6.7 ± 22.4 vs. -0.5 ± 37.7; p ≤ 0.001), while HDL-cholesterol and lipoprotein (a) did not change significantly. Neither macronutrient replacement nor time point of consumption significantly affected the effect of walnuts on lipids. Thus, 43 g walnuts/day improved the lipid profile independent of the recommended macronutrient replacement and the time point of consumption.
Directory of Open Access Journals (Sweden)
Stephen Ashton
2011-01-01
In this article we discuss the design process used to develop the NASA Blast exhibition at Thanksgiving Point, a museum complex in Lehi, Utah. This was a class project for the Advanced Instructional Design class at Brigham Young University. In an attempt to create a new discourse (Krippendorff, 2006) for Thanksgiving Point visitors and staff members, the design class used a very fluid design approach, utilizing brainstorming, research, class-member personas, and prototyping to create ideas for the new exhibition. Because of the nature of the experience, the design class developed its own techniques to enhance the design process. The result was a compelling narrative that brought all the elements of the exhibition together in a cohesive piece.
Quasi-steady-state analysis of two-dimensional random intermittent search processes
Bressloff, Paul C.
2011-06-01
We use perturbation methods to analyze a two-dimensional random intermittent search process, in which a searcher alternates between a diffusive search phase and a ballistic movement phase whose velocity direction is random. A hidden target is introduced within a rectangular domain with reflecting boundaries. If the searcher moves within range of the target and is in the search phase, it has a chance of detecting the target. A quasi-steady-state analysis is applied to the corresponding Chapman-Kolmogorov equation. This generates a reduced Fokker-Planck description of the search process involving a nonzero drift term and an anisotropic diffusion tensor. In the case of a uniform direction distribution, for which there is zero drift, and isotropic diffusion, we use the method of matched asymptotics to compute the mean first passage time (MFPT) to the target, under the assumption that the detection range of the target is much smaller than the size of the domain. We show that an optimal search strategy exists, consistent with previous studies of intermittent search in a radially symmetric domain that were based on a decoupling or moment closure approximation. We also show how the decoupling approximation can break down in the case of biased search processes. Finally, we analyze the MFPT in the case of anisotropic diffusion and find that anisotropy can be useful when the searcher starts from a fixed location. © 2011 American Physical Society.
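As a pointer to what the matched-asymptotics step yields in the simplest setting (zero drift, isotropic diffusion with effective diffusivity D, target of radius ε in a domain of area |Ω|), the MFPT has the standard two-dimensional logarithmic leading order; this is the textbook small-target form, not the paper's full expression:

```latex
T \;\sim\; \frac{|\Omega|}{2\pi D}\,\ln\frac{1}{\varepsilon} \;+\; O(1),
\qquad \varepsilon \to 0 .
```

The logarithmic divergence as the detection range shrinks is what makes the trade-off between diffusive search and ballistic relocation, and hence an optimal intermittent strategy, possible.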
Random Walk on a Perturbation of the Infinitely-Fast Mixing Interchange Process
Salvi, Michele; Simenhaus, François
2018-03-01
We consider a random walk in dimension d ≥ 1 in a dynamic random environment evolving as an interchange process with rate γ > 0. We prove that, if we choose γ large enough, almost surely the empirical velocity of the walker X_t/t eventually lies in an arbitrarily small ball around the annealed drift. This statement is thus a perturbation of the case γ = +∞, where the environment is refreshed between each step of the walker. We extend, in three ways, part of the results of Huveneers and Simenhaus (Electron J Probab 20(105):42, 2015), where the environment was given by the 1-dimensional exclusion process: (i) we deal with any dimension d ≥ 1; (ii) we treat the much more general interchange process, where each particle carries a transition vector chosen according to an arbitrary law μ; (iii) we show that X_t/t is not only in the same direction as the annealed drift, but that it is also close to it.
International Nuclear Information System (INIS)
Musho, M.K.; Kozak, J.J.
1984-01-01
A method is presented for calculating exactly the relative width (σ²)^(1/2)/⟨n⟩, the skewness γ₁, and the kurtosis γ₂ characterizing the probability distribution function for three random-walk models of diffusion-controlled processes. For processes in which a diffusing coreactant A reacts irreversibly with a target molecule B situated at a reaction center, three models are considered. The first is the traditional one of an unbiased, nearest-neighbor random walk on a d-dimensional periodic/confining lattice with traps; the second involves the consideration of unbiased, non-nearest-neighbor (i.e., variable-step-length) walks on the same d-dimensional lattice; and the third deals with the case of a biased, nearest-neighbor walk on a d-dimensional lattice (wherein a walker experiences a potential centered at the deep trap site of the lattice). Our method, which has been described in detail elsewhere [P. A. Politowicz and J. J. Kozak, Phys. Rev. B 28, 5549 (1983)], is based on the use of group-theoretic arguments within the framework of the theory of finite Markov processes.
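For the simplest variant (an unbiased nearest-neighbour walk with a single deep trap), the finite-Markov-process route reduces to solving (I - Q)t = 1 over the transient states, Q being the substochastic transition matrix among non-trap sites. The sketch below does this on a one-dimensional ring, a toy geometry chosen because the exact mean walk length from site k, namely k(n - k), is known and can serve as a check.

```python
def mean_walk_length(n):
    """Mean number of steps for an unbiased nearest-neighbour walk on a
    ring of n sites to reach a single deep trap at site 0, from each
    starting site 1..n-1. Solves (I - Q) t = 1 by Gauss-Jordan
    elimination with partial pivoting.
    """
    m = n - 1                          # transient sites 1..n-1
    # augmented matrix [I - Q | 1]
    A = [[0.0] * m + [1.0] for _ in range(m)]
    for i in range(m):
        site = i + 1
        A[i][i] = 1.0
        for nb in ((site - 1) % n, (site + 1) % n):
            if nb != 0:                # the trap absorbs; skip it
                A[i][nb - 1] -= 0.5
    for c in range(m):
        p = max(range(c, m), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]        # pivot
        for r in range(m):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    return [A[i][m] / A[i][i] for i in range(m)]
```

The returned vector matches the closed form k(n - k) for every start site; higher moments (and hence the relative width, skewness, and kurtosis in the abstract) follow from the same fundamental matrix.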
Random Walk on a Perturbation of the Infinitely-Fast Mixing Interchange Process
Salvi, Michele; Simenhaus, François
2018-05-01
We consider a random walk in dimension d ≥ 1 in a dynamic random environment evolving as an interchange process with rate γ > 0. We prove that, if we choose γ large enough, almost surely the empirical velocity of the walker X_t/t eventually lies in an arbitrarily small ball around the annealed drift. This statement is thus a perturbation of the case γ = +∞, where the environment is refreshed between each step of the walker. We extend, in three ways, part of the results of Huveneers and Simenhaus (Electron J Probab 20(105):42, 2015), where the environment was given by the 1-dimensional exclusion process: (i) we deal with any dimension d ≥ 1; (ii) we treat the much more general interchange process, where each particle carries a transition vector chosen according to an arbitrary law μ; (iii) we show that X_t/t is not only in the same direction as the annealed drift, but that it is also close to it.
Quasi-steady-state analysis of two-dimensional random intermittent search processes
Bressloff, Paul C.; Newby, Jay M.
2011-01-01
We use perturbation methods to analyze a two-dimensional random intermittent search process, in which a searcher alternates between a diffusive search phase and a ballistic movement phase whose velocity direction is random. A hidden target is introduced within a rectangular domain with reflecting boundaries. If the searcher moves within range of the target and is in the search phase, it has a chance of detecting the target. A quasi-steady-state analysis is applied to the corresponding Chapman-Kolmogorov equation. This generates a reduced Fokker-Planck description of the search process involving a nonzero drift term and an anisotropic diffusion tensor. In the case of a uniform direction distribution, for which there is zero drift, and isotropic diffusion, we use the method of matched asymptotics to compute the mean first passage time (MFPT) to the target, under the assumption that the detection range of the target is much smaller than the size of the domain. We show that an optimal search strategy exists, consistent with previous studies of intermittent search in a radially symmetric domain that were based on a decoupling or moment closure approximation. We also show how the decoupling approximation can break down in the case of biased search processes. Finally, we analyze the MFPT in the case of anisotropic diffusion and find that anisotropy can be useful when the searcher starts from a fixed location. © 2011 American Physical Society.
Liao, Yuxi; She, Xiwei; Wang, Yiwen; Zhang, Shaomin; Zhang, Qiaosheng; Zheng, Xiaoxiang; Principe, Jose C.
2015-12-01
Objective. Representation of movement in the motor cortex (M1) has been widely studied in brain-machine interfaces (BMIs). The electromyogram (EMG) has greater bandwidth than the conventional kinematic variables (such as position and velocity), and is functionally related to the discharge of cortical neurons. As the stochastic information of EMG is derived from the explicit spike time structure, point process (PP) methods are a good solution for decoding EMG directly from neural spike trains. Previous studies usually assume linear or exponential tuning curves between neural firing and EMG, which may not be true. Approach. In our analysis, we estimate the tuning curves in a data-driven way and find both the traditional functional-excitatory and functional-inhibitory neurons, which are widely found across a rat's motor cortex. To accurately decode EMG envelopes from M1 neural spike trains, the Monte Carlo point process (MCPP) method is implemented based on such nonlinear tuning properties. Main results. Better reconstruction of EMG signals is shown on baseline and extreme high peaks, as our method can better preserve the nonlinearity of the neural tuning during decoding. The MCPP improves the prediction accuracy (normalized mean squared error) by 57% and 66% on average compared with the adaptive point process filter using linear and exponential tuning curves, respectively, for all 112 data segments across six rats. Compared to a Wiener filter using spike rates with an optimal window size of 50 ms, MCPP decoding of EMG from a point process improves the normalized mean square error (NMSE) by 59% on average. Significance. These results suggest that neural tuning is constantly changing during task execution and, therefore, the use of spike timing methodologies and estimation of appropriate tuning curves needs to be undertaken for better EMG decoding in motor BMIs.
Spherical particle Brownian motion in viscous medium as non-Markovian random process
International Nuclear Information System (INIS)
Morozov, Andrey N.; Skripkin, Alexey V.
2011-01-01
The Brownian motion of a spherical particle in an infinite medium is described by the conventional methods and integral transforms considering the entrainment of surrounding particles of the medium by the Brownian particle. It is demonstrated that fluctuations of the Brownian particle velocity represent a non-Markovian random process. The features of Brownian motion in short time intervals and in small displacements are considered. -- Highlights: → Description of Brownian motion considering the entrainment of medium is developed. → We find the equations for statistical characteristics of impulse fluctuations. → Brownian motion at small time intervals is considered. → Theoretical results and experimental data are compared.
Studies in astronomical time series analysis. I - Modeling random processes in the time domain
Scargle, J. D.
1981-01-01
Several random process models in the time domain are defined and discussed. Attention is given to the moving average model, the autoregressive model, and relationships between and combinations of these models. Consideration is then given to methods for investigating pulse structure, procedures for model construction, computational methods, and numerical experiments. A FORTRAN algorithm for time series analysis has been developed which is relatively stable numerically. Results of test cases are given to study the effect of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the light curve of the quasar 3C 272 is considered as an example.
An Efficient Randomized Algorithm for Real-Time Process Scheduling in PicOS Operating System
Helmy*, Tarek; Fatai, Anifowose; Sallam, El-Sayed
PicOS is an event-driven operating environment designed for use with embedded networked sensors. More specifically, it is designed to support the concurrency-intensive operations required by networked sensors with minimal hardware requirements. The existing process scheduling algorithms of PicOS, a commercial tiny, low-footprint, real-time operating system, have their associated drawbacks. An efficient alternative algorithm, based on a randomized selection policy, has been proposed, demonstrated, and confirmed to be efficient and fair on average, and has been recommended for implementation in PicOS. Simulations were carried out, and performance measures such as Average Waiting Time (AWT) and Average Turn-around Time (ATT) were used to assess the efficiency of the proposed randomized version over the existing ones. The results show that the randomized algorithm is the most attractive for implementation in PicOS, since it is the fairest and has the lowest AWT and ATT on average among the non-preemptive scheduling algorithms implemented in this paper.
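The AWT and ATT measures used to compare the schedulers can be illustrated with a toy non-preemptive simulation. This is a hypothetical sketch, not the PicOS implementation: it assumes all jobs are ready at t = 0 and that a job, once selected, runs to completion:

```python
import random

def simulate(jobs, policy, seed=0):
    """Non-preemptive scheduler over a list of CPU burst times.
    All jobs are assumed ready at t = 0 (a simplification).
    Returns (average waiting time, average turn-around time)."""
    rng = random.Random(seed)
    queue = list(jobs)
    t = 0.0
    waits, turnarounds = [], []
    while queue:
        if policy == "random":
            # randomized selection policy: pick any queued job uniformly
            job = queue.pop(rng.randrange(len(queue)))
        else:  # first-come, first-served
            job = queue.pop(0)
        waits.append(t)          # time this job spent waiting
        t += job                 # run to completion (non-preemptive)
        turnarounds.append(t)    # turn-around = wait + burst
    n = len(jobs)
    return sum(waits) / n, sum(turnarounds) / n

awt, att = simulate([3, 1, 4, 2], "random")
```

Note that for any policy ATT equals AWT plus the mean burst time, so the two measures rank non-preemptive policies identically when all jobs arrive together; differences emerge with staggered arrivals, which this sketch omits.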
Amako, Jun; Shinozaki, Yu
2016-07-11
We report on a dual-wavelength diffractive beam splitter designed for use in parallel laser processing. This novel optical element generates two beam arrays of different wavelengths and allows their overlap at the process points on a workpiece. To design the deep surface-relief profile of a splitter using a simulated annealing algorithm, we introduce a heuristic but practical scheme to determine the maximum depth and the number of quantization levels. The designed corrugations were fabricated in a photoresist by maskless grayscale exposure using a high-resolution spatial light modulator. We characterized the photoresist splitter, thereby validating the proposed beam-splitting concept.
Hübner, N-O; Fleßa, S; Haak, J; Wilke, F; Hübner, C; Dahms, C; Hoffmann, W; Kramer, A
2011-01-01
Recently, the HACCP (Hazard Analysis and Critical Control Points) concept was proposed as a possible way to implement process-based hygiene concepts in clinical practice, but the extent to which this food safety concept can be transferred to the health care setting is unclear. We therefore discuss possible ways to translate the principles of HACCP to health care settings. While a direct implementation of food processing concepts into health care is unlikely to be feasible and will probably not readily yield the intended results, the underlying principles of process orientation, in-process safety control and hazard-analysis-based countermeasures are transferable to clinical settings. In model projects the proposed concepts should be implemented, monitored, and evaluated under real-world conditions.
Le Bihan, Nicolas; Margerin, Ludovic
2009-07-01
In this paper, we present a nonparametric method to estimate the heterogeneity of a random medium from the angular distribution of intensity of waves transmitted through a slab of random material. Our approach is based on the modeling of forward multiple scattering using compound Poisson processes on compact Lie groups. The estimation technique is validated through numerical simulations based on radiative transfer theory.
Mass customization process for the Social Housing. Potentiality, critical points, research lines
Directory of Open Access Journals (Sweden)
Michele Di Sivo
2012-10-01
The demand for lengthening the life cycle of the residential estate, brought about by the economic and housing crisis of recent years, highlights over time the need for conservation and improvement works on housing performance, through the direct involvement of the users. The possibility of reducing maintenance and adjustment costs may become a design resource, consistent with the participation and cooperation principles that identify social housing interventions. With this aim, the BETHA group of the d’Annunzio University is investigating the potential for technological transfer of the ‘mass customization’ process from the industrial products field to the social housing segment, by identifying issues, strategies and opportunities.
Stochastic dynamical model of a growing citation network based on a self-exciting point process.
Golosovsky, Michael; Solomon, Sorin
2012-08-31
We put under experimental scrutiny the preferential attachment model that is commonly accepted as a generating mechanism of the scale-free complex networks. To this end we chose a citation network of physics papers and traced the citation history of 40,195 papers published in one year. Contrary to common belief, we find that the citation dynamics of the individual papers follows the superlinear preferential attachment, with the exponent α=1.25-1.3. Moreover, we show that the citation process cannot be described as a memoryless Markov chain since there is a substantial correlation between the present and recent citation rates of a paper. Based on our findings we construct a stochastic growth model of the citation network, perform numerical simulations based on this model and achieve an excellent agreement with the measured citation distributions.
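Superlinear preferential attachment of the kind reported above can be sketched with a toy growth model. This is illustrative only: the function name and the +1 attractiveness offset are assumptions, and the paper's actual model also includes the memory effects discussed in the abstract:

```python
import random

def grow_citations(n_papers, alpha=1.25, seed=1):
    """Toy superlinear preferential attachment: each new paper cites
    one existing paper chosen with probability proportional to
    (k + 1)**alpha, where k is that paper's citation count.
    The +1 offset lets uncited papers be chosen at all."""
    rng = random.Random(seed)
    counts = [0]  # citation counts; start with one seed paper
    for _ in range(n_papers - 1):
        weights = [(k + 1) ** alpha for k in counts]
        r = rng.uniform(0.0, sum(weights))
        acc = 0.0
        for i, w in enumerate(weights):
            acc += w
            if r <= acc:
                counts[i] += 1  # paper i receives the new citation
                break
        counts.append(0)        # the new paper enters uncited
    return counts

counts = grow_citations(500)
```

With alpha > 1 the citation distribution grows more top-heavy than the linear (alpha = 1) Barabási-Albert case, which is the qualitative signature the authors test against real citation histories.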
Pervasive randomness in physics: an introduction to its modelling and spectral characterisation
Howard, Roy
2017-10-01
An introduction to the modelling and spectral characterisation of random phenomena is detailed at a level consistent with a first exposure to the subject at an undergraduate level. A signal framework for defining a random process is provided and this underpins an introduction to common random processes including the Poisson point process, the random walk, the random telegraph signal, shot noise, information signalling random processes, jittered pulse trains, birth-death random processes and Markov chains. An introduction to the spectral characterisation of signals and random processes, via either an energy spectral density or a power spectral density, is detailed. The important case of defining a white noise random process concludes the paper.
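As an illustration of the first process listed above, a homogeneous Poisson point process can be simulated by summing i.i.d. exponential inter-arrival times (a standard construction; the function name is ours):

```python
import random

def poisson_process(rate, t_max, seed=42):
    """Event times of a homogeneous Poisson point process on
    (0, t_max]: successive inter-arrival times are independent
    exponentials with mean 1/rate."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(rate)  # next exponential gap
        if t > t_max:
            return events
        events.append(t)

events = poisson_process(rate=5.0, t_max=100.0)
# The count over [0, t_max] is Poisson with mean rate * t_max.
```

The same skeleton underlies several of the other processes the paper introduces, e.g. shot noise is obtained by attaching a pulse shape to each event time.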
Kumar, Ranjan; Mooventhan, A; Manjunath, Nandi Krishnamurthy
2017-08-01
Diabetes mellitus is a major global health problem. Needling at CV-12 has reduced blood glucose level in diabetic rats. The aim of this study was to evaluate the effect of needling at CV-12 (Zhongwan) on blood glucose level in patients with type 2 diabetes mellitus (T2DM). Forty T2DM patients were recruited and randomized into either the acupuncture group or placebo control group. The participants in the acupuncture group were needled at CV-12 (4 cun above the center of the umbilicus), and those in the placebo control group were needled at a placebo point on the right side of the abdomen (1 cun beside the CV-12). For both groups, the needle was retained for 30 minutes. Assessments were performed prior to and after the intervention. Statistical analysis was performed using SPSS version 16. There was a significant reduction in random blood glucose level in the acupuncture group compared to baseline. No such significant change was observed in the placebo control group. The result of this study suggests that 30 minutes of needling at CV-12 might be useful in reducing blood glucose level in patients with T2DM. Copyright © 2017. Published by Elsevier B.V.
Directory of Open Access Journals (Sweden)
Ying Yue
2015-01-01
Background. Acute carbon monoxide poisoning (ACOP) is a significant cause of morbidity and mortality in many countries. The Twelve Hand Jing Points (THJP) have been believed effective in treating all kinds of emergency conditions in traditional Chinese medicine (TCM) for more than 3000 years. This randomized controlled trial (RCT) is designed to evaluate the effectiveness of THJP in treating acute carbon monoxide poisoning in first aid treatment. This paper reports the protocol of the trial. Methods/Design. This RCT is a multicenter, randomized, controlled study under way in China. Compliant patients are divided into the bloodletting group and the standard-of-care group. With first aid treatments given to both groups, the bloodletting group receives bleeding at the THJP upon being hospitalized. Primary and secondary outcomes will be measured and compared between the two groups. Before treatment, immediately after treatment, and 30 minutes, 1 hour, and 4 hours after treatment, patients’ basic vital signs and state of consciousness are observed. Before treatment and 1 and 4 hours after treatment, carboxyhemoglobin concentration in venous blood samples is measured. Discussion. The objective of this study is to provide convincing evidence to clarify the efficacy and safety of THJP for early treatment of acute carbon monoxide poisoning.
PET and diagnostic technology evaluation in a global clinical process. DGN's point of view
International Nuclear Information System (INIS)
Kotzerke, J.; Dietlein, M.; Gruenwald, F.; Bockisch, A.
2010-01-01
The German Society of Nuclear Medicine (DGN) criticizes the methodological approach of the IQWiG for the evaluation of PET and its conclusions, which represent the opposite point of view to that of most other European countries and health companies in the USA: (1) Real integration of experienced physicians into the interpretation of data and the evaluation of effectiveness should be used for best possible reporting, instead of only a formal hearing. (2) Data from the National Oncologic PET Registry (NOPR) in the USA have shown that PET changed the therapeutic management in 38% of patients. (3) The decision of the IQWiG to accept only outcome data for its benefit analyses is controversial. Medical knowledge is generated by different methods, and a current analysis of scientific guidelines has shown that only 15% of all guidelines are based on the level of evidence demanded by the IQWiG. Health economics has created different assessment methods for the evaluation of a diagnostic procedure. The strategy chosen by the IQWiG overestimates the perspective of the population and undervalues the benefit for the individual patient. (4) PET evaluates the effectiveness of a therapeutic procedure, but does not create an effective therapy. When the predictive value of PET is already implemented in a specific study design and the result of PET defines a specific management, the trial evaluates the whole algorithm, and PET is only part of this algorithm. When PET is implemented as a test during chemotherapy or at the end of chemotherapy, the predictive value of PET will depend decisively on the effectiveness of the therapy: the better the therapy, the smaller the differences in survival detected by PET. (5) The significance of optimal staging by the integration of PET will increase. The rationale is the current trend towards ''titration'' of chemotherapy intensity and radiation dose to the lowest possible, just-about-effective dosage. (6) The medical therapy of
The emergence of typical entanglement in two-party random processes
International Nuclear Information System (INIS)
Dahlsten, O C O; Oliveira, R; Plenio, M B
2007-01-01
We investigate the entanglement within a system undergoing a random, local process. We find that there is initially a phase of very fast generation and spread of entanglement. At the end of this phase the entanglement is typically maximal. In Oliveira et al (2007 Phys. Rev. Lett. 98 130502) we proved that the maximal entanglement is reached to a fixed arbitrary accuracy within O(N^3) steps, where N is the total number of qubits. Here we provide a detailed and more pedagogical proof. We demonstrate that one can use the so-called stabilizer gates to simulate this process efficiently on a classical computer. Furthermore, we discuss three ways of identifying the transition from the phase of rapid spread of entanglement to the stationary phase: (i) the time when saturation of the maximal entanglement is achieved, (ii) the cutoff moment, when the entanglement probability distribution is practically stationary, and (iii) the moment block entanglement exhibits volume scaling. We furthermore investigate the mixed state and multipartite setting. Numerically, we find that the mutual information appears to behave similarly to the quantum correlations and that there is a well-behaved phase-space flow of entanglement properties towards an equilibrium. We describe how the emergence of typical entanglement can be used to create a much simpler tripartite entanglement description. The results form a bridge between certain abstract results concerning typical (also known as generic) entanglement relative to an unbiased distribution on pure states and the more physical picture of distributions emerging from random local interactions.
Silva Oliveira, Tatiana de Alencar; Maria, Tatiane de Oliveira Silva; Alves do Nascimento, Angela Maria; do Nascimento, Angela Alves
2011-09-01
The scope of this study was to discuss the organization of the pharmaceutical assistance service in the family healthcare program. Qualitative research from a critical/analytical perspective was conducted in family healthcare units in a municipality of the state of Bahia, Brazil. Data were collected on the basis of systematic observation, semi-structured interviews and document analysis from a dialectic standpoint. The organization of pharmaceutical assistance consisted of selection, planning, acquisition, storage and dispensing activities. The process was studied in the implementation phase, which was occurring in a centralized and uncoordinated fashion, without the proposed teamwork. An excess of activity was observed among the healthcare workers, and there was no continuing education policy for them. To transform this situation and to ensure the organization of pharmaceutical assistance with quality and in an integrated manner, a reworking of the thinking and actions of the players concerned (managers, health workers and users), who participate directly in the organization, is necessary. Furthermore, mechanical, bureaucratic and impersonal work practices need to be abandoned.
Aftershock identification problem via the nearest-neighbor analysis for marked point processes
Gabrielov, A.; Zaliapin, I.; Wong, H.; Keilis-Borok, V.
2007-12-01
The centennial observations of world seismicity have revealed a wide variety of clustering phenomena that unfold in the space-time-energy domain and provide the most reliable information about earthquake dynamics. However, there is neither a unifying theory nor a convenient statistical apparatus that would naturally account for the different types of seismic clustering. In this talk we present a theoretical framework for nearest-neighbor analysis of marked processes and obtain new results on the hierarchical approach to studying seismic clustering introduced by Baiesi and Paczuski (2004). Recall that under this approach one defines an asymmetric distance D in the space-time-energy domain such that the nearest-neighbor spanning graph with respect to D becomes a time-oriented tree. We demonstrate how this approach can be used to detect earthquake clustering. We apply our analysis to the observed seismicity of California and to synthetic catalogs from the ETAS model, and show that the earthquake clustering part is statistically different from the homogeneous part. This finding may serve as a basis for an objective aftershock identification procedure.
Archiving, sharing, processing and publishing historical earthquakes data: the IT point of view
Locati, Mario; Rovida, Andrea; Albini, Paola
2014-05-01
Digital tools devised for seismological data are mostly designed for handling instrumentally recorded data. Researchers working on historical seismology are forced to perform their daily work using general-purpose tools and/or to code their own to address their specific tasks. The lack of out-of-the-box tools expressly conceived to deal with historical data leads to a huge amount of time lost performing tedious tasks to search for the data and to manually reformat it in order to jump from one tool to the other, sometimes causing a loss of the original data. This reality is common to all activities related to the study of earthquakes of past centuries, from the interpretation of historical sources to the compilation of earthquake catalogues. A platform able to preserve historical earthquake data, trace back their sources, and fulfil many common tasks was very much needed. In the framework of two European projects (NERIES and SHARE) and one global project (Global Earthquake History, GEM), two new data portals were designed and implemented. The European portal "Archive of Historical Earthquakes Data" (AHEAD) and the worldwide "Global Historical Earthquake Archive" (GHEA) are aimed at addressing at least some of the above-mentioned issues. The availability of these new portals and their well-defined standards makes the development of side tools for archiving, publishing and processing the available historical earthquake data easier than before. The AHEAD and GHEA portals, their underlying technologies and the developed side tools are presented.
Bednaršek, Nina; Tarling, Geraint A.; Bakker, Dorothee C. E.; Fielding, Sophie; Feely, Richard A.
2014-01-01
Thecosome pteropods are abundant upper-ocean zooplankton that build aragonite shells. Ocean acidification results in the lowering of aragonite saturation levels in the surface layers, and several incubation studies have shown that rates of calcification in these organisms decrease as a result. This study provides a weight-specific net calcification rate function for thecosome pteropods that includes both rates of dissolution and calcification over a range of plausible future aragonite saturation states (Ωar). We measured gross dissolution in the pteropod Limacina helicina antarctica in the Scotia Sea (Southern Ocean) by incubating living specimens across a range of aragonite saturation states for a maximum of 14 days. Specimens started dissolving almost immediately upon exposure to undersaturated conditions (Ωar∼0.8), losing 1.4% of shell mass per day. The observed rate of gross dissolution was different from that predicted by rate law kinetics of aragonite dissolution, in being higher at Ωar levels slightly above 1 and lower at Ωar levels of between 1 and 0.8. This indicates that shell mass is affected by even transitional levels of saturation, but there is, nevertheless, some partial means of protection for shells when in undersaturated conditions. A function for gross dissolution against Ωar derived from the present observations was compared to a function for gross calcification derived by a different study, and showed that dissolution became the dominating process even at Ωar levels close to 1, with net shell growth ceasing at an Ωar of 1.03. Gross dissolution increasingly dominated net change in shell mass as saturation levels decreased below 1. As well as influencing their viability, such dissolution of pteropod shells in the surface layers will result in slower sinking velocities and decreased carbon and carbonate fluxes to the deep ocean. PMID:25285916
Moraska, Albert F.; Stenerson, Lea; Butryn, Nathan; Krutsch, Jason P.; Schmiege, Sarah J.; Mann, J. Douglas
2014-01-01
Objective Myofascial trigger points (MTrPs) are focal disruptions in skeletal muscle that can refer pain to the head and reproduce the pain patterns of tension-type headache (TTH). The present study applied massage focused on MTrPs of subjects with TTH in a placebo-controlled clinical trial to assess efficacy in reducing headache pain. Methods Fifty-six subjects with TTH were randomized to receive 12 massage or placebo (detuned ultrasound) sessions over six weeks, or to wait-list. Trigger point release (TPR) massage focused on MTrPs in cervical musculature. Headache pain (frequency, intensity and duration) was recorded in a daily headache diary. Additional outcome measures included self-report of perceived clinical change in headache pain and pressure-pain threshold (PPT) at MTrPs in the upper trapezius and sub-occipital muscles. Results From diary recordings, group differences across time were detected in headache frequency (p=0.026), but not for intensity or duration. Post hoc analysis indicated headache frequency decreased from baseline for both massage and placebo groups; a greater perceived clinical change in headache pain was reported for massage than for the placebo or wait-list groups (p=0.002). PPT improved in all muscles tested for massage only. These findings suggest that 1) massage focused on MTrPs can benefit the treatment of TTH, and 2) TTH, like other chronic conditions, is responsive to placebo. Clinical trials on headache that do not include a placebo group are at risk of overestimating the specific contribution from the active intervention. PMID:25329141
Wang, Fei; Zhang, Lijuan; Wang, Jianhua; Shi, Yan; Zheng, Liya
2015-08-01
To evaluate the efficacy on hemiplegic spasticity after cerebral infarction of plum blossom needle tapping therapy at the key points combined with Bobath therapy. Eighty patients in compliance with the inclusion criteria for hemiplegic spasticity after cerebral infarction were collected and randomized into an observation group and a control group, 40 cases in each. In the control group, Bobath manipulation therapy was adopted to relieve spasticity, and treatment for 8 weeks was required. In the observation group, on the basis of the same treatment as the control group, tapping therapy with the plum blossom needle was applied to the key points, namely Jianyu (LI 15), Jianliao (LI 14), Jianzhen (SI 9), Hegu (LI 4), Chengfu (BL 36), Zusanli (ST 36), Xiyangguan (GB 33), etc. The treatment was given for 15 min each time, once a day. Before treatment and after 4 and 8 weeks of treatment, the Fugl-Meyer assessment (FMA) and Barthel index (BI) were adopted to evaluate the motor function of the extremity and the activities of daily life in the patients of the two groups separately. The modified Ashworth scale was used to evaluate the anti-spasticity effect. After 4 and 8 weeks of treatment, FMA scores and BI scores were significantly increased in both groups as compared with those before treatment. Tapping at the key points combined with Bobath therapy effectively relieves hemiplegic spasticity in patients with cerebral infarction and improves the motor function of the extremity and the activities of daily life.
DEFF Research Database (Denmark)
Lilliedal, Mathilde Raad; Medford, Andrew James; Vesterager Madsen, Morten
2010-01-01
Inflection point behaviour is often observed in the current–voltage (IV) curve of polymer solar cells. This phenomenon is examined in the context of flexible roll-to-roll (R2R) processed polymer solar cells in a large series of devices with a layer structure of PET–ITO–ZnO–P3HT:PCBM–PEDOT:PSS–Ag. The devices were manufactured using a combination of slot-die coating and screen printing; they were then encapsulated by lamination using a polymer based barrier material. All manufacturing steps were carried out in ambient air. The freshly prepared devices showed a consistent inflection point in the IV curve. Characterization of device interfaces was carried out in order to identify possible chemical processes that are related to photo-annealing. A possible mechanism based on ZnO photoconductivity, photooxidation and redistribution of oxygen inside the cell is proposed, and it is anticipated that the findings …
International Nuclear Information System (INIS)
Steinbach, E.
1987-01-01
The cellular model of a dislocation is used for an investigation of the time-dependent diffusion process of irradiation-induced point defects interacting with the stress field of a moving dislocation. An analytic solution is given taking into account the elastic interaction due to the first-order size effect and the stress-induced interaction, the kinematic interaction due to the dislocation motion as well as the presence of secondary neutral sinks. The results for the space and time-dependent point defect concentration, represented in terms of Mathieu-Bessel and Mathieu-Hankel functions, emphasize the influence of the parameters which have been taken into consideration. Proceeding from these solutions, formulae for the diffusion flux reaching unit length of the dislocation, which plays an important role with regard to void swelling and irradiation-induced creep, are derived
Campbell, Ruth; Capek, Cheryl M; Gazarian, Karine; MacSweeney, Mairéad; Woll, Bencie; David, Anthony S; McGuire, Philip K; Brammer, Michael J
2011-09-01
In this study, the first to explore the cortical correlates of signed language (SL) processing under point-light display conditions, the observer identified either a signer or a lexical sign from a display in which different signers were seen producing a number of different individual signs. Many of the regions activated by point-light under these conditions replicated those previously reported for full-image displays, including regions within the inferior temporal cortex that are specialised for face and body-part identification, although such body parts were invisible in the display. Right frontal regions were also recruited - a pattern not usually seen in full-image SL processing. This activation may reflect the recruitment of information about person identity from the reduced display. A direct comparison of identify-signer and identify-sign conditions showed these tasks relied to a different extent on the posterior inferior regions. Signer identification elicited greater activation than sign identification in (bilateral) inferior temporal gyri (BA 37/19), fusiform gyri (BA 37), middle and posterior portions of the middle temporal gyri (BAs 37 and 19), and superior temporal gyri (BA 22 and 42). Right inferior frontal cortex was a further focus of differential activation (signer>sign). These findings suggest that the neural systems supporting point-light displays for the processing of SL rely on a cortical network including areas of the inferior temporal cortex specialized for face and body identification. While this might be predicted from other studies of whole body point-light actions (Vaina, Solomon, Chowdhury, Sinha, & Belliveau, 2001) it is not predicted from the perspective of spoken language processing, where voice characteristics and speech content recruit distinct cortical regions (Stevens, 2004) in addition to a common network. In this respect, our findings contrast with studies of voice/speech recognition (Von Kriegstein, Kleinschmidt, Sterzer
Neutron-rich isotopes around the r-process 'waiting-point' nuclei 79Cu (Z=29, N=50) and 80Zn (Z=30, N=50)
International Nuclear Information System (INIS)
Kratz, K.L.; Gabelmann, H.; Pfeiffer, B.; Woehr, A.
1991-01-01
Beta-decay half-lives (T1/2) and delayed-neutron emission probabilities (Pn) of very neutron-rich Cu to As nuclei have been measured, among them the new isotopes 77Cu (N=48), 79Cu (N=50), 81Zn (N=51) and 84Ga (N=53). With the T1/2 and Pn values of four N≅50 'waiting-point' nuclei now known, our hypothesis that the r-process has attained a local β-flow equilibrium around A≅80 is further strengthened. (orig.)
International Nuclear Information System (INIS)
Haq, I.; Nawaz, A.; Mukhtar, A.N.H.; Mansoor, H.M.Z.; Ameer, S.M.
2014-01-01
The study deals with the improvement of the wild strain Aspergillus niger IIB-31 through random mutagenesis using chemical mutagens. The main aim of the work was to enhance the glucose oxidase (GOX) yield of the wild strain (24.57 ± 0.01 U/g of cell mass) through random mutagenesis and process optimization. The wild strain of Aspergillus niger IIB-31 was treated with chemical mutagens such as ethyl methane sulphonate (EMS) and nitrous acid for this purpose. Of the mutagen-treated variants, 98 showing positive results were picked and screened for glucose oxidase production using submerged fermentation. The EMS-treated mutant strain E45 gave the highest glucose oxidase production (69.47 ± 0.01 U/g of cell mass), approximately 3-fold greater than the wild strain IIB-31. The preliminary cultural conditions for the production of glucose oxidase using submerged fermentation by strain E45 were also optimized. The highest yield of GOX was obtained using 8% glucose as the carbon source and 0.3% peptone as the nitrogen source at a medium pH of 7.0 after an incubation period of 72 h at 30 °C. (author)
Borri, Claudia; Paggi, Marco
2015-02-01
The random process theory (RPT) has been widely applied to predict the joint probability distribution functions (PDFs) of asperity heights and curvatures of rough surfaces. A check of the predictions of RPT against the actual statistics of numerically generated random fractal surfaces and of real rough surfaces has been only partially undertaken. The present experimental and numerical study provides a deep critical comparison on this matter, providing some insight into the capabilities and limitations in applying RPT and fractal modeling to antireflective and hydrophobic rough surfaces, two important types of textured surfaces. A multi-resolution experimental campaign using a confocal profilometer with different lenses is carried out and a comprehensive software for the statistical description of rough surfaces is developed. It is found that the topology of the analyzed textured surfaces cannot be fully described according to RPT and fractal modeling. The following complexities emerge: (i) the presence of cut-offs or bi-fractality in the power-law power-spectral density (PSD) functions; (ii) a more pronounced shift of the PSD by changing resolution as compared to what was expected from fractal modeling; (iii) inaccuracy of the RPT in describing the joint PDFs of asperity heights and curvatures of textured surfaces; (iv) lack of resolution-invariance of joint PDFs of textured surfaces in case of special surface treatments, not accounted for by fractal modeling.
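The power-law PSD analysis described above rests on estimating the power spectral density of measured profiles. A minimal 1-D sketch via the FFT follows (the function name is ours; windowing and ensemble averaging over scan lines, which a real roughness analysis would include, are omitted):

```python
import numpy as np

def psd_1d(profile, dx=1.0):
    """One-sided power spectral density estimate of a 1-D surface
    profile sampled at spacing dx. Returns (spatial frequencies, PSD).
    The mean height is removed so the zero-frequency bin is ~0."""
    z = np.asarray(profile, dtype=float)
    z = z - z.mean()
    n = len(z)
    spec = np.fft.rfft(z)
    freqs = np.fft.rfftfreq(n, d=dx)
    psd = (np.abs(spec) ** 2) * dx / n  # periodogram normalization
    return freqs, psd
```

Fitting a power law to such a PSD over the resolved frequency band, and checking how the curve shifts as the lens resolution changes, is exactly the kind of multi-resolution comparison the study performs to expose cut-offs and bi-fractality.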
de Jorge, Mercedes; Parra, Sonia; de la Torre-Aboki, Jenny; Herrero-Beaumont, Gabriel
2015-08-01
Patients in randomized clinical trials have to adapt themselves to a restricted language to capture the information necessary to determine the safety and efficacy of a new treatment. The aim of this study was to explore the experience of patients with rheumatoid arthritis after completing their participation in a biologic therapy randomized clinical trial over a period of 3 years. A qualitative approach was used. The information was collected through 15 semi-structured interviews of patients with rheumatoid arthritis. Data collection was guided by the emergent analysis until no more relevant variations in the categories were found. The data were analysed using the grounded theory method. The objective of the patients when entering the study was to improve their quality of life by initiating the treatment. However, the experience changed the significance of the illness as they acquired skills and practical knowledge related to the management of their disease. The category "Interactional Empowerment" emerged as the core category, as it represented the participative experience in a clinical trial. The process integrates the following categories: "weight of systematisation", "working together", and the significance of the experience: "the duties". These categories evolved simultaneously. The clinical trial monitoring activities enabled patients to engage in a reflexive-interpretative mechanism that transformed the emotional and symbolic significance of their disease and improved their empowerment. A better communicative strategy with the health professionals, the relatives of the patients, and the community was also achieved.
Nixon, Reginald D V
2012-12-01
The study tested the efficacy and tolerability of cognitive processing therapy (CPT) for survivors of assault with acute stress disorder. Participants (N = 30) were randomly allocated to CPT or supportive counseling. Therapy comprised six individual weekly sessions of 90 min duration. An independent diagnostic assessment for PTSD was conducted at posttreatment. Participants completed self-report measures of posttraumatic stress, depression, and negative trauma-related beliefs at pre- and posttreatment and at 6-month follow-up. Results indicated that both interventions were successful in reducing symptoms at posttreatment, with no statistically significant difference between the two; however, within- and between-group effect sizes and the proportion of participants not meeting PTSD criteria were greater in CPT. Treatment gains were maintained for both groups at 6-month follow-up. Copyright © 2012. Published by Elsevier Ltd.
Oh, Mi Sun; Yu, Kyung-Ho; Hong, Keun-Sik; Kang, Dong-Wha; Park, Jong-Moo; Bae, Hee-Joon; Koo, Jaseong; Lee, Juneyoung; Lee, Byung-Chul
2015-07-01
To assess the efficacy and safety of modest blood pressure (BP) reduction with valsartan within 48 h after symptom onset in patients with acute ischemic stroke and high BP. This was a multicenter, prospective, randomized, open-label, blinded-end-point trial. A total of 393 subjects were recruited at 28 centers and randomly assigned in a 1:1 ratio to receive valsartan (n = 195) or no treatment (n = 198) for seven days after presentation. The primary outcome was death or dependency, defined as a score of 3-6 on the modified Rankin Scale (mRS) at 90 days after symptom onset. Early neurological deterioration (END) within seven days and 90-day major vascular events were also assessed. A total of 372 patients completed the 90-day follow-up. The valsartan group had 46 of 187 patients (24.6%) with a 90-day mRS of 3-6, compared with 42 of 185 patients (22.6%) in the control group (odds ratio [OR], 1.11; 95% confidence interval [CI], 0.69-1.79; P = 0.667). The rate of major vascular events did not differ between groups (OR, 1.41; 95% CI, 0.44-4.49; P = 0.771). There was a significant increase in END in the valsartan group (OR, 2.43; 95% CI, 1.25-4.73; P = 0.008). Early reduction of BP with valsartan did not reduce death or dependency or major vascular events at 90 days, but increased the risk of END. © 2015 World Stroke Organization.
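The primary-outcome odds ratio and confidence interval above can be reproduced from the reported event counts with the standard Wald construction on the log scale; this is a generic formula, not code from the trial.

```python
import math

def odds_ratio_ci(events_a, total_a, events_b, total_b, z=1.96):
    """Odds ratio of group A vs group B with a Wald (log-scale) confidence interval."""
    a, b = events_a, total_a - events_a         # group A: events / non-events
    c, d = events_b, total_b - events_b         # group B: events / non-events
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, math.exp(math.log(or_) - z * se), math.exp(math.log(or_) + z * se)

# Primary-outcome counts reported in the abstract: 46/187 valsartan vs 42/185 control
or_, lo, hi = odds_ratio_ci(46, 187, 42, 185)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # 1.11 0.69 1.79
```

The result matches the abstract's OR 1.11 (95% CI, 0.69-1.79).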
Rathkopf, Dana E; Beer, Tomasz M; Loriot, Yohann; Higano, Celestia S; Armstrong, Andrew J; Sternberg, Cora N; de Bono, Johann S; Tombal, Bertrand; Parli, Teresa; Bhattacharya, Suman; Phung, De; Krivoshik, Andrew; Scher, Howard I; Morris, Michael J
2018-05-01
Drug development for metastatic castration-resistant prostate cancer has been limited by a lack of clinically relevant trial end points short of overall survival (OS). Radiographic progression-free survival (rPFS) as defined by the Prostate Cancer Clinical Trials Working Group 2 (PCWG2) is a candidate end point that represents a clinically meaningful benefit to patients. To demonstrate the robustness of the PCWG2 definition and to examine the relationship between rPFS and OS. PREVAIL was a phase 3, randomized, double-blind, placebo-controlled multinational study that enrolled 1717 chemotherapy-naive men with metastatic castration-resistant prostate cancer from September 2010 through September 2012. The data were analyzed in November 2016. Patients were randomized 1:1 to enzalutamide 160 mg or placebo until confirmed radiographic disease progression or a skeletal-related event and initiation of either cytotoxic chemotherapy or an investigational agent for prostate cancer treatment. Sensitivity analyses (SAs) of investigator-assessed rPFS were performed using the final rPFS data cutoff (May 6, 2012; 439 events; SA1) and the interim OS data cutoff (September 16, 2013; 540 events; SA2). Additional SAs using investigator-assessed rPFS from the final rPFS data cutoff assessed the impact of skeletal-related events (SA3), clinical progression (SA4), a confirmatory scan for soft-tissue disease progression (SA5), and all deaths regardless of time after study drug discontinuation (SA6). Correlations between investigator-assessed rPFS (SA2) and OS were calculated using Spearman ρ and Kendall τ via Clayton copula. In the 1717 men (mean age, 72.0 [range, 43.0-93.0] years in enzalutamide arm and 71.0 [range, 42.0-93.0] years in placebo arm), enzalutamide significantly reduced risk of radiographic progression or death in all SAs, with hazard ratios of 0.22 (SA1; 95% CI, 0.18-0.27), 0.31 (SA2; 95% CI, 0.27-0.35), 0.21 (SA3; 95% CI, 0.18-0.26), 0.21 (SA4; 95% CI, 0.17-0.26), 0
Directory of Open Access Journals (Sweden)
Carles Comas
2015-04-01
Aim of study: Understanding inter- and intra-specific competition for water is crucial in drought-prone environments. However, little is known about the spatial interdependencies for water uptake among individuals in mixed stands. The aim of this work was to compare water uptake patterns during a drought episode in two common Mediterranean tree species, Quercus ilex L. and Pinus halepensis Mill., using the isotope composition of xylem water (δ18O, δ2H) as a hydrological marker. Area of study: The study was performed in a mixed stand, sampling a total of 33 oaks and 78 pines (plot area = 888 m²). We tested the hypothesis that both species take up water differentially along the soil profile, thus showing different levels of tree-to-tree interdependency, depending on whether neighbouring trees belong to one species or the other. Material and methods: We used pair-correlation functions to study intra-specific point-tree configurations and the bivariate pair-correlation function to analyse the inter-specific spatial configuration. Moreover, the isotopic composition of xylem water was analysed as a mark point pattern. Main results: Values for Q. ilex (δ18O = –5.3 ± 0.2‰, δ2H = –54.3 ± 0.7‰) were significantly lower than for P. halepensis (δ18O = –1.2 ± 0.2‰, δ2H = –25.1 ± 0.8‰), pointing to a greater contribution of deeper soil layers to water uptake by Q. ilex. Research highlights: Point-process analyses revealed spatial intra-specific dependencies among neighbouring pines, showing neither oak-oak nor oak-pine interactions. This supports niche segregation for water uptake between the two species.
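A minimal version of the univariate pair-correlation estimate used in such analyses is sketched below on a synthetic pattern. It deliberately omits the edge corrections, kernel smoothing, and bivariate/marked extensions that the study's analysis would use (e.g. as implemented in spatstat).

```python
import numpy as np

def pair_correlation(points, area, r_edges):
    """Naive pair-correlation estimate g(r): observed inter-point distances
    per ring divided by the count expected under complete spatial randomness.
    Edge effects are ignored, so g is biased slightly downward near the
    window boundary; production analyses (e.g. spatstat's pcf) correct this."""
    n = len(points)
    lam = n / area
    d = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))
    d = d[np.triu_indices(n, k=1)]                  # each unordered pair once
    counts, _ = np.histogram(d, bins=r_edges)
    ring_area = np.pi * (r_edges[1:] ** 2 - r_edges[:-1] ** 2)
    expected = 0.5 * n * lam * ring_area            # CSR expectation
    return counts / expected

rng = np.random.default_rng(1)
pts = rng.uniform(0, 30, size=(800, 2))             # CSR pattern, 30 m window
g = pair_correlation(pts, area=900.0, r_edges=np.linspace(0.5, 4.0, 8))
# for a homogeneous Poisson pattern g(r) stays close to 1; clustering would
# push it above 1 at short range, inhibition below 1
```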
Mallik, Tanuja; Aneja, S; Tope, R; Muralidhar, V
2012-01-01
Background: In the administration of minimal flow anesthesia, traditionally a fixed period of high flow has been used before changing over to minimal flow. However, newer studies have used the "equilibration time" of a volatile anesthetic agent as the change-over point. Materials and Methods: A randomized prospective study was conducted on 60 patients, who were divided into two groups of 30 patients each. Two volatile inhalational anesthetic agents were compared: group I received desflurane (n = 30) and group II isoflurane (n = 30). Both groups received an initial high flow until equilibration between inspired (Fi) and expired (Fe) agent concentrations was achieved, defined as Fe/Fi = 0.8. The mean (SD) equilibration time was obtained for both agents. The drift in end-tidal agent concentration during minimal flow anesthesia and the recovery profile were then noted. Results: The mean equilibration times for desflurane and isoflurane were 4.96 ± 1.60 and 16.96 ± 9.64 min, respectively (P < 0.001). The drift in end-tidal agent concentration over time was minimal in the desflurane group (P = 0.065). Recovery time was 5.70 ± 2.78 min in the desflurane group and 8.06 ± 31 min in the isoflurane group (P = 0.004). Conclusion: Use of the equilibration time of the volatile anesthetic agent as the change-over point from high flow to minimal flow can help administer minimal flow anesthesia more efficiently. PMID:23225926
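Detecting the Fe/Fi = 0.8 change-over point from monitored concentrations is a simple threshold crossing. The sketch below uses a hypothetical exponential wash-in curve; the time constant is illustrative, not a measured pharmacokinetic value for desflurane or isoflurane.

```python
import numpy as np

def equilibration_time(t, fi, fe, ratio=0.8):
    """First time at which the end-tidal/inspired ratio Fe/Fi reaches `ratio`."""
    idx = np.nonzero(fe / fi >= ratio)[0]
    return float(t[idx[0]]) if idx.size else None

# Hypothetical exponential wash-in Fe(t) = Fi*(1 - exp(-t/tau)); tau = 3 min is
# illustrative, not a constant for any specific agent.
t = np.arange(0.0, 30.0, 0.1)            # minutes
fi = np.full_like(t, 6.0)                # constant inspired concentration (vol%)
fe = fi * (1.0 - np.exp(-t / 3.0))
t_eq = equilibration_time(t, fi, fe)
# analytically Fe/Fi = 0.8 at t = 3*ln(5) ≈ 4.83 min, so t_eq ≈ 4.9 on this grid
```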
Dias, Karin Ziliotto; Jutras, Benoît; Acrani, Isabela Olszanski; Pereira, Liliane Desgualdo
2012-02-01
The aim of the present study was to assess auditory temporal resolution in individuals with central auditory processing disorder, to examine the maturation effect, and to investigate the relationship between performance on a temporal resolution test and performance on other central auditory tests. Participants were divided into two groups: 131 with central auditory processing disorder and 94 with normal auditory processing. They had pure-tone air-conduction thresholds no poorer than 15 dB HL bilaterally, normal admittance measures, and presence of acoustic reflexes. They were also assessed with a central auditory test battery. Participants who failed one or more tests were included in the central auditory processing disorder group, and those in the control group obtained normal performance on all tests. Following the auditory processing assessment, the Random Gap Detection Test was administered to the participants. A three-way ANOVA was performed. Correlation analyses were also conducted among the four Random Gap Detection Test subtests, as well as between Random Gap Detection Test data and the other auditory processing test results. There was a significant difference between the age-group performances in children with and without central auditory processing disorder. Also, 48% of children with central auditory processing disorder failed the Random Gap Detection Test, and the percentage decreased as a function of age. The highest percentage (86%) was found in the 5-6-year-old children. Furthermore, results revealed a strong significant correlation among the four Random Gap Detection Test subtests. There was a modest correlation between the Random Gap Detection Test results and the dichotic listening tests. No significant correlation was observed between the Random Gap Detection Test data and the results of the other tests in the battery. The Random Gap Detection Test should not be administered to children younger than 7 years old because
Nouchi, Rui; Taki, Yasuyuki; Takeuchi, Hikaru; Hashizume, Hiroshi; Akitsuki, Yuko; Shigemune, Yayoi; Sekiguchi, Atsushi; Kotozaki, Yuka; Tsukiura, Takashi; Yomogida, Yukihito; Kawashima, Ryuta
2012-01-01
The beneficial effects of brain training games are expected to transfer to other cognitive functions, but these beneficial effects are poorly understood. Here we investigate the impact of a brain training game (Brain Age) on cognitive functions in the elderly. Thirty-two elderly volunteers were recruited through an advertisement in the local newspaper and randomly assigned to one of two game groups (Brain Age, Tetris). This study was completed by 14 of the 16 members in the Brain Age group and 14 of the 16 members in the Tetris group. To maximize the benefit of the interventions, all participants were non-gamers who reported playing less than one hour of video games per week over the past 2 years. Participants in both the Brain Age and the Tetris groups played their game for about 15 minutes per day, at least 5 days per week, for 4 weeks. Each group played for a total of about 20 days. Measures of cognitive functions were conducted before and after training and fell into four categories (global cognitive status, executive functions, attention, and processing speed). Results showed that the effects of the brain training game transferred to executive functions and processing speed. However, the brain training game showed no transfer effect on global cognitive status or attention. Our results showed that playing Brain Age for 4 weeks can improve cognitive functions (executive functions and processing speed) in the elderly. This indicates that the elderly may be able to improve executive functions and processing speed with short-term training. The results need replication in large samples. Long-term effects and relevance for everyday functioning remain uncertain. UMIN Clinical Trial Registry 000002825.
Sasaki, Taro; Endoh, Tetsuo
2018-04-01
In this paper, from the viewpoint of cell size and sensing margin, the impact of a novel cross-point-type one-transistor, one-magnetic-tunnel-junction (1T-1MTJ) spin-transfer-torque magnetoresistive random access memory (STT-MRAM) cell with a multi-pillar vertical body channel (BC) MOSFET is shown for high-density, wide-sensing-margin STT-MRAM, with a 10 ns writing period and 1.2 V V_DD. For that purpose, all combinations of n/p-type MOSFETs and bottom/top-pin MTJs are compared, where the diameter of the MTJ (D_MTJ) is scaled down from 55 to 15 nm and the tunnel magnetoresistance (TMR) ratio is increased from 100 to 200%. The results show that, benefiting from the proposed STT-MRAM cell having no back-bias effect, an MTJ with a high TMR ratio (200%) can be used in the design of smaller STT-MRAM cells (over 72.6% cell size reduction), which is a difficult task for conventional planar-MOSFET-based designs.
Directory of Open Access Journals (Sweden)
Pavlos Bobos
2016-01-01
Background. We need to understand more about how deep neck flexor (DNF) training performs in different contexts and whether it affects the pain threshold over myofascial trigger points (MTrPs). Purpose. The objectives were to investigate the effect of neck muscle training on disability and pain and on the pain threshold over MTrPs in people with chronic neck pain. Methods. Patients with chronic neck pain were eligible for participation with a Neck Disability Index (NDI) score over 5/50 and at least one MTrP on the levator scapulae, upper trapezius, or splenius capitis muscle. Patients were randomly assigned to either DNF training, superficial neck muscle exercise, or an advice group. A generalized linear model (GLM) was used to detect differences between treatment groups over time. Results. Out of 67 participants, 60 (47 females, mean age: 39.45 ± 12.67) completed the study. Neck disability and neck pain improved over time between and within groups (p < 0.05). However, no differences were found within or between the therapeutic groups (p > 0.05) in the tested muscles' PPTs or in the cervicothoracic angle over the 7-week period. Conclusion. All three groups improved over time. This suggests that the pain pathways involved in neck pain relief are not those involved in the pain threshold.
Directory of Open Access Journals (Sweden)
Elisabetta Costantini
2008-02-01
OBJECTIVE: To test the hypothesis that preoperative Valsalva leak point pressure (VLPP) predicts the long-term outcome of mid-urethral slings for female stress urinary incontinence (SUI). MATERIALS AND METHODS: One hundred and forty-five patients with SUI were prospectively randomized to two mid-urethral sling treatments: tension-free vaginal tape (TVT) or transobturator tape (TOT). They were followed up at 3, 6, and 12 months post-operatively and then annually for the primary outcome variable (dry or wet) and secondary outcome variables such as scores on the Urogenital Distress Inventory (UDI-6) and the Impact of Incontinence on Quality of Life (IIQ-7) questionnaire, as well as patient satisfaction scored on a visual analogue scale (VAS). Preoperative VLPP was correlated with the primary and secondary outcome variables. RESULTS: Mean follow-up was 32 ± 12 months (range 12-55) for TVT and 31 ± 15 months (range 12-61) for TOT. When patients were stratified by VLPP, 95 (65.5%) patients showed a VLPP > 60 cm H2O and 50 (34.5%) patients had a VLPP ≤ 60 cm H2O […] and 72% for those with VLPP ≤ 60 cm H2O (82% vs. 68.9% […]). Regardless of the 60 cm H2O cut-off, preoperative VLPP was not linked to outcome after TVT or TOT procedures.
Directory of Open Access Journals (Sweden)
Gustafsson Lars
2008-03-01
Background In the rural areas of sub-Saharan Africa, the majority of young children affected by malaria have no access to formal health services. Home treatment by mothers of febrile children, supported by mother groups and local health workers, has the potential to reduce malaria morbidity and mortality. Methods A cluster-randomized controlled effectiveness trial was implemented from 2002-2004 in a malaria-endemic area of rural Burkina Faso. Six and seven villages were randomly assigned to the intervention and control arms, respectively. Febrile children from intervention villages were treated with chloroquine (CQ) by their mothers, supported by local women group leaders. CQ was regularly supplied through a revolving fund from local health centres. The trial was evaluated through two cross-sectional surveys at baseline and after two years of intervention. The primary endpoint of the study was the proportion of moderate to severe anaemia in children aged 6-59 months. For assessment of the development of drug efficacy over time, an in vivo CQ efficacy study was nested into the trial. The study is registered under http://www.controlled-trials.com (ISRCTN 34104704). Results The intervention was shown to be feasible under program conditions, and a total of 1,076 children and 999 children were evaluated at the baseline and follow-up time points, respectively. Self-reported CQ treatment of fever episodes at home as well as referrals to health centres increased over the study period. At follow-up, CQ was detected in the blood of high proportions of intervention and control children. Compared to baseline findings, the prevalence of anaemia (29% vs 16%), as well as of P. falciparum parasitaemia, fever, and palpable spleens, was lower at follow-up, but there were no differences between the intervention and control groups. CQ efficacy decreased over the study period, but this was not associated with the intervention. Discussion The decreasing prevalence of malaria
Design of Energy Aware Adder Circuits Considering Random Intra-Die Process Variations
Directory of Open Access Journals (Sweden)
Marco Lanuzza
2011-04-01
Energy consumption is one of the main barriers to current high-performance designs. Moreover, the increased variability experienced in advanced process technologies implies further timing-yield concerns and therefore intensifies this obstacle. Thus, proper techniques to achieve robust designs are a critical requirement for integrated circuit success. In this paper, the influence of intra-die random process variations is analyzed for the particular case of the design of energy-aware adder circuits. Five well-known adder circuits were designed exploiting an industrial 45 nm static complementary metal-oxide-semiconductor (CMOS) standard cell library. The designed adders were comparatively evaluated under different energy constraints. As a main result, the performed analysis demonstrates that, for a given energy budget, simpler circuits (conventionally identified as low-energy slow architectures) operating at higher power supply voltages can achieve a significantly better timing yield than more complex, faster adders when used in low-power designs with supply voltages lower than nominal.
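The timing-yield idea behind this comparison can be illustrated with a Monte Carlo sketch: intra-die random variation makes each stage delay a random variable, and yield is the fraction of dies whose critical path meets the clock. All numbers below are illustrative, not values from the paper's 45 nm library.

```python
import numpy as np

def timing_yield(n_stages, mean_delay, sigma, t_clock, n_mc=100_000, seed=0):
    """Monte Carlo timing yield: the fraction of simulated dies whose
    critical-path delay (sum of independent per-stage delays) meets t_clock."""
    rng = np.random.default_rng(seed)
    paths = rng.normal(mean_delay, sigma, size=(n_mc, n_stages)).sum(axis=1)
    return (paths <= t_clock).mean()

# Illustrative numbers only: a 32-stage carry chain, 30 ps nominal stage delay,
# 10% random sigma per stage, 0.99 ns clock budget (nominal path = 0.96 ns).
y = timing_yield(32, 30e-12, 3e-12, 0.99e-9)
# analytic check: z = (990 - 960) / (3 * sqrt(32)) ≈ 1.77 -> yield ≈ 0.96
```

Because independent per-stage sigmas add in quadrature, a shorter chain run at higher supply voltage can beat a longer, nominally faster one at a given yield target, which is the paper's qualitative finding.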
Zhou, L.; Qu, Z. G.; Ding, T.; Miao, J. Y.
2016-04-01
The gas-solid adsorption process in reconstructed random porous media is numerically studied with the lattice Boltzmann (LB) method at the pore scale with consideration of interparticle, interfacial, and intraparticle mass transfer performances. Adsorbent structures are reconstructed in two dimensions by employing the quartet structure generation set approach. To implement boundary conditions accurately, all the porous interfacial nodes are recognized and classified into 14 types using a proposed universal program called the boundary recognition and classification program. The multiple-relaxation-time LB model and single-relaxation-time LB model are adopted to simulate flow and mass transport, respectively. The interparticle, interfacial, and intraparticle mass transfer capacities are evaluated with the permeability factor and interparticle transfer coefficient, Langmuir adsorption kinetics, and the solid diffusion model, respectively. Adsorption processes are performed in two groups of adsorbent media with different porosities and particle sizes. External and internal mass transfer resistances govern the adsorption system. A large porosity leads to an early time for adsorption equilibrium because of the controlling factor of external resistance. External and internal resistances are dominant at small and large particle sizes, respectively. Particle size, under which the total resistance is minimum, ranges from 3 to 7 μm with the preset parameters. Pore-scale simulation clearly explains the effect of both external and internal mass transfer resistances. The present paper provides both theoretical and practical guidance for the design and optimization of adsorption systems.
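The Langmuir adsorption kinetics that the study uses to model interfacial transfer can be sketched with a simple time integration. The parameters below are dimensionless placeholders, not values fitted to any adsorbent.

```python
def langmuir_uptake(c, q_max, k_a, k_d, t_end, dt=1e-3):
    """Explicit-Euler integration of Langmuir adsorption kinetics
    dq/dt = k_a*c*(q_max - q) - k_d*q, with constant bulk concentration c."""
    q = 0.0
    for _ in range(int(t_end / dt)):
        q += dt * (k_a * c * (q_max - q) - k_d * q)
    return q

# Illustrative parameters (dimensionless), not fitted to any real adsorbent.
c, q_max, k_a, k_d = 1.0, 1.0, 2.0, 0.5
q_eq = q_max * k_a * c / (k_a * c + k_d)   # Langmuir isotherm limit: 0.8
q_t = langmuir_uptake(c, q_max, k_a, k_d, t_end=10.0)
# after t >> 1/(k_a*c + k_d) the loading is effectively at q_eq
```

In the pore-scale setting of the paper, this surface-kinetics step is coupled to the external (interparticle) flow and internal (intraparticle) diffusion resistances, which is what shifts the equilibrium time with porosity and particle size.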
Directory of Open Access Journals (Sweden)
Orhan AKINOGLU
2008-01-01
Full Text Available Aim of the study is to assess how students in 6th, 7th and 8th grades of primary education see the project works made in science education and their implementation processes. The study was fulfilled upon the descriptive survey model to collect data. Participants of the research were 100 students who had project implementation experiences in science education, and they were from 24 primary schools in 7 districts randomly chosen in the city of Istanbul in Turkey. Data of the study were collected by using a semi-constructed interview form offered to students during the 2005-2006 teaching year. In the research, following items were examined: The extent to which students are inspired from the previously made projects during their own project selection process, the level of scientific document survey and the effects of contemporary events, science and technology class topics and students’ interest areas. It was seen that internet is the mostly used source to obtain information. For students, one of the most problematic issues faced during the project implementation is the time limits set out by teacher. It was found that the most obvious benefit obtained by students from the project works is their increasing interest towards science and technology class. The most significant change seen by students regarding project preparation is their increasing grades in exams during and following the project works.
Yuan, Yuan; Bachl, Fabian E.; Lindgren, Finn; Borchers, David L.; Illian, Janine B.; Buckland, Stephen T.; Rue, Haavard; Gerrodette, Tim
2017-01-01
Distance sampling is a widely used method for estimating wildlife population abundance. The fact that conventional distance sampling methods are partly design-based constrains the spatial resolution at which animal density can be estimated using these methods. Estimates are usually obtained at survey stratum level. For an endangered species such as the blue whale, it is desirable to estimate density and abundance at a finer spatial scale than stratum. Temporal variation in the spatial structure is also important. We formulate the process generating distance sampling data as a thinned spatial point process and propose model-based inference using a spatial log-Gaussian Cox process. The method adopts a flexible stochastic partial differential equation (SPDE) approach to model spatial structure in density that is not accounted for by explanatory variables, and integrated nested Laplace approximation (INLA) for Bayesian inference. It allows simultaneous fitting of detection and density models and permits prediction of density at an arbitrarily fine scale. We estimate blue whale density in the Eastern Tropical Pacific Ocean from thirteen shipboard surveys conducted over 22 years. We find that higher blue whale density is associated with colder sea surface temperatures in space, and although there is some positive association between density and mean annual temperature, our estimates are consistent with no trend in density across years. Our analysis also indicates that there is substantial spatially structured variation in density that is not explained by available covariates.
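The "thinned spatial point process" view of distance sampling can be sketched in a few lines: simulate animal locations from a point process, then delete each one with probability given by a distance-dependent detection function. The sketch below uses a homogeneous Poisson process as a crude stand-in for the paper's log-Gaussian Cox intensity, and all numbers are illustrative, not estimates from the whale surveys.

```python
import numpy as np

rng = np.random.default_rng(2)

# Animals as a homogeneous Poisson process in a 10 x 10 km window, thinned by
# a half-normal detection function of perpendicular distance to a transect
# line at y = 5.
lam, area = 5.0, 100.0                     # animals per km^2, window area
n = rng.poisson(lam * area)
xy = rng.uniform(0.0, 10.0, size=(n, 2))

sigma = 0.5                                # half-normal detection scale (km)
dist = np.abs(xy[:, 1] - 5.0)
p_detect = np.exp(-dist**2 / (2 * sigma**2))
detected = xy[rng.uniform(size=n) < p_detect]

# mean detections = lam * L * 2 * integral_0^inf exp(-d^2/(2*sigma^2)) dd
#                 = lam * L * sigma * sqrt(2*pi) ≈ 63 for these values
```

Model-based inference as in the paper runs this construction in reverse: it fits the detection function and the latent intensity surface jointly from the observed (thinned) pattern.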
Liao, Yuxi; Li, Hongbao; Zhang, Qiaosheng; Fan, Gong; Wang, Yiwen; Zheng, Xiaoxiang
2014-01-01
Decoding algorithms in motor brain-machine interfaces translate neural signals into movement parameters. They usually assume that the connection between neural firing and movement is stationary, which is not true according to recent studies observing time-varying neuron tuning properties. This non-stationarity results from neural plasticity, motor learning, and related processes, and it degrades decoding performance when the model is fixed. To track non-stationary neuron tuning during decoding, we propose a dual-model approach based on a Monte Carlo point process filtering method that also enables estimation of the dynamic tuning parameters. When applied to both simulated neural signals and in vivo BMI data, the proposed adaptive method performs better than one with static tuning parameters, suggesting a promising way to design a long-term-performing model for brain-machine interface decoders.
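The dual-estimation idea (tracking a drifting tuning parameter from spike counts via a point-process observation model) can be illustrated with a toy bootstrap particle filter. This is a simplified stand-in for the Monte Carlo point process filter in the abstract, not the authors' algorithm; all parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy model: one neuron's log firing rate is 1 + w_t * v_t, where v_t is a
# known velocity-like covariate and the tuning weight w_t drifts slowly.
T, n_p, dt = 400, 500, 0.05
v = np.sin(np.linspace(0.0, 8.0 * np.pi, T))
w_true = np.linspace(1.0, 2.0, T)                  # slow tuning drift
spikes = rng.poisson(np.exp(1.0 + w_true * v) * dt)

w = rng.normal(1.0, 0.5, n_p)                      # particles for w
est = np.empty(T)
for t in range(T):
    w = w + rng.normal(0.0, 0.02, n_p)             # random-walk evolution
    lam = np.exp(1.0 + w * v[t]) * dt              # expected count per bin
    loglik = spikes[t] * np.log(lam) - lam         # Poisson log likelihood
    p = np.exp(loglik - loglik.max())
    p /= p.sum()
    est[t] = np.sum(p * w)                         # posterior mean of w_t
    w = w[rng.choice(n_p, n_p, p=p)]               # resample

# est should climb from about 1 toward 2, following the true drift
```

A full dual-model decoder would additionally estimate the movement state alongside the tuning parameters; here the covariate is assumed known to keep the sketch short.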
Nabi, Jameel-Un; Böyükata, Mahmut
2016-03-01
We investigate even-even nuclei in the A ∼ 70 mass region within the framework of the proton-neutron quasi-particle random phase approximation (pn-QRPA) and the interacting boson model-1 (IBM-1). Our work includes calculation of the energy spectra and the potential energy surfaces V(β, γ) of Zn, Ge, Se, Kr and Sr nuclei with the same proton and neutron number, N = Z. The parametrization of the IBM-1 Hamiltonian was performed for the calculation of the energy levels in the ground-state bands. The geometric shape of the nuclei was predicted by plotting the potential energy surfaces V(β, γ) obtained from the IBM-1 Hamiltonian in the classical limit. The pn-QRPA model was then used to compute half-lives of the neutron-deficient nuclei, which were found to be in very good agreement with the measured ones. The pn-QRPA model was also used to calculate the Gamow-Teller strength distributions, which were found to be in good agreement with the measured data. We further calculate the electron capture and positron decay rates for these N = Z waiting point (WP) nuclei in the stellar environment employing the pn-QRPA model. For rp-process conditions, our total weak rates are within a factor of two of the Skyrme HF+BCS+QRPA calculation. All calculated electron capture rates are comparable to the competing positron decay rates under rp-process conditions. Our study confirms the finding that electron capture rates form an integral part of the weak rates under rp-process conditions and should not be neglected in nuclear network calculations.
Harden, R Norman; Cottrill, Jerod; Gagnon, Christine M; Smitherman, Todd A; Weinland, Stephan R; Tann, Beverley; Joseph, Petra; Lee, Thomas S; Houle, Timothy T
2009-05-01
To evaluate the efficacy of botulinum toxin A (BT-A) as a prophylactic treatment for chronic tension-type headache (CTTH) with myofascial trigger points (MTPs) producing referred head pain. Although BT-A has received mixed support for the treatment of TTH, deliberate injection directly into the cervical MTPs very often found in this population has not been formally evaluated. Patients with CTTH and specific MTPs producing referred head pain were randomly assigned to receive intramuscular injections of BT-A or isotonic saline (placebo) in a double-blind design. Daily headache diaries, pill counts, trigger point pressure algometry, range-of-motion assessment, and responses to standardized pain and psychological questionnaires were used as outcome measures; patients returned for follow-up assessment at 2 weeks, 1 month, 2 months, and 3 months post injection. After 3 months, all patients were offered participation in an open-label extension of the study. Effect sizes were calculated to index treatment effects in the intent-to-treat population; individual time series models were computed for average pain intensity. The 23 participants reported experiencing headache on a near-daily basis (average of 27 days/month). Compared with placebo, patients in the BT-A group reported greater reductions in headache frequency during the first part of the study (P = .013), but these effects dissipated by week 12. Reductions in headache intensity over time did not differ significantly between groups (P = .80; maximum d = 0.13), although a larger proportion of BT-A patients showed evidence of statistically significant improvements in headache intensity in the time series analyses (62.5% for BT-A vs 30% for placebo). There were no differences between the groups on any of the secondary outcome measures. The evidence for BT-A in headache is mixed, and even more so in CTTH. However, the putative technique of injecting BT-A directly into the ubiquitous MTPs in CTTH is partially supported.
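The effect-size index quoted in the abstract (Cohen's d) is a standardized mean difference; a generic pooled-SD version is sketched below with made-up numbers, not the trial's data.

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d: mean difference standardized by the pooled standard deviation."""
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

# Hypothetical headache-intensity summaries (0-10 scale), illustrative only
d = cohens_d(6.1, 1.8, 12, 5.9, 1.6, 11)
print(round(d, 2))  # 0.12 — a negligible effect, comparable in size to the
                    # abstract's reported maximum d = 0.13
```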
Multi-fidelity Gaussian process regression for prediction of random fields
Energy Technology Data Exchange (ETDEWEB)
Parussini, L. [Department of Engineering and Architecture, University of Trieste (Italy); Venturi, D., E-mail: venturi@ucsc.edu [Department of Applied Mathematics and Statistics, University of California Santa Cruz (United States); Perdikaris, P. [Department of Mechanical Engineering, Massachusetts Institute of Technology (United States); Karniadakis, G.E. [Division of Applied Mathematics, Brown University (United States)
2017-05-01
We propose a new multi-fidelity Gaussian process regression (GPR) approach for prediction of random fields based on observations of surrogate models or hierarchies of surrogate models. Our method builds upon recent work on recursive Bayesian techniques, in particular recursive co-kriging, and extends it to vector-valued fields and various types of covariances, including separable and non-separable ones. The framework we propose is general and can be used to perform uncertainty propagation and quantification in model-based simulations, multi-fidelity data fusion, and surrogate-based optimization. We demonstrate the effectiveness of the proposed recursive GPR techniques through various examples. Specifically, we study the stochastic Burgers equation and the stochastic Oberbeck–Boussinesq equations describing natural convection within a square enclosure. In both cases we find that the standard deviation of the Gaussian predictors as well as the absolute errors relative to benchmark stochastic solutions are very small, suggesting that the proposed multi-fidelity GPR approaches can yield highly accurate results.
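The recursive co-kriging idea behind such multi-fidelity GPR can be illustrated with a minimal two-level sketch: fit a GP to abundant cheap low-fidelity data, estimate a scale factor, then fit a second GP to the discrepancy at the few expensive high-fidelity points. This is not the authors' implementation; the kernel, length scale, and test functions below are illustrative assumptions.

```python
import numpy as np

def rbf(a, b, ell=0.15):
    """Squared-exponential covariance between two 1-D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gp_mean(x, y, xs, jitter=1e-6):
    """Posterior mean of a zero-mean GP regression evaluated at xs."""
    K = rbf(x, x) + jitter * np.eye(len(x))
    return rbf(xs, x) @ np.linalg.solve(K, y)

f_lo = lambda x: np.sin(8.0 * x)              # cheap low-fidelity surrogate
f_hi = lambda x: np.sin(8.0 * x) + 0.3 * x    # expensive "high-fidelity" model

x_lo = np.linspace(0.0, 1.0, 25)   # many inexpensive observations
x_hi = np.linspace(0.0, 1.0, 5)    # few expensive observations
xs = np.linspace(0.0, 1.0, 101)

# Level 1: GP trained on the low-fidelity data only
mu_lo_hi = gp_mean(x_lo, f_lo(x_lo), x_hi)    # low-fi prediction at high-fi sites
mu_lo_xs = gp_mean(x_lo, f_lo(x_lo), xs)

# Level 2: least-squares scale factor rho, then a GP on the discrepancy
rho = (mu_lo_hi @ f_hi(x_hi)) / (mu_lo_hi @ mu_lo_hi)
mu_mf = rho * mu_lo_xs + gp_mean(x_hi, f_hi(x_hi) - rho * mu_lo_hi, xs)

err_multi = np.max(np.abs(mu_mf - f_hi(xs)))
err_single = np.max(np.abs(gp_mean(x_hi, f_hi(x_hi), xs) - f_hi(xs)))
```

With only five high-fidelity samples, a single-fidelity GP undersamples the oscillation, while the two-level predictor only needs to learn the smooth discrepancy, so `err_multi` comes out well below `err_single`.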
A method of signal transmission path analysis for multivariate random processes
International Nuclear Information System (INIS)
Oguma, Ritsuo
1984-04-01
A method for noise analysis called ''STP (signal transmission path) analysis'' is presented as a tool to identify noise sources and their propagation paths in multivariate random processes. The basic idea of the analysis is to identify, via time series analysis, the effective network for signal power transmission among the variables in the system and to use this information in the noise analysis. In the present paper, we accomplish this through two steps of signal processing: first, we estimate, using noise power contribution analysis, the variables which contribute strongly to the power spectrum of interest, and then we evaluate the STPs for each pair of variables to identify those STPs which play a significant role in transmitting the generated noise to the variable under evaluation. The latter part of the analysis is carried out by comparing the partial coherence function with a newly introduced partial noise power contribution function. This paper presents the procedure of the STP analysis and demonstrates, using simulation data as well as Borssele PWR noise data, its effectiveness for investigating noise generation and propagation mechanisms. (author)
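The coherence estimates that partial coherence and noise power contribution analysis refine can be sketched with scipy. The two-variable system below is an invented illustration (not the Borssele data): a common noise source drives one variable directly and another through a delayed path, so the pair shows high coherence, while an unrelated channel does not.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 100.0                                   # sampling rate [Hz]
t = np.arange(0, 200, 1.0 / fs)              # 200 s of data

# Hypothetical system: noise source s drives x directly and reaches y
# through a delayed transmission path; z has no path from s at all.
s = rng.standard_normal(t.size)
x = s + 0.2 * rng.standard_normal(t.size)
y = np.roll(s, 5) + 0.2 * rng.standard_normal(t.size)   # 50 ms path delay
z = rng.standard_normal(t.size)

f, c_xy = coherence(x, y, fs=fs, nperseg=512)
f, c_xz = coherence(x, z, fs=fs, nperseg=512)

band = (f > 1) & (f < 40)
mean_coh = c_xy[band].mean()                 # near 1: strong path x <-> y
mean_unrelated = c_xz[band].mean()           # near 0: no transmission path
```

A pure transmission delay leaves the coherence magnitude untouched, which is why the delayed pair still scores near one; the partial variants used in STP analysis additionally condition out the influence of the remaining variables.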
Berthoud, Laurent; Pascual-Leone, Antonio; Caspar, Franz; Tissot, Hervé; Keller, Sabine; Rohde, Kristina B; de Roten, Yves; Despland, Jean-Nicolas; Kramer, Ueli
2017-01-01
The marked impulsivity and instability of clients suffering from borderline personality disorder (BPD) greatly challenge therapists' understanding and responsiveness. This may hinder the development of a constructive therapeutic relationship despite it being of particular importance in their treatment. Recent studies have shown that using motive-oriented therapeutic relationship (MOTR), a possible operationalization of appropriate therapist responsiveness, can enhance treatment outcome for BPD. The overall objective of this study is to examine change in emotional processing in BPD clients following the therapist's use of MOTR. The present paper focuses on N = 50 cases, n = 25 taken from each of two conditions of a randomized controlled add-on effectiveness design. Clients were either allocated to a manual-based psychiatric-psychodynamic 10-session version of general psychiatric management (GPM), a borderline-specific treatment, or to a 10-session version of GPM augmented with MOTR. Emotional states were assessed using the Classification of Affective-Meaning States (Pascual-Leone & Greenberg, 2005) at intake, midtreatment, and in the penultimate session. Across treatment, early expressions of distress, especially the emotion state of global distress, were shown to decrease significantly (p = .00), and adaptive emotions were found to emerge, with greater emotional variability and stronger outcome predictors in the MOTR condition. The findings indicate initial emotional change in BPD clients in a relatively short time frame and suggest the addition of MOTR to psychotherapeutic treatments as promising. Clinical implications are discussed.
International Nuclear Information System (INIS)
Chatani, M.; Matayoshi, Y.; Masaki, N.; Teshima, T.; Inoue, T.
1994-01-01
Between January 1983 and February 1989, a total of 165 patients with carcinoma of the uterine cervix were entered in a prospective randomized study concerning the point A dose of HDR therapy (6 Gy/fraction vs 7.5 Gy/fraction) and the external irradiation dose at the Department of Radiation Therapy, The Center for Adult Diseases, Osaka. The UICC stage distribution of patients was as follows: stage IA=4, stage IB=33, stage IIA=18, stage IIB=38, stage III=57, stage IV=15. Overall 5-year cause-specific survivals were as follows: stage IA=100%, stage IB=96%, stage IIA=92%, stage IIB=79%, stage III=57%, stage IV=27%. By stage, the 5-year survival rates in groups A and B were 100% and 93% in stage I, 82% and 85% in stage II, 62% and 52% in stage III, and 22% and 31% in stage IV, respectively. There were no statistically significant differences among these survival curves in any stage. Five-year local failure rates were 16% in group A and 16% in group B (p=0.9096), and the corresponding distant failure rates were 23% in group A and 19% in group B (p=0.2955). Moderate-to-severe complications requiring treatment (Kottmeier's grade 2 or more) were noted in 6 patients (7%) in group A and 6 patients (7%) in group B. All of the bladder and rectal complications needed medical treatment (Kottmeier's grade 2). Severe complications requiring surgery were noted in 4 patients (A: 1; B: 3), i.e., in the small intestine in 3 patients and the sigmoid colon in 1 patient. One additional patient (A) died of ileus. There were no statistically significant differences between the two treatment schedules in survival rates, failure patterns, or complication rates. This suggests that the smaller number of fractions (7.5 Gy/fraction) may be advantageous because of the shorter duration and lower burden of treatment. (orig.) [de
Luoto, Jill; Najnin, Nusrat; Mahmud, Minhaj; Albert, Jeff; Islam, M. Sirajul; Luby, Stephen; Unicomb, Leanne; Levine, David I.
2011-01-01
Background There is evidence that household point-of-use (POU) water treatment products can reduce the enormous burden of water-borne illness. Nevertheless, adoption among the global poor is very low, and little evidence exists on why. Methods We gave 600 households in poor communities in Dhaka, Bangladesh randomly-ordered two-month free trials of four water treatment products: dilute liquid chlorine (sodium hypochlorite solution, marketed locally as Water Guard), sodium dichloroisocyanurate tablets (branded as Aquatabs), a combined flocculant-disinfectant powdered mixture (the PUR Purifier of Water), and a silver-coated ceramic siphon filter. Consumers also received education on the dangers of untreated drinking water. We measured which products consumers used with self-reports, observation (for the filter), and chlorine tests (for the other products). We also measured drinking water's contamination with E. coli (compared to 200 control households). Findings Households reported highest usage of the filter, although no product had even 30% usage. E. coli concentrations in stored drinking water were generally lowest when households had Water Guard. Households that self-reported product usage had large reductions in E. coli concentrations with any product as compared to controls. Conclusion Traditional arguments for the low adoption of POU products focus on affordability, consumers' lack of information about germs and the dangers of unsafe water, and specific products not meshing with a household's preferences. In this study we provided free trials, repeated informational messages explaining the dangers of untreated water, and a variety of product designs. The low usage of all products despite such efforts makes clear that important barriers exist beyond cost, information, and variation among these four product designs. Without a better understanding of the choices and aspirations of the target end-users, household-based water treatment is unlikely to reduce
Investigation of the s-process branch-point nucleus {sup 86}Rb at HIγS
Energy Technology Data Exchange (ETDEWEB)
Erbacher, Philipp; Glorius, Jan; Reifarth, Rene; Sonnabend, Kerstin [Goethe Universitaet Frankfurt am Main (Germany); Isaak, Johann; Loeher, Bastian; Savran, Deniz [GSI Helmholzzentrum fuer Schwerionenforschung (Germany); Tornow, Werner [Duke University (United States)
2016-07-01
The branch-point nucleus {sup 86}Rb determines the isotopic abundance ratio {sup 86}Sr/{sup 87}Sr in s-process nucleosynthesis. Thus, stellar parameters such as temperature and neutron density and their evolution in time as simulated by modern s-process network calculations can be constrained by a comparison of the calculated isotopic ratio with the one observed in SiC meteoritic grains. To this end, the radiative neutron-capture cross section of the unstable isotope {sup 86}Rb has to be known with sufficient accuracy. Since the short half-life of {sup 86}Rb prohibits the direct measurement, the nuclear-physics input to a calculation of the cross section has to be measured. For this reason, the γ-ray strength function of {sup 87}Rb was measured using the γ{sup 3} setup at the High Intensity γ-ray Source facility at TUNL in Durham, USA. First experimental results are presented.
Directory of Open Access Journals (Sweden)
E. G. Chapman
2009-02-01
Full Text Available The local and regional influence of elevated point sources on summertime aerosol forcing and cloud-aerosol interactions in northeastern North America was investigated using the WRF-Chem community model. The direct effects of aerosols on incoming solar radiation were simulated using existing modules to relate aerosol sizes and chemical composition to aerosol optical properties. Indirect effects were simulated by adding a prognostic treatment of cloud droplet number and adding modules that activate aerosol particles to form cloud droplets, simulate aqueous-phase chemistry, and tie a two-moment treatment of cloud water (cloud water mass and cloud droplet number) to precipitation and an existing radiation scheme. Fully interactive feedbacks thus were created within the modified model, with aerosols affecting cloud droplet number and cloud radiative properties, and clouds altering aerosol size and composition via aqueous processes, wet scavenging, and gas-phase-related photolytic processes. Comparisons of a baseline simulation with observations show that the model captured the general temporal cycle of aerosol optical depths (AODs) and produced clouds of comparable thickness to observations at approximately the proper times and places. The model overpredicted SO2 mixing ratios and PM2.5 mass, but reproduced the range of observed SO2 to sulfate aerosol ratios, suggesting that atmospheric oxidation processes leading to aerosol sulfate formation are captured in the model. The baseline simulation was compared to a sensitivity simulation in which all emissions at model levels above the surface layer were set to zero, thus removing stack emissions. Instantaneous, site-specific differences for aerosol and cloud related properties between the two simulations could be quite large, as removing above-surface emission sources influenced when and where clouds formed within the modeling domain. When summed spatially over the finest
Bouleau, Nicolas
2015-01-01
A simplified approach to Malliavin calculus adapted to Poisson random measures is developed and applied in this book. Called the “lent particle method” it is based on perturbation of the position of particles. Poisson random measures describe phenomena involving random jumps (for instance in mathematical finance) or the random distribution of particles (as in statistical physics). Thanks to the theory of Dirichlet forms, the authors develop a mathematical tool for a quite general class of random Poisson measures and significantly simplify computations of Malliavin matrices of Poisson functionals. The method gives rise to a new explicit calculus that they illustrate on various examples: it consists in adding a particle and then removing it after computing the gradient. Using this method, one can establish absolute continuity of Poisson functionals such as Lévy areas, solutions of SDEs driven by Poisson measure and, by iteration, obtain regularity of laws. The authors also give applications to error calcul...
Tournaire, O.; Paparoditis, N.
Road detection has been a topic of great interest in the photogrammetric and remote sensing communities since the end of the 1970s. Many approaches dealing with various sensor resolutions, the nature of the scene, or the desired accuracy of the extracted objects have been presented. The topic remains challenging today as the need for accurate and up-to-date data becomes ever more important. In this context, this paper studies the road network from a particular point of view, focusing on road marks, and in particular dashed lines. Indeed, these are very useful clues, both as evidence of a road and as input for higher-level tasks. For instance, they can be used to enhance quality and to improve road databases. It is also possible to delineate the different circulation lanes, their width and functionality (speed limit, special lanes for buses or bicycles...). In this paper, we propose a new robust and accurate top-down approach for dashed line detection based on stochastic geometry. Our approach is automatic in the sense that no intervention from a human operator is necessary to initialise the algorithm or to track errors during the process. The core of our approach relies on defining geometric, radiometric and relational models for dashed line objects. The model also has to deal with the interactions between the different objects making up a line, meaning that it introduces external knowledge taken from specifications. Our strategy is based on a stochastic method, in particular marked point processes. Our goal is to find the object configuration minimising an energy function made up of a data attachment term, measuring the consistency of the image with respect to the objects, and a regularising term, managing the relationships between neighbouring objects. To sample the energy function, we use Green's algorithm coupled with simulated annealing to find its minimum. Results from aerial images at various resolutions are presented showing that our
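A toy 1-D analogue of this energy-minimisation strategy can be sketched as follows: point configurations are explored with birth, death, and move proposals, accepted by a Metropolis rule under a decreasing temperature. The signal, the energy weights, and the cooling schedule below are all hand-tuned assumptions; the actual method works on 2-D rectangle objects with reversible-jump kernels and specification-driven interaction terms.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D "image": bright responses at the true dash centres
truth = np.array([10.0, 30.0, 50.0, 70.0, 90.0])
xs = np.arange(100.0)
signal = sum(np.exp(-0.5 * ((xs - c) / 1.5) ** 2) for c in truth)
signal += 0.05 * rng.standard_normal(xs.size)

def energy(pts):
    """Data term rewards points on bright pixels; prior penalises crowding."""
    if len(pts) == 0:
        return 0.0
    data = -sum(np.interp(p, xs, signal) for p in pts)
    prior = sum(1.0 for i in range(len(pts)) for j in range(i + 1, len(pts))
                if abs(pts[i] - pts[j]) < 5.0)
    return data + 2.0 * prior + 0.4 * len(pts)   # 0.4: cost per object

pts, e = [], 0.0
T = 1.0
for step in range(20000):
    prop = list(pts)
    kind = rng.integers(3)
    if kind == 0:                                # birth of a new point
        prop.append(rng.uniform(0.0, 100.0))
    elif kind == 1 and prop:                     # death of a random point
        prop.pop(rng.integers(len(prop)))
    elif prop:                                   # perturb one point
        i = rng.integers(len(prop))
        prop[i] = float(np.clip(prop[i] + rng.normal(0.0, 2.0), 0.0, 100.0))
    e_new = energy(prop)
    if e_new < e or rng.random() < np.exp((e - e_new) / T):
        pts, e = prop, e_new                     # Metropolis acceptance
    T = max(0.01, T * 0.9995)                    # geometric cooling

pts = sorted(pts)
```

At low temperature, off-peak points cost more (0.4 per object) than the signal they collect, so the surviving configuration concentrates near the bright dash centres, mirroring how the data attachment and regularising terms trade off in the full 2-D model.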
Directory of Open Access Journals (Sweden)
Weiguo eSong
2015-04-01
Full Text Available Currently, little is known about how a mechanically coupled BMI system’s actions are integrated into ongoing body dynamics. We tested a locomotor task augmented with a BMI system driving a robot mechanically interacting with a rat under three conditions: control locomotion (BL), ‘simple elastic load’ (E), and ‘BMI with elastic load’ (BMI/E). The effect of the BMI was to allow compensation of the elastic load as a function of the neural drive. Neurons recorded here were close to one another in cortex, all within a 200 micron diameter horizontal distance of one another. The interactions of these close assemblies of neurons may differ from those among neurons at longer distances in BMI tasks and thus are important to explore. A point process generalized linear model (GLM) was used to examine connectivity at two different binning timescales (1 ms vs. 10 ms). We used GLM models to fit non-Poisson neural dynamics solely using other neurons’ prior neural activity as covariates. Models at different timescales were compared based on Kolmogorov-Smirnov (KS) goodness-of-fit and parsimony. About 15% of cells with non-Poisson firing were well fitted with the neuron-to-neuron models alone. More such cells were fitted at the 1 ms binning than 10 ms. Positive connection parameters (‘excitation’, ~70%) exceeded negative parameters (‘inhibition’, ~30%). Significant connectivity changes in the GLM-determined networks of well-fitted neurons occurred between the conditions. However, a common core of connections comprising at least ~15% of connections persisted between any two of the three conditions. Significantly, almost twice as many connections were in common between the two load conditions (~27%) compared to between either load condition and the baseline. This local point process GLM identified neural correlation structure, and the changes seen across task conditions in the rats in this neural subset may be intrinsic to cortex or due to feedback and input
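The core modelling idea, predicting one neuron's spiking from another neuron's lagged activity through a point-process GLM, can be sketched in a few lines of numpy. This is a deliberately minimal Bernoulli (discrete-time) version with a single 1 ms-lag covariate and simulated data with made-up rates, not the study's multi-neuron models.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 20_000                                   # number of 1 ms bins

# Hypothetical pair: neuron A fires ~20 Hz, and an A spike raises
# neuron B's firing probability in the following bin (excitatory link).
a = rng.random(T) < 0.02
p_b = np.where(np.roll(a, 1), 0.15, 0.01)    # B: 10 Hz baseline, boosted after A
p_b[0] = 0.01                                # ignore the wrap-around bin
b = rng.random(T) < p_b

# Design matrix: intercept plus A's activity lagged by one bin
X = np.column_stack([np.ones(T - 1), a[:-1].astype(float)])
y = b[1:].astype(float)

# Fit the Bernoulli point-process GLM (logistic link) by Newton-Raphson
w = np.zeros(2)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-(X @ w)))      # predicted spike probability
    grad = X.T @ (y - mu)
    hess = (X * (mu * (1.0 - mu))[:, None]).T @ X
    w += np.linalg.solve(hess, grad)

beta_coupling = w[1]     # > 0 recovers the built-in excitatory A -> B effect
```

The fitted coupling weight lands near the true log-odds difference (about logit(0.15) − logit(0.01) ≈ 2.9), illustrating how a positive GLM parameter is read as an 'excitatory connection'; the paper's models simply use many such lagged covariates, one per neighbouring neuron, and check fits with KS time-rescaling.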
B. Chen (Bohan); J. Blanchet; C.H. Rhee (Chang-Han); A.P. Zwart (Bert)
2017-01-01
We propose a class of strongly efficient rare event simulation estimators for random walks and compound Poisson processes with a regularly varying increment/jump-size distribution in a general large deviations regime. Our estimator is based on an importance sampling strategy that hinges
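The flavour of efficient rare-event estimation for regularly varying sums can be illustrated with the classical Asmussen–Kroese conditional Monte Carlo estimator for P(S_n > b); this is a related textbook scheme exploiting the same 'one big jump' structure, not the authors' importance sampling construction, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

alpha, n, b = 2.0, 5, 100.0      # Pareto(alpha) jumps, n steps, rare level b

def tail(x):
    """P(X > x) for a classical Pareto(alpha) variable on [1, inf)."""
    return np.maximum(x, 1.0) ** -alpha      # tail is 1 below the support

def asmussen_kroese(reps=200_000):
    """Conditional Monte Carlo estimator of P(S_n > b) that conditions on
    the largest jump being the last one -- the 'one big jump' mechanism
    dominating sums with regularly varying increments."""
    x = rng.pareto(alpha, size=(reps, n - 1)) + 1.0   # n-1 of the n jumps
    s = x.sum(axis=1)                                 # their partial sum
    m = x.max(axis=1)                                 # their running maximum
    return n * tail(np.maximum(b - s, m))

est = asmussen_kroese()
p_hat = est.mean()                                    # unbiased for P(S_n > b)
rel_err = est.std() / (p_hat * np.sqrt(est.size))
```

Naive Monte Carlo would need on the order of millions of samples to see a single exceedance at this level (P ≈ n·b^−α ≈ 5·10⁻⁴), whereas every replication here contributes a smooth, low-variance estimate.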
Use of Play Therapy in Nursing Process: A Prospective Randomized Controlled Study.
Sezici, Emel; Ocakci, Ayse Ferda; Kadioglu, Hasibe
2017-03-01
Play therapy is a nursing intervention employed in multidisciplinary approaches to develop the social, emotional, and behavioral skills of children. In this study, we aim to determine the effects of play therapy on the social, emotional, and behavioral skills of pre-school children through the nursing process. A single-blind, prospective, randomized controlled study was undertaken. The design, conduct, and reporting of this study adhere to the Consolidated Standards of Reporting Trials (CONSORT) guidelines. The participants included 4- to 5-year-old kindergarten children with no oral or aural disabilities and parents who agreed to participate in the study. The Pre-school Child and Family Identification Form and the Social Competence and Behavior Evaluation Scale were used to gather data. Games from the play therapy literature addressing the nursing diagnoses determined after the preliminary test (fear, social disturbance, impaired social interactions, ineffective coping, anxiety) constituted the intervention of the study. There was no difference beforehand between the experimental and control groups in the children's average Anger-Aggression (AA), Social Competence (SC), and Anxiety-Withdrawal (AW) scores (t = 0.015, p = .988; t = 0.084, p = .933; t = 0.214, p = .831, respectively). The differences between groups in average AA and SC scores were statistically significant in the post-test (t = 2.041, p = .045; t = 2.692, p = .009, respectively) and in the retests (t = 4.538, p = .000; t = 4.693, p = .000, respectively). In AW average scores, no statistical difference was found in the post-test (t = 0.700, p = .486), whereas in the retest a significant difference was identified (t = 5.839, p = .000). Play therapy helped pre-school children to improve their social, emotional, and behavioral skills. It also provided benefits for the children to decrease their fear and anxiety levels, to improve
A process evaluation of the Supermarket Healthy Eating for Life (SHELf) randomized controlled trial.
Olstad, Dana Lee; Ball, Kylie; Abbott, Gavin; McNaughton, Sarah A; Le, Ha N D; Ni Mhurchu, Cliona; Pollard, Christina; Crawford, David A
2016-02-24
Supermarket Healthy Eating for Life (SHELf) was a randomized controlled trial that operationalized a socioecological approach to population-level dietary behaviour change in a real-world supermarket setting. SHELf tested the impact of individual (skill-building), environmental (20% price reductions), and combined (skill-building + 20% price reductions) interventions on women's purchasing and consumption of fruits, vegetables, low-calorie carbonated beverages, and water. This process evaluation investigated the reach, effectiveness, implementation, and maintenance of the SHELf interventions. RE-AIM provided a conceptual framework to examine the processes underlying the impact of the interventions using data from participant surveys and objective sales data collected at baseline, post-intervention (3 months), and 6 months post-intervention. Fisher's exact, χ², and t-tests assessed differences in quantitative survey responses among groups. Adjusted linear regression examined the impact of self-reported intervention dose on food purchasing and consumption outcomes. Thematic analysis identified key themes within qualitative survey responses. Reach of the SHELf interventions to disadvantaged groups, and beyond study participants themselves, was moderate. Just over one-third of intervention participants indicated that the interventions were effective in changing the way they bought, cooked or consumed food (p < 0.001 compared to control), with no differences among intervention groups. Improvements in purchasing and consumption outcomes were greatest among those who received a higher intervention dose. Most notably, participants who said they accessed price reductions on fruits and vegetables purchased (519 g/week) and consumed (0.5 servings/day) more vegetables. The majority of participants said they accessed (82%) and appreciated discounts on fruits and vegetables, while there was limited use (40%) and appreciation of discounts on low-calorie carbonated
Directory of Open Access Journals (Sweden)
Luca Falsiroli Maistrello
2018-04-01
Full Text Available Background: A variety of interventions has been proposed for symptom relief in primary headaches. Among these, manual trigger point (TrP) treatment is gaining popularity, but its effects have not yet been investigated. Objective: The aim was to establish the effectiveness of manual TrP treatment compared to minimal active or no active interventions in terms of frequency, intensity, and duration of attacks in adults with primary headaches. Methods: We searched the MEDLINE, COCHRANE, Web of Science, and PEDro databases up to November 2017 for randomized controlled trials (RCTs). Two independent reviewers appraised the risk of bias (RoB) and applied the grading of recommendations, assessment, development, and evaluation (GRADE) approach to evaluate the overall quality of evidence. Results: Seven RCTs that compared manual treatment vs minimal active intervention were included: 5 focused on tension-type headache (TTH) and 2 on migraine (MH); 3 out of 7 RCTs had high RoB. Combined TTH and MH results show a statistically significant reduction for all outcomes after treatment compared to controls, but the level of evidence was very low. Subgroup analysis showed a statistically significant reduction in attack frequency (no. of attacks per month) after treatment in TTH (MD −3.50; 95% CI from −4.91 to −2.09; 4 RCTs) and in MH (MD −1.92; 95% CI from −3.03 to −0.80; 2 RCTs). Pain intensity (0–100 scale) was reduced in TTH (MD −12.83; 95% CI from −19.49 to −6.17; 4 RCTs) and in MH (MD −13.60; 95% CI from −19.54 to −7.66; 2 RCTs). Duration of attacks (hours) was reduced in TTH (MD −0.51; 95% CI from −0.97 to −0.04; 2 RCTs) and in MH (MD −10.68; 95% CI from −14.41 to −6.95; 1 RCT). Conclusion: Manual TrP treatment of head and neck muscles may reduce the frequency, intensity, and duration of attacks in TTH and MH, but the quality of evidence according to the GRADE approach was very low owing to the small number of studies, high RoB, and imprecision of results.
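As a worked example of how subgroup estimates like these combine, the two pain-intensity mean differences reported above can be pooled with a generic fixed-effect inverse-variance model, recovering each standard error from its 95% CI. This is an illustrative sketch of the standard meta-analytic arithmetic, not the review's actual software or model choice.

```python
import math

def pooled_md(estimates):
    """Fixed-effect inverse-variance pooling of mean differences,
    recovering each standard error from the reported 95% CI."""
    num = den = 0.0
    for md, lo, hi in estimates:
        se = (hi - lo) / (2.0 * 1.96)        # CI half-width -> standard error
        w = 1.0 / se ** 2                    # inverse-variance weight
        num += w * md
        den += w
    pooled = num / den
    se = math.sqrt(1.0 / den)
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Pain intensity (0-100) mean differences reported in the abstract
tth = (-12.83, -19.49, -6.17)    # TTH subgroup: MD, CI lower, CI upper
mh = (-13.60, -19.54, -7.66)     # MH subgroup: MD, CI lower, CI upper
md, ci = pooled_md([tth, mh])
```

Because the two subgroup CIs are of similar width, the pooled estimate sits between the two MDs (around −13) with a narrower interval; a random-effects model would widen that interval to account for between-study heterogeneity.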
International Nuclear Information System (INIS)
Petersen, A.M.; Melamu, Rethabi; Knoetze, J.H.; Görgens, J.F.
2015-01-01
Highlights: • Process evaluation of thermochemical and biological routes for bagasse to fuels. • Pinch point analysis increases overall efficiencies by reducing utility consumption. • Advanced biological route increased efficiency and local environmental impacts. • Thermochemical routes have the highest efficiencies and low life cycle impacts. - Abstract: Three alternative processes for the production of liquid transportation biofuels from sugar cane bagasse were compared from the perspective of energy efficiency, using process modelling, Process Environmental Assessments, and Life Cycle Assessment. Bio-ethanol via two biological processes was considered, i.e. Separate Hydrolysis and Fermentation (Process 1) and Simultaneous Saccharification and Fermentation (Process 2), in comparison to Gasification and Fischer-Tropsch synthesis for the production of synthetic fuels (Process 3). The energy efficiency of each process scenario was maximised through pinch point analysis for heat integration. The more advanced bio-ethanol process was Process 2, with a higher energy efficiency of 42.3%. Heat integration was critical for Process 3, whereby the energy efficiency was increased from 51.6% to 55.7%. In both the Process Environmental Assessment and the Life Cycle Assessment, Process 3 had the least potential for detrimental environmental impacts, due to its relatively high energy efficiency. Process 2 had the greatest Process Environmental Impact due to the intensive use of processing chemicals. Regarding the Life Cycle Assessment, Process 1 was the most severe due to its low energy efficiency
Directory of Open Access Journals (Sweden)
Cristina Farias da Fonseca
2013-03-01
Full Text Available This study aimed to verify hygienic-sanitary working practices and to create and implement a Hazard Analysis Critical Control Point (HACCP) plan in two lobster processing industries in Pernambuco State, Brazil. The industries studied process frozen whole lobsters, frozen whole cooked lobsters, and frozen lobster tails for export. The application of the hygienic-sanitary checklist in the industries analyzed achieved conformity rates over 96% for the aspects evaluated. The application of the HACCP plan resulted in the detection of two critical control points (CCPs), comprising the receiving and classification steps in the processing of frozen lobster and frozen lobster tails, and an additional critical control point (CCP) was detected during the cooking step of processing of the whole frozen cooked lobster. The proper implementation of the HACCP plan in the lobster processing industries studied proved to be the safest and most cost-effective method to monitor the hazards at each critical control point (CCP).
International Nuclear Information System (INIS)
Shen, J.
1991-01-01
Research activities were concentrated on an innovative scintillation technique for high-energy collider detection. Heretofore, scintillation waveform data of high-energy physics events have been problematically random. This presents a bottleneck in data flow for the next generation of detectors at proton colliders like the SSC or LHC. The prevailing problems to resolve were: (1) additional time walk and jitter resulting from the random hitting positions of particles, (2) increased walk and jitter caused by scintillation photon propagation dispersion, and (3) quantum fluctuations of luminescence. However, these were manageable once the different aspects of randomness had been clarified in greater detail. For this purpose, the three were defined as pseudorandomness, quasi-randomness, and real randomness, respectively. A unique scintillation counter incorporating long scintillators with light guides, a drift chamber, and fast discriminators plus integrators was employed to resolve the first problem, correcting time walk and reducing the additional jitter by establishing an analytical waveform description V(t,z) for a measured position z. The second problem was resolved by reducing jitter through compressing V(t,z) with a nonlinear medium, called cooling scintillation. A resolution of the third problem was proposed by orienting molecular and polarizing scintillation through the use of intense magnetic technology, called stabilizing the waveform
Random practice - one of the factors of the motor learning process
Directory of Open Access Journals (Sweden)
Petr Valach
2012-01-01
Full Text Available BACKGROUND: An important concept in acquiring motor skills is random practice (contextual interference - CI). The explanation of the effect of contextual interference is that the memory has to work more intensively, and it therefore yields better retention of motor skills than blocked practice. Only active remembering of a motor skill gives it practical value for appropriate use in the future. OBJECTIVE: The aim of this research was to determine the difference in how motor skills in sport gymnastics are acquired and retained using two different teaching methods - blocked and random practice. METHODS: Blocked and random practice on three selected gymnastics tasks were applied in two groups of physical education students (blocked practice - group BP; random practice - group RP) over two months, in one session a week (80 trials in total). At the end of the experiment and 6 months later (retention tests), the groups were tested on the selected gymnastics skills. RESULTS: No significant differences in the level of gymnastics skills were found between the BP group and the RP group at the end of the experiment. However, the retention tests showed a significantly higher level of gymnastics skills in the RP group in comparison with the BP group. CONCLUSION: The results confirmed that retention of gymnastics skills using the random practice teaching method was significantly higher than with blocked practice.
Boettle, M.; Rybski, D.; Kropp, J. P.
2016-02-01
In contrast to recent advances in projecting sea levels, estimations about the economic impact of sea level rise are vague. Nonetheless, they are of great importance for policy making with regard to adaptation and greenhouse-gas mitigation. Since the damage is mainly caused by extreme events, we propose a stochastic framework to estimate the monetary losses from coastal floods in a confined region. For this purpose, we follow a Peak-over-Threshold approach employing a Poisson point process and the Generalised Pareto Distribution. By considering the effect of sea level rise as well as potential adaptation scenarios on the involved parameters, we are able to study the development of the annual damage. An application to the city of Copenhagen shows that a doubling of losses can be expected from a mean sea level increase of only 11 cm. In general, we find that for varying parameters the expected losses can be well approximated by one of three analytical expressions depending on the extreme value parameters. These findings reveal the complex interplay of the involved parameters and allow conclusions of fundamental relevance. For instance, we show that the damage typically increases faster than the sea level rise itself. This in turn can be of great importance for the assessment of sea level rise impacts on the global scale. Our results are accompanied by an assessment of uncertainty, which reflects the stochastic nature of extreme events. While the absolute value of uncertainty about the flood damage increases with rising mean sea levels, we find that it decreases in relation to the expected damage.
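The Peak-over-Threshold machinery described here can be sketched in a few lines: yearly flood counts follow a Poisson point process, exceedance heights follow a Generalised Pareto Distribution, and sea level rise is crudely modelled as lifting every flood peak. All parameters are illustrative assumptions, not the Copenhagen calibration, and damage is simply taken equal to flood height.

```python
import numpy as np

rng = np.random.default_rng(3)

lam, sigma, xi = 0.8, 0.25, 0.1   # exceedance rate [1/yr]; GPD scale and shape

def gpd_sample(n):
    """Inverse-CDF sampling from the Generalised Pareto Distribution."""
    u = rng.random(n)
    return sigma / xi * ((1.0 - u) ** -xi - 1.0)

def mean_annual_damage(slr, years=100_000):
    """Monte Carlo expected annual damage under a Peak-over-Threshold model.

    Events per year are Poisson(lam); each flood height is GPD-distributed,
    and a mean sea level rise `slr` is assumed to lift every peak by `slr`.
    Damage is crudely taken equal to the flood height above the threshold.
    """
    counts = rng.poisson(lam, years)
    heights = gpd_sample(counts.sum()) + slr
    yearly = np.zeros(years)
    np.add.at(yearly, np.repeat(np.arange(years), counts), heights)
    return yearly.mean()

d0 = mean_annual_damage(0.0)       # present-day mean sea level
d1 = mean_annual_damage(0.11)      # the 11 cm rise discussed in the abstract

# Closed form for comparison: E[damage] = lam * (sigma/(1 - xi) + slr)
analytic0 = lam * sigma / (1.0 - xi)
```

With a convex damage function instead of a linear one, the relative growth of `d1` over `d0` accelerates further, which is the mechanism behind the paper's observation that losses typically increase faster than the sea level itself.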
Hancock, Laura M; Bruce, Jared M; Bruce, Amanda S; Lynch, Sharon G
2015-01-01
Between 40% and 65% of multiple sclerosis patients experience cognitive deficits, with processing speed and working memory most commonly affected. This pilot study investigated the effect of computerized cognitive training focused on improving processing speed and working memory. Participants were randomized into either an active or a sham training group and engaged in six weeks of training. The active training group improved on a measure of processing speed and attention following cognitive training, and data in other domains trended toward significance. Results provide preliminary evidence that cognitive training with multiple sclerosis patients may produce moderate improvement in select areas of cognitive functioning.
Inker, Lesley A.; Lambers Heerspink, Hiddo J.; Mondal, Hasi; Schmid, Christopher H.; Tighiouart, Hocine; Noubary, Farzad; Coresh, Josef; Greene, Tom; Levey, Andrew S.
2014-01-01
Background: There is increased interest in using alternative end points for trials of kidney disease progression. The currently established end points of end-stage renal disease and doubling of serum creatinine level, equivalent to a 57% decline in estimated glomerular filtration rate (eGFR), are
Directory of Open Access Journals (Sweden)
Victoria Jane Palmer
2016-10-01
Full Text Available Background: Process evaluations are essential to understand the contextual, relational, organizational, and system factors of complex interventions. Guidance for developing process evaluations for randomized controlled trials (RCTs) has, until recently, been fairly limited. Method/Design: A nested process evaluation (NPE) was designed and embedded across all stages of a stepped wedge cluster RCT called the CORE study. The aim of the CORE study is to test the effectiveness of an experience-based codesign methodology for improving psychosocial recovery outcomes for people living with severe mental illness (service users). Process evaluation data collection combines qualitative and quantitative methods with four aims: (1) to describe organizational characteristics, service models, policy contexts, and government reforms, and examine the interaction of these with the intervention; (2) to understand how the codesign intervention works, the cluster variability in implementation, and whether the intervention is or is not sustained in different settings; (3) to assist in the interpretation of the primary and secondary outcomes and determine whether the causal assumptions underpinning the codesign interventions are accurate; and (4) to determine the impact of a purposefully designed engagement model on broader study retention and knowledge transfer in the trial. Discussion: Process evaluations require prespecified study protocols, but finding a balance between their iterative nature and the structure offered by protocol development is an important step forward. Taking this step will advance the role of qualitative research within trials research and enable more focused data collection at strategic points within studies.
Process convergence of self-normalized sums of i.i.d. random ...
Indian Academy of Sciences (India)
The study of the asymptotics of self-normalized sums is also interesting. Logan ... if the constituent random variables are from the domain of attraction of a normal distribution ... index of stability α which equals 2 (for definition, see §2).
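As a toy illustration of the object under study: the self-normalized sum is T_n = S_n / V_n with S_n = Σ X_i and V_n = (Σ X_i²)^{1/2}. The sketch below (illustrative only; names and parameters are not from the paper) draws i.i.d. variables from the domain of attraction of a normal distribution. By the Cauchy–Schwarz inequality, |T_n| ≤ √n holds for any sample whatsoever, which is what makes self-normalization so robust.

```python
import math
import random

def self_normalized_sum(xs):
    """T_n = S_n / V_n, with S_n = sum(x_i) and V_n = sqrt(sum(x_i^2))."""
    s = sum(xs)
    v = math.sqrt(sum(x * x for x in xs))
    return s / v

random.seed(1)
n = 10_000
# i.i.d. standard normal variables lie in the domain of attraction of a
# normal distribution (index of stability alpha = 2), in which case T_n
# is asymptotically standard normal.
t = self_normalized_sum([random.gauss(0.0, 1.0) for _ in range(n)])
```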
Generation and Analysis of Constrained Random Sampling Patterns
DEFF Research Database (Denmark)
Pierzchlewski, Jacek; Arildsen, Thomas
2016-01-01
Random sampling is a technique for signal acquisition which is gaining popularity in practical signal processing systems. Nowadays, event-driven analog-to-digital converters make random sampling feasible in practical applications. A process of random sampling is defined by a sampling pattern, which indicates signal sampling points in time. Practical random sampling patterns are constrained by ADC characteristics and application requirements. In this paper, we introduce statistical methods which evaluate random sampling pattern generators with emphasis on practical applications. Furthermore, we propose ... algorithm generates random sampling patterns dedicated to event-driven ADCs better than existing sampling pattern generators. Finally, implementation issues of random sampling patterns are discussed.
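A constrained random sampling pattern of the kind discussed can be sketched as follows. The two constraints below, a fixed number of sampling points and a minimum gap between consecutive points (a typical event-driven-ADC limitation), are illustrative assumptions, and the generator is a textbook construction rather than the paper's proposed algorithm.

```python
import random

def constrained_pattern(n_grid, k, min_gap, rng):
    """Draw a random sampling pattern: k sampling points on a grid of
    n_grid time slots, with at least `min_gap` slots between consecutive
    points.  Uses the standard 'gap removal' trick: choose k points
    freely from a shrunken grid, sort them, then re-insert the gaps."""
    shrunk = n_grid - (k - 1) * (min_gap - 1)
    if shrunk < k:
        raise ValueError("constraints cannot be satisfied")
    base = sorted(rng.sample(range(shrunk), k))
    # Shifting the i-th point by i*(min_gap-1) restores the minimum gap.
    return [p + i * (min_gap - 1) for i, p in enumerate(base)]

rng = random.Random(42)
pattern = constrained_pattern(n_grid=100, k=10, min_gap=5, rng=rng)
```

Because the shift is a bijection between k-subsets of the shrunken grid and valid patterns, the construction samples uniformly over all patterns satisfying the minimum-gap constraint.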