Carr, D.B.; Tolley, H.D.
1982-12-01
This paper investigates procedures for univariate nonparametric estimation of tail probabilities. Extrapolated values for tail probabilities beyond the data are also obtained based on the shape of the density in the tail. Several estimators which use exponential weighting are described. These are compared in a Monte Carlo study to nonweighted estimators, to the empirical cdf, to an integrated kernel, to a Fourier series estimate, to a penalized likelihood estimate and a maximum likelihood estimate. Selected weighted estimators are shown to compare favorably to many of these standard estimators for the sampling distributions investigated.
Estimating Probabilities in Recommendation Systems
Sun, Mingxuan; Kidwell, Paul
2010-01-01
Recommendation systems are emerging as an important business application with significant economic impact. Currently popular systems include Amazon's book recommendations, Netflix's movie recommendations, and Pandora's music recommendations. In this paper we address the problem of estimating probabilities associated with recommendation system data using non-parametric kernel smoothing. In our estimation we interpret missing items as randomly censored observations and obtain efficient computation schemes using combinatorial properties of generating functions. We demonstrate our approach with several case studies involving real world movie recommendation data. The results are comparable with state-of-the-art techniques while also providing probabilistic preference estimates outside the scope of traditional recommender systems.
Tail Probabilities for Regression Estimators
T. Mikosch; C.G. de Vries (Casper)
2006-01-01
Estimators of regression coefficients are known to be asymptotically normally distributed, provided certain regularity conditions are satisfied. In small samples and if the noise is not normally distributed, this can be a poor guide to the quality of the estimators. The paper addresses ...
Optimizing Probability of Detection Point Estimate Demonstration
Koshti, Ajay M.
2017-01-01
Probability of detection (POD) analysis is used in assessing the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. These NDE methods are required to detect real flaws such as cracks and crack-like flaws. A reliably detectable crack size is required for safe-life analysis of fracture-critical parts. The paper discusses optimizing probability of detection (POD) demonstration experiments using the point estimate method, which is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF), while keeping the flaw sizes in the set as small as possible.
Channel Capacity Estimation using Free Probability Theory
Ryan, Øyvind
2007-01-01
In many channel measurement applications, one needs to estimate some characteristics of the channels based on a limited set of measurements. This is mainly due to the highly time-varying characteristics of the channel. In this contribution, it will be shown how free probability can be used for channel capacity estimation in MIMO systems. Free probability has already been applied in various application fields such as digital communications, nuclear physics and mathematical finance, and has been shown to be an invaluable tool for describing the asymptotic behaviour of many systems when the dimensions of the system (i.e., the number of antennas) get large. In particular, introducing the notion of free deconvolution, we provide hereafter an asymptotically (in the number of antennas) unbiased capacity estimator (w.r.t. the number of observations) for MIMO channels impaired by noise. Another unbiased estimator (for any number of observations) is also constructed by slightly modifying the free probability based est...
Risk Probability Estimating Based on Clustering
Chen, Yong; Jensen, Christian D.; Gray, Elizabeth
2003-01-01
Ubiquitous computing environments are highly dynamic, with new unforeseen circumstances and constantly changing environments, which introduces new risks that cannot be assessed through traditional means of risk analysis. Mobile entities in a ubiquitous computing environment require the ability to perform an autonomous assessment of the risk incurred by a specific interaction with another entity in a given context. This assessment will allow a mobile entity to decide whether sufficient evidence exists to mitigate the risk and allow the interaction to proceed. Such evidence might include records of prior experiences, recommendations from a trusted entity or the reputation of the other entity. In this paper we propose a dynamic mechanism for estimating the risk probability of a certain interaction in a given environment using hybrid neural networks. We argue that traditional risk assessment models from the insurance industry do not directly apply to ubiquitous computing environments. Instead, we propose a dynamic mechanism for risk assessment, which is based on pattern matching, classification and prediction procedures. This mechanism uses an estimator of risk probability, which is based...
A Thermodynamical Approach for Probability Estimation
Isozaki, Takashi
2012-01-01
The issue of discrete probability estimation for samples of small size is addressed in this study. The maximum likelihood method often suffers over-fitting when insufficient data is available. Although the Bayesian approach can avoid over-fitting by using prior distributions, it still has problems with objective analysis. In response to these drawbacks, a new theoretical framework based on thermodynamics, where energy and temperature are introduced, was developed. Entropy and likelihood are placed at the center of this method. The key principle of inference for probability mass functions is the minimum free energy, which is shown to unify the two principles of maximum likelihood and maximum entropy. Our method can robustly estimate probability functions from small size data.
A new estimator of the discovery probability.
Favaro, Stefano; Lijoi, Antonio; Prünster, Igor
2012-12-01
Species sampling problems have a long history in ecological and biological studies, and a number of issues, including the evaluation of species richness, the design of sampling experiments, and the estimation of rare species variety, are to be addressed. Such inferential problems have recently emerged also in genomic applications; however, they exhibit some peculiar features that make them more challenging: specifically, one has to deal with very large populations (genomic libraries) containing a huge number of distinct species (genes), of which only a small portion has been sampled (sequenced). These aspects motivate the Bayesian nonparametric approach we undertake, since it allows us to achieve the degree of flexibility typically needed in this framework. Based on an observed sample of size n, focus will be on prediction of a key aspect of the outcome from an additional sample of size m, namely, the so-called discovery probability. In particular, conditionally on an observed basic sample of size n, we derive a novel estimator of the probability of detecting, at the (n+m+1)th observation, species that have been observed with any given frequency in the enlarged sample of size n+m. Such an estimator admits a closed-form expression that can be exactly evaluated. The result we obtain allows us to quantify both the rate at which rare species are detected and the achieved sample coverage of abundant species, as m increases. Natural applications are represented by the estimation of the probability of discovering rare genes within genomic libraries, and the results are illustrated by means of two expressed sequence tags datasets.
Mavromoustakos, Elena; Clark, Gavin I; Rock, Adam J
2016-01-01
Probability bias regarding threat-relevant outcomes has been demonstrated across anxiety disorders but has not been investigated in flying phobia. Individual temporal orientation (time perspective) may be hypothesised to influence estimates of negative outcomes occurring. The present study investigated whether probability bias could be demonstrated in flying phobia and whether probability estimates of negative flying events were predicted by time perspective. Sixty flying phobic and fifty-five non-flying-phobic adults were recruited to complete an online questionnaire. Participants completed the Flight Anxiety Scale, the Probability Scale (measuring the perceived probability of flying negative, general negative, and general positive events) and the Past-Negative, Future and Present-Hedonistic subscales of the Zimbardo Time Perspective Inventory (variables argued to predict mental travel forward and backward in time). The flying phobic group estimated the probability of flying negative and general negative events occurring as significantly higher than did the non-phobic group. Past-Negative scores (positively) and Present-Hedonistic scores (negatively) predicted probability estimates of flying negative events. The Future Orientation subscale did not significantly predict probability estimates. This study is the first to demonstrate probability bias for threat-relevant outcomes in flying phobia. Results suggest that time perspective may influence the perceived probability of threat-relevant outcomes, but the nature of this relationship remains to be determined.
Comparison of density estimators. [Estimation of probability density functions]
Kao, S.; Monahan, J.F.
1977-09-01
Recent work in the field of probability density estimation has included the introduction of some new methods, such as the polynomial and spline methods and the nearest neighbor method, and the study of asymptotic properties in depth. This earlier work is summarized here. In addition, the computational complexity of the various algorithms is analyzed and simulation results are presented. The object is to compare the performance of the various methods in small samples and their sensitivity to changes in their parameters, and to attempt to discover at what point a sample is so small that density estimation is no longer worthwhile.
Anticipating abrupt shifts in temporal evolution of probability of eruption
Rohmer, J.; Loschetter, A.
2016-04-01
Estimating the probability of eruption by jointly accounting for different sources of monitoring parameters over time is a key component of volcano risk management. In the present study, we are interested in the transition from a state of low-to-moderate probability to a state of high probability. Using data from the MESIMEX exercise at the Vesuvius volcano, we investigated the potential of time-varying indicators, related to the correlation structure or to the variability of the probability time series, for detecting this critical transition in advance. We found that changes in the power spectra and in the standard deviation estimated over a rolling time window both present an abrupt increase, which marks the approaching shift. Our numerical experiments revealed that the transition from an eruption probability of 10-15% to more than 70% could be identified up to 1-3 h in advance. This additional lead time could be useful for placing key services (e.g., emergency services for vulnerable groups, commandeering additional transportation means) on a higher level of alert before the actual call for evacuation.
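As a rough illustration of the rolling-window variability indicator described above, the following sketch computes the standard deviation of a synthetic probability series over a sliding window; the window length and the synthetic series are illustrative assumptions, not the MESIMEX data or the authors' exact settings.

```python
import numpy as np

# Minimal sketch of a rolling-window early-warning indicator: the standard
# deviation of the eruption-probability series over a sliding window. An
# abrupt rise in this indicator marks an approaching transition.
def rolling_std(prob_series, window=24):
    p = np.asarray(prob_series, float)
    return np.array([p[i - window:i].std() for i in range(window, len(p) + 1)])

# Synthetic example: a slow drift followed by a jump in probability.
t = np.arange(200)
series = (0.12 + 0.0005 * t
          + np.where(t > 150, 0.6, 0.0)
          + np.random.default_rng(3).normal(0, 0.01, 200))
print(rolling_std(series)[-5:])   # indicator increases sharply near the shift
```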
Estimation of transition probabilities of credit ratings
Peng, Gan Chew; Hin, Pooi Ah
2015-12-01
The present research is based on the quarterly credit ratings of ten companies over 15 years, taken from the database of the Taiwan Economic Journal. The vector m_i = (m_i1, m_i2, ..., m_i10) denotes the credit ratings of the ten companies in the i-th quarter. The vector m_i+1 in the next quarter is modelled as dependent on the vector m_i via a conditional distribution derived from a 20-dimensional power-normal mixture distribution. The transition probability P_kl(i, j) of getting m_i+1,j = l given that m_ij = k is then computed from the conditional distribution. It is found that the variation of the transition probability P_kl(i, j) as i varies gives an indication of the possible transition of the credit rating of the j-th company in the near future.
Nonparametric Estimation of Characteristics of Probability Distributions
Orlov A. I.
2015-10-01
The article is devoted to nonparametric point and interval estimation of the characteristics of a probability distribution (expectation, median, variance, standard deviation, coefficient of variation) from sample results. Sample values are regarded as realizations of independent and identically distributed random variables with an arbitrary distribution function having the required number of moments. Nonparametric analysis procedures are compared with parametric procedures based on the assumption that the sample values follow a normal distribution. Point estimators are constructed in the obvious way, using sample analogs of the theoretical characteristics. Interval estimators are based on the asymptotic normality of sample moments and of functions of them. Nonparametric asymptotic confidence intervals are obtained through a special technology for deriving asymptotic relations in applied statistics. In the first step, this technology uses the multidimensional central limit theorem, applied to sums of vectors whose coordinates are powers of the initial random variables. The second step converts the limiting multivariate normal vector into the vector of interest to the researcher; here linearization is used and infinitesimal quantities are discarded. The third step is a rigorous justification of the results at the standard level of mathematical-statistical reasoning, where it is usually necessary to use necessary and sufficient conditions for the inheritance of convergence. The article contains 10 numerical examples. The initial data are observations of the operating time to the limit state for 50 cutting tools. Using methods developed under the assumption of a normal distribution can lead to noticeably distorted conclusions when the normality hypothesis fails. The practical recommendation is that, for the analysis of real data, nonparametric confidence limits should be used ...
[No author listed]
2000-01-01
The principle of middle and long-term earthquake forecast model of spatial and temporal synthesized probability gain and the evaluation of forecast efficiency (R-values) of various forecast methods are introduced in this paper. The R-value method, developed by Xu (1989), is further developed here, and can be applied to more complicated cases. Probability gains in spatial and/or temporal domains and the R-values for different forecast methods are estimated in North China. The synthesized probability gain is then estimated as an example.
Probability shapes perceptual precision: A study in orientation estimation.
Jabar, Syaheed B; Anderson, Britt
2015-12-01
Probability is known to affect perceptual estimations, but an understanding of mechanisms is lacking. Moving beyond binary classification tasks, we had naive participants report the orientation of briefly viewed gratings where we systematically manipulated contingent probability. Participants rapidly developed faster and more precise estimations for high-probability tilts. The shapes of their error distributions, as indexed by a kurtosis measure, also showed a distortion from Gaussian. This kurtosis metric was robust, capturing probability effects that were graded, contextual, and varying as a function of stimulus orientation. Our data can be understood as a probability-induced reduction in the variability or "shape" of estimation errors, as would be expected if probability affects the perceptual representations. As probability manipulations are an implicit component of many endogenous cuing paradigms, changes at the perceptual level could account for changes in performance that might have traditionally been ascribed to "attention."
Probabilistic Motion Estimation Based on Temporal Coherence
Burgi, Pierre-Yves; Grzywacz, Norberto M. (doi: 10.1162/089976600300015169)
2012-01-01
We develop a theory for the temporal integration of visual motion motivated by psychophysical experiments. The theory proposes that input data are temporally grouped and used to predict and estimate the motion flows in the image sequence. This temporal grouping can be considered a generalization of the data association techniques used by engineers to study motion sequences. Our temporal-grouping theory is expressed in terms of the Bayesian generalization of standard Kalman filtering. To implement the theory we derive a parallel network which shares some properties of cortical networks. Computer simulations of this network demonstrate that our theory qualitatively accounts for psychophysical experiments on motion occlusion and motion outliers.
Wang, S Q; Zhang, H Y; Li, Z L
2016-10-01
Understanding the spatio-temporal distribution of pests in orchards can provide important information for designing monitoring schemes and establishing better means of pest control. In this study, the spatial and temporal distribution of Bactrocera minax (Enderlein) (Diptera: Tephritidae) was assessed, and activity trends were evaluated using probability kriging. Adults of B. minax were captured over two successive occurrence periods in a small-scale citrus orchard using food bait traps placed both inside and outside the orchard. The weekly spatial distribution of B. minax within the orchard and adjacent woods was examined using semivariogram parameters. Edge concentration was observed during most weeks of adult occurrence, and the adult population aggregated with high probability within a band less than 100 m wide on both sides of the boundary between the orchard and the woods. The sequential probability kriged maps showed that adults were located in the marginal zone with higher probability, especially in the early and peak stages. The feeding, ovipositing, and mating behaviors of B. minax are possible explanations for these spatio-temporal patterns. Therefore, the spatial arrangement of traps or spraying spots and their distance to the forest edge should be considered to enhance control of B. minax in small-scale orchards.
The estimation of tree posterior probabilities using conditional clade probability distributions.
Larget, Bret
2013-07-01
In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample.
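The following is a minimal sketch of the conditional clade idea for rooted trees represented as nested tuples; it assumes every clade of the query tree appears in the sampled trees and omits the log-space arithmetic and unrooted-tree handling that a real implementation (such as the author's software) would need.

```python
from collections import Counter

# Rooted trees as nested tuples of taxon labels, e.g. (("A","B"),("C","D")).
def clade(tree):
    return frozenset([tree]) if isinstance(tree, str) else clade(tree[0]) | clade(tree[1])

def splits(tree, out):
    """Collect (parent clade, unordered child split) pairs for all internal nodes."""
    if isinstance(tree, str):
        return
    parent, left = clade(tree), clade(tree[0])
    out.append((parent, frozenset([left, parent - left])))
    splits(tree[0], out)
    splits(tree[1], out)

def ccd_probability(tree, sample):
    """Estimate P(tree) as a product of conditional clade (split) frequencies."""
    split_counts, clade_counts = Counter(), Counter()
    for t in sample:
        s = []
        splits(t, s)
        for parent, split in s:
            split_counts[(parent, split)] += 1
            clade_counts[parent] += 1
    p, s = 1.0, []
    splits(tree, s)
    for parent, split in s:
        # Assumes each parent clade of `tree` occurs somewhere in `sample`.
        p *= split_counts[(parent, split)] / clade_counts[parent]
    return p

sample = [(("A", "B"), ("C", "D"))] * 8 + [(("A", "C"), ("B", "D"))] * 2
print(ccd_probability((("A", "B"), ("C", "D")), sample))   # 0.8 for this toy sample
```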
Estimating the exceedance probability of extreme rainfalls up to the probable maximum precipitation
Nathan, Rory; Jordan, Phillip; Scorah, Matthew; Lang, Simon; Kuczera, George; Schaefer, Melvin; Weinmann, Erwin
2016-12-01
If risk-based criteria are used in the design of high hazard structures (such as dam spillways and nuclear power stations), then it is necessary to estimate the annual exceedance probability (AEP) of extreme rainfalls up to and including the Probable Maximum Precipitation (PMP). This paper describes the development and application of two largely independent methods to estimate the frequencies of such extreme rainfalls. One method is based on stochastic storm transposition (SST), which combines the "arrival" and "transposition" probabilities of an extreme storm using the total probability theorem. The second method, based on "stochastic storm regression" (SSR), combines frequency curves of point rainfalls with regression estimates of local and transposed areal rainfalls; rainfall maxima are generated by stochastically sampling the independent variates, where the required exceedance probabilities are obtained using the total probability theorem. The methods are applied to two large catchments (with areas of 3550 km² and 15,280 km²) located in inland southern Australia. Both methods were found to provide similar estimates of the frequency of extreme areal rainfalls for the two study catchments. The best estimates of the AEP of the PMP for the smaller and larger of the catchments were found to be 10⁻⁷ and 10⁻⁶, respectively, but the uncertainty of these estimates spans one to two orders of magnitude. Additionally, the SST method was applied to a range of locations within a meteorologically homogenous region to investigate the nature of the relationship between the AEP of PMP and catchment area.
Information-theoretic methods for estimating of complicated probability distributions
Zong, Zhi
2006-01-01
Mixing various disciplines frequently produces something profound and far-reaching; cybernetics is an often-quoted example. The mix of information theory, statistics and computing technology has proved very useful, and has led to the recent development of information-theory based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is a fundamental task for quite a few fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur...
Detection probabilities for time-domain velocity estimation
Jensen, Jørgen Arendt
1991-01-01
Estimation of blood velocities by time-domain cross-correlation of successive high frequency sampled ultrasound signals is investigated. It is shown that any velocity can result from the estimator regardless of the true velocity due to the nonlinear technique employed. Using a simple simulation...... as a filter with a transfer function depending on the actual velocity. This influences the detection probability, which gets lower at certain velocities. An index directly reflecting the probability of detection can easily be calculated from the cross-correlation estimated. This makes it possible to assess...... the reliability of the velocity estimate in real time...
Naive Probability: Model-Based Estimates of Unique Events.
Khemlani, Sangeet S; Lotstein, Max; Johnson-Laird, Philip N
2015-08-01
We describe a dual-process theory of how individuals estimate the probabilities of unique events, such as Hillary Clinton becoming U.S. President. It postulates that uncertainty is a guide to improbability. In its computer implementation, an intuitive system 1 simulates evidence in mental models and forms analog non-numerical representations of the magnitude of degrees of belief. This system has minimal computational power and combines evidence using a small repertoire of primitive operations. It resolves the uncertainty of divergent evidence for single events, for conjunctions of events, and for inclusive disjunctions of events, by taking a primitive average of non-numerical probabilities. It computes conditional probabilities in a tractable way, treating the given event as evidence that may be relevant to the probability of the dependent event. A deliberative system 2 maps the resulting representations into numerical probabilities. With access to working memory, it carries out arithmetical operations in combining numerical estimates. Experiments corroborated the theory's predictions. Participants concurred in estimates of real possibilities. They violated the complete joint probability distribution in the predicted ways, when they made estimates about conjunctions: P(A), P(B), P(A and B), disjunctions: P(A), P(B), P(A or B or both), and conditional probabilities P(A), P(B), P(B|A). They were faster to estimate the probabilities of compound propositions when they had already estimated the probabilities of each of their components. We discuss the implications of these results for theories of probabilistic reasoning.
Revising probability estimates: Why increasing likelihood means increasing impact.
Maglio, Sam J; Polman, Evan
2016-08-01
Forecasted probabilities rarely stay the same for long. Instead, they are subject to constant revision: moving upward or downward, uncertain events become more or less likely. Yet little is known about how people interpret probability estimates beyond static snapshots, like a 30% chance of rain. Here, we consider the cognitive, affective, and behavioral consequences of revisions to probability forecasts. Stemming from a lay belief that revisions signal the emergence of a trend, we find in 10 studies (comprising uncertain events such as weather, climate change, sex, sports, and wine) that upward changes to event-probability (e.g., increasing from 20% to 30%) cause events to feel less remote than downward changes (e.g., decreasing from 40% to 30%), and subsequently change people's behavior regarding those events despite the revised event-probabilities being the same. Our research sheds light on how revising the probabilities for future events changes how people manage those uncertain events.
False Alarm Probability Estimation for Compressive Sensing Radar
Anitori, L.; Otten, M.P.G.; Hoogeboom, P.
2011-01-01
In this paper, false alarm probability (FAP) estimation for a radar using Compressive Sensing (CS) in the frequency domain is investigated. Compressive Sensing is a recently proposed technique which allows reconstruction of sparse signals from sub-Nyquist rate measurements. The estimation of the FAP is ...
Estimation of State Transition Probabilities: A Neural Network Model
Saito, Hiroshi; Takiyama, Ken; Okada, Masato
2015-12-01
Humans and animals can predict future states on the basis of acquired knowledge. This prediction of the state transition is important for choosing the best action, and the prediction is only possible if the state transition probability has already been learned. However, how our brains learn the state transition probability is unknown. Here, we propose a simple algorithm for estimating the state transition probability by utilizing the state prediction error. We analytically and numerically confirmed that our algorithm is able to learn the probability completely with an appropriate learning rate. Furthermore, our learning rule reproduced experimentally reported psychometric functions and neural activities in the lateral intraparietal area in a decision-making task. Thus, our algorithm might describe the manner in which our brains learn state transition probabilities and predict future states.
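A minimal sketch of the core idea, learning a transition probability vector from state prediction errors with a delta rule; the three-state environment and the learning rate are hypothetical choices, not the parameters analysed in the paper.

```python
import numpy as np

# Minimal sketch: a delta rule driven by the state prediction error converges
# to the true transition probabilities from the current state.
rng = np.random.default_rng(0)
true_p = np.array([0.2, 0.5, 0.3])     # true transition probabilities (assumed)
p_hat = np.full(3, 1 / 3)              # initial estimate
alpha = 0.05                           # learning rate (assumed)

for _ in range(5000):
    s_next = rng.choice(3, p=true_p)   # observe the next state
    onehot = np.eye(3)[s_next]
    p_hat += alpha * (onehot - p_hat)  # state prediction error drives the update

print(p_hat.round(3))                  # close to true_p, up to sampling noise
```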
Estimating the empirical probability of submarine landslide occurrence
Geist, Eric L.; Parsons, Thomas E.; Mosher, David C.; Shipp, Craig; Moscardelli, Lorena; Chaytor, Jason D.; Baxter, Christopher D. P.; Lee, Homa J.; Urgeles, Roger
2010-01-01
The empirical probability for the occurrence of submarine landslides at a given location can be estimated from age dates of past landslides. In this study, tools developed to estimate earthquake probability from paleoseismic horizons are adapted to estimate submarine landslide probability. In both types of estimates, one has to account for the uncertainty associated with age-dating individual events as well as the open time intervals before and after the observed sequence of landslides. For observed sequences of submarine landslides, we typically only have the age date of the youngest event and possibly of a seismic horizon that lies below the oldest event in a landslide sequence. We use an empirical Bayes analysis based on the Poisson-Gamma conjugate prior model specifically applied to the landslide probability problem. This model assumes that landslide events as imaged in geophysical data are independent and occur in time according to a Poisson distribution characterized by a rate parameter λ. With this method, we are able to estimate the most likely value of λ and, importantly, the range of uncertainty in this estimate. Examples considered include landslide sequences observed in the Santa Barbara Channel, California, and in Port Valdez, Alaska. By performing a statistical test on age dates representing the main failure episode of the Holocene Storegga landslide complex, we confirm that, given the uncertainties of age dating, landslide complexes can be treated as single events.
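The Poisson-Gamma conjugate update underlying this kind of analysis can be sketched as follows; the prior parameters, event count, and record length are hypothetical placeholders, not values from the Santa Barbara Channel or Port Valdez case studies.

```python
from scipy.stats import gamma

# Minimal sketch of a Poisson-Gamma rate estimate: n_events landslides observed
# over t_record kyr, with a Gamma(a0, b0) prior on the rate lambda (events/kyr)
# that an empirical Bayes step would fit from related sequences.
def rate_posterior(n_events, t_record, a0=1.0, b0=1.0):
    """Return the posterior mean and a 95% credible interval for lambda."""
    a_post = a0 + n_events              # conjugate update of the shape
    b_post = b0 + t_record              # conjugate update of the rate (1/scale)
    mean = a_post / b_post
    lo, hi = gamma.ppf([0.025, 0.975], a_post, scale=1.0 / b_post)
    return mean, (lo, hi)

print(rate_posterior(n_events=4, t_record=10.0))   # hypothetical inputs
```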
Estimating the probability of rare events: addressing zero failure data.
Quigley, John; Revie, Matthew
2011-07-01
Traditional statistical procedures for estimating the probability of an event result in an estimate of zero when no events are realized. Alternative inferential procedures have been proposed for the situation where zero events have been realized, but often these are ad hoc, relying on selecting methods dependent on the data that have been realized. Such data-dependent inference decisions violate fundamental statistical principles, resulting in estimation procedures whose benefits are difficult to assess. In this article, we propose estimating the probability of an event occurring through minimax inference on the probability that future samples of equal size realize no more events than that in the data on which the inference is based. Although motivated by inference on rare events, the method is not restricted to zero event data and closely approximates the maximum likelihood estimate (MLE) for nonzero data. The use of the minimax procedure provides a risk-averse inferential procedure where there are no events realized. A comparison is made with the MLE and regions of the underlying probability are identified where this approach is superior. Moreover, a comparison is made with three standard approaches to supporting inference where no event data are realized, which we argue are unduly pessimistic. We show that for situations of zero events the estimator can be simply approximated by 1/(2.5n), where n is the number of trials.
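A minimal sketch contrasting the MLE with the zero-failure approximation quoted above (p ≈ 1/(2.5n)); it is illustrative only and does not reproduce the full minimax derivation.

```python
# Minimal sketch: the MLE returns zero with no observed events, whereas the
# zero-failure approximation reported in the abstract gives a nonzero estimate.
def mle(events, trials):
    return events / trials

def zero_failure_estimate(trials):
    # Approximation quoted for the minimax procedure when no events occur.
    return 1.0 / (2.5 * trials)

for n in (10, 100, 1000):
    print(n, mle(0, n), zero_failure_estimate(n))
```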
Failure Probability Estimation of Wind Turbines by Enhanced Monte Carlo
Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Naess, Arvid
2012-01-01
This paper discusses the estimation of the failure probability of wind turbines required by codes of practice for designing them. The Standard Monte Carlo (SMC) simulations may be used for this reason conceptually as an alternative to the popular Peaks-Over-Threshold (POT) method. However......, estimation of very low failure probabilities with SMC simulations leads to unacceptably high computational costs. In this study, an Enhanced Monte Carlo (EMC) method is proposed that overcomes this obstacle. The method has advantages over both POT and SMC in terms of its low computational cost and accuracy...... is controlled by the pitch controller. This provides a fair framework for comparison of the behavior and failure event of the wind turbine with emphasis on the effect of the pitch controller. The Enhanced Monte Carlo method is then applied to the model and the failure probabilities of the model are estimated...
Simulation and Estimation of Extreme Quantiles and Extreme Probabilities
Guyader, Arnaud, E-mail: arnaud.guyader@uhb.fr [Universite Rennes 2 (France)]; Hengartner, Nicolas [Los Alamos National Laboratory, Information Sciences Group (United States)]; Matzner-Lober, Eric [Universite Rennes 2 (France)]
2011-10-15
Let X be a random vector with distribution μ on ℝ^d and Φ be a mapping from ℝ^d to ℝ. That mapping acts as a black box, e.g., the result from some computer experiments for which no analytical expression is available. This paper presents an efficient algorithm to estimate a tail probability given a quantile, or a quantile given a tail probability. The algorithm improves upon existing multilevel splitting methods and can be analyzed using Poisson process tools that lead to an exact description of the distribution of the estimated probabilities and quantiles. The performance of the algorithm is demonstrated in a problem related to digital watermarking.
Allelic drop-out probabilities estimated by logistic regression
Tvedebrink, Torben; Eriksen, Poul Svante; Asplund, Maria
2012-01-01
We discuss the model for estimating drop-out probabilities presented by Tvedebrink et al. [7] and the concerns that have been raised. The criticism of the model has demonstrated that the model is not perfect. However, the model is very useful for advanced forensic genetic work, where allelic dro...
Estimating the joint survival probabilities of married individuals
Sanders, Lisanne; Melenberg, Bertrand
2016-01-01
We estimate the joint survival probability of spouses using a large random sample drawn from a Dutch census. As benchmarks we use two bivariate Weibull models. We consider more flexible models, using a semi-nonparametric approach, by extending the independent Weibull distribution using squared polyn
Recursive estimation of prior probabilities using the mixture approach
Kazakos, D.
1974-01-01
The problem of estimating the prior probabilities q_k of a mixture of known density functions f_k(X), based on a sequence of N statistically independent observations, is considered. It is shown that, for very mild restrictions on f_k(X), the maximum likelihood estimate of Q is asymptotically efficient. A recursive algorithm for estimating Q is proposed, analyzed, and optimized. For the M = 2 case, it is possible for the recursive algorithm to achieve the same performance as the maximum likelihood one. For M > 2, slightly inferior performance is the price for having a recursive algorithm. However, the loss is computable and tolerable.
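A minimal sketch of a recursive prior-probability update for a two-component mixture with known densities; the Gaussian components and the 1/n step size are assumptions made for illustration, not necessarily the exact recursion proposed in the paper.

```python
import numpy as np
from scipy.stats import norm

# Minimal sketch: recursively update the mixing proportions q from one
# observation at a time, using the posterior responsibilities computed with
# the known component densities.
def recursive_priors(x, densities, q0=None):
    k = len(densities)
    q = np.full(k, 1.0 / k) if q0 is None else np.asarray(q0, float)
    for n, xn in enumerate(x, start=1):
        f = np.array([d(xn) for d in densities])   # known component densities
        post = q * f / np.dot(q, f)                # posterior responsibilities
        q += (post - q) / n                        # stochastic-approximation step
    return q

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 700), rng.normal(3, 1, 300)])
rng.shuffle(data)
dens = [lambda x: norm.pdf(x, 0, 1), lambda x: norm.pdf(x, 3, 1)]
print(recursive_priors(data, dens))   # approximately [0.7, 0.3]
```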
Estimating the historical and future probabilities of large terrorist events
Clauset, Aaron
2012-01-01
Quantities with right-skewed distributions are ubiquitous in complex social systems, including political conflict, economics and social networks, and these systems sometimes produce extremely large events. For instance, the 9/11 terrorist events produced nearly 3000 fatalities, nearly six times more than the next largest event. But, was this enormous loss of life statistically unlikely given modern terrorism's historical record? Accurately estimating the probability of such an event is complicated by the large fluctuations in the empirical distribution's upper tail. We present a generic statistical algorithm for making such estimates, which combines semi-parametric models of tail behavior and a non-parametric bootstrap. Applied to a global database of terrorist events, we estimate the worldwide historical probability of observing at least one 9/11-sized or larger event since 1968 to be 11-35%. These results are robust to conditioning on global variations in economic development, domestic versus international ...
Probability Estimation in the Framework of Intuitionistic Fuzzy Evidence Theory
Yafei Song
2015-01-01
Intuitionistic fuzzy (IF) evidence theory, as an extension of Dempster-Shafer theory of evidence to the intuitionistic fuzzy environment, is exploited to process imprecise and vague information. Since its inception, much interest has been concentrated on IF evidence theory, and many works on belief functions in IF information systems have appeared. Although belief functions on IF sets can deal with uncertainty and vagueness well, they are not convenient for decision making. This paper addresses the issue of probability estimation in the framework of IF evidence theory with the aim of supporting rational decisions. Background knowledge about evidence theory, fuzzy sets, and IF sets is first reviewed, followed by an introduction to IF evidence theory. Axiomatic properties of probability distributions are then proposed to assist our interpretation. Finally, probability estimations based on fuzzy and IF belief functions, together with their proofs, are presented. It is verified that the probability estimation method based on IF belief functions is also potentially applicable to classical evidence theory and fuzzy evidence theory. Moreover, IF belief functions can be combined in a convenient way once they are transformed to interval-valued possibilities.
Incorporating medical interventions into carrier probability estimation for genetic counseling
Katki Hormuzd A
2007-03-01
Background: Mendelian models for predicting who may carry an inherited deleterious mutation of known disease genes based on family history are used in a variety of clinical and research activities. People presenting for genetic counseling are increasingly reporting risk-reducing medical interventions in their family histories because, recently, a slew of prophylactic interventions have become available for certain diseases. For example, oophorectomy reduces risk of breast and ovarian cancers, and is now increasingly being offered to women with family histories of breast and ovarian cancer. Mendelian models should account for medical interventions because interventions modify mutation penetrances and thus affect the carrier probability estimate. Methods: We extend Mendelian models to account for medical interventions by accounting for post-intervention disease history through an extra factor that can be estimated from published studies of the effects of interventions. We apply our methods to incorporate oophorectomy into the BRCAPRO model, which predicts a woman's risk of carrying mutations in BRCA1 and BRCA2 based on her family history of breast and ovarian cancer. This new BRCAPRO is available for clinical use. Results: We show that accounting for interventions undergone by family members can seriously affect the mutation carrier probability estimate, especially if the family member has lived many years post-intervention. We show that interventions have more impact on the carrier probability as the benefits of intervention differ more between carriers and non-carriers. Conclusion: These findings imply that carrier probability estimates that do not account for medical interventions may be seriously misleading and could affect a clinician's recommendation about offering genetic testing. The BayesMendel software, which allows one to implement any Mendelian carrier probability model, has been extended to allow medical interventions, so future ...
Collective animal behavior from Bayesian estimation and probability matching.
Alfonso Pérez-Escudero
2011-11-01
Animals living in groups make movement decisions that depend, among other factors, on social interactions with other group members. Our present understanding of social rules in animal collectives is mainly based on empirical fits to observations, with less emphasis on first-principles approaches that allow their derivation. Here we show that patterns of collective decisions can be derived from the basic ability of animals to make probabilistic estimations in the presence of uncertainty. We build a decision-making model with two stages: Bayesian estimation and probabilistic matching. In the first stage, each animal makes a Bayesian estimation of which behavior is best to perform, taking into account personal information about the environment and social information collected by observing the behaviors of other animals. In the probability matching stage, each animal chooses a behavior with a probability equal to the Bayesian-estimated probability that this behavior is the most appropriate one. This model yields very simple rules of interaction in animal collectives that depend only on two types of reliability parameters: one that each animal assigns to the other animals, and another given by the quality of the non-social information. We test our model by deriving theoretically a rich set of observed collective patterns of decisions in three-spined sticklebacks, Gasterosteus aculeatus, a shoaling fish species. The quantitative link shown between probabilistic estimation and collective rules of behavior allows better contact with other fields such as foraging, mate selection, neurobiology and psychology, and gives predictions for experiments directly testing the relationship between estimation and collective behavior.
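A minimal sketch of the two-stage rule described above, for a binary choice: a Bayesian combination of personal and social evidence followed by probability matching. The logistic form of the combination and the reliability parameter are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

# Minimal sketch: stage 1 combines non-social evidence (log odds) with counts
# of other animals' choices weighted by a reliability parameter; stage 2
# chooses option A with exactly the estimated probability (probability matching).
def p_choose_a(personal_logodds, n_chose_a, n_chose_b, reliability=1.5):
    logodds = personal_logodds + np.log(reliability) * (n_chose_a - n_chose_b)
    return 1.0 / (1.0 + np.exp(-logodds))   # Bayesian-estimated P(A is best)

rng = np.random.default_rng(5)
p = p_choose_a(personal_logodds=0.0, n_chose_a=3, n_chose_b=1)
print(p, rng.random() < p)   # estimated probability and one matched choice
```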
Estimating Income Variances by Probability Sampling: A Case Study
Akbar Ali Shah
2010-08-01
The main focus of the study is to estimate the variability in the income distribution of households by conducting a survey. The variances in the income distribution have been calculated using probability sampling techniques. The variances are compared and relative gains are also obtained. It is concluded that the income distribution has improved compared with the first Household Income and Expenditure Survey (HIES) conducted in Pakistan in 1993-94.
Maximum Entropy Estimation of Transition Probabilities of Reversible Markov Chains
Erik Van der Straeten
2009-11-01
In this paper, we develop a general theory for the estimation of the transition probabilities of reversible Markov chains using the maximum entropy principle. A broad range of physical models can be studied within this approach. We use one-dimensional classical spin systems to illustrate the theoretical ideas. The examples studied in this paper are: the Ising model, the Potts model and the Blume-Emery-Griffiths model.
Ensembles of probability estimation trees for customer churn prediction
2010-01-01
Customer churn prediction is one of the most important elements of a company's Customer Relationship Management (CRM) strategy. In this study, two strategies are investigated to increase the lift performance of ensemble classification models, i.e., (i) using probability estimation trees (PETs) instead of standard decision trees as base classifiers; and (ii) implementing alternative fusion rules based on lift weights for the combination of ensemble members' outputs. Experiments are conduct...
Estimating the exceedance probability of rain rate by logistic regression
Chiu, Long S.; Kedem, Benjamin
1990-01-01
Recent studies have shown that the fraction of an area with rain intensity above a fixed threshold is highly correlated with the area-averaged rain rate. To estimate the fractional rainy area, a logistic regression model, which estimates the conditional probability that rain rate over an area exceeds a fixed threshold given the values of related covariates, is developed. The problem of dependency in the data in the estimation procedure is bypassed by the method of partial likelihood. Analyses of simulated scanning multichannel microwave radiometer and observed electrically scanning microwave radiometer data during the Global Atlantic Tropical Experiment period show that the use of logistic regression in pixel classification is superior to multiple regression in predicting whether rain rate at each pixel exceeds a given threshold, even in the presence of noisy data. The potential of the logistic regression technique in satellite rain rate estimation is discussed.
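A minimal sketch of the modelling idea, fitting P(rain rate > threshold | covariates) by logistic regression on synthetic placeholder data (not the GATE observations, and without the partial-likelihood treatment of dependence described in the abstract).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Minimal sketch: model the conditional probability that rain rate exceeds a
# fixed threshold as a logistic function of covariates.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))                   # stand-in radiometric covariates
p_true = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 1.2 * X[:, 1])))
y = rng.binomial(1, p_true)                     # 1 if rain rate exceeds the threshold

model = LogisticRegression().fit(X, y)
print(model.predict_proba(X[:5])[:, 1])         # estimated exceedance probabilities
```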
Estimating probable flaw distributions in PWR steam generator tubes
Gorman, J.A.; Turner, A.P.L. [Dominion Engineering, Inc., McLean, VA (United States)
1997-02-01
This paper describes methods for estimating the number and size distributions of flaws of various types in PWR steam generator tubes. These estimates are needed when calculating the probable primary to secondary leakage through steam generator tubes under postulated accidents such as severe core accidents and steam line breaks. The paper describes methods for two types of predictions: (1) the numbers of tubes with detectable flaws of various types as a function of time, and (2) the distributions in size of these flaws. Results are provided for hypothetical severely affected, moderately affected and lightly affected units. Discussion is provided regarding uncertainties and assumptions in the data and analyses.
Probability Density and CFAR Threshold Estimation for Hyperspectral Imaging
Clark, G A
2004-09-21
The work reported here shows the proof of principle (using a small data set) for a suite of algorithms designed to estimate the probability density function of hyperspectral background data and compute the appropriate Constant False Alarm Rate (CFAR) matched filter decision threshold for a chemical plume detector. Future work will provide a thorough demonstration of the algorithms and their performance with a large data set. The LASI (Large Aperture Search Initiative) Project involves instrumentation and image processing for hyperspectral images of chemical plumes in the atmosphere. The work reported here involves research and development on algorithms for reducing the false alarm rate in chemical plume detection and identification algorithms operating on hyperspectral image cubes. The chemical plume detection algorithms to date have used matched filters designed using generalized maximum likelihood ratio hypothesis testing algorithms [1, 2, 5, 6, 7, 12, 10, 11, 13]. One of the key challenges in hyperspectral imaging research is the high false alarm rate that often results from the plume detector [1, 2]. The overall goal of this work is to extend the classical matched filter detector to apply Constant False Alarm Rate (CFAR) methods to reduce the false alarm rate, or probability of false alarm P_FA, of the matched filter [4, 8, 9, 12]. A detector designer is interested in minimizing the probability of false alarm while simultaneously maximizing the probability of detection P_D. This is summarized by the Receiver Operating Characteristic (ROC) curve [10, 11], which is actually a family of curves depicting P_D vs. P_FA parameterized by varying levels of signal-to-noise (or clutter) ratio (SNR or SCR). Often, it is advantageous to be able to specify a desired P_FA and develop a ROC curve (P_D vs. decision threshold r_0) for that case. That is the purpose of this work. Specifically, this work develops a set of algorithms and MATLAB ...
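A minimal sketch of the CFAR thresholding idea: pick the decision threshold r_0 as the (1 - P_FA) quantile of background matched-filter outputs. This uses an empirical quantile of synthetic scores purely for illustration; the report instead estimates an explicit background probability density.

```python
import numpy as np

# Minimal sketch: set the CFAR decision threshold from background-only scores
# so that the realized false-alarm rate matches the desired P_FA.
rng = np.random.default_rng(4)
background_scores = rng.normal(0.0, 1.0, size=100_000)   # stand-in matched-filter outputs
p_fa_desired = 1e-3

r0 = np.quantile(background_scores, 1.0 - p_fa_desired)  # decision threshold
print(r0, np.mean(background_scores > r0))                # realized rate close to p_fa_desired
```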
Accurate photometric redshift probability density estimation - method comparison and application
Rau, Markus Michael; Brimioulle, Fabrice; Frank, Eibe; Friedrich, Oliver; Gruen, Daniel; Hoyle, Ben
2015-01-01
We introduce an ordinal classification algorithm for photometric redshift estimation, which vastly improves the reconstruction of photometric redshift probability density functions (PDFs) for individual galaxies and galaxy samples. As a use case we apply our method to CFHTLS galaxies. The ordinal classification algorithm treats distinct redshift bins as ordered values, which improves the quality of photometric redshift PDFs, compared with non-ordinal classification architectures. We also propose a new single value point estimate of the galaxy redshift, that can be used to estimate the full redshift PDF of a galaxy sample. This method is competitive in terms of accuracy with contemporary algorithms, which stack the full redshift PDFs of all galaxies in the sample, but requires orders of magnitudes less storage space. The methods described in this paper greatly improve the log-likelihood of individual object redshift PDFs, when compared with a popular Neural Network code (ANNz). In our use case, this improvemen...
Estimation of Intrusion Detection Probability by Passive Infrared Detectors
V. V. Volkhonskiy
2015-07-01
Subject of Research. The paper deals with estimating the probability that a passive infrared detector detects an intruder, under different conditions of movement velocity and direction, for automated analysis of the effectiveness of physical protection systems. Method. Analytic formulas for the distribution laws of detection distance, obtained by approximating experimental histograms, are used. Main Results. The applicability of different distribution laws has been studied: the Rayleigh, Gauss, Gamma, Maxwell and Weibull distributions. Based on walk-test results, the experimental histograms of the detection-distance probability distribution for passive infrared detectors were approximated by these laws. Conformity of the histograms to the mentioned analytical laws was checked with the χ² goodness-of-fit criterion for different conditions of intruder velocity and direction of movement. The mean and variance of the approximating distribution laws were set equal to the corresponding parameters of the experimental histograms for each set of intruder movement parameters. Approximation accuracy for the above-mentioned laws was evaluated at a significance level of 0.05. According to the χ² criterion, the Rayleigh and Gamma laws correspond most closely to the histograms for different velocities and directions of intruder movement. Dependences of the approximation accuracy on the conditions of intrusion have been obtained; they can be used for choosing an approximation law under given conditions. Practical Relevance. The analytic formulas for detection probability are usable for modeling the intrusion process and for objective estimation of the effectiveness of physical protection systems by both developers and users.
Estimation of the probability of success in petroleum exploration
Davis, J.C.
1977-01-01
A probabilistic model for oil exploration can be developed by assessing the conditional relationship between perceived geologic variables and the subsequent discovery of petroleum. Such a model includes two probabilistic components, the first reflecting the association between a geologic condition (structural closure, for example) and the occurrence of oil, and the second reflecting the uncertainty associated with the estimation of geologic variables in areas of limited control. Estimates of the conditional relationship between geologic variables and subsequent production can be found by analyzing the exploration history of a "training area" judged to be geologically similar to the exploration area. The geologic variables are assessed over the training area using an historical subset of the available data, whose density corresponds to the present control density in the exploration area. The success or failure of wells drilled in the training area subsequent to the time corresponding to the historical subset provides empirical estimates of the probability of success conditional upon geology. Uncertainty in perception of geological conditions may be estimated from the distribution of errors made in geologic assessment using the historical subset of control wells. These errors may be expressed as a linear function of distance from available control. Alternatively, the uncertainty may be found by calculating the semivariogram of the geologic variables used in the analysis: the two procedures will yield approximately equivalent results. The empirical probability functions may then be transferred to the exploration area and used to estimate the likelihood of success of specific exploration plays. These estimates will reflect both the conditional relationship between the geological variables used to guide exploration and the uncertainty resulting from lack of control. The technique is illustrated with case histories from the mid-Continent area of the U.S.A.
Estimation of probability densities using scale-free field theories.
Kinney, Justin B
2014-07-01
The question of how best to estimate a continuous probability density from finite data is an intriguing open problem at the interface of statistics and physics. Previous work has argued that this problem can be addressed in a natural way using methods from statistical field theory. Here I describe results that allow this field-theoretic approach to be rapidly and deterministically computed in low dimensions, making it practical for use in day-to-day data analysis. Importantly, this approach does not impose a privileged length scale for smoothness of the inferred probability density, but rather learns a natural length scale from the data due to the tradeoff between goodness of fit and an Occam factor. Open source software implementing this method in one and two dimensions is provided.
Unbiased estimation of precise temporal correlations between spike trains.
Stark, Eran; Abeles, Moshe
2009-04-30
A key issue in systems neuroscience is the contribution of precise temporal inter-neuronal interactions to information processing in the brain, and the main analytical tool used for studying pair-wise interactions is the cross-correlation histogram (CCH). Although simple to generate, a CCH is influenced by multiple factors in addition to precise temporal correlations between two spike trains, thus complicating its interpretation. A Monte-Carlo-based technique, the jittering method, has been suggested to isolate the contribution of precise temporal interactions to neural information processing. Here, we show that jittering spike trains is equivalent to convolving the CCH derived from the original trains with a finite window and using a Poisson distribution to estimate probabilities. Both procedures over-fit the original spike trains and therefore the resulting statistical tests are biased and have low power. We devise an alternative method, based on convolving the CCH with a partially hollowed window, and illustrate its utility using artificial and real spike trains. The modified convolution method is unbiased, has high power, and is computationally fast. We recommend caution in the use of the jittering method and in the interpretation of results based on it, and suggest using the modified convolution method for detecting precise temporal correlations between spike trains.
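A minimal sketch of the modified convolution method described above: the CCH is compared against a baseline obtained by convolving it with a partially hollowed window, and Poisson tail probabilities flag bins with excess coincidences. The window width, hollow fraction, and injected peak are illustrative choices, not the authors' exact parameters.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.stats import poisson

# Minimal sketch: locally estimated baseline from a partially hollowed window,
# then Poisson tail probabilities for excess coincidence counts in each CCH bin.
def cch_significance(cch, half_width=10, hollow_frac=0.6):
    win = np.ones(2 * half_width + 1)
    win[half_width] = 1.0 - hollow_frac            # partially hollow the centre bin
    win /= win.sum()
    expected = fftconvolve(cch, win, mode="same")  # smoothed local baseline
    # probability of observing >= cch counts under a Poisson null with mean `expected`
    p_excess = poisson.sf(cch - 1, expected)
    return expected, p_excess

counts = np.random.default_rng(2).poisson(5.0, size=101)   # synthetic CCH
counts[50] += 15                                            # inject a peak at zero lag
print(cch_significance(counts)[1][50])                      # small p-value flags the peak
```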
Site Specific Probable Maximum Precipitation Estimates and Professional Judgement
Hayes, B. D.; Kao, S. C.; Kanney, J. F.; Quinlan, K. R.; DeNeale, S. T.
2015-12-01
State and federal regulatory authorities currently rely upon the US National Weather Service Hydrometeorological Reports (HMRs) to determine probable maximum precipitation (PMP) estimates (i.e., rainfall depths and durations) for estimating flooding hazards for relatively broad regions in the US. PMP estimates for the contributing watersheds upstream of vulnerable facilities are used to estimate riverine flooding hazards, while site-specific estimates for small watersheds are appropriate for individual facilities such as nuclear power plants. The HMRs are often criticized for their limitations on basin size, questionable applicability in regions affected by orographic effects, their lack of consistent methods, and generally for their age. HMR-51, which provides generalized PMP estimates for the United States east of the 105th meridian, was published in 1978 and is sometimes perceived as overly conservative. The US Nuclear Regulatory Commission (NRC) is currently reviewing several flood hazard evaluation reports that rely on commercially developed site-specific PMP estimates. As such, NRC has recently investigated key areas of expert judgement, via a generic audit and one in-depth site-specific review, as they relate to identifying and quantifying actual and potential storm moisture sources, determining storm transposition limits, and adjusting available moisture during storm transposition. Though much of the approach reviewed was considered a logical extension of the HMRs, two key points of expert judgement stood out for further in-depth review. The first relates primarily to small storms and the use of a heuristic for storm-representative dew point adjustment, developed for the Electric Power Research Institute by North American Weather Consultants in 1993, in order to harmonize historic storms for which only 12-hour dew point data were available with more recent storms in a single database. The second issue relates to the use of climatological averages for spatially ...
Most probable paths in temporal weighted networks: An application to ocean transport
Ser-Giacomi, Enrico; Hernandez-Garcia, Emilio; Lopez, Cristobal
2014-01-01
We consider paths in weighted and directed temporal networks, introducing tools to compute sets of paths of high probability. We quantify the relative importance of the most probable path between two nodes with respect to the whole set of paths, and to a subset of highly probable paths which incorporate most of the connection probability. These concepts are used to provide alternative definitions of betweenness centrality. We apply our formalism to a transport network describing surface flow in the Mediterranean Sea. Although the full transport dynamics is described by a very large number of paths, we find that, for realistic time scales, only a very small subset of high-probability paths (or even a single most probable one) is enough to characterize global connectivity properties of the network.
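As a rough illustration of the most-probable-path idea (a static-graph stand-in, not the authors' temporal-network formalism), a maximum-probability path can be found as a shortest path after turning each link probability into a cost of minus its logarithm; the edge probabilities below are invented.

```python
import numpy as np
import networkx as nx

# Toy directed graph with transition probabilities on the links (illustrative values).
edges = [("A", "B", 0.7), ("A", "C", 0.3), ("B", "D", 0.5), ("C", "D", 0.9)]
G = nx.DiGraph()
G.add_weighted_edges_from([(u, v, -np.log(p)) for u, v, p in edges])

# Minimising the sum of -log(p) maximises the product of link probabilities.
path = nx.shortest_path(G, "A", "D", weight="weight")
log_prob = -nx.shortest_path_length(G, "A", "D", weight="weight")
print(path, np.exp(log_prob))   # ['A', 'B', 'D'] with probability 0.35
```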
Polynomial probability distribution estimation using the method of moments.
Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper
2017-01-01
We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the Normal, Log-Normal and Weibull distribution families, as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular, this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.
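A minimal sketch of the moment-matching idea is given below, under the assumption that the support [a, b] of the polynomial PDF is supplied by the user; the algorithmic safeguards described in the paper (and the Gram-Charlier comparison) are not reproduced.

```python
import numpy as np

def polynomial_pdf_from_moments(moments, a, b):
    """Fit an N-th degree polynomial PDF on [a, b] from the raw moments m_1..m_N.

    The coefficients c_j of p(x) = sum_j c_j x^j are chosen so that the polynomial
    reproduces the supplied raw moments, with m_0 = 1 enforcing normalisation.
    """
    m = np.concatenate(([1.0], np.asarray(moments, dtype=float)))
    k = np.arange(len(m))
    powers = k[:, None] + k[None, :] + 1
    A = (b ** powers - a ** powers) / powers     # A[k, j] = integral of x^(k+j) over [a, b]
    coeffs = np.linalg.solve(A, m)
    return np.poly1d(coeffs[::-1])               # poly1d expects the highest degree first

# Example: the first two moments of the uniform distribution on [0, 1] recover p(x) = 1.
pdf = polynomial_pdf_from_moments([0.5, 1.0 / 3.0], 0.0, 1.0)
```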
Stimulus probability effects on temporal bisection performance of mice (Mus musculus).
Akdoğan, Başak; Balcı, Fuat
2016-01-01
In the temporal bisection task, participants classify experienced stimulus durations as short or long based on their temporal similarity to previously learned reference durations. Temporal decision making in this task should be influenced by the experienced probabilities of the reference durations for adaptiveness. In this study, we tested the temporal bisection performance of mice (Mus musculus) under different short and long reference duration probability conditions implemented across two experimental phases. In Phase 1, the proportion of reference durations (compared to probe durations) was 0.5, whereas in Phase 2 it was increased to 0.8 to further examine the adjustment of choice behavior with more frequent reference duration presentations (under higher reinforcement rate). Our findings suggest that mice developed adaptive biases in their choice behaviors. These adjustments in choice behavior were nearly optimal as the mice maximized their gain to a great extent which required them to monitor stimulus probabilities as well as the level of variability in their temporal judgments. We further found that short but not long categorization response times were sensitive to stimulus probability manipulations, which in turn suggests an asymmetry between short and long categorizations. Finally, we investigated the latent decision processes underlying the bias manifested in subjects' choice behavior within the diffusion model framework. Our results revealed that probabilistic information influenced the starting point and the rate of evidence accumulation process. Overall, the stimulus probability effects on choice behavior were modulated by the reinforcement rate. Our findings illustrate that mice can adapt their temporal behaviors with respect to the probabilistic contingencies in the environment.
Silver, R Angus
2003-12-15
Synapses are a key determinant of information processing in the central nervous system. Investigation of the mechanisms underlying synaptic transmission at central synapses is complicated by the inaccessibility of synaptic contacts and the fact that their temporal dynamics are governed by multiple parameters. Multiple-probability fluctuation analysis (MPFA) is a recently developed method for estimating quantal parameters from the variance and mean amplitude of evoked steady-state synaptic responses recorded under a range of release probability conditions. This article describes the theoretical basis and the underlying assumptions of MPFA, illustrating how a simplified multinomial model can be used to estimate mean quantal parameters at synapses where quantal size and release probability are nonuniform. Interpretations of the quantal parameter estimates are discussed in relation to uniquantal and multiquantal models of transmission. Practical aspects of this method are illustrated including a new method for estimating quantal size and variability, approaches for optimising data collection, error analysis and a method for identifying multivesicular release. The advantages and limitations of investigating synaptic function with MPFA are explored and contrasted with those for traditional quantal analysis and more recent optical quantal analysis methods.
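The variance-mean relation at the heart of MPFA can be sketched under strong simplifying assumptions (uniform quantal size and release probability, no quantal variance): the variance of the steady-state response is a parabolic function of its mean, var = q·mean − mean²/N, fitted across release-probability conditions. This is only a caricature of the full method described above.

```python
import numpy as np

def mpfa_fit(means, variances):
    """Fit var = q*mean - mean**2 / N across release-probability conditions (sketch)."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    X = np.column_stack([means, means ** 2])            # var = a*mean + b*mean^2
    (a, b), *_ = np.linalg.lstsq(X, variances, rcond=None)
    return a, -1.0 / b                                   # quantal size q, number of sites N
```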
The effect of coupling hydrologic and hydrodynamic models on probable maximum flood estimation
Felder, Guido; Zischg, Andreas; Weingartner, Rolf
2017-07-01
Deterministic rainfall-runoff modelling usually assumes a stationary hydrological system, as model parameters are calibrated with, and therefore dependent on, observed data. However, runoff processes are probably not stationary in the case of a probable maximum flood (PMF), where discharge greatly exceeds observed flood peaks. Developing hydrodynamic models and using them to build coupled hydrologic-hydrodynamic models can potentially improve the plausibility of PMF estimations. This study aims to assess the potential benefits and constraints of coupled modelling compared to standard deterministic hydrologic modelling when it comes to PMF estimation. The two modelling approaches are applied using a set of 100 spatio-temporal probable maximum precipitation (PMP) distribution scenarios. The resulting hydrographs, the resulting peak discharges, as well as the reliability and the plausibility of the estimates are evaluated. The discussion of the results shows that coupling hydrologic and hydrodynamic models substantially improves the physical plausibility of PMF modelling, although both modelling approaches lead to PMF estimations for the catchment outlet that fall within a similar range. Using a coupled model is particularly suggested in cases where considerable flood-prone areas are situated within a catchment.
Estimating age conditional probability of developing disease from surveillance data
Fay Michael P
2004-07-01
Fay, Pfeiffer, Cronin, Le, and Feuer (Statistics in Medicine 2003; 22: 1837-1848) developed a formula to calculate the age-conditional probability of developing a disease for the first time (ACPDvD) for a hypothetical cohort. The novelty of the formula of Fay et al. (2003) is that one need not know the rates of first incidence of disease per person-years alive and disease-free, but may input the rates of first incidence per person-years alive only. Similarly, the formula uses rates of death from disease and death from other causes per person-years alive. The rates per person-years alive are much easier to estimate than per person-years alive and disease-free. Fay et al. (2003) used simple piecewise constant models for all three rate functions, which have constant rates within each age group. In this paper, we detail a method for estimating rate functions which does not have jumps at the beginning of age groupings and need not be constant within age groupings. We call this method the mid-age group joinpoint (MAJ) model for the rates. The drawback of the MAJ model is that numerical integration must be used to estimate the resulting ACPDvD. To increase computational speed, we offer a piecewise approximation to the MAJ model, which we call the piecewise mid-age group joinpoint (PMAJ) model. The PMAJ model for the rates, input into the formula for ACPDvD described in Fay et al. (2003), is the current method used in the freely available DevCan software made available by the National Cancer Institute.
Fused Adaptive Lasso for Spatial and Temporal Quantile Function Estimation
Sun, Ying
2015-09-01
Quantile functions are important in characterizing the entire probability distribution of a random variable, especially when the tail of a skewed distribution is of interest. This article introduces new quantile function estimators for spatial and temporal data with a fused adaptive Lasso penalty to accommodate the dependence in space and time. This method penalizes the difference among neighboring quantiles, hence it is desirable for applications with features ordered in time or space without replicated observations. The theoretical properties are investigated and the performances of the proposed methods are evaluated by simulations. The proposed method is applied to particulate matter (PM) data from the Community Multiscale Air Quality (CMAQ) model to characterize the upper quantiles, which are crucial for studying spatial association between PM concentrations and adverse human health effects. © 2016 American Statistical Association and the American Society for Quality.
Dental age estimation: the role of probability estimates at the 10 year threshold.
Lucas, Victoria S; McDonald, Fraser; Neil, Monica; Roberts, Graham
2014-08-01
The use of probability at the 18 year threshold has simplified the reporting of dental age estimates for emerging adults. The availability of simple-to-use, widely available software has enabled the development of the probability threshold for individual teeth in growing children. Tooth development stage data from a previous study at the 10 year threshold were reused to estimate the probability of developing teeth being above or below the 10 year threshold using the NORMDIST function in Microsoft Excel. The probabilities within an individual subject are averaged to give a single probability that a subject is above or below 10 years old. To test the validity of this approach, dental panoramic radiographs of 50 female and 50 male children within 2 years of the chronological age were assessed with the chronological age masked. Once the whole validation set of 100 radiographs had been assessed, the masking was removed and the chronological age and dental age compared. The dental age was compared with chronological age to determine whether the dental age correctly or incorrectly identified a validation subject as above or below the 10 year threshold. The probability estimates correctly identified children as above or below on 94% of occasions. Only 2% of the validation group with a chronological age of less than 10 years were assigned to the over 10 year group. This study indicates the very high accuracy of assignment at the 10 year threshold. Further work at other legally important age thresholds is needed to explore the value of this approach to the technique of age estimation. Copyright © 2014. Published by Elsevier Ltd.
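The computation the abstract attributes to Excel's NORMDIST function can be sketched as follows, assuming reference means and standard deviations of the age of attainment are available for each observed tooth stage; the numbers in the example are placeholders, not values from the study.

```python
import numpy as np
from scipy.stats import norm

def probability_under_threshold(tooth_stats, threshold=10.0):
    """Average per-tooth probability that the subject is below an age threshold (sketch).

    tooth_stats: (mean_age, sd) pairs for the development stage seen in each tooth.
    """
    p = [norm.cdf(threshold, mu, sd) for mu, sd in tooth_stats]
    return float(np.mean(p))

# Hypothetical stage statistics (years) for three assessable teeth
print(probability_under_threshold([(9.1, 0.8), (9.6, 1.0), (10.4, 1.1)]))
```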
Impact of temporal probability in 4D dose calculation for lung tumors.
Rouabhi, Ouided; Ma, Mingyu; Bayouth, John; Xia, Junyi
2015-11-08
The purpose of this study was to evaluate the dosimetric uncertainty in 4D dose calculation using three temporal probability distributions: uniform distribution, sinusoidal distribution, and patient-specific distribution derived from the patient respiratory trace. Temporal probability, defined as the fraction of time a patient spends in each respiratory amplitude, was evaluated in nine lung cancer patients. Four-dimensional computed tomography (4D CT), along with deformable image registration, was used to compute 4D dose incorporating the patient's respiratory motion. First, the dose of each of 10 phase CTs was computed using the same planning parameters as those used in 3D treatment planning based on the breath-hold CT. Next, deformable image registration was used to deform the dose of each phase CT to the breath-hold CT using the deformation map between the phase CT and the breath-hold CT. Finally, the 4D dose was computed by summing the deformed phase doses using their corresponding temporal probabilities. In this study, 4D dose calculated from the patient-specific temporal probability distribution was used as the ground truth. The dosimetric evaluation matrix included: 1) 3D gamma analysis, 2) mean tumor dose (MTD), 3) mean lung dose (MLD), and 4) lung V20. For seven out of nine patients, both uniform and sinusoidal temporal probability dose distributions were found to have an average gamma passing rate > 95% for both the lung and PTV regions. Compared with 4D dose calculated using the patient respiratory trace, doses using uniform and sinusoidal distribution showed a percentage difference on average of -0.1% ± 0.6% and -0.2% ± 0.4% in MTD, -0.2% ± 1.9% and -0.2% ± 1.3% in MLD, 0.09% ± 2.8% and -0.07% ± 1.8% in lung V20, -0.1% ± 2.0% and 0.08% ± 1.34% in lung V10, 0.47% ± 1.8% and 0.19% ± 1.3% in lung V5, respectively. We concluded that four-dimensional dose computed using either a uniform or sinusoidal temporal probability distribution can
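The final summation step described in the abstract, once the phase doses have been deformed onto the reference geometry, reduces to a probability-weighted average; a minimal sketch follows, with uniform weights shown only as an illustrative assumption.

```python
import numpy as np

def four_d_dose(phase_doses, temporal_probabilities):
    """Weighted sum of deformed phase doses (sketch).

    phase_doses: array of shape (n_phases, nx, ny, nz), already deformed onto the
    breath-hold geometry. temporal_probabilities: time fraction spent in each phase.
    """
    w = np.asarray(temporal_probabilities, dtype=float)
    w = w / w.sum()                                   # guard against rounding
    return np.tensordot(w, np.asarray(phase_doses, dtype=float), axes=1)

# Example weighting (an assumption, not patient data): uniform over ten phases
uniform_weights = np.full(10, 0.1)
```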
First hitting probabilities for semi markov chains and estimation
Georgiadis, Stylianos
2017-01-01
We first consider a stochastic system described by an absorbing semi-Markov chain with finite state space and we introduce the absorption probability to a class of recurrent states. Afterwards, we study the first hitting probability to a subset of states for an irreducible semi-Markov chain...
Anonymous
2009-01-01
[Usage] 1. He can probably tell us the truth. 2. Will it rain this afternoon? Probably. [Note] Used as an adverb meaning "probably" or "perhaps", it indicates a high likelihood, usually a positive inference or judgment based on the current situation.
Brus, D J; de Gruijter, J J
2003-04-01
In estimating spatial means of environmental variables of a region from data collected by convenience or purposive sampling, validity of the results can be ensured by collecting additional data through probability sampling. The precision of the pi estimator that uses the probability sample can be increased by interpolating the values at the nonprobability sample points to the probability sample points, and using these interpolated values as an auxiliary variable in the difference or regression estimator. These estimators are (approximately) unbiased, even when the nonprobability sample is severely biased such as in preferential samples. The gain in precision compared to the pi estimator in combination with Simple Random Sampling is controlled by the correlation between the target variable and interpolated variable. This correlation is determined by the size (density) and spatial coverage of the nonprobability sample, and the spatial continuity of the target variable. In a case study the average ratio of the variances of the simple regression estimator and pi estimator was 0.68 for preferential samples of size 150 with moderate spatial clustering, and 0.80 for preferential samples of similar size with strong spatial clustering. In the latter case the simple regression estimator was substantially more precise than the simple difference estimator.
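The regression estimator referred to above can be sketched briefly. The interpolation of the nonprobability sample to the probability-sample points is assumed to have been done already; only the final combination step is shown.

```python
import numpy as np

def regression_estimator(z_prob, x_prob, x_region_mean):
    """Simple regression estimator of a spatial mean (sketch).

    z_prob: target values at the probability-sample points.
    x_prob: auxiliary (interpolated) values at the same points.
    x_region_mean: mean of the interpolated surface over the whole region.
    """
    z = np.asarray(z_prob, dtype=float)
    x = np.asarray(x_prob, dtype=float)
    b = np.cov(x, z)[0, 1] / np.var(x, ddof=1)
    return z.mean() + b * (x_region_mean - x.mean())
```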
The estimation of yearly probability gain for seismic statistical model
Anonymous
2000-01-01
Based on the method for calculating information gain in stochastic processes presented by Vere-Jones, the relation between information gain and probability gain, which is commonly used in earthquake prediction, is studied, and the yearly probability gain for seismic statistical models is proposed. The method is applied to the non-stationary Poisson model with whole-process exponential increase and to the stress release model. In addition, the prediction method of the stress release model is obtained based on the inverse function simulation method of stochastic variables.
Improved Estimation of Forestry Edge Effects Accounting for Detection Probability
Hocking, Daniel; Babbitt, Kimberly; Yamasaki, Mariko
2013-01-01
Poster presented at the 98th annual meeting of the Ecological Society of America (ESA) in Minneapolis, Minnesota, USA. We used a non-linear, parametric model accounting for detection probability to quantify red-backed salamander (Plethodon cinereus) abundance across clearcut-forest edges. This approach allows for projection across landscapes and prediction given alternative logging plans.
Estimating Probabilities of Default for Low Default Portfolios
Katja Pluto; Dirk Tasche
2004-01-01
For credit risk management purposes in general, and for allocation of regulatory capital by banks in particular (Basel II), numerical assessments of the credit-worthiness of borrowers are indispensable. These assessments are expressed in terms of probabilities of default (PD) that should incorporate a certain degree of conservatism in order to reflect the prudential risk management style banks are required to apply. In case of credit portfolios that did not at all suffer defaults, or very few...
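A core ingredient of such low-default approaches is the upper confidence bound on the PD that is still compatible with observing few or no defaults. The sketch below shows only that binomial bound (a Clopper-Pearson-type quantile); the cumulation over rating grades used in the "most prudent estimation" principle is not reproduced, and the numbers are illustrative.

```python
from scipy.stats import beta

def pd_upper_bound(n_obligors, n_defaults, confidence=0.75):
    """Largest PD consistent with k defaults among n obligors at the given confidence.

    Solves P(X <= k) = 1 - confidence for X ~ Binomial(n, PD), whose solution is
    the beta quantile below.
    """
    return beta.ppf(confidence, n_defaults + 1, n_obligors - n_defaults)

# Example: 1000 obligors, zero observed defaults, 75% confidence -> PD of about 0.14%
print(pd_upper_bound(1000, 0, 0.75))
```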
Estimating the concordance probability in a survival analysis with a discrete number of risk groups.
Heller, Glenn; Mo, Qianxing
2016-04-01
A clinical risk classification system is an important component of a treatment decision algorithm. A measure used to assess the strength of a risk classification system is discrimination, and when the outcome is survival time, the most commonly applied global measure of discrimination is the concordance probability. The concordance probability represents the pairwise probability of lower patient risk given longer survival time. The c-index and the concordance probability estimate have been used to estimate the concordance probability when patient-specific risk scores are continuous. In the current paper, the concordance probability estimate and an inverse probability censoring weighted c-index are modified to account for discrete risk scores. Simulations are generated to assess the finite sample properties of the concordance probability estimate and the weighted c-index. An application of these measures of discriminatory power to a metastatic prostate cancer risk classification system is examined.
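For orientation, the plain (unweighted) c-index over usable pairs can be written in a few lines; the censoring-weighted and tie-corrected estimators discussed in the abstract are not reproduced here.

```python
import numpy as np

def c_index_discrete(time, event, risk_group):
    """Plain c-index for discrete risk scores in censored survival data (sketch).

    A pair is usable when the subject with the shorter observed time had an event;
    it is concordant when that subject also has the higher risk group, and risk
    ties contribute one half.
    """
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=bool)
    risk = np.asarray(risk_group)
    num = den = 0.0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if event[i] and time[i] < time[j]:
                den += 1
                num += 1.0 if risk[i] > risk[j] else 0.5 if risk[i] == risk[j] else 0.0
    return num / den
```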
Nonlinear Estimation With Sparse Temporal Measurements
2016-09-01
technology. Measurements have inherent precision and accuracy uncertainty, preventing perfect knowledge of the system state. Additionally, each system state ... This method rapidly succumbs to Bellman's "curse of dimensionality," the exponential ... knowledge of each state when estimation commences. A fixed-step, fourth-order Runge-Kutta solver is used to propagate the process model between
Estimating the posterior probabilities using the k-nearest neighbor rule.
Atiya, Amir F
2005-03-01
In many pattern classification problems, an estimate of the posterior probabilities (rather than only a classification) is required. This is usually the case when some confidence measure in the classification is needed. In this article, we propose a new posterior probability estimator. The proposed estimator considers the K-nearest neighbors. It attaches a weight to each neighbor that contributes in an additive fashion to the posterior probability estimate. The weights corresponding to the K-nearest-neighbors (which add to 1) are estimated from the data using a maximum likelihood approach. Simulation studies confirm the effectiveness of the proposed estimator.
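The additive form of the estimator can be sketched as below. In the cited approach the neighbor weights are fitted from the data by maximum likelihood; here a simple rank-decaying weight vector is used as a stand-in, so the sketch illustrates the form of the estimator rather than the fitting procedure.

```python
import numpy as np

def knn_posterior(x_train, y_train, x, k=10, weights=None):
    """Weighted k-NN estimate of class posterior probabilities (sketch)."""
    x_train = np.asarray(x_train, dtype=float)
    y_train = np.asarray(y_train)
    if weights is None:
        weights = 1.0 / np.arange(1, k + 1)           # placeholder rank-decaying weights
        weights /= weights.sum()
    nn = np.argsort(np.linalg.norm(x_train - x, axis=1))[:k]
    classes = np.unique(y_train)
    posterior = np.array([weights[y_train[nn] == c].sum() for c in classes])
    return classes, posterior
```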
Estimating the probability of coexistence in cross-feeding communities.
Vessman, Björn; Gerlee, Philip; Lundh, Torbjörn
2016-11-07
The dynamics of many microbial ecosystems are driven by cross-feeding interactions, in which metabolites excreted by some species are metabolised further by others. The population dynamics of such ecosystems are governed by frequency-dependent selection, which allows for stable coexistence of two or more species. We have analysed a model of cross-feeding based on the replicator equation, with the aim of establishing criteria for coexistence in ecosystems containing three species, given the information of the three species' ability to coexist in their three separate pairs, i.e. the long-term dynamics in the three two-species component systems. The triple system is studied statistically and the probability of coexistence in the species triplet is computed for two models of species interactions. The interaction parameters are modelled either as stochastically independent or organised in a hierarchy where any derived metabolite carries less energy than previous nutrients in the metabolic chain. We differentiate between different modes of coexistence with respect to the pair-wise dynamics of the species, and find that the probability of coexistence is close to 1/2 for triplet systems with three pair-wise coexistent pairs and for the so-called intransitive systems. Systems with two and one pair-wise coexistent pairs are more likely to exist for random interaction parameters, but are on the other hand much less likely to exhibit triplet coexistence. Hence we conclude that certain species triplets are, from a statistical point of view, rare, but if allowed to interact are likely to coexist. This knowledge might be helpful when constructing synthetic microbial communities for industrial purposes. Copyright © 2016 Elsevier Ltd. All rights reserved.
Temporal evolution of risk estimates for presumed human teratogens.
Koebert, M K; Haun, J M; Pauli, R M
1993-01-01
We present preliminary data assessing a previously untried method of deriving estimates of risk from case reports on presumed human teratogens. We postulated that we could take advantage of biases inherent to case reports in order to generate one or more families of temporal curves that could be used to estimate the "true" risk of teratogenic exposure. Using this method (which we refer to as the "case-cumulative method") we found that two agents (parvovirus B19 and isotretinoin) demonstrated a logarithmic decrease in the estimated risk over time, as intuitively expected, while trimethadione and the coumarin derivatives showed a more complex pattern over time. Analysis of estimated risks quoted by reviews and large studies for these four agents showed large variability from estimate to estimate and no discernible temporal pattern. With further analysis of other agents, the case-cumulative method might eventually prove to be useful in teratogen counseling.
METAPHOR: Probability density estimation for machine learning based photometric redshifts
Amaro, V.; Cavuoti, S.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.
2017-06-01
We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method able to provide a reliable PDF for photometric galaxy redshifts estimated through empirical techniques. METAPHOR is a modular workflow, mainly based on the MLPQNA neural network as the internal engine to derive photometric galaxy redshifts, but giving the possibility to easily replace MLPQNA with any other method to predict photo-z's and their PDF. We present here the results of a validation test of the workflow on the galaxies from SDSS-DR9, showing also the universality of the method by replacing MLPQNA with KNN and Random Forest models. The validation test also includes a comparison with the PDFs derived from a traditional SED template-fitting method (Le Phare).
Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Liu, W. F.
2013-01-01
Estimation of the extreme response and failure probability of structures subjected to ultimate design loads is essential for the structural design of wind turbines according to the new standard IEC61400-1. The present paper addresses this task by means of the probability density evolution method (PDEM), which underlies the schemes of random vibration analysis and structural reliability assessment. The short-term rare failure probability of 5-megawatt wind turbines, for illustrative purposes, for given mean wind speeds and turbulence levels, is investigated through the scheme of the extreme value distribution instead of the approximate schemes of fitted distributions currently used in statistical extrapolation techniques. In addition, comparative studies against the classical fitted distributions and standard Monte Carlo techniques are carried out. Numerical results indicate that PDEM exhibits...
The Estimation of Probability of Extreme Events for Small Samples
Pisarenko, V. F.; Rodkin, M. V.
2017-02-01
The most general approach to the study of rare extreme events is based on the extreme value theory. The fundamental Generalized Extreme Value Distribution lies at the basis of this theory, serving as the limit distribution for normalized maxima. It depends on three parameters. Usually the method of maximum likelihood (ML) is used for the estimation, which possesses well-known optimal asymptotic properties. However, this method works efficiently only when the sample size is large enough (200-500), whereas in many applications the sample size does not exceed 50-100. For such sizes, the advantage of the ML method in efficiency is not guaranteed. We have found that for this situation the method of statistical moments (SM) works more efficiently than other methods. The details of the estimation for small samples are studied. The SM is applied to the study of extreme earthquakes in three large virtual seismic zones, representing the regime of seismicity in subduction zones, the intracontinental regime of seismicity, and the regime in mid-ocean ridge zones. The 68%-confidence domains for the parameter pairs (ξ, σ) and (σ, μ) are derived.
Estimating deficit probabilities with price-responsive demand in contract-based electricity markets
Galetovic, Alexander [Facultad de Ciencias Economicas y Empresariales, Universidad de los Andes, Santiago (Chile); Munoz, Cristian M. [Departamento de Ingenieria Electrica, Universidad de Chile, Mariano Sanchez Fontecilla 310, piso 3 Las Condes, Santiago (Chile)
2009-02-15
Studies that estimate deficit probabilities in hydrothermal systems have generally ignored the response of demand to changing prices, in the belief that such response is largely irrelevant. We show that ignoring the response of demand to prices can lead to substantial over- or underestimation of the probability of an energy deficit. To make our point we present an estimation of deficit probabilities in Chile's Central Interconnected System between 2006 and 2010. This period is characterized by tight supply, fast consumption growth and rising electricity prices. When the response of demand to rising prices is acknowledged, forecasted deficit probabilities and marginal costs are shown to be substantially lower.
Efficient Estimation of first Passage Probability of high-Dimensional Nonlinear Systems
Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Bucher, Christian
2011-01-01
An efficient method for estimating low first passage probabilities of high-dimensional nonlinear systems based on asymptotic estimation of low probabilities is presented. The method does not require any a priori knowledge of the system, i.e. it is a black-box method, and has very low requirements ... The failure probabilities of three well-known nonlinear systems are estimated. Next, a reduced degree-of-freedom model of a wind turbine is developed and exposed to a turbulent wind field. The model incorporates very high dimensions and strong nonlinearities simultaneously. The failure probability...
Estimation and asymptotic theory for transition probabilities in Markov Renewal Multi-state models
Spitoni, C.; Verduijn, M.; Putter, H.
2012-01-01
In this paper we discuss estimation of transition probabilities for semi-Markov multi-state models. Non-parametric and semi-parametric estimators of the transition probabilities for a large class of models (forward going models) are proposed. Large sample theory is derived using the functional delta method.
Pradipta Panchadhyayee
2016-12-01
We have simulated the similar features of the well-known classical phenomena in the quantum domain under the formalism of the probability amplitude method. The identical pattern of interference fringes of a Fabry-Perot interferometer (especially in reflection mode) is obtained through the power-broadened spectral line shape of the population distribution in the excited state with careful delineation of a coherently driven two-level atomic model. In a unit wavelength domain, such a pattern can be substantially modified by controlling the typical spatial field arrangement in one and two dimensions, which is found to be complementary to the findings of recent research on atom localization in the sub-wavelength domain. The spatial dependence of the temporal dynamics has also been studied under a particular condition, which is equivalent to what could be obtained under Raman-Nath diffraction controlled by a spatial phase.
Estimating spatio-temporal dynamics of size-structured populations
Kristensen, Kasper; Thygesen, Uffe Høgsbro; Andersen, Ken Haste
2014-01-01
Spatial distributions of structured populations are usually estimated by fitting abundance surfaces for each stage and at each point of time separately, ignoring correlations that emerge from growth of individuals. Here, we present a statistical model that combines spatio-temporal correlations with simple stock dynamics, to estimate simultaneously how size distributions and spatial distributions develop in time. We demonstrate the method for a cod population sampled by trawl surveys. Particular attention is paid to correlation between size classes within each trawl haul due to clustering of individuals with similar size. The model estimates growth, mortality and reproduction, after which any aspect of size-structure, spatio-temporal population dynamics, as well as the sampling process can be probed. This is illustrated by two applications: 1) tracking the spatial movements of a single cohort...
Regambal, Marci J; Alden, Lynn E
2012-09-01
Individuals with posttraumatic stress disorder (PTSD) are hypothesized to have a "sense of current threat." Perceived threat from the environment (i.e., external threat) can lead to overestimating the probability of the traumatic event reoccurring (Ehlers & Clark, 2000). However, it is unclear whether external threat judgments are a pre-existing vulnerability for PTSD or a consequence of trauma exposure. We used trauma analog methodology to prospectively measure probability estimates of a traumatic event and to investigate how these estimates were related to cognitive processes implicated in PTSD development. A total of 151 participants estimated the probability of being in car-accident-related situations, watched a movie of a car accident victim, and then completed a measure of data-driven processing during the movie. One week later, participants re-estimated the probabilities and completed measures of reexperiencing symptoms and symptom appraisals/reactions. Path analysis revealed that higher pre-existing probability estimates predicted greater data-driven processing, which was associated with negative appraisals and responses to intrusions. Furthermore, lower pre-existing probability estimates and negative responses to intrusions were both associated with a greater change in probability estimates. Reexperiencing symptoms were predicted by negative responses to intrusions and, to a lesser degree, by greater changes in probability estimates. The undergraduate student sample may not be representative of the general public. The reexperiencing symptoms are less severe than what would be found in a trauma sample. Threat estimates present both a vulnerability and a consequence of exposure to a distressing event. Furthermore, changes in these estimates are associated with cognitive processes implicated in PTSD. Copyright © 2012 Elsevier Ltd. All rights reserved.
Inkmann, J.
2005-01-01
The inverse probability weighted Generalised Empirical Likelihood (IPW-GEL) estimator is proposed for the estimation of the parameters of a vector of possibly non-linear unconditional moment functions in the presence of conditionally independent sample selection or attrition. The estimator is applied to the estimation of the firm size elasticity of product and process R&D expenditures using a panel of German manufacturing firms, which is affected by attrition and selection into R&D activities.
Statistical Surrogate Models for Estimating Probability of High-Consequence Climate Change
Field, R.; Constantine, P.; Boslough, M.
2011-12-01
We have posed the climate change problem in a framework similar to that used in safety engineering, by acknowledging that probabilistic risk assessments focused on low-probability, high-consequence climate events are perhaps more appropriate than studies focused simply on best estimates. To properly explore the tails of the distribution requires extensive sampling, which is not possible with existing coupled atmospheric models due to the high computational cost of each simulation. We have developed specialized statistical surrogate models (SSMs) that can be used to make predictions about the tails of the associated probability distributions. A SSM is different than a deterministic surrogate model in that it represents each climate variable of interest as a space/time random field, that is, a random variable for every fixed location in the atmosphere at all times. The SSM can be calibrated to available spatial and temporal data from existing climate databases, or to a collection of outputs from general circulation models. Because of its reduced size and complexity, the realization of a large number of independent model outputs from a SSM becomes computationally straightforward, so that quantifying the risk associated with low-probability, high-consequence climate events becomes feasible. A Bayesian framework was also developed to provide quantitative measures of confidence, via Bayesian credible intervals, to assess these risks. To illustrate the use of the SSM, we considered two collections of NCAR CCSM 3.0 output data. The first collection corresponds to average December surface temperature for years 1990-1999 based on a collection of 8 different model runs obtained from the Program for Climate Model Diagnosis and Intercomparison (PCMDI). We calibrated the surrogate model to the available model data and make various point predictions. We also analyzed average precipitation rate in June, July, and August over a 54-year period assuming a cyclic Y2K ocean model. We
Kruppa, Jochen; Liu, Yufeng; Biau, Gérard; Kohler, Michael; König, Inke R; Malley, James D; Ziegler, Andreas
2014-07-01
Probability estimation for binary and multicategory outcomes using logistic and multinomial logistic regression has a long-standing tradition in biostatistics. However, biases may occur if the model is misspecified. In contrast, outcome probabilities for individuals can be estimated consistently with machine learning approaches, including k-nearest neighbors (k-NN), bagged nearest neighbors (b-NN), random forests (RF), and support vector machines (SVM). Because machine learning methods are rarely used by applied biostatisticians, the primary goal of this paper is to explain the concept of probability estimation with these methods and to summarize recent theoretical findings. Probability estimation in k-NN, b-NN, and RF can be embedded into the class of nonparametric regression learning machines; therefore, we start with the construction of nonparametric regression estimates and review results on consistency and rates of convergence. In SVMs, outcome probabilities for individuals are estimated consistently by repeatedly solving classification problems. For SVMs, we review the classification problem and then dichotomous probability estimation. Next, we extend the algorithms for estimating probabilities using k-NN, b-NN, and RF to multicategory outcomes and discuss approaches for the multicategory probability estimation problem using SVM. In simulation studies for dichotomous and multicategory dependent variables, we demonstrate the general validity of the machine learning methods and compare them with logistic regression. However, each method fails in at least one simulation scenario. We conclude with a discussion of the failures and give recommendations for selecting and tuning the methods. Applications to real data and example code are provided in a companion article (doi:10.1002/bimj.201300077).
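As a small, self-contained illustration of probability estimation with a learning machine versus logistic regression (on synthetic data, not the simulations of the paper), scikit-learn's predict_proba outputs can be compared through the Brier score:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split

# Synthetic binary-outcome data standing in for a biostatistical application
X, y = make_classification(n_samples=2000, n_features=10, n_informative=5, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=1)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=500, random_state=1),
}
for name, model in models.items():
    prob = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    print(name, "Brier score:", round(brier_score_loss(y_te, prob), 4))
```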
Lybbert, Travis J.; Just, David R
2006-01-01
Economists attribute many common behaviors to risk aversion and frequently focus on how wealth moderates risk preferences. This paper highlights a problem associated with empirical tests of the relationship between wealth and risk aversion that can arise when the probabilities individuals face are unobservable to researchers. The common remedy for unobservable probabilities involves the estimation of probabilities in a profit or production function that includes farmer, farm and agro-climatic variable...
Probability of an Error in Estimation of States of a Modulated Synchronous Flow of Physical Events
Gortsev, A. M.; Sirotina, M. N.
2016-11-01
A flow of physical events (photons, electrons, etc.) is considered. One of the mathematical models of such flows is a modulated synchronous doubly stochastic flow of events. Analytical results for conditional and unconditional probabilities of erroneous decision in optimal estimation of flow states upon the criterion of the a posteriori probability maximum are presented.
Easy probability estimation of the diagnosis of early axial spondyloarthritis by summing up scores.
Feldtkeller, Ernst; Rudwaleit, Martin; Zeidler, Henning
2013-09-01
Several sets of criteria for the diagnosis of axial SpA (including non-radiographic axial spondyloarthritis) have been proposed in the literature, in which scores are attributed to relevant findings and the diagnosis requires a minimal sum of these scores. To quantitatively estimate the probability of axial SpA, multiplying the likelihood ratios of all relevant findings was proposed by Rudwaleit et al. in 2004. The objective of our proposal is to combine the advantages of both, i.e. to estimate the probability by summing up scores instead of multiplying likelihood ratios. An easy way to estimate the probability of axial spondyloarthritis is to use the logarithms of the likelihood ratios as scores attributed to relevant findings and to use the sum of these scores for the probability estimation. A list of whole-numbered scores for relevant findings is presented, as well as the threshold sum values necessary for a definite and for a probable diagnosis of axial SpA, and a threshold below which the diagnosis of axial spondyloarthritis can be excluded. In a diagram, the probability of axial spondyloarthritis is given for sum values between these thresholds. By the method proposed, the advantages of both the easy summing up of scores and the quantitative calculation of the diagnosis probability are combined. Our method also makes it easier to estimate which additional tests are necessary to come to a definite diagnosis.
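The equivalence between multiplying likelihood ratios and summing their logarithms is easy to illustrate. The pretest probability and the likelihood ratios below are placeholders, not the published scores or thresholds.

```python
import numpy as np

def spa_probability(pretest_probability, log10_likelihood_ratios):
    """Post-test probability from summed log10 likelihood-ratio scores (sketch)."""
    pretest_odds = pretest_probability / (1.0 - pretest_probability)
    posttest_odds = pretest_odds * 10.0 ** np.sum(log10_likelihood_ratios)
    return posttest_odds / (1.0 + posttest_odds)

# Hypothetical case: pretest probability 5%, three findings with LRs of 9, 4 and 2.5
print(spa_probability(0.05, np.log10([9.0, 4.0, 2.5])))
```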
On temporal evolution of precipitation probability of the Yangtze River delta in the last 50 years
Feng, Guo-Lin; Dong, Wen-Jie; Li, Jing-Ping
2004-09-01
The monthly precipitation observational data of the Yangtze River delta are transformed into the temporal evolution of precipitation probability (PP), and its hierarchically distributive characters are revealed in this paper. Research results show that precipitation of the Yangtze River delta displays interannual and interdecadal characters, and the periods are all significant at a confidence level of more than 0.05. The interdecadal scale is important because it is, on the one hand, a disturbance of long-period changes and, on the other hand, the background for interannual change. The interdecadal and 3-7 y oscillations have different motion laws in the data-based mechanism self-memory model (DAMSM). Meanwhile, this paper also provides a new train of thought for dynamic modelling. Because this method only involves a certain length of data series, it can be used in many fields, such as meteorology, hydrology, seismology and economy, and thus has a bright perspective in practical applications.
van der Hoop, Julie M; Vanderlaan, Angelia S M; Taggart, Christopher T
2012-10-01
Vessel strikes are the primary source of known mortality for the endangered North Atlantic right whale (Eubalaena glacialis). Multi-institutional efforts to reduce mortality associated with vessel strikes include vessel-routing amendments such as the International Maritime Organization voluntary "area to be avoided" (ATBA) in the Roseway Basin right whale feeding habitat on the southwestern Scotian Shelf. Though relative probabilities of lethal vessel strikes have been estimated and published, absolute probabilities remain unknown. We used a modeling approach to determine the regional effect of the ATBA, by estimating reductions in the expected number of lethal vessel strikes. This analysis differs from others in that it explicitly includes a spatiotemporal analysis of real-time transits of vessels through a population of simulated, swimming right whales. Combining automatic identification system (AIS) vessel navigation data and an observationally based whale movement model allowed us to determine the spatial and temporal intersection of vessels and whales, from which various probability estimates of lethal vessel strikes are derived. We estimate one lethal vessel strike every 0.775-2.07 years prior to ATBA implementation, consistent with and more constrained than previous estimates of every 2-16 years. Following implementation, a lethal vessel strike is expected every 41 years. When whale abundance is held constant across years, we estimate that voluntary vessel compliance with the ATBA results in an 82% reduction in the per capita rate of lethal strikes; very similar to a previously published estimate of 82% reduction in the relative risk of a lethal vessel strike. The models we developed can inform decision-making and policy design, based on their ability to provide absolute, population-corrected, time-varying estimates of lethal vessel strikes, and they are easily transported to other regions and situations.
Arkhincheev, V E
2017-03-01
The new asymptotic behavior of the survival probability of particles in a medium with absorbing traps in an electric field has been established in two ways: by using the scaling approach and by directly solving the diffusion equation in the field. It has been shown that at long times this drift mechanism leads to a new temporal behavior of the survival probability of particles in a medium with absorbing traps.
Hubig, Michael; Muggenthaler, Holger; Mall, Gita
2014-05-01
Bayesian estimation applied to temperature-based death time estimation was recently introduced as the conditional probability distribution or CPD-method by Biermann and Potente. The CPD-method is useful if there is external information that sets the boundaries of the true death time interval (victim last seen alive and found dead). CPD allows computation of probabilities for small time intervals of interest (e.g. no-alibi intervals of suspects) within the large true death time interval. In the light of the importance of the CPD for conviction or acquittal of suspects, the present study identifies a potential error source. Deviations in death time estimates will cause errors in the CPD-computed probabilities. We derive formulae to quantify the CPD error as a function of the input error. Moreover, we observed the paradox that in cases in which the small no-alibi time interval is located at the boundary of the true death time interval, adjacent to the erroneous death time estimate, the CPD-computed probabilities for that small no-alibi interval will increase with increasing input deviation; otherwise, the CPD-computed probabilities will decrease. We therefore advise not to use CPD if there is an indication of an error or a contra-empirical deviation in the death time estimates, that is, especially if the death time estimates fall outside the true death time interval, even if the 95%-confidence intervals of the estimate still overlap the true death time interval.
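The conditional probability at the core of the CPD-method can be sketched under the assumption of a Gaussian error model for the temperature-based estimate; the error-propagation formulae derived in the paper are not reproduced.

```python
from scipy.stats import norm

def cpd_interval_probability(mu, sigma, interval, bounds):
    """P(death in interval | death in bounds) under a Gaussian death-time estimate (sketch).

    mu, sigma: temperature-based death-time estimate and its standard deviation.
    interval: small interval of interest (e.g. a suspect's no-alibi window).
    bounds:   true death-time interval fixed by external evidence
              (victim last seen alive, victim found dead).
    """
    a, b = interval
    lo, hi = bounds
    numer = norm.cdf(b, mu, sigma) - norm.cdf(a, mu, sigma)
    denom = norm.cdf(hi, mu, sigma) - norm.cdf(lo, mu, sigma)
    return numer / denom
```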
Rice yield estimation with multi-temporal Radarsat-2 data
Chen, Chi-Farn; Son, Nguyen-Thanh; Chen, Cheng-Ru
2015-04-01
Rice is the most important food crop in Taiwan. Monitoring rice crop yield is thus crucial for agronomic planners to formulate successful strategies to address national food security and rice grain export issues. However, this monitoring task poses a real challenge because rice fields in Taiwan are generally small and fragmented, and the cropping calendar differs from region to region. Thus, satellite-based estimation of rice crop yield requires data with sufficient spatial and temporal resolution. This study aimed to develop models to estimate rice crop yield from multi-temporal Radarsat-2 data (5 m resolution). Data processing was carried out for the first rice cropping season from February to July 2014 in the western part of Taiwan, consisting of four main steps: (1) constructing time-series backscattering coefficient data, (2) spatiotemporal noise filtering of the time-series data, (3) establishment of crop yield models using the time-series backscattering coefficients and in-situ measured yield data, and (4) model validation using field data and the government's yield statistics. The results indicated that backscattering behavior varied from region to region due to changes in cultural practices and cropping calendars. The highest correlation coefficient (R2 > 0.8) was obtained at the ripening period. The robustness of the established models was evaluated by comparing the estimated yields with in-situ measured yield data; the comparisons showed satisfactory results, with the root mean squared error (RMSE) smaller than 10%. Such results were reaffirmed by the correlation analysis between the estimated yields and the government's rice yield statistics (R2 > 0.8). This study demonstrates the advantages of using multi-temporal Radarsat-2 backscattering data for estimating rice crop yields in Taiwan prior to the harvesting period, and the methods are thus proposed for rice yield monitoring in other regions.
Patrick L. Zimmerman; Greg C. Liknes
2010-01-01
Dot grids are often used to estimate the proportion of land cover belonging to some class in an aerial photograph. Interpreter misclassification is an often-ignored source of error in dot-grid sampling that has the potential to significantly bias proportion estimates. For the case when the true class of items is unknown, we present a maximum-likelihood estimator of...
Simon, N V; Levisky, J S; Shearer, D M; Morris, K C; Hansberry, P A
1988-06-01
We evaluated the predictiveness of sonographically estimated fetal weight as a function of the estimation of probability of having intrauterine growth retardation (IUGR) before obtaining an ultrasound scan (prior probability). The value of the estimated fetal weight resided more in its high specificity than in its sensitivity, hence in its ability to confirm that the fetus is normal. The predictiveness of the method was further enhanced when the fetal weight estimation was placed in the context of the prior probability of IUGR. In particular, the positive predictive value of the test as well as the likelihood of having a growth-retarded infant in spite of an estimated fetal weight within the normal range were considerably higher as the prior probability of IUGR increased. Since the obstetrician using all available evidence is likely to form a rather good estimate of the possibility of IUGR before ordering a scan, this improvement in the predictiveness of estimated fetal weight through a Bayesian approach can be advantageously applied to ultrasound analysis and can effectively support clinical decision making.
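The Bayesian updating described above amounts to combining the test's sensitivity and specificity with the clinician's prior probability of IUGR; a minimal sketch with purely illustrative numbers follows.

```python
def predictive_values(sensitivity, specificity, prior):
    """Probability of IUGR after a positive and after a negative EFW screen (sketch)."""
    p_pos = sensitivity * prior + (1.0 - specificity) * (1.0 - prior)
    ppv = sensitivity * prior / p_pos
    p_neg = (1.0 - sensitivity) * prior + specificity * (1.0 - prior)
    residual_risk = (1.0 - sensitivity) * prior / p_neg   # IUGR despite a normal-range EFW
    return ppv, residual_risk

# Illustrative test characteristics only: the same test at a low and a high prior probability
print(predictive_values(0.6, 0.95, 0.05))
print(predictive_values(0.6, 0.95, 0.40))
```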
Temporal Parameters Estimation for Wheelchair Propulsion Using Wearable Sensors
Manoela Ojeda
2014-01-01
Due to lower limb paralysis, individuals with spinal cord injury (SCI) rely on their upper limbs for mobility. The prevalence of upper extremity pain and injury is high among this population. We evaluated the performance of three triaxial accelerometers placed on the upper arm, wrist, and under the wheelchair, to estimate temporal parameters of wheelchair propulsion. Twenty-six participants with SCI were asked to push their wheelchair equipped with a SMARTWheel. The estimated stroke number was compared with the criterion from video observations, and the estimated push frequency was compared with the criterion from the SMARTWheel. Mean absolute errors (MAE) and mean absolute percentages of error (MAPE) were calculated. Intraclass correlation coefficients and Bland-Altman plots were used to assess the agreement. Results showed reasonable accuracies, especially using the accelerometer placed on the upper arm, where the MAPE was 8.0% for stroke number and 12.9% for push frequency. The ICC was 0.994 for stroke number and 0.916 for push frequency. The wrist and seat accelerometers showed lower accuracy, with MAPEs for the stroke number of 10.8% and 13.4% and ICCs of 0.990 and 0.984, respectively. Results suggested that accelerometers could be an option for monitoring temporal parameters of wheelchair propulsion.
Over, Thomas; Saito, Riki J.; Veilleux, Andrea; Sharpe, Jennifer B.; Soong, David T.; Ishii, Audrey
2016-06-28
This report provides two sets of equations for estimating peak discharge quantiles at annual exceedance probabilities (AEPs) of 0.50, 0.20, 0.10, 0.04, 0.02, 0.01, 0.005, and 0.002 (recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively) for watersheds in Illinois based on annual maximum peak discharge data from 117 watersheds in and near northeastern Illinois. One set of equations was developed through a temporal analysis with a two-step least squares-quantile regression technique that measures the average effect of changes in the urbanization of the watersheds used in the study. The resulting equations can be used to adjust rural peak discharge quantiles for the effect of urbanization, and in this study the equations also were used to adjust the annual maximum peak discharges from the study watersheds to 2010 urbanization conditions. The other set of equations was developed by a spatial analysis. This analysis used generalized least-squares regression to fit the peak discharge quantiles computed from the urbanization-adjusted annual maximum peak discharges from the study watersheds to drainage-basin characteristics. The peak discharge quantiles were computed by using the Expected Moments Algorithm following the removal of potentially influential low floods defined by a multiple Grubbs-Beck test. To improve the quantile estimates, generalized skew coefficients were obtained from a newly developed regional skew model in which the skew increases with the urbanized land use fraction. The drainage-basin characteristics used as explanatory variables in the spatial analysis include drainage area, the fraction of developed land, the fraction of land with poorly drained soils or likely water, and the basin slope estimated as the ratio of the basin relief to basin perimeter. This report also provides the following: (1) examples to illustrate the use of the spatial and urbanization-adjustment equations for estimating peak discharge quantiles at
Multifractals embedded in short time series: An unbiased estimation of probability moment
Qiu, Lu; Yang, Tianguang; Yin, Yanhua; Gu, Changgui; Yang, Huijie
2016-12-01
An exact estimation of probability moments is the base for several essential concepts, such as the multifractals, the Tsallis entropy, and the transfer entropy. By means of approximation theory we propose a new method called factorial-moment-based estimation of probability moments. Theoretical prediction and computational results show that it can provide us with an unbiased estimation of the probability moments of continuous order. Calculations on a probability redistribution model verify that it can extract exactly multifractal behaviors from several hundred recordings. Its power in monitoring the evolution of scaling behaviors is exemplified by two empirical cases, i.e., the gait time series for fast, normal, and slow trials of a healthy volunteer, and the closing price series for the Shanghai stock market. By using short time series with lengths of several hundred points, a comparison with the well-established tools displays significant advantages of its performance over the other methods. The factorial-moment-based estimation can evaluate correctly the scaling behaviors in a scale range about three generations wider than the multifractal detrended fluctuation analysis and the basic estimation. The estimation of the partition function given by the wavelet transform modulus maxima has unacceptable fluctuations. Besides the scaling invariance focused on in the present paper, the proposed factorial moment of continuous order can find various uses, such as finding nonextensive behaviors of a complex system and reconstructing the causality relationship network between elements of a complex system.
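For integer order q the factorial-moment idea can be stated in a few lines: for multinomial box counts, the falling factorial of the counts gives an unbiased estimate of the partition function Σ p_i^q. The extension to moments of continuous order developed in the paper is not reproduced here.

```python
import numpy as np

def factorial_moment_partition(counts, q):
    """Unbiased estimate of sum_i p_i**q from box counts, for integer q (sketch)."""
    n = np.asarray(counts, dtype=float)
    N = n.sum()

    def falling(x, order):
        out = np.ones_like(x) if np.ndim(x) else 1.0
        for r in range(order):
            out = out * (x - r)
        return out

    return falling(n, q).sum() / falling(N, q)

# Example: 1000 draws over 8 equiprobable boxes; the estimate is close to sum p_i^2 = 1/8
rng = np.random.default_rng(0)
counts = rng.multinomial(1000, np.full(8, 1 / 8))
print(factorial_moment_partition(counts, 2))
```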
First Passage Probability Estimation of Wind Turbines by Markov Chain Monte Carlo
Sichani, Mahdi Teimouri; Nielsen, Søren R.K.
2013-01-01
Markov Chain Monte Carlo simulation has received considerable attention within the past decade as reportedly one of the most powerful techniques for the first passage probability estimation of dynamic systems. A very popular method in this direction, capable of estimating the probability of rare events with low computation cost, is subset simulation (SS). The idea of the method is to break a rare event into a sequence of more probable events which are easy to estimate based on conditional simulation techniques. Recently, two algorithms have been proposed in order to increase the efficiency of the method by modifying the conditional sampler. In this paper, the applicability of the original SS is compared to the recently introduced modifications of the method on a wind turbine model. The model incorporates a PID pitch controller which aims at keeping the rotational speed of the wind turbine rotor equal
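A stripped-down subset simulation can be sketched as follows for a generic scalar response of standard normal inputs. The adaptive intermediate levels follow the usual construction, but a plain Metropolis random walk replaces the component-wise modified Metropolis sampler used in the literature, and the tuning constants are arbitrary.

```python
import numpy as np

def subset_simulation(g, dim, b, n=2000, p0=0.1, step=1.0, seed=0):
    """Estimate P(g(X) > b) for X ~ N(0, I_dim) by subset simulation (sketch)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, dim))
    y = np.apply_along_axis(g, 1, x)
    n_seed = int(p0 * n)
    chain_len = n // n_seed
    prob = 1.0
    for _ in range(50):                               # hard cap on the number of levels
        if np.mean(y > b) >= p0:                      # final level reached
            return prob * np.mean(y > b)
        idx = np.argsort(y)[::-1][:n_seed]            # seeds: samples closest to failure
        b_i = y[idx[-1]]                              # intermediate threshold
        prob *= n_seed / n
        new_x, new_y = [], []
        for cx, cy in zip(x[idx], y[idx]):            # one Markov chain per seed
            for _ in range(chain_len):
                cand = cx + step * rng.standard_normal(dim)
                # Metropolis step for the standard normal target ...
                if np.log(rng.uniform()) < 0.5 * (cx @ cx - cand @ cand):
                    cand_y = g(cand)
                    if cand_y > b_i:                  # ... restricted to the current level
                        cx, cy = cand, cand_y
                new_x.append(cx)
                new_y.append(cy)
        x, y = np.array(new_x), np.array(new_y)
    return prob * np.mean(y > b)

# Example: P(sum of 10 standard normals > 12); exact value is about 7e-5
print(subset_simulation(lambda v: v.sum(), dim=10, b=12.0))
```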
Estimating probability curves of rock variables using orthogonal polynomials and sample moments
DENG Jian; BIAN Li
2005-01-01
A new algorithm using orthogonal polynomials and sample moments was presented for estimating probability curves directly from experimental or field data of rock variables. The moments estimated directly from a sample of observed values of a random variable can be conventional moments (moments about the origin or central moments) or probability-weighted moments (PWMs). Probability curves derived from orthogonal polynomials and conventional moments are probability density functions (PDF), and probability curves derived from orthogonal polynomials and PWMs are inverse cumulative distribution functions (CDF) of random variables. The proposed approach is verified with two of the most commonly used theoretical standard distributions: the normal and the exponential distribution. Examples from observed data of uniaxial compressive strength of a rock and concrete strength data are presented for illustrative purposes. The results show that probability curves of rock variables can be accurately derived from orthogonal polynomials and sample moments. Orthogonal polynomials and PWMs enable more secure inferences to be made from relatively small samples about an underlying probability curve.
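As a concrete illustration of the probability-weighted moments the abstract relies on, the standard unbiased sample estimator of b_r = E[X F(X)^r] can be sketched as follows; the exponential check values are a generic property of PWMs, not data from the paper.

```python
import numpy as np
from math import comb

def sample_pwm(x, r):
    """Unbiased sample estimator of the probability-weighted moment
    b_r = E[X * F(X)**r], computed from the ordered sample x_(1) <= ... <= x_(n)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    w = np.array([comb(i - 1, r) / comb(n - 1, r) for i in range(1, n + 1)])
    return float(np.mean(w * x))

# sanity check with the Exp(1) distribution, for which b_0 = 1 and b_1 = 3/4
rng = np.random.default_rng(1)
sample = rng.exponential(size=5000)
print(sample_pwm(sample, 0), sample_pwm(sample, 1))
```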
Temporal scaling analysis of irradiance estimated from daily satellite data and numerical modelling
Vindel, Jose M.; Navarro, Ana A.; Valenzuela, Rita X.; Ramírez, Lourdes
2016-11-01
The temporal variability of global irradiance estimated from daily satellite data and numerical models has been compared for different spans of time. According to the time scale considered, a different behaviour can be expected for each climate. Indeed, for all climates and at small scale, the persistence decreases as this scale increases, but the Mediterranean climate, and its continental variety, shows higher persistence than the oceanic climate. The probabilities of maintaining the values of irradiance after a certain period of time have been used as a first approximation to analyse the quality of each source, according to the climate. In addition, probability distributions corresponding to variations of clearness indices measured at several stations located in different climate zones have been compared with those obtained from satellite and modelling estimations. For this work, daily radiation data from the reanalysis carried out by the European Centre for Medium-Range Weather Forecasts and from the Satellite Application Facility on Climate Monitoring have been used for mainland Spain. According to the results, the temporal series estimation of irradiance is more accurate when using satellite data, independently of the climate considered. In fact, the coefficients of determination corresponding to the locations studied are always above 0.92 in the case of satellite data, while this coefficient decreases to 0.69 for some cases of the numerical model. This conclusion is more evident in oceanic climates, where the most important errors can be observed. Indeed, in this case, the RRMSE derived from the CM-SAF estimations is 20.93%, while in the numerical model, it is 48.33%. Analysis of the probabilities corresponding to variations in the clearness indices also shows a better behaviour of the satellite-derived estimates for the oceanic climate. For the standard Mediterranean climate, the satellite also provides better results, though the numerical model improves
The estimated lifetime probability of acquiring human papillomavirus in the United States.
Chesson, Harrell W; Dunne, Eileen F; Hariri, Susan; Markowitz, Lauri E
2014-11-01
Estimates of the lifetime probability of acquiring human papillomavirus (HPV) can help to quantify HPV incidence, illustrate how common HPV infection is, and highlight the importance of HPV vaccination. We developed a simple model, based primarily on the distribution of lifetime numbers of sex partners across the population and the per-partnership probability of acquiring HPV, to estimate the lifetime probability of acquiring HPV in the United States in the time frame before HPV vaccine availability. We estimated the average lifetime probability of acquiring HPV among those with at least 1 opposite sex partner to be 84.6% (range, 53.6%-95.0%) for women and 91.3% (range, 69.5%-97.7%) for men. Under base case assumptions, more than 80% of women and men acquire HPV by age 45 years. Our results are consistent with estimates in the existing literature suggesting a high lifetime probability of HPV acquisition and are supported by cohort studies showing high cumulative HPV incidence over a relatively short period, such as 3 to 5 years.
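The core calculation described here, mixing a per-partnership acquisition probability over the distribution of lifetime partner numbers, can be sketched as follows. The partner-number distribution and per-partner probability below are illustrative placeholders, not the paper's inputs.

```python
import numpy as np

def lifetime_acquisition_probability(partner_probs, p_per_partner):
    """P(acquire at least once) = sum_k P(k partners) * (1 - (1 - p)**k),
    i.e. one minus the chance of escaping infection with every partner."""
    ks = np.arange(len(partner_probs))
    return float(np.sum(np.asarray(partner_probs) * (1.0 - (1.0 - p_per_partner) ** ks)))

# illustrative distribution over k = 0..5 lifetime partners (placeholder values)
partner_probs = [0.05, 0.20, 0.15, 0.15, 0.15, 0.30]
print(lifetime_acquisition_probability(partner_probs, p_per_partner=0.4))
```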
Impaired probability estimation and decision-making in pathological gambling poker players.
Linnet, Jakob; Frøslev, Mette; Ramsgaard, Stine; Gebauer, Line; Mouridsen, Kim; Wohlert, Victoria
2012-03-01
Poker has gained tremendous popularity in recent years, increasing the risk for some individuals to develop pathological gambling. Here, we investigated cognitive biases in a computerized two-player poker task against a fictive opponent, among 12 pathological gambling poker players (PGP), 10 experienced poker players (ExP), and 11 inexperienced poker players (InP). Players were compared on probability estimation and decision-making with the hypothesis that ExP would have significantly lower cognitive biases than PGP and InP, and that the groups could be differentiated based on their cognitive bias styles. The results showed that ExP had a significantly lower average error margin in probability estimation than PGP and InP, and that PGP played hands with lower winning probability than ExP. Binomial logistic regression showed perfect differentiation (100%) between ExP and PGP, and 90.5% classification accuracy between ExP and InP. Multinomial logistic regression showed an overall classification accuracy of 23 out of 33 (69.7%) between the three groups. The classification accuracy of ExP was higher than that of PGP and InP due to the similarities in probability estimation and decision-making between PGP and InP. These impairments in probability estimation and decision-making of PGP may have implications for assessment and treatment of cognitive biases in pathological gambling poker players.
Tvedebrink, Torben; Eriksen, Poul Svante; Asplund, Maria; Mogensen, Helle Smidt; Morling, Niels
2012-03-01
We discuss the model for estimating drop-out probabilities presented by Tvedebrink et al. [7] and the concerns that have been raised. The criticism of the model has demonstrated that the model is not perfect. However, the model is very useful for advanced forensic genetic work, where allelic drop-out is occurring. With this discussion, we hope to improve the drop-out model, so that it can be used for practical forensic genetics and stimulate further discussions. We discuss how to estimate drop-out probabilities when using a varying number of PCR cycles and other experimental conditions.
Time of Arrival Estimation in Probability-Controlled Generalized CDMA Systems
Hagit Messer
2007-11-01
In recent years, more and more wireless communications systems are also required to provide a positioning measurement. In code division multiple access (CDMA) communication systems, the positioning accuracy is significantly degraded by the multiple access interference (MAI) caused by other users in the system. This MAI is commonly managed by a power control mechanism, and yet, MAI has a major effect on positioning accuracy. Probability control is a recently introduced interference management mechanism. In this mechanism, a user with excess power chooses not to transmit some of its symbols. The information in the nontransmitted symbols is recovered by an error-correcting code (ECC), while all other users receive more reliable data during these quiet periods. Previous research has shown that the implementation of a probability control mechanism can significantly reduce the MAI. In this paper, we show that probability control also improves the positioning accuracy. We focus on time-of-arrival (TOA) based positioning systems. We analyze the TOA estimation performance in a generalized CDMA system, in which the probability control mechanism is employed, where the transmitted signal is noncontinuous with a symbol transmission probability smaller than 1. The accuracy of the TOA estimation is determined using appropriate modifications of the Cramer-Rao bound on the delay estimation. Keeping the average transmission power constant, we show that the TOA accuracy of each user does not depend on its transmission probability, while being a nondecreasing function of the transmission probability of any other user. Therefore, a generalized, noncontinuous CDMA system with a probability control mechanism can always achieve better positioning performance, for all users in the network, than a conventional, continuous CDMA system.
Estimating stage-specific daily survival probabilities of nests when nest age is unknown
Stanley, T.R.
2004-01-01
Estimation of daily survival probabilities of nests is common in studies of avian populations. Since the introduction of Mayfield's (1961, 1975) estimator, numerous models have been developed to relax Mayfield's assumptions and account for biologically important sources of variation. Stanley (2000) presented a model for estimating stage-specific (e.g. incubation stage, nestling stage) daily survival probabilities of nests that conditions on “nest type” and requires that nests be aged when they are found. Because aging nests typically requires handling the eggs, there may be situations where nests cannot or should not be aged and the Stanley (2000) model will be inapplicable. Here, I present a model for estimating stage-specific daily survival probabilities that conditions on nest stage for active nests, thereby obviating the need to age nests when they are found. Specifically, I derive the maximum likelihood function for the model, evaluate the model's performance using Monte Carlo simulations, and provide software for estimating parameters (along with an example). For sample sizes as low as 50 nests, bias was small and confidence interval coverage was close to the nominal rate, especially when a reduced-parameter model was used for estimation.
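As background for the likelihood machinery discussed here, a minimal sketch of a constant daily-survival MLE from interval re-visit data is given below. It ignores nest stage and the refinements of the Stanley (2000) model, and the data are toy values.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def daily_survival_mle(interval_days, survived):
    """MLE of a constant daily nest-survival probability s from re-visit data.
    A nest observed to survive an interval of t days contributes s**t to the
    likelihood; a nest that failed somewhere in the interval contributes 1 - s**t."""
    t = np.asarray(interval_days, dtype=float)
    ok = np.asarray(survived, dtype=bool)

    def neg_log_lik(s):
        p_alive = s ** t
        return -np.sum(np.where(ok, np.log(p_alive), np.log(1.0 - p_alive)))

    res = minimize_scalar(neg_log_lik, bounds=(1e-6, 1 - 1e-6), method="bounded")
    return res.x

# toy revisit data: interval lengths in days and whether the nest was still active
print(daily_survival_mle([5, 5, 5, 5, 3, 4], [1, 1, 1, 0, 1, 0]))
```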
Fast estimation of false alarm probabilities of STAP detectors - the AMF
Srinivasan, Rajan; Rangaswamy, Muralidhar
2005-01-01
This paper describes an attempt to harness the power of adaptive importance sampling techniques for estimating false alarm probabilities of detectors that use space-time adaptive processing. Fast simulation using these techniques has been notably successful in the study of conventional constant fal
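A minimal sketch of the underlying importance-sampling idea, using a toy scalar Gaussian detection statistic rather than the adaptive matched filter (AMF) statistic studied in the paper, is:

```python
import numpy as np
from scipy import stats

def is_false_alarm_probability(threshold, n=20000, seed=0):
    """Importance-sampling estimate of P(T > threshold) under H0 for a toy
    detection statistic T(x) = x with x ~ N(0, 1).  Samples are drawn from a
    biasing density N(threshold, 1) and reweighted by the likelihood ratio."""
    rng = np.random.default_rng(seed)
    x = rng.normal(loc=threshold, scale=1.0, size=n)
    w = stats.norm.pdf(x, 0.0, 1.0) / stats.norm.pdf(x, threshold, 1.0)
    return float(np.mean((x > threshold) * w))

# P(N(0,1) > 4.5) is about 3.4e-6; crude Monte Carlo with 2e4 samples would
# almost always return 0, while the importance-sampling estimate is already usable
print(is_false_alarm_probability(4.5), 1.0 - stats.norm.cdf(4.5))
```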
Probability of Error in Estimating States of a Flow of Physical Events
Gortsev, A. M.; Solov'ev, A. A.
2016-09-01
A flow of physical events (photons, electrons, etc.) is considered. One of the mathematical models of such flows is the MAP flow of events. Analytical results for conditional and unconditional probabilities of erroneous decision in optimal estimation of states of the MAP flow of events are presented.
Wildland fire probabilities estimated from weather model-deduced monthly mean fire danger indices
Haiganoush K. Preisler; Shyh-Chin Chen; Francis Fujioka; John W. Benoit; Anthony L. Westerling
2008-01-01
The National Fire Danger Rating System indices deduced from a regional simulation weather model were used to estimate probabilities and numbers of large fire events on monthly and 1-degree grid scales. The weather model simulations and forecasts are ongoing experimental products from the Experimental Climate Prediction Center at the Scripps Institution of Oceanography...
Gray, David R
2010-12-01
As global trade increases so too does the probability of introduction of alien species to new locations. Estimating the probability of an alien species introduction and establishment following introduction is a necessary step in risk estimation (probability of an event times the consequences, in the currency of choice, of the event should it occur); risk estimation is a valuable tool for reducing the risk of biological invasion with limited resources. The Asian gypsy moth, Lymantria dispar (L.), is a pest species whose consequence of introduction and establishment in North America and New Zealand warrants over US$2 million per year in surveillance expenditure. This work describes the development of a two-dimensional phenology model (GLS-2d) that simulates insect development from source to destination and estimates: (1) the probability of introduction from the proportion of the source population that would achieve the next developmental stage at the destination and (2) the probability of establishment from the proportion of the introduced population that survives until a stable life cycle is reached at the destination. The effect of shipping schedule on the probabilities of introduction and establishment was examined by varying the departure date from 1 January to 25 December by weekly increments. The effect of port efficiency was examined by varying the length of time that invasion vectors (shipping containers and ship) were available for infection. The application of GLS-2d is demonstrated using three common marine trade routes (to Auckland, New Zealand, from Kobe, Japan, and to Vancouver, Canada, from Kobe and from Vladivostok, Russia).
Juang, K.W.; Lee, D.Y. [National Taiwan Univ., Taipei (Taiwan, Province of China). Graduate Inst. of Agricultural Chemistry]
1998-09-01
The probability of incorrectly delineating hazardous areas in a contaminated site is very important for decision-makers because it indicates the magnitude of confidence that decision-makers have in determining areas in need of remediation. In this study, simple indicator kriging (SIK) was used to estimate the probability of incorrectly delineating hazardous areas in a heavy metal-contaminated site, which is located at Taoyuan, Taiwan, and is about 10 ha in area. In the procedure, the values 0 and 1 were assigned to be the stationary means of the indicator codes in the SIK model to represent two hypotheses, hazardous and safe, respectively. The spatial distribution of the conditional probability of heavy metal concentrations lower than a threshold, given each hypothesis, was estimated using SIK. Then, the probabilities of false positives (α) (i.e., the probability of declaring a location hazardous when it is not) and false negatives (β) (i.e., the probability of declaring a location safe when it is not) in delineating hazardous areas for the heavy metal-contaminated site could be obtained. The spatial distribution of the probabilities of false positives and false negatives could help in delineating hazardous areas based on a tolerable probability level of incorrect delineation. In addition, delineation complicated by the cost of remediation, hazards in the environment, and hazards to human health could be made based on the minimum values of α and β. The results suggest that the proposed SIK procedure is useful for decision-makers who need to delineate hazardous areas in a heavy metal-contaminated site.
Estimating the Probability of Elevated Nitrate Concentrations in Ground Water in Washington State
Frans, Lonna M.
2008-01-01
Logistic regression was used to relate anthropogenic (manmade) and natural variables to the occurrence of elevated nitrate concentrations in ground water in Washington State. Variables that were analyzed included well depth, ground-water recharge rate, precipitation, population density, fertilizer application amounts, soil characteristics, hydrogeomorphic regions, and land-use types. Two models were developed: one with and one without the hydrogeomorphic regions variable. The variables in both models that best explained the occurrence of elevated nitrate concentrations (defined as concentrations of nitrite plus nitrate as nitrogen greater than 2 milligrams per liter) were the percentage of agricultural land use in a 4-kilometer radius of a well, population density, precipitation, soil drainage class, and well depth. Based on the relations between these variables and measured nitrate concentrations, logistic regression models were developed to estimate the probability of nitrate concentrations in ground water exceeding 2 milligrams per liter. Maps of Washington State were produced that illustrate these estimated probabilities for wells drilled to 145 feet below land surface (median well depth) and the estimated depth to which wells would need to be drilled to have a 90-percent probability of drawing water with a nitrate concentration less than 2 milligrams per liter. Maps showing the estimated probability of elevated nitrate concentrations indicated that the agricultural regions are most at risk followed by urban areas. The estimated depths to which wells would need to be drilled to have a 90-percent probability of obtaining water with nitrate concentrations less than 2 milligrams per liter exceeded 1,000 feet in the agricultural regions; whereas, wells in urban areas generally would need to be drilled to depths in excess of 400 feet.
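A minimal sketch of the kind of logistic-regression exceedance model the report describes is given below. The predictor names mirror the report, but all data values, coefficients, and the example well are synthetic placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# synthetic predictors (names mirror the report, values are made up):
# % agricultural land, population density, precipitation (mm), soil drainage class, well depth (ft)
X = np.column_stack([
    rng.uniform(0, 100, n),
    rng.lognormal(3, 1, n),
    rng.uniform(200, 2500, n),
    rng.integers(1, 7, n),
    rng.uniform(20, 600, n),
])
# synthetic response: elevated nitrate (> 2 mg/L as N), made more likely for shallow
# wells in heavily agricultural areas (illustrative coefficients only)
logit = 0.04 * X[:, 0] - 0.01 * X[:, 4] - 1.0
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression(max_iter=2000).fit(X, y)
# estimated probability of an elevated concentration for a hypothetical new well
new_well = [[80.0, 150.0, 400.0, 3, 145.0]]
print(model.predict_proba(new_well)[0, 1])
```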
Evaluating probability forecasts
Lai, Tze Leung; Shen, David Bo; 10.1214/11-AOS902
2012-01-01
Probability forecasts of events are routinely used in climate predictions, in forecasting default probabilities on bank loans or in estimating the probability of a patient's positive response to treatment. Scoring rules have long been used to assess the efficacy of the forecast probabilities after observing the occurrence, or nonoccurrence, of the predicted events. We develop herein a statistical theory for scoring rules and propose an alternative approach to the evaluation of probability forecasts. This approach uses loss functions relating the predicted to the actual probabilities of the events and applies martingale theory to exploit the temporal structure between the forecast and the subsequent occurrence or nonoccurrence of the event.
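For context, the classical scoring-rule baseline that such evaluations build on (not the authors' martingale-based loss-function approach) is the Brier score:

```python
import numpy as np

def brier_score(forecast_probs, outcomes):
    """Mean squared difference between forecast probabilities and 0/1 outcomes.
    Lower is better; the score is proper, i.e. minimised in expectation by
    reporting the true event probability."""
    f = np.asarray(forecast_probs, dtype=float)
    o = np.asarray(outcomes, dtype=float)
    return float(np.mean((f - o) ** 2))

print(brier_score([0.9, 0.2, 0.7, 0.1], [1, 0, 1, 0]))   # 0.0375
```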
Langtimm, C.A.; O'Shea, T.J.; Pradel, R.; Beck, C.A.
1998-01-01
The population dynamics of large, long-lived mammals are particularly sensitive to changes in adult survival. Understanding factors affecting survival patterns is therefore critical for developing and testing theories of population dynamics and for developing management strategies aimed at preventing declines or extinction in such taxa. Few studies have used modern analytical approaches for analyzing variation and testing hypotheses about survival probabilities in large mammals. This paper reports a detailed analysis of annual adult survival in the Florida manatee (Trichechus manatus latirostris), an endangered marine mammal, based on a mark-recapture approach. Natural and boat-inflicted scars distinctively 'marked' individual manatees that were cataloged in a computer-based photographic system. Photo-documented resightings provided 'recaptures.' Using open population models, annual adult-survival probabilities were estimated for manatees observed in winter in three areas of Florida: Blue Spring, Crystal River, and the Atlantic coast. After using goodness-of-fit tests in Program RELEASE to search for violations of the assumptions of mark-recapture analysis, survival and sighting probabilities were modeled under several different biological hypotheses with Program SURGE. Estimates of mean annual probability of sighting varied from 0.948 for Blue Spring to 0.737 for Crystal River and 0.507 for the Atlantic coast. At Crystal River and Blue Spring, annual survival probabilities were best estimated as constant over the study period at 0.96 (95% CI = 0.951-0.975 and 0.900-0.985, respectively). On the Atlantic coast, where manatees are impacted more by human activities, annual survival probabilities had a significantly lower mean estimate of 0.91 (95% CI = 0.887-0.926) and varied unpredictably over the study period. For each study area, survival did not differ between sexes and was independent of relative adult age. The high constant adult-survival probabilities estimated
A novel approach to estimate the eruptive potential and probability in open conduit volcanoes.
De Gregorio, Sofia; Camarda, Marco
2016-01-01
In open conduit volcanoes, volatile-rich magma continuously enters the feeding system; nevertheless, the eruptive activity occurs intermittently. From a practical perspective, the continuous steady input of magma into the feeding system is not able to produce eruptive events alone; rather, surpluses of magma input are required to trigger the eruptive activity. The greater the amount of surplus magma within the feeding system, the higher is the eruptive probability. Despite this observation, eruptive potential evaluations are commonly based on the regular magma supply, and in eruptive probability evaluations any magma input generally has the same weight. Conversely, herein we present a novel approach based on the quantification of the surplus of magma progressively intruded into the feeding system. To quantify the surplus of magma, we suggest processing temporal series of measurable parameters linked to the magma supply. We successfully performed a practical application on Mt Etna using the soil CO2 flux recorded over ten years.
Estimation of failure probabilities of linear dynamic systems by importance sampling
Anna Ivanova Olsen; Arvid Naess
2006-08-01
An iterative method for estimating the failure probability for certain time-variant reliability problems has been developed. In the paper, the focus is on the displacement response of a linear oscillator driven by white noise. Failure is then assumed to occur when the displacement response exceeds a critical threshold. The iteration procedure is a two-step method. On the first iteration, a simple control function promoting failure is constructed using the design point weighting principle. After time discretization, two points are chosen to construct a compound deterministic control function. It is based on the time point when the first maximum of the homogeneous solution has occurred and on the point at the end of the considered time interval. An importance sampling technique is used in order to estimate the failure probability functional on a set of initial values of state space variables and time. On the second iteration, the concept of optimal control function can be implemented to construct a Markov control which allows much better accuracy in the failure probability estimate than the simple control function. On both iterations, the concept of changing the probability measure by the Girsanov transformation is utilized. As a result the CPU time is substantially reduced compared with the crude Monte Carlo procedure.
Efficient Estimation of first Passage Probability of high-Dimensional Nonlinear Systems
Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Bucher, Christian
2011-01-01
on the system memory. Consequently, high-dimensional problems can be handled, and nonlinearities in the model neither bring any difficulty in applying it nor lead to considerable reduction of its efficiency. These characteristics suggest that the method is a powerful candidate for complicated problems. First......, the failure probabilities of three well-known nonlinear systems are estimated. Next, a reduced degree-of-freedom model of a wind turbine is developed and is exposed to a turbulent wind field. The model incorporates very high dimensions and strong nonlinearities simultaneously. The failure probability...
Estimating Super Heavy Element Event Random Probabilities Using Monte Carlo Methods
Stoyer, Mark; Henderson, Roger; Kenneally, Jacqueline; Moody, Kenton; Nelson, Sarah; Shaughnessy, Dawn; Wilk, Philip
2009-10-01
Because superheavy element (SHE) experiments involve very low event rates and low statistics, estimating the probability that a given event sequence is due to random events is extremely important in judging the validity of the data. A Monte Carlo method developed at LLNL [1] is used on recent SHE experimental data to calculate random event probabilities. Current SHE experimental activities in collaboration with scientists at Dubna, Russia will be discussed. [1] N.J. Stoyer, et al., Nucl. Instrum. Methods Phys. Res. A 455 (2000) 433.
Hanayama, Nobutane; Sibuya, Masaaki
2016-08-01
In modern biology, theories of aging fall mainly into two groups: damage theories and programmed theories. If programmed theories are true, the probability that human beings live beyond a specific age will be zero. In contrast, if damage theories are true, such an age does not exist, and a longevity record will eventually be destroyed. In this article, to examine the real state of affairs, a special type of binomial model based on the generalized Pareto distribution has been applied to data on Japanese centenarians. From the results, it is concluded that the upper limit of the lifetime probability distribution in the Japanese population is estimated to be 123 years.
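A minimal sketch of the exceedance-over-threshold logic implied here, assuming synthetic ages rather than the centenarian records: fit a generalized Pareto distribution to lifetimes above a high threshold and, if the fitted shape parameter is negative, read off the implied finite upper endpoint.

```python
import numpy as np
from scipy import stats

threshold = 100.0
# synthetic excess lifetimes above 100 years, drawn from a GPD whose finite
# endpoint sits at threshold + scale/|shape| = 115 years (illustrative only)
excesses = stats.genpareto.rvs(c=-0.3, scale=4.5, size=800, random_state=0)

shape, loc, scale = stats.genpareto.fit(excesses, floc=0.0)
if shape < 0:
    upper_limit = threshold - scale / shape   # finite right endpoint of the lifetime distribution
    print(f"estimated upper limit of lifetime: {upper_limit:.1f} years")
else:
    print("fitted shape >= 0: no finite upper limit implied")
```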
A fast algorithm for estimating transmission probabilities in QTL detection designs with dense maps
Gilbert Hélène
2009-11-01
Background: In the case of an autosomal locus, four transmission events from the parents to progeny are possible, specified by the grand-parental origin of the alleles inherited by this individual. Computing the probabilities of these transmission events is essential for QTL detection methods. Results: A fast algorithm for the estimation of these probabilities conditional on parental phases has been developed. It is adapted to classical QTL detection designs applied to outbred populations, in particular to designs composed of half- and/or full-sib families. It assumes the absence of interference. Conclusion: The theory is fully developed and an example is given.
Inverse probability of censoring weighted estimates of Kendall's τ for gap time analyses.
Lakhal-Chaieb, Lajmi; Cook, Richard J; Lin, Xihong
2010-12-01
In life history studies, interest often lies in the analysis of the interevent, or gap times and the association between event times. Gap time analyses are challenging however, even when the length of follow-up is determined independently of the event process, because associations between gap times induce dependent censoring for second and subsequent gap times. This article discusses nonparametric estimation of the association between consecutive gap times based on Kendall's τ in the presence of this type of dependent censoring. A nonparametric estimator that uses inverse probability of censoring weights is provided. Estimates of conditional gap time distributions can be obtained following specification of a particular copula function. Simulation studies show the estimator performs well and compares favorably with an alternative estimator. Generalizations to a piecewise constant Clayton copula are given. Several simulation studies and illustrations with real data sets are also provided.
Saviane, Chiara; Silver, R Angus
2006-06-15
Synapses play a crucial role in information processing in the brain. Amplitude fluctuations of synaptic responses can be used to extract information about the mechanisms underlying synaptic transmission and its modulation. In particular, multiple-probability fluctuation analysis can be used to estimate the number of functional release sites, the mean probability of release and the amplitude of the mean quantal response from fits of the relationship between the variance and mean amplitude of postsynaptic responses, recorded at different probabilities. To determine these quantal parameters, calculate their uncertainties and the goodness-of-fit of the model, it is important to weight the contribution of each data point in the fitting procedure. We therefore investigated the errors associated with measuring the variance by determining the best estimators of the variance of the variance and have used simulations of synaptic transmission to test their accuracy and reliability under different experimental conditions. For central synapses, which generally have a low number of release sites, the amplitude distribution of synaptic responses is not normal; thus, the use of a theoretical variance of the variance based on the normal assumption is not a good approximation. However, appropriate estimators can be derived for the population and for limited sample sizes using a more general expression that involves higher moments and introducing unbiased estimators based on the h-statistics. Our results are likely to be relevant for various applications of fluctuation analysis when few channels or release sites are present.
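The variance-mean relation at the heart of multiple-probability fluctuation analysis, Var = qμ − μ²/N for N independent release sites with quantal size q and release probability varied across conditions, can be fit as in the unweighted sketch below. The point-weighting question the paper addresses is deliberately ignored, and the toy numbers are not experimental data.

```python
import numpy as np

def fit_variance_mean(means, variances):
    """Least-squares fit of Var = q*mu - mu**2 / N (a parabola through the origin),
    returning the quantal size q and the number of release sites N."""
    mu = np.asarray(means, dtype=float)
    var = np.asarray(variances, dtype=float)
    A = np.column_stack([mu, mu ** 2])        # coefficients [q, -1/N]
    (q, neg_inv_n), *_ = np.linalg.lstsq(A, var, rcond=None)
    return q, -1.0 / neg_inv_n

# toy data from N = 5 sites with quantal size q = 0.4 at several release probabilities
N_true, q_true = 5, 0.4
p = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
mu = N_true * p * q_true
var = N_true * p * (1 - p) * q_true ** 2
print(fit_variance_mean(mu, var))             # approximately (0.4, 5.0)
```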
Estimation of submarine mass failure probability from a sequence of deposits with age dates
Geist, Eric L.; Chaytor, Jason D.; Parsons, Thomas E.; ten Brink, Uri S.
2013-01-01
The empirical probability of submarine mass failure is quantified from a sequence of dated mass-transport deposits. Several different techniques are described to estimate the parameters for a suite of candidate probability models. The techniques, previously developed for analyzing paleoseismic data, include maximum likelihood and Type II (Bayesian) maximum likelihood methods derived from renewal process theory and Monte Carlo methods. The estimated mean return time from these methods, unlike estimates from a simple arithmetic mean of the center age dates and standard likelihood methods, includes the effects of age-dating uncertainty and of open time intervals before the first and after the last event. The likelihood techniques are evaluated using Akaike’s Information Criterion (AIC) and Akaike’s Bayesian Information Criterion (ABIC) to select the optimal model. The techniques are applied to mass transport deposits recorded in two Integrated Ocean Drilling Program (IODP) drill sites located in the Ursa Basin, northern Gulf of Mexico. Dates of the deposits were constrained by regional bio- and magnetostratigraphy from a previous study. Results of the analysis indicate that submarine mass failures in this location occur primarily according to a Poisson process in which failures are independent and return times follow an exponential distribution. However, some of the model results suggest that submarine mass failures may occur quasiperiodically at one of the sites (U1324). The suite of techniques described in this study provides quantitative probability estimates of submarine mass failure occurrence, for any number of deposits and age uncertainty distributions.
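The simplest version of the Poisson-process result reported here can be illustrated as follows; the deposit ages are synthetic, and the sketch ignores the age-dating uncertainty and open time intervals that the paper's likelihood methods explicitly handle.

```python
import numpy as np

# synthetic deposit ages (ka before present) over a 50 ka window; not the IODP Ursa Basin dates
ages_ka = np.array([2.1, 7.8, 14.5, 23.0, 30.2, 41.7])
window_ka = 50.0

# Poisson-process point estimates: the rate is events per unit time, and the
# chance of at least one event in a future horizon follows the exponential law
rate = len(ages_ka) / window_ka          # events per ka
mean_return_time = 1.0 / rate            # ka

horizon_ka = 5.0
p_at_least_one = 1.0 - np.exp(-rate * horizon_ka)
print(mean_return_time, p_at_least_one)
```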
Annotated corpus and the empirical evaluation of probability estimates of grammatical forms
Ševa Nada
2003-01-01
The aim of the present study is to demonstrate the usage of an annotated corpus in the field of experimental psycholinguistics. Specifically, we demonstrate how the manually annotated Corpus of Serbian Language (Kostić, Đ. 2001) can be used for probability estimates of grammatical forms, which allow the control of independent variables in psycholinguistic experiments. We address the issue of processing Serbian inflected forms within two subparadigms of feminine nouns. In regression analysis, almost all processing variability of inflected forms has been accounted for by the amount of information (i.e., bits) carried by the presented forms. In spite of the fact that probability distributions of inflected forms for the two paradigms differ, it was shown that the best prediction of processing variability is obtained by the probabilities derived from the predominant subparadigm which encompasses about 80% of feminine nouns. The relevance of annotated corpora in experimental psycholinguistics is discussed in more detail.
Estimates for the Finite-time Ruin Probability with Insurance and Financial Risks
Min ZHOU; Kai-yong WANG; Yue-bao WANG
2012-01-01
The paper gives estimates for the finite-time ruin probability with insurance and financial risks. When the distribution of the insurance risk belongs to the class L(γ) for some γ > 0 or to the subexponential distribution class, we obtain some asymptotic equivalence relationships for the finite-time ruin probability, respectively. When the distribution of the insurance risk belongs to the dominated varying-tailed distribution class, we obtain asymptotic upper and lower bounds for the finite-time ruin probability, where for the asymptotic upper bound we completely get rid of the restriction of mutual independence on insurance risks, and for the lower bound we only need the insurance risks to have a weak positive association structure. The obtained results extend and improve some existing results.
A simple method for realistic estimation of the most probable energy loss in thin gas layers
Grishin, V. M.; Merson, G. I.
1989-01-01
A simple method for the estimation of the relativistic rise of the most probable ionisation loss in thin gas layers is suggested. The method is based on the similarity of the most probable and restricted energy loss of relativistic charged particles in matter. This makes it possible to correct the Landau-Sternheimer theory by taking into account the fact that particle collisions with internal atomic electrons do not influence the most probable value of the ionisation loss. The effective values of the charge number and average ionisation potential, which are simple to calculate, are used for this correction. A similarity of the energy loss distributions for various gases and gas layers is found. This similarity is expressed in a constant fraction of the ionisation loss distribution tail area (≈1:3.5). It is this value which was used for correction of the Landau-Sternheimer formula.
Novel Approach to Estimate Missing Data Using Spatio-Temporal Estimation Method
Aniruddha D. Shelotkar
2016-04-01
With the advancement of wireless technology and the processing power of mobile devices, every handheld device supports numerous video streaming applications. Generally, the user datagram protocol (UDP) is used in video transmission technology, which does not provide assured quality of service (QoS). Therefore, there is a need for video post-processing modules for error concealment. In this paper we propose one such algorithm to recover multiple lost blocks of data in video. The proposed algorithm is based on a combination of the wavelet transform and spatio-temporal data estimation. We decompose the frame with lost blocks using the wavelet transform into low and high frequency bands. Then the approximate information (low frequency) of the missing block is estimated using spatial smoothing, and the details (high frequency) are added using bidirectional (temporal) prediction of high-frequency wavelet coefficients. Finally, the inverse wavelet transform is applied to the modified wavelet coefficients to recover the frame. In the proposed algorithm, we carry out an automatic estimation of the missing block in a spatio-temporal manner. Experiments are carried out with different YUV and compressed-domain streams. The experimental results show enhancement in PSNR as well as visual quality, cross-verified by video quality metrics (VQM).
Estimating migratory connectivity of birds when re-encounter probabilities are heterogeneous
Cohen, Emily B.; Hostelter, Jeffrey A.; Royle, J. Andrew; Marra, Peter P.
2014-01-01
Understanding the biology and conducting effective conservation of migratory species requires an understanding of migratory connectivity – the geographic linkages of populations between stages of the annual cycle. Unfortunately, for most species, we are lacking such information. The North American Bird Banding Laboratory (BBL) houses an extensive database of marking, recaptures and recoveries, and such data could provide migratory connectivity information for many species. To date, however, few species have been analyzed for migratory connectivity largely because heterogeneous re-encounter probabilities make interpretation problematic. We accounted for regional variation in re-encounter probabilities by borrowing information across species and by using effort covariates on recapture and recovery probabilities in a multistate capture–recapture and recovery model. The effort covariates were derived from recaptures and recoveries of species within the same regions. We estimated the migratory connectivity for three tern species breeding in North America and over-wintering in the tropics, common (Sterna hirundo), roseate (Sterna dougallii), and Caspian terns (Hydroprogne caspia). For western breeding terns, model-derived estimates of migratory connectivity differed considerably from those derived directly from the proportions of re-encounters. Conversely, for eastern breeding terns, estimates were merely refined by the inclusion of re-encounter probabilities. In general, eastern breeding terns were strongly connected to eastern South America, and western breeding terns were strongly linked to the more western parts of the nonbreeding range under both models. Through simulation, we found this approach is likely useful for many species in the BBL database, although precision improved with higher re-encounter probabilities and stronger migratory connectivity. We describe an approach to deal with the inherent biases in BBL banding and re-encounter data to demonstrate
Time-Varying Transition Probability Matrix Estimation and Its Application to Brand Share Analysis.
Chiba, Tomoaki; Hino, Hideitsu; Akaho, Shotaro; Murata, Noboru
2017-01-01
In a product market or stock market, different products or stocks compete for the same consumers or purchasers. We propose a method to estimate the time-varying transition matrix of the product share using a multivariate time series of the product share. The method is based on the assumption that each of the observed time series of shares is a stationary distribution of the underlying Markov processes characterized by transition probability matrices. We estimate transition probability matrices for every observation under natural assumptions. We demonstrate, on a real-world dataset of the share of automobiles, that the proposed method can find intrinsic transition of shares. The resulting transition matrices reveal interesting phenomena, for example, the change in flows between TOYOTA group and GM group for the fiscal year where TOYOTA group's sales beat GM's sales, which is a reasonable scenario.
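A minimal sketch of the simplest version of this problem — fitting one time-invariant row-stochastic matrix P so that s_{t+1} ≈ s_t P by constrained least squares — is given below with toy share trajectories. The paper's method additionally lets the transition matrix vary over time and treats each observed share vector as a stationary distribution of the underlying Markov process.

```python
import numpy as np
from scipy.optimize import minimize

def estimate_transition_matrix(shares_t, shares_t1):
    """Constrained least-squares fit of a row-stochastic matrix P such that
    shares_t @ P approximates shares_t1 (rows of P are non-negative and sum to 1)."""
    k = shares_t.shape[1]

    def loss(flat):
        P = flat.reshape(k, k)
        return np.sum((shares_t @ P - shares_t1) ** 2)

    cons = [{"type": "eq", "fun": lambda flat, i=i: flat.reshape(k, k)[i].sum() - 1.0}
            for i in range(k)]
    res = minimize(loss, np.full(k * k, 1.0 / k), method="SLSQP",
                   bounds=[(0.0, 1.0)] * (k * k), constraints=cons)
    return res.x.reshape(k, k)

# toy data: three share trajectories simulated from a known transition matrix
P_true = np.array([[0.90, 0.08, 0.02],
                   [0.05, 0.90, 0.05],
                   [0.02, 0.08, 0.90]])
pairs = []
for s in np.eye(3):                       # start each trajectory from a single-brand market
    for _ in range(10):
        pairs.append((s, s @ P_true))
        s = s @ P_true
shares_t = np.array([a for a, _ in pairs])
shares_t1 = np.array([b for _, b in pairs])
print(np.round(estimate_transition_matrix(shares_t, shares_t1), 2))
```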
Fung, D. C. N.; Wang, J. P.; Chang, S. H.; Chang, S. C.
2014-12-01
Using a revised statistical model built on past seismic probability models, the probability of different magnitude earthquakes occurring within variable timespans can be estimated. The revised model is based on the Poisson distribution and includes the use of best-estimate values of the probability distribution of different magnitude earthquakes recurring on a fault from literature sources. Our study aims to apply this model to the Taipei metropolitan area, with a population of 7 million, which lies in the Taipei Basin and is bounded by two normal faults: the Sanchaio and Taipei faults. The Sanchaio fault is suggested to be responsible for previous large magnitude earthquakes, such as the 1694 magnitude 7 earthquake in northwestern Taipei (Cheng et al., 2010). Based on a magnitude 7 earthquake return period of 543 years, the model predicts the occurrence of a magnitude 7 earthquake within 20 years at 1.81%, within 79 years at 6.77% and within 300 years at 21.22%. These estimates increase significantly when considering a magnitude 6 earthquake; the chance of one occurring within the next 20 years is estimated to be 3.61%, within 79 years at 13.54% and within 300 years at 42.45%. The 79 year period represents the average lifespan of the Taiwan population. In contrast, based on data from 2013, the probability of Taiwan residents experiencing heart disease or malignant neoplasm is 11.5% and 29%, respectively. The inference of this study is that the calculated risk to the Taipei population from a potentially damaging magnitude 6 or greater earthquake occurring within their lifetime is just as great as that of suffering from a heart attack or other health ailments.
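The bare Poisson occurrence probability underlying such statements is P(at least one event within t years) = 1 − exp(−t/T) for a mean return period T. The study's full model additionally weights by the magnitude-recurrence distribution, so its quoted percentages are not reproduced exactly by this calculation.

```python
import numpy as np

return_period_years = 543.0              # mean recurrence of the reference event
rate = 1.0 / return_period_years         # events per year under a Poisson model

for horizon in (20, 79, 300):
    p = 1.0 - np.exp(-rate * horizon)
    print(f"P(at least one event within {horizon} years) = {p:.2%}")
```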
Remediating Non-Positive Definite State Covariances for Collision Probability Estimation
Hall, Doyle T.; Hejduk, Matthew D.; Johnson, Lauren C.
2017-01-01
The NASA Conjunction Assessment Risk Analysis team estimates the probability of collision (Pc) for a set of Earth-orbiting satellites. The Pc estimation software processes satellite position+velocity states and their associated covariance matrices. On occasion, the software encounters non-positive definite (NPD) state covariances, which can adversely affect or prevent the Pc estimation process. Interpolation inaccuracies appear to account for the majority of such covariances, although other mechanisms contribute also. This paper investigates the origin of NPD state covariance matrices, three different methods for remediating these covariances when and if necessary, and the associated effects on the Pc estimation process.
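One common remediation of this kind (not necessarily one of the three methods compared in the paper) is to clip negative eigenvalues of the symmetrised covariance and reconstruct; a minimal sketch with an illustrative 3x3 matrix:

```python
import numpy as np

def clip_to_positive_definite(cov, eps=1e-12):
    """Symmetrise the matrix, clip eigenvalues below eps, and reconstruct,
    yielding a positive definite replacement for an NPD covariance."""
    sym = 0.5 * (cov + cov.T)
    eigval, eigvec = np.linalg.eigh(sym)
    return eigvec @ np.diag(np.clip(eigval, eps, None)) @ eigvec.T

# toy 3x3 "covariance" with one slightly negative eigenvalue (e.g. from interpolation error)
cov = np.array([[1.0, 0.9, 0.7],
                [0.9, 1.0, 0.95],
                [0.7, 0.95, 1.0]])
print(np.linalg.eigvalsh(cov))                              # one eigenvalue is negative
print(np.linalg.eigvalsh(clip_to_positive_definite(cov)))   # all eigenvalues positive
```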
Bousquet, Nicolas
2010-01-01
This article deals with the estimation of a probability p of an undesirable event. Its occurrence is formalized by the exceedance of a threshold reliability value by the unidimensional output of a time-consuming computer code G with multivariate probabilistic input X. When G is assumed monotonous with respect to X, the Monotonous Reliability Method was proposed by de Rocquigny (2009) in an engineering context to provide sequentially narrowing 100%-confidence bounds and a crude estimate of p, via deterministic or stochastic designs of experiments. The present article consists of a formalization and technical deepening of this idea, as a broad basis for future theoretical and applied studies. Three kinds of results are especially emphasized. First, the bounds themselves remain too crude and conservative estimators of p when the dimension of X is larger than 2. Second, a maximum-likelihood estimator of p can be easily built, presenting a high variance reduction with respect to a standard Monte Carlo case, but suffering ...
Cavuoti, Stefano; Brescia, Massimo; Vellucci, Civita; Tortora, Crescenzo; Longo, Giuseppe
2016-01-01
A variety of fundamental astrophysical science topics require the determination of very accurate photometric redshifts (photo-z's). A plethora of methods has been developed, based either on template model fitting or on empirical explorations of the photometric parameter space. Machine-learning-based techniques are not explicitly dependent on the physical priors and are able to produce accurate photo-z estimations within the photometric ranges derived from the spectroscopic training set. These estimates, however, are not easy to characterize in terms of a photo-z Probability Density Function (PDF), due to the fact that the analytical relation mapping the photometric parameters onto the redshift space is virtually unknown. We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method designed to provide a reliable PDF of the error distribution for empirical techniques. The method is implemented as a modular workflow, whose internal engine for photo-z estimation makes use...
PIGS: improved estimates of identity-by-descent probabilities by probabilistic IBD graph sampling.
Park, Danny S; Baran, Yael; Hormozdiari, Farhad; Eng, Celeste; Torgerson, Dara G; Burchard, Esteban G; Zaitlen, Noah
2015-01-01
Identifying segments in the genome of different individuals that are identical-by-descent (IBD) is a fundamental element of genetics. IBD data is used for numerous applications including demographic inference, heritability estimation, and mapping disease loci. Simultaneous detection of IBD over multiple haplotypes has proven to be computationally difficult. To overcome this, many state of the art methods estimate the probability of IBD between each pair of haplotypes separately. While computationally efficient, these methods fail to leverage the clique structure of IBD resulting in less powerful IBD identification, especially for small IBD segments.
The Probability of Default Under IFRS 9: Multi-period Estimation and Macroeconomic Forecast
Tomáš Vaněk
2017-01-01
In this paper we propose a straightforward, flexible and intuitive computational framework for the multi-period probability of default estimation incorporating macroeconomic forecasts. The concept is based on Markov models, the estimated economic adjustment coefficient and the official economic forecasts of the Czech National Bank. The economic forecasts are taken into account in a separate step to better distinguish between idiosyncratic and systemic risk. This approach is also attractive from the interpretational point of view. The proposed framework can be used especially when calculating lifetime expected credit losses under IFRS 9.
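The Markov backbone of such a framework can be sketched as follows: raise a one-period rating transition matrix with an absorbing default state to successive powers and read off cumulative PDs. The matrix values are placeholders, and the economic-adjustment coefficient and central-bank forecast overlay described in the paper are omitted.

```python
import numpy as np

# illustrative one-year transition matrix (states: performing, watch, default);
# default is absorbing and the values are placeholders, not calibrated figures
P = np.array([[0.92, 0.06, 0.02],
              [0.30, 0.60, 0.10],
              [0.00, 0.00, 1.00]])

def cumulative_pd(P, start_state, horizons):
    """Cumulative probability of having defaulted by each horizon (in periods)."""
    out, Pk = [], np.eye(P.shape[0])
    for _ in range(max(horizons)):
        Pk = Pk @ P
        out.append(Pk[start_state, -1])
    return [out[h - 1] for h in horizons]

# lifetime-style PD profile for a performing exposure over 1, 3 and 5 years
print(cumulative_pd(P, start_state=0, horizons=[1, 3, 5]))
```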
Isabel C. Pérez Hoyos
2016-04-01
Groundwater Dependent Ecosystems (GDEs) are increasingly threatened by humans' rising demand for water resources. Consequently, it is imperative to identify the location of GDEs to protect them. This paper develops a methodology to identify the probability of an ecosystem being groundwater dependent. Probabilities are obtained by modeling the relationship between the known locations of GDEs and factors influencing groundwater dependence, namely water table depth and climatic aridity index. Probabilities are derived for the state of Nevada, USA, using modeled water table depth and aridity index values obtained from the Global Aridity database. The model selected results from the performance comparison of classification trees (CT) and random forests (RF). Based on a threshold-independent accuracy measure, RF has a better ability to generate probability estimates. Considering a threshold that minimizes the misclassification rate for each model, RF also proves to be more accurate. Regarding training accuracy, performance measures such as accuracy, sensitivity, and specificity are higher for RF. For the test set, higher values of accuracy and kappa for CT highlight the fact that these measures are greatly affected by low prevalence. As shown for RF, the choice of the cutoff probability value has important consequences on model accuracy and the overall proportion of locations where GDEs are found.
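A minimal sketch of the classifier-probability step — fit a random forest on water-table depth and aridity index at labelled GDE / non-GDE sites and read predicted class probabilities at new locations — using entirely synthetic training data and an illustrative, assumed relationship between the predictors and groundwater dependence:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1000
water_table_depth = rng.uniform(0, 100, n)   # metres below surface (synthetic)
aridity_index = rng.uniform(0.05, 1.0, n)    # P/PET, lower = more arid (synthetic)
# synthetic labels: shallow water tables in arid settings are marked as GDEs more
# often, purely to give the toy classifier something to learn
logit = 3.0 - 0.08 * water_table_depth - 2.0 * aridity_index
is_gde = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([water_table_depth, aridity_index])
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, is_gde)

# predicted probability of groundwater dependence at three hypothetical locations
new_sites = np.array([[5.0, 0.2], [60.0, 0.2], [5.0, 0.9]])
print(rf.predict_proba(new_sites)[:, 1])
```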
Robust Estimation of Temporal Resistivity Variations from Magnetotelluric Data
Cortés-Arroyo, O. J.; Romo, J. M.; Gomez-Trevino, E.
2016-12-01
In recent years there has been a worldwide increase of projects related to fluid injection, such as enhanced geothermal systems, CO2 sequestration and/or fracture monitoring studies. For the success of such projects, full knowledge of the fluid penetration and propagation is needed, making monitoring techniques sensitive to the fluid displacement an essential tool. The magnetotelluric (MT) method is a widely used geophysical technique in tectonic and reservoir exploration studies, where its sensitivity to changes in the electrical resistivity of rocks makes it a very promising tool for monitoring applications. Several authors have already reported experiments using the MT method to monitor the effects produced by the injection of fluids in the subsurface. Most of them analyze the changes registered in the variables measured at the surface, particularly the apparent resistivity and the impedance phase. However, few studies have tried to estimate the changes in the ground resistivity from the observable data at the surface. The main difficulty in this kind of problem is that full control of the geo-electric structure is needed a priori, and distortion effects in the measured data can lead to the estimation of false temporal or spatial variations. In this work we remove distortion in the data by applying a combination of the phase tensor and the quadratic equation, and propose a new technique to estimate ground resistivity variations by applying a 1D inversion scheme, based on a relationship between changes in resistivity at depth and observed changes of rotation-invariant MT responses, applying the Marquardt-Levenberg regularization technique. We test the method using synthetic data and then apply it to real MT data sets collected before and after the 2010 Mw 7.2 earthquake in the Mexicali Valley, Mexico. We also apply it to data sets registered continuously by permanent electromagnetic monitoring stations. Keywords: Magnetotellurics, continuous monitoring, regularized
Using optimal transport theory to estimate transition probabilities in metapopulation dynamics
Nichols, Jonathan M.; Spendelow, Jeffrey A.; Nichols, James
2017-01-01
This work considers the estimation of transition probabilities associated with populations moving among multiple spatial locations based on numbers of individuals at each location at two points in time. The problem is generally underdetermined as there exists an extremely large number of ways in which individuals can move from one set of locations to another. A unique solution therefore requires a constraint. The theory of optimal transport provides such a constraint in the form of a cost function, to be minimized in expectation over the space of possible transition matrices. We demonstrate the optimal transport approach on marked bird data and compare to the probabilities obtained via maximum likelihood estimation based on marked individuals. It is shown that by choosing the squared Euclidean distance as the cost, the estimated transition probabilities compare favorably to those obtained via maximum likelihood with marked individuals. Other implications of this cost are discussed, including the ability to accurately interpolate the population's spatial distribution at unobserved points in time and the more general relationship between the cost and minimum transport energy.
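The optimal-transport constraint described here reduces, for a small example, to a linear program: find the flow of individuals between sites that reproduces the observed counts at the two occasions while minimising total squared-Euclidean cost, then row-normalise the flow to obtain transition probabilities. The site coordinates and counts below are illustrative, not the marked-bird data analysed in the paper.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.spatial.distance import cdist

# illustrative site coordinates and counts at two survey occasions (equal totals)
sites = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
counts_t1 = np.array([100.0, 50.0, 30.0])
counts_t2 = np.array([60.0, 70.0, 50.0])

k = len(sites)
cost = cdist(sites, sites, "sqeuclidean").ravel()   # c_ij = ||x_i - x_j||^2

# equality constraints: flow out of site i sums to counts_t1[i],
# flow into site j sums to counts_t2[j]
A_eq, b_eq = [], []
for i in range(k):
    row = np.zeros(k * k); row[i * k:(i + 1) * k] = 1.0
    A_eq.append(row); b_eq.append(counts_t1[i])
for j in range(k):
    col = np.zeros(k * k); col[j::k] = 1.0
    A_eq.append(col); b_eq.append(counts_t2[j])

res = linprog(cost, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
flow = res.x.reshape(k, k)
transition_probs = flow / counts_t1[:, None]        # row-normalise the optimal flow
print(np.round(transition_probs, 3))
```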
Diaz, P. M. A.; Feitosa, R. Q.; Sanches, I. D.; Costa, G. A. O. P.
2016-06-01
This paper presents a method to estimate the temporal interaction in a Conditional Random Field (CRF) based approach for crop recognition from multitemporal remote sensing image sequences. This approach models the phenology of different crop types as a CRF. Interaction potentials are assumed to depend only on the class labels of an image site at two consecutive epochs. In the proposed method, the estimation of temporal interaction parameters is considered as an optimization problem, whose goal is to find the transition matrix that maximizes the CRF performance, upon a set of labelled data. The objective functions underlying the optimization procedure can be formulated in terms of different accuracy metrics, such as overall and average class accuracy per crop or phenological stage. To validate the proposed approach, experiments were carried out upon a dataset consisting of 12 co-registered LANDSAT images of a region in the southeast of Brazil. Pattern Search was used as the optimization algorithm. The experimental results demonstrated that the proposed method was able to substantially outperform estimates related to joint or conditional class transition probabilities, which rely on training samples.
Estimation of probability of failure for damage-tolerant aerospace structures
Halbert, Keith
The majority of aircraft structures are designed to be damage-tolerant such that safe operation can continue in the presence of minor damage. It is necessary to schedule inspections so that minor damage can be found and repaired. It is generally not possible to perform structural inspections prior to every flight. The scheduling is traditionally accomplished through a deterministic set of methods referred to as Damage Tolerance Analysis (DTA). DTA has proven to produce safe aircraft but does not provide estimates of the probability of failure of future flights or the probability of repair of future inspections. Without these estimates maintenance costs cannot be accurately predicted. Also, estimation of failure probabilities is now a regulatory requirement for some aircraft. The set of methods concerned with the probabilistic formulation of this problem are collectively referred to as Probabilistic Damage Tolerance Analysis (PDTA). The goal of PDTA is to control the failure probability while holding maintenance costs to a reasonable level. This work focuses specifically on PDTA for fatigue cracking of metallic aircraft structures. The growth of a crack (or cracks) must be modeled using all available data and engineering knowledge. The length of a crack can be assessed only indirectly through evidence such as non-destructive inspection results, failures or lack of failures, and the observed severity of usage of the structure. The current set of industry PDTA tools are lacking in several ways: they may in some cases yield poor estimates of failure probabilities, they cannot realistically represent the variety of possible failure and maintenance scenarios, and they do not allow for model updates which incorporate observed evidence. A PDTA modeling methodology must be flexible enough to estimate accurately the failure and repair probabilities under a variety of maintenance scenarios, and be capable of incorporating observed evidence as it becomes available. This
On estimating probability of presence from use-availability or presence-background data.
Phillips, Steven J; Elith, Jane
2013-06-01
A fundamental ecological modeling task is to estimate the probability that a species is present in (or uses) a site, conditional on environmental variables. For many species, available data consist of "presence" data (locations where the species [or evidence of it] has been observed), together with "background" data, a random sample of available environmental conditions. Recently published papers disagree on whether probability of presence is identifiable from such presence-background data alone. This paper aims to resolve the disagreement, demonstrating that additional information is required. We defined seven simulated species representing various simple shapes of response to environmental variables (constant, linear, convex, unimodal, S-shaped) and ran five logistic model-fitting methods using 1000 presence samples and 10 000 background samples; the simulations were repeated 100 times. The experiment revealed a stark contrast between two groups of methods: those based on a strong assumption that species' true probability of presence exactly matches a given parametric form had highly variable predictions and much larger RMS error than methods that take population prevalence (the fraction of sites in which the species is present) as an additional parameter. For six species, the former group grossly under- or overestimated probability of presence. The cause was not model structure or choice of link function, because all methods were logistic with linear and, where necessary, quadratic terms. Rather, the experiment demonstrates that an estimate of prevalence is not just helpful, but is necessary (except in special cases) for identifying probability of presence. We therefore advise against use of methods that rely on the strong assumption, due to Lele and Keim (recently advocated by Royle et al.) and Lancaster and Imbens. The methods are fragile, and their strong assumption is unlikely to be true in practice. We emphasize, however, that we are not arguing against
2016-01-01
We have simulated similar features of the well-known classical phenomena in the quantum domain under the formalism of the probability amplitude method. The identical pattern of interference fringes of a Fabry–Perot interferometer (especially in reflection mode) is obtained through the power-broadened spectral line shape of the population distribution in the excited state with careful delineation of a coherently driven two-level atomic model. In a unit wavelength domain, such a pattern can be substa...
Barengoltz, Jack
2016-07-01
Monte Carlo (MC) is a common method to estimate probability, effectively by a simulation. For planetary protection, it may be used to estimate the probability of impact P_I by a launch vehicle (upper stage) of a protected planet. The object of the analysis is to provide a value for P_I with a given level of confidence (LOC) that the true value does not exceed the maximum allowed value of P_I. In order to determine the number of MC histories required, one must also guess the maximum number of hits that will occur in the analysis. This extra parameter is needed because a LOC is desired. If more hits occur, the MC analysis would indicate that the true value may exceed the specification value with a higher probability than the LOC. (In the worst case, even the mean value of the estimated P_I might exceed the specification value.) After the analysis is conducted, the actual number of hits is, of course, the mean. The number of hits arises from a small probability per history and a large number of histories; these are the classic requirements for a Poisson distribution. For a known Poisson distribution (the mean is the only parameter), the probability for some interval in the number of hits is calculable. Before the analysis, this is not possible. Fortunately, there are methods that can bound the unknown mean for a Poisson distribution. F. Garwood (1936, "Fiducial limits for the Poisson distribution," Biometrika 28, 437-442) published an appropriate method that uses the Chi-squared function, actually its inverse (the integral Chi-squared function would yield probability α as a function of the mean μ and an actual value n), despite the notation used. Garwood's formula for the upper and lower limits of the mean μ with the two-tailed probability 1-α depends on the LOC α and an estimated value of the number of "successes" n. In a MC analysis for planetary protection, only the upper limit is of interest, i.e., the single
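A minimal sketch of the one-sided version of this bound, assuming the usual parameterization of Garwood's Chi-squared limit (upper limit on the Poisson mean = χ²_LOC(2n+2)/2); the hit count and number of histories below are purely illustrative:

```python
# Sketch (not from the paper): one-sided Garwood upper limit on a Poisson mean,
# applied to bounding an impact probability estimated from a Monte Carlo run.
from scipy.stats import chi2

def garwood_upper_limit(n_hits, loc=0.95):
    """Upper confidence limit on the Poisson mean given n_hits, at confidence LOC."""
    return 0.5 * chi2.ppf(loc, 2 * n_hits + 2)

n_hits, n_histories = 3, 1_000_000          # hypothetical MC outcome
mu_upper = garwood_upper_limit(n_hits, loc=0.95)
p_impact_upper = mu_upper / n_histories     # bound on P_I at the stated LOC
print(f"upper limit on mean hits: {mu_upper:.2f}, on P_I: {p_impact_upper:.2e}")
```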
Estimates of EPSP amplitude based on changes in motoneuron discharge rate and probability.
Powers, Randall K; Türker, K S
2010-10-01
When motor units are discharging tonically, transient excitatory synaptic inputs produce an increase in the probability of spike occurrence and also increase the instantaneous discharge rate. Several researchers have proposed that these induced changes in discharge rate and probability can be used to estimate the amplitude of the underlying excitatory post-synaptic potential (EPSP). We tested two different methods of estimating EPSP amplitude by comparing the amplitude of simulated EPSPs with their effects on the discharge of rat hypoglossal motoneurons recorded in an in vitro brainstem slice preparation. The first estimation method (simplified-trajectory method) is based on the assumptions that the membrane potential trajectory between spikes can be approximated by a 10 mV post-spike hyperpolarization followed by a linear rise to the next spike and that EPSPs sum linearly with this trajectory. We hypothesized that this estimation method would not be accurate due to interspike variations in membrane conductance and firing threshold that are not included in the model and that an alternative method based on estimating the effective distance to threshold would provide more accurate estimates of EPSP amplitude. This second method (distance-to-threshold method) uses interspike interval statistics to estimate the effective distance to threshold throughout the interspike interval and incorporates this distance-to-threshold trajectory into a threshold-crossing model. We found that the first method systematically overestimated the amplitude of small (<5 mV) EPSPs and underestimated the amplitude of large (>5 mV) EPSPs. For large EPSPs, the degree of underestimation increased with increasing background discharge rate. Estimates based on the second method were more accurate for small EPSPs than those based on the first model, but estimation errors were still large for large EPSPs. These errors were likely due to two factors: (1) the distance to threshold can only be directly
Spatio-Temporal Matching for Human Pose Estimation in Video.
Zhou, Feng; Torre, Fernando De la
2016-08-01
Detection and tracking of humans in videos have been long-standing problems in computer vision. Most successful approaches (e.g., deformable parts models) heavily rely on discriminative models to build appearance detectors for body joints and generative models to constrain possible body configurations (e.g., trees). While these 2D models have been successfully applied to images (and with less success to videos), a major challenge is to generalize these models to cope with camera views. In order to achieve view-invariance, these 2D models typically require a large amount of training data across views that is difficult to gather and time-consuming to label. Unlike existing 2D models, this paper formulates the problem of human detection in videos as spatio-temporal matching (STM) between a 3D motion capture model and trajectories in videos. Our algorithm estimates the camera view and selects a subset of tracked trajectories that matches the motion of the 3D model. The STM is efficiently solved with linear programming, and it is robust to tracking mismatches, occlusions and outliers. To the best of our knowledge this is the first paper that solves the correspondence between video and 3D motion capture data for human pose detection. Experiments on the CMU motion capture, Human3.6M, Berkeley MHAD and CMU MAD databases illustrate the benefits of our method over state-of-the-art approaches.
Saarela, Olli; Liu, Zhihui Amy
2016-10-15
Marginal structural Cox models are used for quantifying marginal treatment effects on outcome event hazard function. Such models are estimated using inverse probability of treatment and censoring (IPTC) weighting, which properly accounts for the impact of time-dependent confounders, avoiding conditioning on factors on the causal pathway. To estimate the IPTC weights, the treatment assignment mechanism is conventionally modeled in discrete time. While this is natural in situations where treatment information is recorded at scheduled follow-up visits, in other contexts, the events specifying the treatment history can be modeled in continuous time using the tools of event history analysis. This is particularly the case for treatment procedures, such as surgeries. In this paper, we propose a novel approach for flexible parametric estimation of continuous-time IPTC weights and illustrate it in assessing the relationship between metastasectomy and mortality in metastatic renal cell carcinoma patients. Copyright © 2016 John Wiley & Sons, Ltd.
Sadeh, Iftach; Lahav, Ofer
2015-01-01
We present ANNz2, a new implementation of the public software for photometric redshift (photo-z) estimation of Collister and Lahav (2004). Large photometric galaxy surveys are important for cosmological studies, and in particular for characterizing the nature of dark energy. The success of such surveys greatly depends on the ability to measure photo-zs, based on limited spectral data. ANNz2 utilizes multiple machine learning methods, such as artificial neural networks, boosted decision/regression trees and k-nearest neighbours. The objective of the algorithm is to dynamically optimize the performance of the photo-z estimation, and to properly derive the associated uncertainties. In addition to single-value solutions, the new code also generates full probability density functions (PDFs) in two different ways. In addition, estimators are incorporated to mitigate possible problems of spectroscopic training samples which are not representative or are incomplete. ANNz2 is also adapted to provide optimized solution...
ZHANG Hua; WANG Yun-jia; LI Yong-feng
2009-01-01
A new mathematical model to estimate the parameters of the probability-integral method for mining subsidence prediction is proposed. Based on least squares support vector machine (LS-SVM) theory, it is capable of improving the precision and reliability of mining subsidence prediction. Many of the geological and mining factors involved are related in a nonlinear way. The new model is based on statistical learning theory (SLT) and empirical risk minimization (ERM) principles. Typical data collected from observation stations were used for the learning and training samples. The calculated results from the LS-SVM model were compared with the prediction results of a back propagation neural network (BPNN) model. The results show that the parameters were more precisely predicted by the LS-SVM model than by the BPNN model. The LS-SVM model was faster in computation and had better generalized performance. It provides a highly effective method for calculating the predicting parameters of the probability-integral method.
Lamb, Jennifer Y.; Waddle, J. Hardin; Qualls, Carl P.
2017-01-01
Large gaps exist in our knowledge of the ecology of stream-breeding plethodontid salamanders in the Gulf Coastal Plain. Data describing where these salamanders are likely to occur along environmental gradients, as well as their likelihood of detection, are important for the prevention and management of amphibian declines. We used presence/absence data from leaf litter bag surveys and a hierarchical Bayesian multispecies single-season occupancy model to estimate the occurrence of five species of plethodontids across reaches in headwater streams in the Gulf Coastal Plain. Average detection probabilities were high (range = 0.432–0.942) and unaffected by sampling covariates specific to the use of litter bags (i.e., bag submergence, sampling season, in-stream cover). Estimates of occurrence probabilities differed substantially between species (range = 0.092–0.703) and were influenced by the size of the upstream drainage area and by the maximum proportion of the reach that dried. The effects of these two factors were not equivalent across species. Our results demonstrate that hierarchical multispecies models successfully estimate occurrence parameters for both rare and common stream-breeding plethodontids. The resulting models clarify how species are distributed within stream networks, and they provide baseline values that will be useful in evaluating the conservation statuses of plethodontid species within lotic systems in the Gulf Coastal Plain.
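For readers unfamiliar with occupancy models, the following is a minimal single-species, single-season sketch of how the likelihood separates occurrence (psi) from detection (p); it is not the authors' hierarchical Bayesian multispecies model, and the detection histories are invented:

```python
# Sketch: maximum-likelihood fit of a basic occupancy model (psi, p) to
# site-by-visit detection/non-detection data.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

y = np.array([[1, 0, 1], [0, 0, 0], [1, 1, 0], [0, 0, 0]])  # sites x visits

def neg_log_lik(theta, y):
    psi, p = expit(theta)                        # keep both probabilities in (0, 1)
    det = p ** y * (1 - p) ** (1 - y)
    lik_occupied = psi * det.prod(axis=1)        # occupied and observed history
    lik_empty = (1 - psi) * (y.sum(axis=1) == 0) # unoccupied explains all-zero rows
    return -np.log(lik_occupied + lik_empty).sum()

fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(y,))
psi_hat, p_hat = expit(fit.x)
print(f"occupancy ~ {psi_hat:.2f}, detection ~ {p_hat:.2f}")
```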
Estimating superpopulation size and annual probability of breeding for pond-breeding salamanders
Kinkead, K.E.; Otis, D.L.
2007-01-01
It has long been accepted that amphibians can skip breeding in any given year, and environmental conditions act as a cue for breeding. In this paper, we quantify temporary emigration or nonbreeding probability for mole and spotted salamanders (Ambystoma talpoideum and A. maculatum). We estimated that 70% of mole salamanders may skip breeding during an average rainfall year and 90% may skip during a drought year. Spotted salamanders may be more likely to breed, with only 17% avoiding the breeding pond during an average rainfall year. We illustrate how superpopulations can be estimated using temporary emigration probability estimates. The superpopulation is the total number of salamanders associated with a given breeding pond. Although most salamanders stay within a certain distance of a breeding pond for the majority of their life spans, it is difficult to determine true overall population sizes for a given site if animals are only captured during a brief time frame each year with some animals unavailable for capture at any time during a given year. © 2007 by The Herpetologists' League, Inc.
Lukeš, Tomáš; Křížek, Pavel; Švindrych, Zdeněk; Benda, Jakub; Ovesný, Martin; Fliegel, Karel; Klíma, Miloš; Hagen, Guy M
2014-12-01
We introduce and demonstrate a new high performance image reconstruction method for super-resolution structured illumination microscopy based on maximum a posteriori probability estimation (MAP-SIM). Imaging performance is demonstrated on a variety of fluorescent samples of different thickness, labeling density and noise levels. The method provides good suppression of out of focus light, improves spatial resolution, and allows reconstruction of both 2D and 3D images of cells even in the case of weak signals. The method can be used to process both optical sectioning and super-resolution structured illumination microscopy data to create high quality super-resolution images.
Estimating the probability of allelic drop-out of STR alleles in forensic genetics
Tvedebrink, Torben; Eriksen, Poul Svante; Mogensen, Helle Smidt;
2009-01-01
In crime cases with available DNA evidence, the amount of DNA is often sparse due to the setting of the crime. In such cases, allelic drop-out of one or more true alleles in STR typing is possible. We present a statistical model for estimating the per locus and overall probability of allelic drop-out using the results of all STR loci in the case sample as reference. The methodology of logistic regression is appropriate for this analysis, and we demonstrate how to incorporate this in a forensic genetic framework.
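A hedged sketch of the logistic-regression step mentioned, using average peak height as an assumed proxy for the amount of DNA (the covariate choice and all data below are fabricated for illustration, not the authors' model):

```python
# Sketch: logistic regression of allelic drop-out on log peak height.
import numpy as np
import statsmodels.api as sm

H = np.array([80, 120, 150, 200, 300, 450, 600, 900, 1200, 2000])  # RFU, invented
dropout = np.array([1, 1, 1, 0, 1, 0, 0, 0, 0, 0])                 # 1 = allele dropped out

X = sm.add_constant(np.log(H))            # log peak height as predictor
model = sm.Logit(dropout, X).fit(disp=0)
print(model.params)                       # intercept and slope on log(H)
print(model.predict(sm.add_constant(np.log([100, 500]))))  # P(drop-out) at H = 100, 500
```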
On the method of logarithmic cumulants for parametric probability density function estimation.
Krylov, Vladimir A; Moser, Gabriele; Serpico, Sebastiano B; Zerubia, Josiane
2013-10-01
Parameter estimation of probability density functions is one of the major steps in the area of statistical image and signal processing. In this paper we explore several properties and limitations of the recently proposed method of logarithmic cumulants (MoLC) parameter estimation approach which is an alternative to the classical maximum likelihood (ML) and method of moments (MoM) approaches. We derive the general sufficient condition for a strong consistency of the MoLC estimates which represents an important asymptotic property of any statistical estimator. This result enables the demonstration of the strong consistency of MoLC estimates for a selection of widely used distribution families originating from (but not restricted to) synthetic aperture radar image processing. We then derive the analytical conditions of applicability of MoLC to samples for the distribution families in our selection. Finally, we conduct various synthetic and real data experiments to assess the comparative properties, applicability and small sample performance of MoLC notably for the generalized gamma and K families of distributions. Supervised image classification experiments are considered for medical ultrasound and remote-sensing SAR imagery. The obtained results suggest that MoLC is a feasible and computationally fast yet not universally applicable alternative to MoM. MoLC becomes especially useful when the direct ML approach turns out to be unfeasible.
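As a concrete instance for one of the simpler families, a sketch of MoLC for the ordinary gamma distribution (the generalized gamma and K cases use the same log-cumulant matching with more equations); the sample here is simulated:

```python
# Sketch: method of log-cumulants for gamma(shape k, scale theta).
# Match sample log-cumulants to psi(k) + ln(theta) and trigamma(k).
import numpy as np
from scipy.special import digamma, polygamma
from scipy.optimize import brentq

rng = np.random.default_rng(0)
x = rng.gamma(shape=2.5, scale=3.0, size=5000)

c1 = np.mean(np.log(x))                   # first sample log-cumulant
c2 = np.var(np.log(x))                    # second sample log-cumulant

# Solve trigamma(k) = c2 for k (trigamma is strictly decreasing on (0, inf)).
k_hat = brentq(lambda k: polygamma(1, k) - c2, 1e-3, 1e3)
theta_hat = np.exp(c1 - digamma(k_hat))
print(k_hat, theta_hat)                   # should be near (2.5, 3.0)
```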
Estimated Probability of a Cervical Spine Injury During an ISS Mission
Brooker, John E.; Weaver, Aaron S.; Myers, Jerry G.
2013-01-01
Introduction: The Integrated Medical Model (IMM) utilizes historical data, cohort data, and external simulations as input factors to provide estimates of crew health, resource utilization and mission outcomes. The Cervical Spine Injury Module (CSIM) is an external simulation designed to provide the IMM with parameter estimates for 1) a probability distribution function (PDF) of the incidence rate, 2) the mean incidence rate, and 3) the standard deviation associated with the mean resulting from injury/trauma of the neck. Methods: An injury mechanism based on an idealized low-velocity blunt impact to the superior posterior thorax of an ISS crewmember was used as the simulated mission environment. As a result of this impact, the cervical spine is inertially loaded from the mass of the head producing an extension-flexion motion deforming the soft tissues of the neck. A multibody biomechanical model was developed to estimate the kinematic and dynamic response of the head-neck system from a prescribed acceleration profile. Logistic regression was performed on a dataset containing AIS1 soft tissue neck injuries from rear-end automobile collisions with published Neck Injury Criterion values producing an injury transfer function (ITF). An injury event scenario (IES) was constructed such that crew 1 is moving through a primary or standard translation path transferring large volume equipment impacting stationary crew 2. The incidence rate for this IES was estimated from in-flight data and used to calculate the probability of occurrence. The uncertainty in the model input factors were estimated from representative datasets and expressed in terms of probability distributions. A Monte Carlo Method utilizing simple random sampling was employed to propagate both aleatory and epistemic uncertain factors. Scatterplots and partial correlation coefficients (PCC) were generated to determine input factor sensitivity. CSIM was developed in the SimMechanics/Simulink environment with a
Fisicaro, E; Braibanti, A; Sambasiva Rao, R; Compari, C; Ghiozzi, A; Nageswara Rao, G
1998-04-01
An algorithm is proposed for the estimation of binding parameters for the interaction of biologically important macromolecules with smaller ones from electrometric titration data. The mathematical model is based on the representation of equilibria in terms of probability concepts of statistical molecular thermodynamics. The refinement of equilibrium concentrations of the components and estimation of binding parameters (log site constant and cooperativity factor) is performed using singular value decomposition, a chemometric technique which overcomes the general obstacles due to near singularity. The present software is validated with a number of biochemical systems of varying number of sites and cooperativity factors. The effect of random errors of realistic magnitude in experimental data is studied using the simulated primary data for some typical systems. The safe area within which approximate binding parameters ensure convergence has been reported for the non-self starting optimization algorithms.
Estimation of (n,f) Cross-Sections by Measuring Reaction Probability Ratios
Plettner, C; Ai, H; Beausang, C W; Bernstein, L A; Ahle, L; Amro, H; Babilon, M; Burke, J T; Caggiano, J A; Casten, R F; Church, J A; Cooper, J R; Crider, B; Gurdal, G; Heinz, A; McCutchan, E A; Moody, K; Punyon, J A; Qian, J; Ressler, J J; Schiller, A; Williams, E; Younes, W
2005-04-21
Neutron-induced reaction cross-sections on unstable nuclei are inherently difficult to measure due to target activity and the low intensity of neutron beams. In an alternative approach, named the 'surrogate' technique, one measures the decay probability of the same compound nucleus produced using a stable beam on a stable target to estimate the neutron-induced reaction cross-section. As an extension of the surrogate method, in this paper they introduce a new technique of measuring the fission probabilities of two different compound nuclei as a ratio, which has the advantage of removing most of the systematic uncertainties. This method was benchmarked in this report by measuring the probability of deuteron-induced fission events in coincidence with protons, and forming the ratio P(²³⁶U(d,pf))/P(²³⁸U(d,pf)), which serves as a surrogate for the known cross-section ratio of ²³⁶U(n,f)/²³⁸U(n,f). In addition, the P(²³⁸U(d,d′f))/P(²³⁶U(d,d′f)) ratio as a surrogate for the ²³⁷U(n,f)/²³⁵U(n,f) cross-section ratio was measured for the first time in an unprecedented range of excitation energies.
Lahodny, G E; Gautam, R; Ivanek, R
2015-01-01
Indirect transmission through the environment, pathogen shedding by infectious hosts, replication of free-living pathogens within the environment, and environmental decontamination are suspected to play important roles in the spread and control of environmentally transmitted infectious diseases. To account for these factors, the classic Susceptible-Infectious-Recovered-Susceptible epidemic model is modified to include a compartment representing the amount of free-living pathogen within the environment. The model accounts for host demography, direct and indirect transmission, replication of free-living pathogens in the environment, and removal of free-living pathogens by natural death or environmental decontamination. Based on the assumptions of the deterministic model, a continuous-time Markov chain model is developed. An estimate for the probability of disease extinction or a major outbreak is obtained by approximating the Markov chain with a multitype branching process. Numerical simulations illustrate important differences between the deterministic and stochastic counterparts, relevant for outbreak prevention, that depend on indirect transmission, pathogen shedding by infectious hosts, replication of free-living pathogens, and environmental decontamination. The probability of a major outbreak is computed for salmonellosis in a herd of dairy cattle as well as cholera in a human population. An explicit expression for the probability of disease extinction or a major outbreak in terms of the model parameters is obtained for systems with no direct transmission or replication of free-living pathogens.
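A much-reduced single-type sketch of the branching-process approximation (the paper's model is multitype and also tracks the environmental pathogen compartment; R0 and I0 below are placeholders):

```python
# Sketch: probability of a major outbreak from a single-type branching process,
# where each infectious lineage dies out with probability min(1, 1/R0).
def prob_major_outbreak(R0, I0):
    q = min(1.0, 1.0 / R0)      # extinction probability of one infectious line
    return 1.0 - q ** I0        # at least one of I0 lines takes off

print(prob_major_outbreak(R0=2.0, I0=1))   # 0.5
print(prob_major_outbreak(R0=2.0, I0=3))   # 0.875
```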
Development of a statistical tool for the estimation of riverbank erosion probability
Varouchakis, Emmanouil
2016-04-01
Riverbank erosion affects river morphology and local habitat, and results in riparian land loss, property and infrastructure damage, and ultimately flood defence weakening. An important issue concerning riverbank erosion is the identification of the vulnerable areas in order to predict river changes and assist stream management/restoration. An approach to predict areas vulnerable to erosion is to quantify the erosion probability by identifying the underlying relations between riverbank erosion and geomorphological or hydrological variables that prevent or stimulate erosion. In the present work, an innovative statistical methodology is proposed to predict the probability of presence or absence of erosion in a river section. A physically based model determines the locations vulnerable to erosion by quantifying the potential eroded area. The derived results are used to determine validation locations for the evaluation of the statistical tool performance. The statistical tool is based on a series of independent local variables and employs the Logistic Regression methodology. It is developed in two forms, Logistic Regression and Locally Weighted Logistic Regression, which both deliver useful and accurate results. The second form, though, provides the most accurate results, as it validates the presence or absence of erosion at all validation locations. The proposed tool is easy to use, accurate and can be applied to any region and river. Varouchakis, E. A., Giannakis, G. V., Lilli, M. A., Ioannidou, E., Nikolaidis, N. P., and Karatzas, G. P.: Development of a statistical tool for the estimation of riverbank erosion probability, SOIL (EGU), in print, 2016.
Failure probability estimation of flaw in CANDU pressure tube considering the dimensional change
Kwak, Sang Log; Kim, Young Jin [Sungkyunkwan Univ., Suwon (Korea, Republic of); Lee, Joon Seong [Kyonggi Univ., Suwon (Korea, Republic of); Park, Youn Won [KINS, Taejon (Korea, Republic of)
2002-11-01
The pressure tube is a major component of the CANDU reactor, which supports the nuclear fuel bundle and heavy water coolant. Pressure tubes are installed horizontally inside the reactor and only selected samples are periodically examined during in-service inspection. In this respect, a probabilistic safety assessment method is more appropriate for the assessment of overall pressure tube safety. The failure behavior of CANDU pressure tubes, however, is governed by delayed hydride cracking, which is the major difference from pipings and reactor pressure vessels. Since delayed hydride cracking has more widely distributed governing parameters, it is impossible to apply a general PFM methodology directly. In this paper, a PFM methodology for the safety assessment of CANDU pressure tubes is introduced by applying Monte Carlo simulation in determining failure probability. Initial hydrogen concentration, flaw shape and depth, axial and radial crack growth rate and fracture toughness were considered as probabilistic variables. A parametric study has been done on the basis of pressure tube dimension and hydride precipitation temperature in calculating failure probability. Unstable fracture and plastic collapse are used for the failure assessment. The estimated failure probability showed about a three-order-of-magnitude difference with changing dimensions of the pressure tube.
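The Monte Carlo step described can be sketched as follows; the limit-state function and the distributions below are placeholders, not the paper's delayed-hydride-cracking model:

```python
# Sketch: sampling-based failure probability with probabilistic input variables
# and a simplified unstable-fracture criterion.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

toughness = rng.normal(60.0, 6.0, n)        # hypothetical fracture toughness
crack_depth = rng.lognormal(-1.0, 0.4, n)   # hypothetical flaw depth
stress = rng.normal(25.0, 2.5, n)           # hypothetical applied stress

applied_k = stress * np.sqrt(np.pi * crack_depth)   # simplified stress intensity
failures = applied_k > toughness                    # failure criterion
p_fail = failures.mean()
se = np.sqrt(p_fail * (1 - p_fail) / n)             # sampling standard error
print(f"estimated failure probability: {p_fail:.2e} +/- {se:.1e}")
```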
Estimating the Probability of Electrical Short Circuits from Tin Whiskers. Part 2
Courey, Karim J.; Asfour, Shihab S.; Onar, Arzu; Bayliss, Jon A.; Ludwig, Larry L.; Wright, Maria C.
2010-01-01
To comply with lead-free legislation, many manufacturers have converted from tin-lead to pure tin finishes of electronic components. However, pure tin finishes have a greater propensity to grow tin whiskers than tin-lead finishes. Since tin whiskers present an electrical short circuit hazard in electronic components, simulations have been developed to quantify the risk of said short circuits occurring. Existing risk simulations make the assumption that when a free tin whisker has bridged two adjacent exposed electrical conductors, the result is an electrical short circuit. This conservative assumption is made because shorting is a random event that has an unknown probability associated with it. Note however that due to contact resistance electrical shorts may not occur at lower voltage levels. In our first article we developed an empirical probability model for tin whisker shorting. In this paper, we develop a more comprehensive empirical model using a refined experiment with a larger sample size, in which we studied the effect of varying voltage on the breakdown of the contact resistance which leads to a short circuit. From the resulting data we estimated the probability distribution of an electrical short, as a function of voltage. In addition, the unexpected polycrystalline structure seen in the focused ion beam (FIB) cross section in the first experiment was confirmed in this experiment using transmission electron microscopy (TEM). The FIB was also used to cross section two card guides to facilitate the measurement of the grain size of each card guide's tin plating to determine its finish.
Ho, Chih-Hsiang; Smith, Eugene I.; Feuerbach, Daniel L.; Naumann, Terry R.
1991-12-01
Investigations are currently underway to evaluate the impact of potentially adverse conditions (e.g. volcanism, faulting, seismicity) on the waste-isolation capability of the proposed nuclear waste repository at Yucca Mountain, Nevada, USA. This paper is the first in a series that will examine the probability of disruption of the Yucca Mountain site by volcanic eruption. In it, we discuss three estimating techniques for determining the recurrence rate of volcanic eruption (λ), an important parameter in the Poisson probability model. The first method is based on the number of events occurring over a certain observation period, the second is based on repose times, and the final is based on magma volume. All three require knowledge of the total number of eruptions in the Yucca Mountain area during the observation period (E). Following this discussion we then propose an estimate of E which takes into account the possibility of polygenetic and polycyclic volcanism at all the volcanic centers near the Yucca Mountain site.
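As a worked illustration of the Poisson model referred to, with a recurrence rate λ estimated from E eruptions over an observation period T (all numbers below are hypothetical, not the site's values):

```python
# Sketch: probability of at least one disruptive eruption in an isolation period t,
# given a Poisson recurrence rate lambda = E / T.
import math

E, T = 8, 1_000_000          # hypothetical eruption count and observation window (yr)
lam = E / T                  # eruptions per year
t = 10_000                   # isolation period of interest (yr)
p_disrupt = 1 - math.exp(-lam * t)
print(f"P(at least one eruption in {t} yr) = {p_disrupt:.3f}")
```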
Estimating probabilities of peptide database identifications to LC-FTICR-MS observations
Daly Don S
2006-02-01
Background: The field of proteomics involves the characterization of the peptides and proteins expressed in a cell under specific conditions. Proteomics has made rapid advances in recent years following the sequencing of the genomes of an increasing number of organisms. A prominent technology for high throughput proteomics analysis is the use of liquid chromatography coupled to Fourier transform ion cyclotron resonance mass spectrometry (LC-FTICR-MS). Meaningful biological conclusions can best be made when the peptide identities returned by this technique are accompanied by measures of accuracy and confidence. Methods: After a tryptically digested protein mixture is analyzed by LC-FTICR-MS, the observed masses and normalized elution times of the detected features are statistically matched to the theoretical masses and elution times of known peptides listed in a large database. The probability of matching is estimated for each peptide in the reference database using statistical classification methods assuming bivariate Gaussian probability distributions on the uncertainties in the masses and the normalized elution times. Results: A database of 69,220 features from 32 LC-FTICR-MS analyses of a tryptically digested bovine serum albumin (BSA) sample was matched to a database populated with 97% false positive peptides. The percentage of high confidence identifications was found to be consistent with other database search procedures. BSA database peptides were identified with high confidence on average in 14.1 of the 32 analyses. False positives were identified on average in just 2.7 analyses. Conclusion: Using a priori probabilities that contrast peptides from expected and unexpected proteins was shown to perform better in identifying target peptides than using equally likely a priori probabilities. This is because a large percentage of the target peptides were similar to unexpected peptides which were included to be false positives. The use of
Fast and accurate probability density estimation in large high dimensional astronomical datasets
Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.
2015-01-01
Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high dimensional data sets will require efficient algorithms for data analysis. An example of this is probability density estimation that is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements but usually binning is implemented with multi-dimensional arrays, which leads to memory requirements which scale exponentially with the number of dimensions. Hence both techniques do not scale well to large data sets in high dimensions. We present an alternative approach of binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation so a priori it is not clear if the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as the accuracy of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches it may be useful in various other applications of density estimation in astrostatistics.
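A toy Python version of the hash-table binning idea (the authors' implementation is in C++); it keys a dict on the tuple of bin indices so that only occupied bins consume memory:

```python
# Sketch: sparse multi-dimensional histogram density estimate via a hash table.
import numpy as np
from collections import Counter

def hash_binned_density(data, bin_width):
    """Return a dict mapping occupied bin index tuples to estimated density."""
    keys = map(tuple, np.floor(data / bin_width).astype(int))
    counts = Counter(keys)
    cell_volume = bin_width ** data.shape[1]
    n = data.shape[0]
    return {k: c / (n * cell_volume) for k, c in counts.items()}

rng = np.random.default_rng(2)
data = rng.normal(size=(100_000, 5))          # 5-dimensional sample
density = hash_binned_density(data, bin_width=0.5)
print(len(density), "occupied bins instead of a dense grid over all dimensions")
```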
Estimating the probability of arsenic occurrence in domestic wells in the United States
Ayotte, J.; Medalie, L.; Qi, S.; Backer, L. F.; Nolan, B. T.
2016-12-01
Approximately 43 million people (about 14 percent of the U.S. population) rely on privately owned domestic wells as their source of drinking water. Unlike public water systems, which are regulated by the Safe Drinking Water Act, there is no comprehensive national program to ensure that the water from domestic wells is routinely tested and that it is safe to drink. A study published in 2009 from the National Water-Quality Assessment Program of the U.S. Geological Survey assessed water-quality conditions from 2,100 domestic wells within 48 states and reported that more than one in five (23 percent) of the sampled wells contained one or more contaminants at a concentration greater than a human-health benchmark. In addition, there are many activities such as resource extraction, climate change-induced drought, and changes in land use patterns that could potentially affect the quality of the ground water source for domestic wells. The Health Studies Branch (HSB) of the National Center for Environmental Health, Centers for Disease Control and Prevention, created a Clean Water for Health Program to help address domestic well concerns. The goals of this program are to identify emerging public health issues associated with using domestic wells for drinking water and develop plans to address these issues. As part of this effort, HSB in cooperation with the U.S. Geological Survey has created probability models to estimate the probability of arsenic occurring at various concentrations in domestic wells in the U.S. We will present preliminary results of the project, including estimates of the population supplied by domestic wells that is likely to have arsenic greater than 10 micrograms per liter. Nationwide, we estimate this to be just over 2 million people. Logistic regression model results showing probabilities of arsenic greater than the Maximum Contaminant Level for public supply wells of 10 micrograms per liter in domestic wells in the U.S., based on data for arsenic
Estimation of probability of coastal flooding: A case study in the Norton Sound, Alaska
Kim, S.; Chapman, R. S.; Jensen, R. E.; Azleton, M. T.; Eisses, K. J.
2010-12-01
Along the Norton Sound, Alaska, coastal communities have been exposed to flooding induced by extra-tropical storms. A lack of observational data, especially on long-term variability, makes it difficult to assess the probability of coastal flooding, which is critical in planning for development and evacuation of the coastal communities. We estimated the probability of coastal flooding with the help of an existing storm surge model using ADCIRC and a wave model using WAM for Western Alaska, which includes the Norton Sound as well as the adjacent Bering Sea and Chukchi Sea. Surface pressure and winds, as well as ice coverage, were analyzed and put in a gridded format at 3-hour intervals over the entire Alaskan Shelf by Ocean Weather Inc. (OWI) for the period between 1985 and 2009. The OWI also analyzed the surface conditions for the storm events over the 31 year time period between 1954 and 1984. The correlation between water levels recorded by the NOAA tide gage and local meteorological conditions at Nome between 1992 and 2005 suggested that strong local winds with prevailing southerly components are good proxies for high water events. We also heuristically selected local winds with prevailing westerly components at Shaktoolik, which is located at the eastern end of the Norton Sound, to provide additional flood events during the continuous meteorological data record between 1985 and 2009. The frequency analyses were performed using the simulated water levels and wave heights for the 56 year time period between 1954 and 2009. Different methods of estimating return periods were compared, including the method from the FEMA guideline, extreme value statistics, and fitting statistical distributions such as Weibull and Gumbel. The estimates are similar, as expected, but with some variation.
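One of the options mentioned, fitting a Gumbel distribution to annual maxima and reading off return levels, can be sketched as follows; the annual-maximum series is invented:

```python
# Sketch: Gumbel fit to annual maximum water levels and return-level estimates.
import numpy as np
from scipy.stats import gumbel_r

annual_max_surge = np.array([1.2, 0.8, 2.3, 1.5, 0.9, 3.1, 1.1, 1.8, 2.6, 1.4,
                             0.7, 2.0, 1.6, 1.3, 2.9])   # metres, hypothetical

loc, scale = gumbel_r.fit(annual_max_surge)
for T in (10, 50, 100):                        # return periods in years
    level = gumbel_r.ppf(1 - 1 / T, loc=loc, scale=scale)
    print(f"{T}-year water level ~ {level:.2f} m")
```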
Cavuoti, S.; Amaro, V.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.
2017-02-01
A variety of fundamental astrophysical science topics require the determination of very accurate photometric redshifts (photo-z). A plethora of methods has been developed, based either on template models fitting or on empirical explorations of the photometric parameter space. Machine-learning-based techniques are not explicitly dependent on the physical priors and are able to produce accurate photo-z estimations within the photometric ranges derived from the spectroscopic training set. These estimates, however, are not easy to characterize in terms of a photo-z probability density function (PDF), due to the fact that the analytical relation mapping the photometric parameters onto the redshift space is virtually unknown. We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method designed to provide a reliable PDF of the error distribution for empirical techniques. The method is implemented as a modular workflow, whose internal engine for photo-z estimation makes use of the MLPQNA neural network (Multi Layer Perceptron with Quasi Newton learning rule), with the possibility to easily replace the specific machine-learning model chosen to predict photo-z. We present a summary of results on SDSS-DR9 galaxy data, used also to perform a direct comparison with PDFs obtained by the LE PHARE spectral energy distribution template fitting. We show that METAPHOR is capable of estimating the precision and reliability of photometric redshifts obtained with three different self-adaptive techniques, i.e. MLPQNA, Random Forest and the standard K-Nearest Neighbors models.
Jang, Seunghyun; Jae, Moosung [Hanyang University, Seoul (Korea, Republic of)
2016-10-15
The human failure events (HFEs) are considered in the development of system fault trees as well as accident sequence event trees as part of Probabilistic Safety Assessment (PSA). For analyzing human error, several methods, such as the Technique for Human Error Rate Prediction (THERP), Human Cognitive Reliability (HCR), and Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H), are used, and new methods for human reliability analysis (HRA) are under development at this time. This paper presents a dynamic HRA method for assessing human failure events, and an estimation of human error probability for the filtered containment venting system (FCVS) is performed. The action associated with implementation of the containment venting during a station blackout sequence is used as an example. In this report, the dynamic HRA method was used to analyze the FCVS-related operator action. The distributions of the required time and the available time were developed by the MAAP code and LHS sampling. Though the numerical calculations given here are only for illustrative purposes, the dynamic HRA method can be a useful tool for estimating human error probability, and it can be applied to any kind of operator action, including the severe accident management strategy.
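A schematic version of the time-reliability comparison described, with the required-time and available-time distributions replaced by assumed lognormals (the paper derives them from MAAP runs and LHS sampling):

```python
# Sketch: human error probability as P(required time > available time),
# estimated by sampling two assumed distributions.
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
required = rng.lognormal(mean=np.log(20), sigma=0.4, size=n)    # minutes, assumed
available = rng.lognormal(mean=np.log(45), sigma=0.3, size=n)   # minutes, assumed

hep = np.mean(required > available)
print(f"estimated human error probability: {hep:.4f}")
```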
ANNz2: Photometric Redshift and Probability Distribution Function Estimation using Machine Learning
Sadeh, I.; Abdalla, F. B.; Lahav, O.
2016-10-01
We present ANNz2, a new implementation of the public software for photometric redshift (photo-z) estimation of Collister & Lahav, which now includes generation of full probability distribution functions (PDFs). ANNz2 utilizes multiple machine learning methods, such as artificial neural networks and boosted decision/regression trees. The objective of the algorithm is to optimize the performance of the photo-z estimation, to properly derive the associated uncertainties, and to produce both single-value solutions and PDFs. In addition, estimators are made available, which mitigate possible problems of non-representative or incomplete spectroscopic training samples. ANNz2 has already been used as part of the first weak lensing analysis of the Dark Energy Survey, and is included in the experiment's first public data release. Here we illustrate the functionality of the code using data from the tenth data release of the Sloan Digital Sky Survey and the Baryon Oscillation Spectroscopic Survey. The code is available for download at http://github.com/IftachSadeh/ANNZ.
Lin, Feng; Chen, Xinguang
2010-02-01
In order to find better strategies for tobacco control, it is often critical to know the transitional probabilities among various stages of tobacco use. Traditionally, such probabilities are estimated by analyzing data from longitudinal surveys that are often time-consuming and expensive to conduct. Since cross-sectional surveys are much easier to conduct, it will be much more practical and useful to estimate transitional probabilities from cross-sectional survey data if possible. However, no previous research has attempted to do this. In this paper, we propose a method to estimate transitional probabilities from cross-sectional survey data. The method is novel and is based on a discrete event system framework. In particular, we introduce state probabilities and transitional probabilities to conventional discrete event system models. We derive various equations that can be used to estimate the transitional probabilities. We test the method using cross-sectional data of the National Survey on Drug Use and Health. The estimated transitional probabilities can be used in predicting the future smoking behavior for decision-making, planning and evaluation of various tobacco control programs. The method also allows a sensitivity analysis that can be used to find the most effective way of tobacco control. Since there are much more cross-sectional survey data in existence than longitudinal ones, the impact of this new method is expected to be significant.
Estimation of the nuclear fuel assembly eigenfrequencies in the probability sense
Zeman V.
2014-12-01
The paper deals with upper and lower limits estimation of the nuclear fuel assembly eigenfrequencies, whose design and operation parameters are random variables. Each parameter is defined by its mean value and standard deviation or by a range of values. The gradient and three sigma criterion approach is applied to the calculation of the upper and lower limits of fuel assembly eigenfrequencies in the probability sense. The presented analytical approach used for the calculation of eigenfrequency sensitivity is based on the modal synthesis method and the fuel assembly decomposition into six identical revolved fuel rod segments, centre tube and load-bearing skeleton linked by spacer grids. The method is applied for the Russian TVSA-T fuel assembly in the WWER1000/320 type reactor core in the Czech nuclear power plant Temelín.
Vanessa M Adams
The need to integrate social and economic factors into conservation planning has become a focus of academic discussions and has important practical implications for the implementation of conservation areas, both private and public. We conducted a survey in the Daly Catchment, Northern Territory, to inform the design and implementation of a stewardship payment program. We used a choice model to estimate the likely level of participation in two legal arrangements--conservation covenants and management agreements--based on payment level and proportion of properties required to be managed. We then spatially predicted landholders' probability of participating at the resolution of individual properties and incorporated these predictions into conservation planning software to examine the potential for the stewardship program to meet conservation objectives. We found that the properties that were least costly, per unit area, to manage were also the least likely to participate. This highlights a tension between planning for a cost-effective program and planning for a program that targets properties with the highest probability of participation.
Estimating the ground-state probability of a quantum simulation with product-state measurements
Bryce Yoshimura
2015-10-01
One of the goals in quantum simulation is to adiabatically generate the ground state of a complicated Hamiltonian by starting with the ground state of a simple Hamiltonian and slowly evolving the system to the complicated one. If the evolution is adiabatic and the initial and final ground states are connected due to having the same symmetry, then the simulation will be successful. But in most experiments, adiabatic simulation is not possible because it would take too long, and the system has some level of diabatic excitation. In this work, we quantify the extent of the diabatic excitation even if we do not know a priori what the complicated ground state is. Since many quantum simulator platforms, like trapped ions, can measure the probabilities to be in a product state, we describe techniques that can employ these simple measurements to estimate the probability of being in the ground state of the system after the diabatic evolution. These techniques do not require one to know any properties about the Hamiltonian itself, nor to calculate its eigenstate properties. All the information is derived by analyzing the product-state measurements as functions of time.
Measuring and Modeling Fault Density for Plume-Fault Encounter Probability Estimation
Jordan, P.D.; Oldenburg, C.M.; Nicot, J.-P.
2011-05-15
Emission of carbon dioxide from fossil-fueled power generation stations contributes to global climate change. Storage of this carbon dioxide within the pores of geologic strata (geologic carbon storage) is one approach to mitigating the climate change that would otherwise occur. The large storage volume needed for this mitigation requires injection into brine-filled pore space in reservoir strata overlain by cap rocks. One of the main concerns of storage in such rocks is leakage via faults. In the early stages of site selection, site-specific fault coverages are often not available. This necessitates a method for using available fault data to develop an estimate of the likelihood of injected carbon dioxide encountering and migrating up a fault, primarily due to buoyancy. Fault population statistics provide one of the main inputs to calculate the encounter probability. Previous fault population statistics work is shown to be applicable to areal fault density statistics. This result is applied to a case study in the southern portion of the San Joaquin Basin, with the result that a carbon dioxide plume from a previously planned injection had a 3% chance of encountering a fully seal-offsetting fault.
A method for Bayesian estimation of the probability of local intensity for some cities in Japan
G. C. Koravos
2002-06-01
Seismic hazard, in terms of probability of exceedance of a given intensity in a given time span, was assessed for 12 sites in Japan. The method does not use any attenuation law. Instead, the dependence of local intensity on epicentral intensity I0 is calculated directly from the data, using a Bayesian model. According to this model (Meroni et al., 1994), local intensity follows the binomial distribution with parameters (I0, p). The parameter p is considered as a random variable following the Beta distribution. In this manner, Bayesian estimates of p are assessed for various values of epicentral intensity and epicentral distance. In order to apply this model for the assessment of seismic hazard, the area under consideration is divided into seismic sources (zones) of known seismicity. The contribution of each source to the seismic hazard at every site is calculated according to the Bayesian model, and the result is the combined effect of all the sources. High probabilities of exceedance were calculated for the sites that are in the central part of the country, with hazard decreasing slightly towards the north and the south parts.
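A minimal sketch of the Beta-binomial ingredient described; the prior parameters, the (epicentral, local) intensity pairs, and the exceedance query below are all hypothetical:

```python
# Sketch: Beta prior on p, Binomial(I0, p) likelihood for local intensity,
# conjugate posterior, and an exceedance probability at a chosen I0.
from scipy.stats import beta, binom

a, b = 2.0, 2.0                                   # assumed Beta prior parameters
observations = [(8, 6), (9, 7), (7, 5), (8, 5)]   # hypothetical (I0, local I) pairs

successes = sum(i_loc for _, i_loc in observations)
trials = sum(i0 for i0, _ in observations)
post = beta(a + successes, b + trials - successes)

p_mean = post.mean()
i0 = 9
p_exceed_7 = 1 - binom.cdf(6, i0, p_mean)   # P(local intensity >= 7 | I0 = 9)
print(f"posterior mean p = {p_mean:.2f}, P(I >= 7 | I0 = 9) = {p_exceed_7:.2f}")
```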
Carr, J.R. (Nevada Univ., Reno, NV (United States). Dept. of Geological Sciences); Mao, Nai-hsien (Lawrence Livermore National Lab., CA (United States))
1992-01-01
Disjunctive kriging has been compared previously to multigaussian kriging and indicator cokriging for estimation of cumulative distribution functions; it has yet to be compared extensively to probability kriging. Herein, disjunctive kriging and generalized probability kriging are applied to one real and one simulated data set and compared for estimation of the cumulative distribution functions. Generalized probability kriging is an extension, based on generalized cokriging theory, of simple probability kriging for the estimation of the indicator and uniform transforms at each cutoff, Z_k. The disjunctive kriging and the generalized probability kriging give similar results for the simulated data with a normal distribution, but differ considerably for the real data set with a non-normal distribution.
Probability density function and estimation for error of digitized map coordinates in GIS
童小华; 刘大杰
2004-01-01
Traditionally, it is widely accepted that measurement error usually obeys the normal distribution. However, in this paper a new idea is proposed: the error in digitized data, which is a major derived data source in GIS, does not obey the normal distribution but rather the p-norm distribution with a determinate parameter. Assuming that the error is random and has the same statistical properties, the probability density functions of the normal distribution, Laplace distribution and p-norm distribution are derived based on the arithmetic mean axiom, median axiom and p-median axiom, which means that the normal distribution is only one of these distributions and not the only one. Based on this idea, distribution fitness tests such as the skewness and kurtosis coefficient test, Pearson chi-square (χ²) test and Kolmogorov test for digitized data are conducted. The results show that the error in map digitization obeys the p-norm distribution whose parameter is close to 1.60. A least p-norm estimation and the least squares estimation of digitized data are further analyzed, showing that the least p-norm adjustment is better than the least squares adjustment for digitized data processing in GIS.
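A small sketch of what a least p-norm (p ≈ 1.6, close to the value reported) adjustment looks like for a simple location estimate, compared with the least squares (p = 2) answer; the error sample is synthetic, not digitized map data:

```python
# Sketch: location estimate minimizing the sum of |residual|^p for p = 1.6.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
errors = rng.laplace(loc=0.02, scale=0.05, size=500)   # synthetic digitizing-like errors

def lp_location(x, p):
    """Location estimate minimizing sum |x - m|^p."""
    res = minimize_scalar(lambda m: np.sum(np.abs(x - m) ** p),
                          bounds=(x.min(), x.max()), method="bounded")
    return res.x

print("p = 1.6 estimate:", lp_location(errors, 1.6))
print("p = 2.0 estimate:", errors.mean())              # least squares solution
```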
Estimates of probability of a cloud-free line of sight for RAPTOR TALON
Bauer, Ernest
1994-07-01
RAPTOR TALON is a concept that includes optical sensing of rocket plumes from low altitudes (2 km) through burnout from an air vehicle at 18-20 km, at a long range, i.e., R approx. 20-100 km. The presence of clouds can interfere with optical sensing, and thus it is important to establish the Probability of a Cloud-Free Line of Sight (PCFLOS) at locations and times of concern. Some previous estimates of PCFLOS were counter-intuitive and provided varying results; POET was asked to resolve the discrepancies. Here we ask for the PCFLOS for paths from 18 km altitude above all clouds to 2 km, at a slant range of about 20-100 km at two different locations (Baghdad, Iraq, and Seoul, Korea) for January and July at average cloudiness. There are very few clouds in Iraq during the summer, so for this case PCFLOS is about 0.9-0.95. At the other locations and seasons the mean cloud cover ranges between 0.4 and 0.7, and the PCFLOS values range between 0.6 and 0.7 at R = 20 km, and between 0.4 and 0.6 at R = 100 km. Estimates made by a variety of methods are generally consistent with this finding.
A. Akbulut
2012-04-01
Full Text Available In this study, Particle Swarm Optimization is applied for the estimation of the channel state transition probabilities. Unlike most other studies, where the channel state transition probabilities are assumed to be known and/or constant, in this study, these values are realistically considered to be time-varying parameters, which are unknown to the secondary users of the cognitive radio systems. The results of this study demonstrate the following: without any a priori information about the channel characteristics, even in a very transient environment, it is quite possible to achieve reasonable estimates of channel state transition probabilities with a practical and simple implementation.
Carreras Giulia
2012-03-01
Full Text Available Abstract Background No data on annual smoking cessation probability (i.e., the probability of successfully quit in a given year are available for Italy at a population level. Mathematical models typically used to estimate smoking cessation probabilities do not account for smoking relapse. In this paper, we developed a mathematical model to estimate annual quitting probabilities, taking into account smoking relapse and time since cessation. Methods We developed a dynamic model describing the evolution of current, former, and never smokers. We estimated probabilities of smoking cessation by fitting the model with observed smoking prevalence in Italy, 1986-2009. Results Annual cessation probabilities were higher than 5% only in elderly persons and in women aged Conclusions Over the last 20 years, cessation probabilities among Italian smokers, particularly for those aged 30-59 years, have been very low and stalled. Quitting in Italy is considered as a practicable strategy only by women in the age of pregnancy and by elderly persons, when it’s likely that symptoms of tobacco-related diseases have already appeared. In order to increase cessation probabilities, smoking cessation treatment policies (introducing total reimbursement of cessation treatments, with a further development of quitlines and smoking cessation services should be empowered and a country-wide mass media campaign targeting smokers aged 30-59 years and focusing on promotion of quitting should be implemented.
Estimating linear temporal trends from aggregated environmental monitoring data
Erickson, Richard A.; Gray, Brian R.; Eager, Eric A.
2017-01-01
Trend estimates are often used as part of environmental monitoring programs. These trends inform managers (e.g., are desired species increasing or undesired species decreasing?). Data collected from environmental monitoring programs is often aggregated (i.e., averaged), which confounds sampling and process variation. State-space models allow sampling variation and process variations to be separated. We used simulated time-series to compare linear trend estimations from three state-space models, a simple linear regression model, and an auto-regressive model. We also compared the performance of these five models to estimate trends from a long term monitoring program. We specifically estimated trends for two species of fish and four species of aquatic vegetation from the Upper Mississippi River system. We found that the simple linear regression had the best performance of all the given models because it was best able to recover parameters and had consistent numerical convergence. Conversely, the simple linear regression did the worst job estimating populations in a given year. The state-space models did not estimate trends well, but estimated population sizes best when the models converged. We found that a simple linear regression performed better than more complex autoregression and state-space models when used to analyze aggregated environmental monitoring data.
Haber, M; An, Q; Foppa, I M; Shay, D K; Ferdinands, J M; Orenstein, W A
2015-05-01
As influenza vaccination is now widely recommended, randomized clinical trials are no longer ethical in many populations. Therefore, observational studies on patients seeking medical care for acute respiratory illnesses (ARIs) are a popular option for estimating influenza vaccine effectiveness (VE). We developed a probability model for evaluating and comparing bias and precision of estimates of VE against symptomatic influenza from two commonly used case-control study designs: the test-negative design and the traditional case-control design. We show that when vaccination does not affect the probability of developing non-influenza ARI then VE estimates from test-negative design studies are unbiased even if vaccinees and non-vaccinees have different probabilities of seeking medical care against ARI, as long as the ratio of these probabilities is the same for illnesses resulting from influenza and non-influenza infections. Our numerical results suggest that in general, estimates from the test-negative design have smaller bias compared to estimates from the traditional case-control design as long as the probability of non-influenza ARI is similar among vaccinated and unvaccinated individuals. We did not find consistent differences between the standard errors of the estimates from the two study designs.
Estimation of the Probable Maximum Flood for a Small Lowland River in Poland
Banasik, K.; Hejduk, L.
2009-04-01
The planning, design and use of hydrotechnical structures often requires the assessment of maximum flood potentials. The most common term applied to this upper limit of flooding is the probable maximum flood (PMF). The PMP/UH (probable maximum precipitation/unit hydrograph) method has been used in the study to predict the PMF for a small agricultural lowland river basin of the Zagozdzonka (left tributary of the Vistula river) in Poland. The river basin, located about 100 km south of Warsaw, with an area - upstream of the gauge of Plachty - of 82 km2, has been investigated by the Department of Water Engineering and Environmental Restoration of Warsaw University of Life Sciences - SGGW since 1962. An over 40-year flow record was used in a previous investigation for predicting the T-year flood discharge (Banasik et al., 2003). The objective here was to estimate the PMF using the PMP/UH method and to compare the results with the 100-year flood. A new relation for the depth-duration curve of PMP for the local climatic conditions has been developed based on Polish maximum observed rainfall data (Ozga-Zielinska & Ozga-Zielinski, 2003). An exponential formula, with an exponent value of 0.47, i.e. close to the exponent in the formula for world PMP and also in the formula of PMP for Great Britain (Wilson, 1993), gives a rainfall depth about 40% lower than Wilson's. The effective rainfall (runoff volume) has been estimated from the PMP of various durations using the CN-method (USDA-SCS, 1986). The CN value as well as the parameters of the IUH model (Nash, 1957) have been established from 27 rainfall-runoff events recorded in the river basin in the period 1980-2004. Variability of the parameter values with the size of the events will be discussed in the paper. The results of the analysis have shown that the peak discharge of the PMF is 4.5 times larger than the 100-year flood, and the volume ratio of the respective direct hydrographs caused by rainfall events of critical duration is 4.0. References 1. Banasik K
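The CN-method step referenced (USDA-SCS, 1986) converts a rainfall depth to effective rainfall; a worked example with placeholder values (not the basin's calibrated CN or PMP depth):

```python
# Sketch: SCS curve-number runoff, S = 25400/CN - 254 (mm),
# Q = (P - 0.2*S)^2 / (P + 0.8*S) for P greater than the initial abstraction.
def scs_runoff(p_mm, cn):
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = 0.2 * s                      # initial abstraction (mm)
    return 0.0 if p_mm <= ia else (p_mm - ia) ** 2 / (p_mm + 0.8 * s)

print(scs_runoff(p_mm=180.0, cn=75))  # effective rainfall in mm for a 180 mm storm
```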
Using of bayesian networks to estimate the probability of "NATECH" scenario occurrence
Dobes, Pavel; Dlabka, Jakub; Jelšovská, Katarína; Polorecká, Mária; Baudišová, Barbora; Danihelka, Pavel
2015-04-01
In the twentieth century, Bayesian statistics and probability were not much used (perhaps not a preferred approach) in the area of natural and industrial risk analysis and management, nor were they used in the analysis of so-called NATECH accidents (chemical accidents triggered by natural events such as earthquakes, floods, lightning, etc.; ref. E. Krausmann, 2011, doi:10.5194/nhess-11-921-2011). From the beginning, the main role was played by so-called "classical" frequentist probability (ref. Neyman, 1937), which relies essentially on the outcomes of experiments and monitoring and does not allow expert beliefs, expectations and judgements to be taken into account (which is, on the other hand, one of the well-known pillars of the Bayesian approach to probability). In the last 20 or 30 years, publications and conferences have shown a renaissance of Bayesian statistics in many scientific disciplines, including various branches of the geosciences. Is the necessity of a certain level of trust in expert judgement within risk analysis back? After several decades of development in this field, the following hypothesis can be proposed (to be checked): probabilities of complex crisis situations and their top events (many NATECH events can be classified as crisis situations or emergencies) cannot be estimated by the classical frequentist approach alone, but also require a Bayesian approach (i.e., a pre-staged Bayesian network combining expert belief and expectation with classical frequentist inputs), because there is not always enough quantitative information from monitoring of historical emergencies, several dependent or independent variables may need to be considered, and, in general, every emergency situation runs a little differently. On this topic, the team of authors presents its proposal of a pre-staged, typified Bayesian network model for a specified NATECH scenario
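A minimal, self-contained sketch (not from the paper) of the kind of computation a pre-staged Bayesian network supports: conditional probability tables, which could mix frequency data with expert judgement, are combined by enumeration to give the probability of a NATECH release. All node names and numbers below are hypothetical.

```python
# Hypothetical three-node Bayesian network: Flood -> EquipmentDamage -> Release.
# The probabilities stand in for a mix of frequency data and expert judgement.
p_flood = 0.02                                   # P(flood in a given year)
p_damage_given = {True: 0.30, False: 0.01}       # P(equipment damage | flood state)
p_release_given = {True: 0.25, False: 0.001}     # P(release | damage state)

p_release = 0.0
for flood in (True, False):
    pf = p_flood if flood else 1.0 - p_flood
    for damage in (True, False):
        pd = p_damage_given[flood] if damage else 1.0 - p_damage_given[flood]
        p_release += pf * pd * p_release_given[damage]

print(f"P(NATECH release) = {p_release:.5f}")
```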
The probability distribution for non-Gaussianity estimators constructed from the CMB trispectrum
Smith, Tristan L
2012-01-01
Considerable recent attention has focussed on the prospects to use the cosmic microwave background (CMB) trispectrum to probe the physics of the early universe. Here we evaluate the probability distribution function (PDF) for the standard estimator of the amplitude tau_nl of the CMB trispectrum, both for the null hypothesis (i.e., for Gaussian maps with tau_nl = 0) and for maps with a non-vanishing trispectrum (|tau_nl| > 0). We find these PDFs to be highly non-Gaussian in both cases. We also evaluate the variance with which the trispectrum amplitude can be measured as a function of its underlying value, tau_nl, and find a strong dependence of this variance on tau_nl. We also find that the variance does not, given the highly non-Gaussian nature of the PDF, effectively characterize the distribution. Detailed knowledge of these PDFs will therefore be imperative in order to properly interpret the implications of any given trispectrum measurement. For example, if a CMB experiment with a maximum multipole ...
Estimating Recovery Failure Probabilities in Off-normal Situations from Full-Scope Simulator Data
Kim, Yochan; Park, Jinkyun; Kim, Seunghwan; Choi, Sun Yeong; Jung, Wondea [Korea Atomic Research Institute, Daejeon (Korea, Republic of)
2016-10-15
As part of this effort, KAERI developed the Human Reliability data EXtraction (HuREX) framework and is collecting full-scope simulator-based human reliability data into the OPERA (Operator PErformance and Reliability Analysis) database. In this study, continuing the series of estimation studies on HEPs and PSF effects, recovery failure probabilities (RFPs), an important input for quantitative HRA, were produced from the OPERA database. Unsafe acts can occur at any time in safety-critical systems, and operators often manage the systems by discovering their errors and eliminating or mitigating them. To model recovery processes or recovery strategies, several studies have categorized recovery behaviors. Because recent human error trends need to be considered during a human reliability analysis, the work of Jang et al. can be seen as an essential data collection effort. However, since those empirical results regarding soft controls were produced in a controlled laboratory environment with student participants, it is necessary to analyze a wide range of operator behaviors using full-scope simulators. This paper presents the statistics related to human error recovery behaviors obtained from full-scope simulations in which on-site operators participated. In this study, the recovery effects of shift changes or technical support centers were not considered owing to a lack of simulation data.
Jayajit Das '
2015-07-01
A common statistical situation concerns inferring an unknown distribution Q(x) from a known distribution P(y), where X (dimension n) and Y (dimension m) have a known functional relationship. Most commonly, n ≤ m, and the task is relatively straightforward for well-defined functional relationships. For example, if Y1 and Y2 are independent random variables, each uniform on [0, 1], one can determine the distribution of X = Y1 + Y2; here m = 2 and n = 1. However, biological and physical situations can arise where n > m and the functional relation Y→X is non-unique. In general, in the absence of additional information, there is no unique solution to Q in those cases. Nevertheless, one may still want to draw some inferences about Q. To this end, we propose a novel maximum entropy (MaxEnt) approach that estimates Q(x) based only on the available data, namely, P(y). The method has the additional advantage that one does not need to explicitly calculate the Lagrange multipliers. In this paper we develop the approach, for both discrete and continuous probability distributions, and demonstrate its validity. We give an intuitive justification as well, and we illustrate with examples.
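To make the forward (n ≤ m) example mentioned in the abstract concrete, the density of X = Y1 + Y2 for independent uniform variables follows by convolution; this is a standard result, not part of the abstract:

```latex
f_X(x)=\int_0^1 f_{Y_1}(y)\,f_{Y_2}(x-y)\,dy
      =\begin{cases}
         x,     & 0 \le x \le 1,\\
         2 - x, & 1 < x \le 2,\\
         0,     & \text{otherwise,}
       \end{cases}
```

the familiar triangular distribution.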
An Illustration of Inverse Probability Weighting to Estimate Policy-Relevant Causal Effects.
Edwards, Jessie K; Cole, Stephen R; Lesko, Catherine R; Mathews, W Christopher; Moore, Richard D; Mugavero, Michael J; Westreich, Daniel
2016-08-15
Traditional epidemiologic approaches allow us to compare counterfactual outcomes under 2 exposure distributions, usually 100% exposed and 100% unexposed. However, to estimate the population health effect of a proposed intervention, one may wish to compare factual outcomes under the observed exposure distribution to counterfactual outcomes under the exposure distribution produced by an intervention. Here, we used inverse probability weights to compare the 5-year mortality risk under observed antiretroviral therapy treatment plans to the 5-year mortality risk that would have been observed under an intervention in which all patients initiated therapy immediately upon entry into care, among patients positive for human immunodeficiency virus in the US Centers for AIDS Research Network of Integrated Clinical Systems multisite cohort study between 1998 and 2013. Therapy-naïve patients (n = 14,700) were followed from entry into care until death, loss to follow-up, or censoring at 5 years or on December 31, 2013. The 5-year cumulative incidence of mortality was 11.65% under observed treatment plans and 10.10% under the intervention, yielding a risk difference of -1.57% (95% confidence interval: -3.08, -0.06). Comparing outcomes under the intervention with outcomes under observed treatment plans provides meaningful information about the potential consequences of new US guidelines to treat all patients with human immunodeficiency virus regardless of CD4 cell count under actual clinical conditions.
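A schematic sketch, not the authors' code, of the inverse-probability-weighting idea used here: weights reweight the immediately treated subsample so that it represents the whole population under the intervention. The variable names, the logistic model for treatment, and the data are hypothetical, and censoring weights are omitted for brevity.

```python
# Hypothetical IPW estimate of a counterfactual risk under
# "everyone initiates therapy immediately", ignoring censoring for brevity.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
covariate = rng.normal(size=n)                              # e.g. standardized baseline marker
treated = rng.binomial(1, 1 / (1 + np.exp(-covariate)))     # immediate initiation indicator
outcome = rng.binomial(1, 0.12 - 0.03 * treated + 0.01 * covariate.clip(-1, 1))

# Model P(immediate initiation | covariates) and form inverse-probability weights.
ps_model = LogisticRegression().fit(covariate.reshape(-1, 1), treated)
p_treat = ps_model.predict_proba(covariate.reshape(-1, 1))[:, 1]
weights = treated / p_treat

# Hajek-style weighted mean: risk had everyone initiated therapy immediately.
risk_under_intervention = np.sum(weights * outcome) / np.sum(weights)
risk_observed = outcome.mean()
print(risk_under_intervention, risk_observed, risk_under_intervention - risk_observed)
```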
Lussana, C.
2013-04-01
The presented work focuses on the investigation of gridded daily minimum (TN) and maximum (TX) temperature probability density functions (PDFs) with the intent of both characterising a region and detecting extreme values. The empirical PDF estimation procedure uses the most recent years of gridded temperature analysis fields available at ARPA Lombardia, in Northern Italy. The spatial interpolation is based on an implementation of Optimal Interpolation using observations from a dense surface network of automated weather stations. An effort has been made to identify both the time period and the spatial areas with a stable data density; otherwise the analysis could be influenced by the changing station distribution. The PDF used in this study is based on the Gaussian distribution; nevertheless it is designed to have an asymmetrical (skewed) shape in order to enable distinction between warming and cooling events. Once the occurrence of extreme events is properly defined, the information can be delivered to users at the local scale in a concise way, such as: TX extremely cold/hot or TN extremely cold/hot.
Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb
2008-01-01
The proportionator is a novel and radically different approach to sampling with microscopes based on well-known statistical theory (probability proportional to size - PPS sampling). It uses automatic image analysis, with a large range of options, to assign a weight to every field of view in the section; the desired number of fields is then sampled automatically with probability proportional to the weight and presented to the expert observer. Using any known stereological probe and estimator, the correct count in these fields leads to a simple, unbiased estimate of the total amount of structure in the sections. Because of its entirely different sampling strategy, based on known but non-uniform sampling probabilities, the proportionator for the first time allows the real CE at the section level to be automatically estimated (not just predicted), unbiased - for all estimators and at no extra cost to the user.
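A toy sketch, not the authors' implementation, of PPS sampling with the corresponding unbiased estimate of a total, in the spirit of the proportionator; the field weights and particle counts are synthetic.

```python
# Toy PPS (probability proportional to size) sampling with replacement and the
# corresponding unbiased (Hansen-Hurwitz style) estimate of a total.
import numpy as np

rng = np.random.default_rng(7)
n_fields = 200
weights = rng.gamma(shape=2.0, scale=1.0, size=n_fields)      # image-analysis weights
true_counts = rng.poisson(lam=0.5 + 2.0 * weights)            # particles per field
true_total = true_counts.sum()

p = weights / weights.sum()                                    # selection probabilities
n_sample = 20
sampled = rng.choice(n_fields, size=n_sample, p=p)             # PPS with replacement
estimate = np.mean(true_counts[sampled] / p[sampled])          # unbiased for the total

print(f"true total {true_total}, PPS estimate {estimate:.1f}")
```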
Suligowski, Roman
2014-05-01
This study estimates the Probable Maximum Precipitation based upon the physical mechanisms of precipitation formation at the Kielce Upland. The estimation stems from meteorological analysis of extremely high precipitation events which occurred in the area between 1961 and 2007, causing serious flooding from rivers that drain the entire Kielce Upland. The meteorological situation has been assessed drawing on synoptic maps, baric topography charts, satellite and radar images, as well as the results of meteorological observations from surface weather observation stations. The most significant elements of this research include the comparison between distinctive synoptic situations over Europe and the subsequent determination of typical rainfall-generating mechanisms. This allows the author to identify the source areas of air masses responsible for extremely high precipitation at the Kielce Upland. Analysis of the meteorological situations showed that the source areas for humid air masses which cause the largest rainfalls at the Kielce Upland are the northern Adriatic Sea and the north-eastern coast of the Black Sea. Flood hazard in the Kielce Upland catchments was triggered by daily precipitation of over 60 mm. The highest representative dew point temperature in source areas of warm air masses (those responsible for high precipitation at the Kielce Upland) exceeded 20 degrees Celsius, with a maximum of 24.9 degrees Celsius, while precipitable water amounted to 80 mm. The value of precipitable water is also used for computation of factors characterizing the system, namely the mass transformation factor and the system effectiveness factor. The mass transformation factor is computed based on precipitable water in the feeding mass and precipitable water in the source area. The system effectiveness factor (as the indicator of the maximum inflow velocity and the maximum velocity in the zone of front or ascending currents, forced by orography) is computed from the quotient of precipitable water in
Pedicini, Piernicola [I.R.C.C.S.-Regional-Cancer-Hospital-C.R.O.B, Unit of Nuclear Medicine, Department of Radiation and Metabolic Therapies, Rionero-in-Vulture (Italy); Department of Radiation and Metabolic Therapies, I.R.C.C.S.-Regional-Cancer-Hospital-C.R.O.B, Unit of Radiotherapy, Rionero-in-Vulture (Italy); Fiorentino, Alba [Sacro Cuore - Don Calabria Hospital, Radiation Oncology Department, Negrar, Verona (Italy); Simeon, Vittorio [I.R.C.C.S.-Regional-Cancer-Hospital-C.R.O.B, Laboratory of Preclinical and Translational Research, Rionero-in-Vulture (Italy); Tini, Paolo; Pirtoli, Luigi [University of Siena and Tuscany Tumor Institute, Unit of Radiation Oncology, Department of Medicine Surgery and Neurological Sciences, Siena (Italy); Chiumento, Costanza [Department of Radiation and Metabolic Therapies, I.R.C.C.S.-Regional-Cancer-Hospital-C.R.O.B, Unit of Radiotherapy, Rionero-in-Vulture (Italy); Salvatore, Marco [I.R.C.C.S. SDN Foundation, Unit of Nuclear Medicine, Napoli (Italy); Storto, Giovanni [I.R.C.C.S.-Regional-Cancer-Hospital-C.R.O.B, Unit of Nuclear Medicine, Department of Radiation and Metabolic Therapies, Rionero-in-Vulture (Italy)
2014-10-15
The aim of this study was to estimate a radiobiological set of parameters from the available clinical data on glioblastoma (GB). A number of clinical trial outcomes from patients affected by GB and treated with surgery and adjuvant radiochemotherapy were analyzed to estimate a set of radiobiological parameters for a tumor control probability (TCP) model. The analytical/graphical method employed to fit the clinical data allowed us to estimate the intrinsic tumor radiosensitivity (α), repair capability (β), and repopulation doubling time (Td) in a first phase, and subsequently the number of clonogens (N) and the kick-off time for accelerated proliferation (Tk). The results were used to formulate a hypothesis for a schedule expected to significantly improve local control. The 95% confidence intervals (CI95%) of all parameters are also discussed. The pooled analysis employed to estimate the parameters summarizes the data of 559 patients, while the studies selected to verify the results summarize data of 104 patients. The best estimates and the CI95% are α = 0.12 Gy^-1 (0.10-0.14), β = 0.015 Gy^-2 (0.013-0.020), α/β = 8 Gy (5.0-10.8), Td = 15.4 days (13.2-19.5), N = 1 × 10^4 (1.2 × 10^3 - 1 × 10^5), and Tk = 37 days (29-46). The dose required to offset the repopulation occurring after 1 day (Dprolif), starting after Tk, was estimated as 0.30 Gy/day (0.22-0.39). The analysis confirms a high value for the α/β ratio. Moreover, a high intrinsic radiosensitivity together with a long kick-off time for accelerated repopulation and moderate repopulation kinetics were found. The results indicate a substantial independence from the duration of the overall treatment and an improvement in treatment effectiveness by increasing the total dose without increasing the dose per fraction. (orig.)
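For orientation, a common Poisson linear-quadratic TCP model with repopulation, of the kind these parameters feed into (the exact model form used by the authors may differ), reads:

```latex
\mathrm{TCP} \;=\; \exp\!\left[-N\,
  \exp\!\left(-n\left(\alpha d + \beta d^{2}\right)
  + \frac{\ln 2}{T_d}\,\bigl(T - T_k\bigr)_{+}\right)\right],
```

where n is the number of fractions of dose d, T is the overall treatment time, and (x)+ = max(x, 0).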
Vehicle Trajectory Estimation Using Spatio-Temporal MCMC
Francois Bardet
2010-01-01
This paper presents an algorithm for modeling and tracking vehicles in video sequences within one integrated framework. Most existing solutions are based on sequential methods that make inference according to current information. In contrast, we propose a deferred logical inference method that makes a decision according to a sequence of observations, thus performing a spatio-temporal search on the whole trajectory. One of the drawbacks of deferred logical inference methods is that the solution space of hypotheses grows exponentially with the depth of observation. Our approach takes into account both the kinematic model of the vehicle and a driver behavior model in order to reduce the space of solutions. The resulting state model explains the trajectory with only 11 parameters. The solution space is then sampled with a Markov chain Monte Carlo (MCMC) method that uses a model-driven proposal distribution in order to control random walk behavior. We demonstrate our method on real video sequences for which ground truth is provided by an RTK GPS (Real-Time Kinematic GPS). Experimental results show that the proposed algorithm outperforms a sequential inference solution (particle filter).
de Uña-Álvarez, Jacobo; Meira-Machado, Luís
2015-06-01
Multi-state models are often used for modeling complex event history data. In these models the estimation of the transition probabilities is of particular interest, since they allow for long-term predictions of the process. These quantities have traditionally been estimated by the Aalen-Johansen estimator, which is consistent if the process is Markov. Several non-Markov estimators have been proposed in the recent literature, and their superiority with respect to the Aalen-Johansen estimator has been proved in situations in which the Markov condition is strongly violated. However, the existing estimators have the drawback of requiring that the support of the censoring distribution contains the support of the lifetime distribution, which is not often the case. In this article, we propose two new methods for estimating the transition probabilities in the progressive illness-death model. Some asymptotic results are derived. The proposed estimators are consistent regardless of the Markov condition and the aforementioned assumption about the censoring support. We explore the finite sample behavior of the estimators through simulations. The main conclusion of this piece of research is that the proposed estimators are much more efficient than the existing non-Markov estimators in most cases. An application to a clinical trial on colon cancer is included. Extensions to progressive processes beyond the three-state illness-death model are discussed.
Walsh, Michael G; Haseeb, M A
2014-01-01
Toxocariasis is increasingly recognized as an important neglected infection of poverty (NIP) in developed countries, and may constitute the most important NIP in the United States (US) given its association with chronic sequelae such as asthma and poor cognitive development. Its potential public health burden notwithstanding, toxocariasis surveillance is minimal throughout the US and so the true burden of disease remains uncertain in many areas. The Third National Health and Nutrition Examination Survey conducted a representative serologic survey of toxocariasis to estimate the prevalence of infection in diverse US subpopulations across different regions of the country. Using the NHANES III surveillance data, the current study applied the predicted probabilities of toxocariasis to the sociodemographic composition of New York census tracts to estimate the local probability of infection across the city. The predicted probability of toxocariasis ranged from 6% among US-born Latino women with a university education to 57% among immigrant men with less than a high school education. The predicted probability of toxocariasis exhibited marked spatial variation across the city, with particularly high infection probabilities in large sections of Queens, and smaller, more concentrated areas of Brooklyn and northern Manhattan. This investigation is the first attempt at small-area estimation of the probability surface of toxocariasis in a major US city. While this study does not define toxocariasis risk directly, it does provide a much needed tool to aid the development of toxocariasis surveillance in New York City.
Estimation of GRACE water storage components by temporal decomposition
Andrew, Robert; Guan, Huade; Batelaan, Okke
2017-09-01
The Gravity Recovery and Climate Experiment (GRACE) has been in operation since 2002. Water storage estimates are calculated from gravity anomalies detected by the operating satellites and, although this is not the true resolution, can be presented as 100 km × 100 km data cells if appropriate scaling functions are applied. Estimating total water storage has been shown to be highly useful in detecting hydrological variations and trends. However, a limitation is that GRACE does not provide information as to where the water is stored in the vertical profile. We aim to partition the total water storage from GRACE into water storage components. We use a wavelet filter to decompose the GRACE data and partition it into various water storage components including soil water and groundwater. Storage components from the Australian Water Resources Assessment (AWRA) model are used as a reference for the decompositions of total storage data across Australia. Results show a clear improvement in using decomposed GRACE data instead of raw GRACE data when compared against total water storage outputs from the AWRA model. The method has potential to improve GRACE applications including a means to test various large-scale hydrological models as well as helping to analyse floods, droughts and other hydrological conditions.
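A rough sketch, not the authors' code, of the kind of temporal decomposition described: a discrete wavelet transform splits a monthly total-storage series into a slowly varying component and a faster component. The library, wavelet family, decomposition level and synthetic series are all assumptions.

```python
# Hypothetical wavelet decomposition of a monthly total-water-storage series into a
# slowly varying (groundwater-like) and a faster (soil-moisture-like) component.
import numpy as np
import pywt

rng = np.random.default_rng(3)
months = np.arange(180)
storage = (5 * np.sin(2 * np.pi * months / 12)      # seasonal signal
           + 0.02 * months                          # slow trend
           + rng.normal(0, 1.0, months.size))       # noise / fast variability

coeffs = pywt.wavedec(storage, 'db4', level=4)

# Reconstruct the slow component from the approximation coefficients only.
slow = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], 'db4')
slow = slow[:storage.size]
fast = storage - slow
print(slow[:5], fast[:5])
```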
张路平; 王鲁平; 李飚; 赵明
2015-01-01
In order to improve the performance of the probability hypothesis density (PHD) algorithm based on the particle filter (PF) in terms of number estimation and state extraction of multiple targets, a new probability hypothesis density filter algorithm based on marginalized particles and kernel density estimation is proposed, which utilizes the idea of the marginalized particle filter to enhance the estimating performance of the PHD. The state variables are decomposed into linear and non-linear parts. The particle filter is adopted to predict and estimate the nonlinear states of the multiple targets after dimensionality reduction, while the Kalman filter is applied to estimate the linear parts under the linear Gaussian condition. Embedding the information of the linear states into the estimated nonlinear states helps to reduce the estimation variance and improve the accuracy of target number estimation. The mean-shift kernel density estimation, which inherently searches for peak values via an adaptive gradient-ascent iteration, is introduced to cluster particles and extract target states; it is independent of the target number and can converge to the local peak position of the PHD distribution while avoiding errors due to inaccuracy in modeling and parameter estimation. Experiments show that the proposed algorithm can obtain higher tracking accuracy when using fewer sampling particles and has lower computational complexity compared with the PF-PHD.
Cross, David; Onof, Christian; Bernardara, Pietro
2016-04-01
heavy rainfall. Tipping bucket raingauge data aggregated to a minimum temporal resolution of 15 minutes have been identified throughout the UK, and the longest records (minimum 30 years duration) have been selected for analysis. Rainfall extremes are estimated for a range of annual exceedance probabilities to a minimum 1e-4 (10,000 year return period) by simulating up to 100,000 years of rainfall and sampling annual maxima and peaks over high threshold. A range of low thresholds are tested for the censored modelling, as well as seasonally varying thresholds, and the results compared with comparable estimates from extreme value analysis. References: Centre for Ecology & Hydrology. (2015) Anonymous's blog. North West floods - Hydrological update. Weblog [Posted 08/12/2015 - 14:18]. Available from: http://www.ceh.ac.uk/news-and-media/blogs/north-west-floods-hydrological-update [Accessed 04/01/2016].
Faith, Daniel P
2008-12-01
New species conservation strategies, including the EDGE of Existence (EDGE) program, have expanded threatened species assessments by integrating information about species' phylogenetic distinctiveness. Distinctiveness has been measured through simple scores that assign shared credit among species for evolutionary heritage represented by the deeper phylogenetic branches. A species with a high score combined with a high extinction probability receives high priority for conservation efforts. Simple hypothetical scenarios for phylogenetic trees and extinction probabilities demonstrate how such scoring approaches can provide inefficient priorities for conservation. An existing probabilistic framework derived from the phylogenetic diversity measure (PD) properly captures the idea of shared responsibility for the persistence of evolutionary history. It avoids static scores, takes into account the status of close relatives through their extinction probabilities, and allows for the necessary updating of priorities in light of changes in species threat status. A hypothetical phylogenetic tree illustrates how changes in extinction probabilities of one or more species translate into changes in expected PD. The probabilistic PD framework provided a range of strategies that moved beyond expected PD to better consider worst-case PD losses. In another example, risk aversion gave higher priority to a conservation program that provided a smaller, but less risky, gain in expected PD. The EDGE program could continue to promote a list of top species conservation priorities through application of probabilistic PD and simple estimates of current extinction probability. The list might be a dynamic one, with all the priority scores updated as extinction probabilities change. Results of recent studies suggest that estimation of extinction probabilities derived from the red list criteria linked to changes in species range sizes may provide estimated probabilities for many different species
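A small illustrative computation, not from the paper, of expected phylogenetic diversity on a toy tree: each branch contributes its length times the probability that at least one descendant species survives. The tree shape, branch lengths and extinction probabilities are made up.

```python
# Toy expected-PD computation: a branch is retained if at least one descendant tip
# survives, so its expected contribution is length * (1 - product of extinction
# probabilities over its descendant tips). Tree and probabilities are hypothetical.
branches = [
    # (branch length, tips descending from this branch)
    (5.0, ["A", "B", "C"]),   # deep branch shared by A, B and C
    (2.0, ["A", "B"]),        # branch shared by A and B
    (1.0, ["A"]),
    (1.5, ["B"]),
    (3.0, ["C"]),
]
extinction = {"A": 0.9, "B": 0.8, "C": 0.1}   # per-species extinction probabilities

expected_pd = 0.0
for length, tips in branches:
    p_all_lost = 1.0
    for tip in tips:
        p_all_lost *= extinction[tip]
    expected_pd += length * (1.0 - p_all_lost)

print(f"expected PD = {expected_pd:.3f}")
```

Changing any species' extinction probability and recomputing shows how priorities can be updated dynamically, which is the point made in the abstract.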
Annual Corn Yield Estimation through Multi-temporal MODIS Data
Shao, Y.; Zheng, B.; Campbell, J. B.
2013-12-01
This research employed 13 years of Moderate Resolution Imaging Spectroradiometer (MODIS) data to estimate annual corn yield for the Midwest of the United States. The overall objective of this study was to examine whether annual corn yield could be accurately predicted using MODIS time-series NDVI (Normalized Difference Vegetation Index) and ancillary data such as monthly precipitation and temperature. MODIS-NDVI 16-day composite images were acquired from the USGS EROS Data Center for calendar years 2000 to 2012. For the same time period, county-level corn yield statistics were obtained from the National Agricultural Statistics Service (NASS). The monthly precipitation and temperature measures were derived from Precipitation-Elevation Regressions on Independent Slopes Model (PRISM) climate data. A cropland mask was derived using the 2006 National Land Cover Database. For each county and within the cropland mask, the MODIS-NDVI time-series data and PRISM climate data were spatially averaged at their respective time steps. We developed a random forest predictive model with the MODIS-NDVI and climate data as predictors and corn yield as the response. To assess model accuracy, we used twelve years of data for training and the remaining year as a hold-out testing set. The training and testing procedures were repeated 13 times. The R2 ranged from 0.72 to 0.83 for testing years. It was also found that the inclusion of climate data did not improve the model's predictive performance. MODIS-NDVI time-series data alone may provide sufficient information for county-level corn yield prediction.
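A compressed sketch, not the study's code, of the leave-one-year-out evaluation described, with synthetic NDVI-like features standing in for the county-averaged MODIS predictors.

```python
# Hypothetical leave-one-year-out evaluation of a random-forest yield model.
# Features stand in for county-averaged MODIS-NDVI composites; data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(4)
years = np.repeat(np.arange(2000, 2013), 50)                # 13 years x 50 counties
ndvi = rng.normal(0.6, 0.1, size=(years.size, 10))           # 10 NDVI composites per record
yield_bu = 80 + 60 * ndvi.mean(axis=1) + rng.normal(0, 5, years.size)

scores = []
for test_year in np.unique(years):
    train, test = years != test_year, years == test_year
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(ndvi[train], yield_bu[train])
    scores.append(r2_score(yield_bu[test], model.predict(ndvi[test])))

print(f"hold-out R2 by year: min {min(scores):.2f}, max {max(scores):.2f}")
```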
Tiao, G.C.; Daming, Xu; Pedrick, J.H.; Xiaodong, Zhu (Univ. of Chicago, IL (USA)); Reinsel, G.C. (Univ. of Wisconsin, Madison (USA)); Miller, A.J.; DeLuisi, J.J. (National Oceanic and Atmospheric Administration, Boulder, CO (USA)); Mateer, C.L. (Atmospheric Environment Service, Ottawa, Ontario (Canada)); Wuebbles, D.J. (Lawrence Livermore National Lab., CA (USA))
1990-11-20
This paper is concerned with temporal data requirements for the assessment of trends and for estimating spatial correlations of atmospheric species. The authors examine statistically three basic issues: (1) the effect of autocorrelations in monthly observations and the effect of the length of data record on the precision of trend estimates, (2) the effect of autocorrelations in the daily data on the sampling frequency requirements with respect to the representativeness of monthly averages for trend estimation, and (3) the effect of temporal sampling schemes on estimating spatial correlations of atmospheric species in neighboring stations. The principal findings are (1) the precision of trend estimates depends critically on the magnitude of auto-correlations in the monthly observations, (2) this precision is insensitive to the temporal sampling rates of daily measurements under systematic sampling, and (3) the estimate of spatial correlation between two neighboring stations is insensitive to temporal sampling rate under systematic sampling, but is sensitive to the time lag between measurements taken at the two stations. These results are based on methodological considerations as well as on empirical analysis of total and profile ozone and rawinsonde temperature data from selected ground stations.
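For reference, a frequently quoted closed-form expression in the spirit of the first finding (though not necessarily the exact formula used by these authors): for a least-squares trend estimated from n years of monthly means whose noise has standard deviation σ_N and lag-1 autocorrelation φ, the trend standard deviation is approximately

```latex
\sigma_{\hat\omega} \;\approx\; \frac{\sigma_N}{n^{3/2}}\,\sqrt{\frac{1+\phi}{1-\phi}},
```

which makes explicit how stronger autocorrelation inflates the uncertainty of the trend estimate.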
Seaver, D.A.; Stillwell, W.G.
1983-03-01
This report describes and evaluates several procedures for using expert judgment to estimate human-error probabilities (HEPs) in nuclear power plant operations. These HEPs are currently needed for several purposes, particularly for probabilistic risk assessments. Data do not exist for estimating these HEPs, so expert judgment can provide these estimates in a timely manner. Five judgmental procedures are described here: paired comparisons, ranking and rating, direct numerical estimation, indirect numerical estimation and multiattribute utility measurement. These procedures are evaluated in terms of several criteria: quality of judgments, difficulty of data collection, empirical support, acceptability, theoretical justification, and data processing. Situational constraints such as the number of experts available, the number of HEPs to be estimated, the time available, the location of the experts, and the resources available are discussed in regard to their implications for selecting a procedure for use.
Nongpiur, Monisha E; Haaland, Benjamin A; Perera, Shamira A; Friedman, David S; He, Mingguang; Sakata, Lisandro M; Baskaran, Mani; Aung, Tin
2014-01-01
To develop a score along with an estimated probability of disease for detecting angle closure based on anterior segment optical coherence tomography (AS OCT) imaging. Cross-sectional study. A total of 2047 subjects 50 years of age and older were recruited from a community polyclinic in Singapore. All subjects underwent standardized ocular examination including gonioscopy and imaging by AS OCT (Carl Zeiss Meditec). Customized software (Zhongshan Angle Assessment Program) was used to measure AS OCT parameters. Complete data were available for 1368 subjects. Data from the right eyes were used for analysis. A stepwise logistic regression model with the Akaike information criterion was used to generate a score that was then converted to an estimated probability of the presence of gonioscopic angle closure, defined as the inability to visualize the posterior trabecular meshwork for at least 180 degrees on nonindentation gonioscopy. Of the 1368 subjects, 295 (21.6%) had gonioscopic angle closure. The angle closure score was calculated from the shifted linear combination of the AS OCT parameters. The score can be converted to an estimated probability of having angle closure using the relationship: estimated probability = exp(score)/(1 + exp(score)). The score performed well in a second independent sample of 178 angle-closure subjects and 301 normal controls, with an area under the receiver operating characteristic curve of 0.94. A score derived from a single AS OCT image, coupled with an estimated probability, provides an objective platform for detection of angle closure. Copyright © 2014 Elsevier Inc. All rights reserved.
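The score-to-probability conversion quoted in the abstract is the standard logistic transform; a two-line check with a made-up score value:

```python
import math

def probability_from_score(score: float) -> float:
    """Convert a logistic-regression score to an estimated probability."""
    return math.exp(score) / (1.0 + math.exp(score))

print(probability_from_score(-1.3))  # hypothetical score -> about 0.21
```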
Sawosz, P; Kacprzak, M; Weigl, W; Borowska-Solonynko, A; Krajewski, P; Zolek, N; Ciszek, B; Maniewski, R; Liebert, A
2012-12-07
A time-gated intensified CCD camera was applied for time-resolved imaging of light penetrating in an optically turbid medium. Spatial distributions of light penetration probability in the plane perpendicular to the axes of the source and the detector were determined at different source positions. Furthermore, visiting probability profiles of diffuse reflectance measurement were obtained by the convolution of the light penetration distributions recorded at different source positions. Experiments were carried out on homogeneous phantoms, more realistic two-layered tissue phantoms based on the human skull filled with Intralipid-ink solution and on cadavers. It was noted that the photons visiting probability profiles depend strongly on the source-detector separation, the delay between the laser pulse and the photons collection window and the complex tissue composition of the human head.
S. Vathsal
1994-01-01
This paper provides an error model of the strapdown inertial navigation system in state-space format. A method to estimate the circular error probability is presented using time propagation of the error covariance matrix. Numerical results have been obtained for a typical flight trajectory. Sensitivity studies have also been conducted for variation of sensor noise covariances and initial state uncertainty. This methodology seems to work in all the practical cases considered so far. Software has been tested for both the local vertical frame and the inertial frame. The covariance propagation technique provides accurate estimation of the dispersion of position at impact. This in turn enables the circular error probability (CEP) to be estimated very accurately.
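A small sketch, not the paper's software, showing how a circular error probability can be read off a propagated position-error covariance by Monte Carlo sampling; the covariance values are placeholders.

```python
# Hypothetical CEP from a 2x2 position-error covariance matrix at impact:
# CEP is the radius containing 50% of the probability mass, estimated here
# by Monte Carlo. The covariance numbers are placeholders.
import numpy as np

cov = np.array([[400.0, 50.0],    # m^2, cross-range / down-range error covariance
                [50.0, 900.0]])
rng = np.random.default_rng(11)
samples = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=200_000)
radii = np.hypot(samples[:, 0], samples[:, 1])
cep = np.median(radii)                       # 50th percentile of miss distance

print(f"CEP ~ {cep:.1f} m")
```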
Effects of population variability on the accuracy of detection probability estimates
Ordonez Gloria, Alejandro
2011-01-01
Observing a constant fraction of the population over time, locations, or species is virtually impossible. Hence, quantifying this proportion (i.e. detection probability) is an important task in quantitative population ecology. In this study we determined, via computer simulations, the effect of...
How does new evidence change our estimates of probabilities? Carnap's formula revisited
Kreinovich, Vladik; Quintana, Chris
1992-01-01
The formula originally proposed by R. Carnap in his analysis of induction is reviewed and its natural generalization is presented. A situation is considered where the probability of a certain event is determined without using standard statistical methods due to the lack of observation.
A.C.D. Donkers (Bas); T. Lourenco (Tania); B.G.C. Dellaert (Benedict); D.G. Goldstein (Daniel G.)
2013-01-01
In this paper we propose the use of preferred outcome distributions as a new method to elicit individuals' value and probability weighting functions in decisions under risk. Extant approaches for the elicitation of these two key ingredients of individuals' risk attitude typically rely
Farmer, William H.; Koltun, Greg
2017-01-01
Study region: The state of Ohio in the United States, a humid, continental climate. Study focus: The estimation of nonexceedance probabilities of daily streamflows as an alternative means of establishing the relative magnitudes of streamflows associated with hydrologic and water-quality observations. New hydrological insights for the region: Several methods for estimating nonexceedance probabilities of daily mean streamflows are explored, including single-index methodologies (nearest-neighboring index) and geospatial tools (kriging and topological kriging). These methods were evaluated by conducting leave-one-out cross-validations based on analyses of nearly 7 years of daily streamflow data from 79 unregulated streamgages in Ohio and neighboring states. The pooled, ordinary kriging model, with a median Nash–Sutcliffe performance of 0.87, was superior to the single-site index methods, though there was some bias in the tails of the probability distribution. Incorporating network structure through topological kriging did not improve performance. The pooled, ordinary kriging model was applied to 118 locations without systematic streamgaging across Ohio where instantaneous streamflow measurements had been made concurrent with water-quality sampling on at least 3 separate days. Spearman rank correlations between estimated nonexceedance probabilities and measured streamflows were high, with a median value of 0.76. In consideration of application, the degree of regulation in a set of sample sites helped to specify the streamgages required to implement kriging approaches successfully.
Fang Zheng
2013-04-01
Analysis of knee joint vibration or vibroarthrographic (VAG) signals using signal processing and machine learning algorithms possesses high potential for the noninvasive detection of articular cartilage degeneration, which may reduce unnecessary exploratory surgery. Feature representation of knee joint VAG signals helps characterize the pathological condition of degenerative articular cartilage in the knee. This paper used the kernel-based probability density estimation method to model the distributions of VAG signals recorded from healthy subjects and patients with knee joint disorders. The estimated densities of the VAG signals showed explicit distributions of the normal and abnormal signal groups, along with the corresponding contours in the bivariate feature space. The signal classifications were performed using Fisher's linear discriminant analysis, a support vector machine with polynomial kernels, and the maximal posterior probability decision criterion. The maximal posterior probability decision criterion was able to provide a total classification accuracy of 86.67% and an area (Az) of 0.9096 under the receiver operating characteristic curve, which were superior to the results obtained by either Fisher's linear discriminant analysis (accuracy: 81.33%, Az: 0.8564) or the support vector machine with polynomial kernels (accuracy: 81.33%, Az: 0.8533). These results demonstrate the merits of the bivariate feature distribution estimation and the superiority of the maximal posterior probability decision criterion for analysis of knee joint VAG signals.
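A brief sketch, not the authors' pipeline, of kernel density estimation of a feature distribution per class followed by a maximal-posterior-probability decision; the feature values and class priors are synthetic.

```python
# Hypothetical kernel density estimation of one VAG-like feature per class,
# followed by a maximal-posterior-probability decision. Data are synthetic.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
normal_feat = rng.normal(0.0, 1.0, 300)        # feature values, healthy class
abnormal_feat = rng.normal(1.5, 1.2, 150)      # feature values, disorder class

kde_normal = gaussian_kde(normal_feat)
kde_abnormal = gaussian_kde(abnormal_feat)
prior_normal, prior_abnormal = 2 / 3, 1 / 3     # priors from class sizes

def classify(x: float) -> str:
    """Assign the class with the larger (unnormalized) posterior density * prior."""
    post_normal = kde_normal(x)[0] * prior_normal
    post_abnormal = kde_abnormal(x)[0] * prior_abnormal
    return "normal" if post_normal >= post_abnormal else "abnormal"

print(classify(0.2), classify(2.5))
```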
Kline, Jeffrey A; Stubblefield, William B
2014-03-01
Pretest probability helps guide diagnostic testing for patients with suspected acute coronary syndrome and pulmonary embolism. Pretest probability derived from the clinician's unstructured gestalt estimate is easier and more readily available than methods that require computation. We compare the diagnostic accuracy of physician gestalt estimates of the pretest probability of acute coronary syndrome and pulmonary embolism with a validated, computerized method. This was a secondary analysis of a prospectively collected, multicenter study. Patients (N=840) had chest pain, dyspnea, nondiagnostic ECGs, and no obvious diagnosis. Clinician gestalt pretest probability for both acute coronary syndrome and pulmonary embolism was assessed by visual analog scale and by the method of attribute matching using a Web-based computer program. Patients were followed for outcomes at 90 days. Clinicians had significantly higher estimates than attribute matching for acute coronary syndrome (17% versus 4%); clinician estimates correlated only weakly with attribute matching for acute coronary syndrome (r² = 0.15) and pulmonary embolism (r² = 0.06). Areas under the receiver operating characteristic curve were lower for clinician estimates compared with the computerized method for acute coronary syndrome: 0.64 (95% confidence interval [CI] 0.51 to 0.77) for clinician gestalt versus 0.78 (95% CI 0.71 to 0.85) for attribute matching. For pulmonary embolism, these values were 0.81 (95% CI 0.79 to 0.92) for clinician gestalt and 0.84 (95% CI 0.76 to 0.93) for attribute matching. Compared with a validated machine-based method, clinicians consistently overestimated pretest probability but on receiver operating characteristic curve analysis were as accurate for pulmonary embolism, though not for acute coronary syndrome. Copyright © 2013 American College of Emergency Physicians. Published by Mosby, Inc. All rights reserved.
Rein, Arno; Bauer, S; Dietrich, P
2009-01-01
Monitoring of contaminant concentrations, e.g., for the estimation of mass discharge or contaminant degradation rates, often is based on point measurements at observation wells. In addition to the problem that point measurements may not be spatially representative, a further complication may arise due to the temporal dynamics of groundwater flow, which may cause a concentration measurement to be not temporally representative. This paper presents results from a numerical modeling study focusing on temporal variations of the groundwater flow direction. "Measurements" are obtained from point ... is present, the concentration variability due to a fluctuating groundwater flow direction varies significantly within the control plane and between the different realizations. Determination of contaminant mass fluxes is also influenced by the temporal variability of the concentration measurement, especially ...
Paolo Casale
2007-06-01
Survival probabilities of loggerhead sea turtles (Caretta caretta) are estimated for the first time in the Mediterranean by analysing 3254 tagging and 134 re-encounter data from this region. Most of these turtles were juveniles found at sea. Re-encounters were live resightings and dead recoveries, and data were analysed with Barker's model, a modified version of the Cormack-Jolly-Seber model which can combine recapture, live resighting and dead recovery data. An annual survival probability of 0.73 (CI 95% = 0.67-0.78; n=3254) was obtained, and should be considered a conservative estimate due to an unknown, though not negligible, tag loss rate. This study makes a preliminary estimate of the survival probabilities of in-water developmental stages for the Mediterranean population of endangered loggerhead sea turtles and provides the first insights into the magnitude of the suspected human-induced mortality in the region. The model used here for the first time on sea turtles could be used to obtain survival estimates from other data sets with few or no true recaptures but with other types of re-encounter data, which are a common output of tagging programmes involving these wide-ranging animals.
Migliorati, Giovanni
2015-08-28
We study the accuracy of the discrete least-squares approximation on a finite dimensional space of a real-valued target function from noisy pointwise evaluations at independent random points distributed according to a given sampling probability measure. The convergence estimates are given in mean-square sense with respect to the sampling measure. The noise may be correlated with the location of the evaluation and may have nonzero mean (offset). We consider both cases of bounded or square-integrable noise / offset. We prove conditions between the number of sampling points and the dimension of the underlying approximation space that ensure a stable and accurate approximation. Particular focus is on deriving estimates in probability within a given confidence level. We analyze how the best approximation error and the noise terms affect the convergence rate and the overall confidence level achieved by the convergence estimate. The proofs of our convergence estimates in probability use arguments from the theory of large deviations to bound the noise term. Finally we address the particular case of multivariate polynomial approximation spaces with any density in the beta family, including uniform and Chebyshev.
O'Donnell, Matthew J.; Horton, Gregg E.; Letcher, Benjamin H.
2010-01-01
Portable passive integrated transponder (PIT) tag antenna systems can be valuable in providing reliable estimates of the abundance of tagged Atlantic salmon Salmo salar in small streams under a wide range of conditions. We developed and employed PIT tag antenna wand techniques in two controlled experiments and an additional case study to examine the factors that influenced our ability to estimate population size. We used Pollock's robust-design capture–mark–recapture model to obtain estimates of the probability of first detection (p), the probability of redetection (c), and abundance (N) in the two controlled experiments. First, we conducted an experiment in which tags were hidden in fixed locations. Although p and c varied among the three observers and among the three passes that each observer conducted, the estimates of N were identical to the true values and did not vary among observers. In the second experiment using free-swimming tagged fish, p and c varied among passes and time of day. Additionally, estimates of N varied between day and night and among age-classes but were within 10% of the true population size. In the case study, we used the Cormack–Jolly–Seber model to examine the variation in p, and we compared counts of tagged fish found with the antenna wand with counts collected via electrofishing. In that study, we found that although p varied for age-classes, sample dates, and time of day, antenna and electrofishing estimates of N were similar, indicating that population size can be reliably estimated via PIT tag antenna wands. However, factors such as the observer, time of day, age of fish, and stream discharge can influence the initial and subsequent detection probabilities.
Segalman, D.; Reese, G.
1998-09-01
The von Mises stress is often used as the metric for evaluating design margins, particularly for structures made of ductile materials. For deterministic loads, both static and dynamic, the calculation of von Mises stress is straightforward, as is the resulting calculation of reliability. For loads modeled as random processes, the task is different; the response to such loads is itself a random process and its properties must be determined in terms of those of both the loads and the system. This has been done in the past by Monte Carlo sampling of numerical realizations that reproduce the second order statistics of the problem. Here, the authors present a method that provides analytic expressions for the probability distributions of von Mises stress which can be evaluated efficiently and with good precision numerically. Further, this new approach has the important advantage of providing the asymptotic properties of the probability distribution.
Courey, Karim; Wright, Clara; Asfour, Shihab; Onar, Arzu; Bayliss, Jon; Ludwig, Larry
2009-01-01
In this experiment, an empirical model to quantify the probability of occurrence of an electrical short circuit from tin whiskers as a function of voltage was developed. This empirical model can be used to improve existing risk simulation models. FIB and TEM images of a tin whisker confirm the rare polycrystalline structure on one of the three whiskers studied. FIB cross-section of the card guides verified that the tin finish was bright tin.
Estimates for the Tail Probability of the Supremum of a Random Walk with Independent Increments
Yang YANG; Kaiyong WANG
2011-01-01
The authors investigate the tail probability of the supremum of a random walk with independent increments and obtain some equivalent assertions in the case that the increments are independent and identically distributed random variables with O-subexponential integrated distributions. A uniform upper bound is derived for the distribution of the supremum of a random walk with independent but non-identically distributed increments, whose tail distributions are dominated by a common tail distribution with an O-subexponential integrated distribution.
Information geometric algorithm for estimating switching probabilities in space-varying HMM.
Nascimento, Jacinto C; Barão, Miguel; Marques, Jorge S; Lemos, João M
2014-12-01
This paper proposes an iterative natural gradient algorithm to perform the optimization of switching probabilities in a space-varying hidden Markov model, in the context of human activity recognition in long-range surveillance. The proposed method is a version of the gradient method, developed under an information geometric viewpoint, where the usual Euclidean metric is replaced by a Riemannian metric on the space of transition probabilities. It is shown that the change in metric provides advantages over more traditional approaches, namely: 1) it turns the original constrained optimization into an unconstrained optimization problem; 2) the optimization behaves asymptotically as a Newton method and yields faster convergence than other methods for the same computational complexity; and 3) the natural gradient vector is an actual contravariant vector on the space of probability distributions for which an interpretation as the steepest descent direction is formally correct. Experiments on synthetic and real-world problems, focused on human activity recognition in long-range surveillance settings, show that the proposed methodology compares favorably with the state-of-the-art algorithms developed for the same purpose.
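A minimal illustration, not the paper's algorithm (which operates on space-varying HMM transition matrices), of why an information-geometric update on probabilities is convenient: the multiplicative, mirror-descent-style step below stays on the simplex automatically and agrees with the natural gradient to first order. The objective and numbers are made up.

```python
# Toy information-geometric (mirror-descent style) update of a probability vector.
# The objective (cross-entropy to a target distribution) is purely illustrative.
import numpy as np

target = np.array([0.7, 0.2, 0.1])
p = np.array([1 / 3, 1 / 3, 1 / 3])
step = 0.5

for _ in range(100):
    grad = -target / p                 # gradient of the cross-entropy -sum(target*log p)
    p = p * np.exp(-step * grad)       # multiplicative (natural-gradient-like) update
    p = p / p.sum()                    # renormalization keeps p on the simplex

print(np.round(p, 3))                  # approaches the target distribution
```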
Joint Spatio-Temporal Filtering Methods for DOA and Fundamental Frequency Estimation
Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Benesty, Jacob
2015-01-01
In this paper, spatio-temporal filtering methods are proposed for estimating the direction-of-arrival (DOA) and fundamental frequency of periodic signals, like those produced by the speech production system and many musical instruments using microphone arrays. This topic has quite recently received...
Improved estimation of the temporal decay function of in vivo metabolite signals
Van Ormondt, D.; De Beer, R.; Van der Veen, J.W.C.; Sima, D.M.; Graveron-Demilly, D.
2015-01-01
MRI-scanners enable non-invasive, in vivo quantitation of metabolites in, e.g., the brain of a patient. Among other things, this requires adequate estimation of the unknown temporal decay function of the complex-valued signal emanating from the metabolites. We propose a method to render a current de
Muhammad Qaiser Shahbaz
2007-01-01
A new approximate formula for the sampling variance of the Horvitz–Thompson (1952) estimator has been obtained. An empirical study of the approximate formula is given to assess its performance.
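For context, the estimator whose sampling variance is being approximated: for a population total Y = Σ y_i sampled with inclusion probabilities π_i (joint inclusion probabilities π_ij), the Horvitz–Thompson estimator and its classical variance are (standard results, independent of the new approximation proposed in the paper)

```latex
\hat{Y}_{HT} = \sum_{i \in s} \frac{y_i}{\pi_i},
\qquad
\operatorname{Var}\!\left(\hat{Y}_{HT}\right)
  = \sum_{i} \frac{1-\pi_i}{\pi_i}\, y_i^{2}
  + \sum_{i \ne j} \frac{\pi_{ij}-\pi_i \pi_j}{\pi_i \pi_j}\, y_i y_j .
```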
Anderson, Christian C; Bauer, Adam Q; Holland, Mark R; Pakula, Michal; Laugier, Pascal; Bretthorst, G Larry; Miller, James G
2010-11-01
Quantitative ultrasonic characterization of cancellous bone can be complicated by artifacts introduced by analyzing acquired data consisting of two propagating waves (a fast wave and a slow wave) as if only one wave were present. Recovering the ultrasonic properties of overlapping fast and slow waves could therefore lead to enhancement of bone quality assessment. The current study uses Bayesian probability theory to estimate phase velocity and normalized broadband ultrasonic attenuation (nBUA) parameters in a model of fast and slow wave propagation. Calculations are carried out using Markov chain Monte Carlo with simulated annealing to approximate the marginal posterior probability densities for parameters in the model. The technique is applied to simulated data, to data acquired on two phantoms capable of generating two waves in acquired signals, and to data acquired on a human femur condyle specimen. The models are in good agreement with both the simulated and experimental data, and the values of the estimated ultrasonic parameters fall within expected ranges.
LI Ying-feng; SHI Zhong-ke; ZHOU Zhi-na
2009-01-01
We conducted an on-site investigation of pedestrian violation behavior, examining how waiting time and the number of pedestrians in a group affect the decision to violate traffic rules; the probability of pedestrian violation rose with waiting time. A model for simulating mixed vehicle and pedestrian traffic was built, and the on-site investigation data were used to validate the model. When traffic volume is light, the error between the simulated values and the measured ones is 2.47%. When traffic volume is heavy, the error is 3.38%.
Howe, Chanelle J.; Cole, Stephen R.; Chmiel, Joan S.; Muñoz, Alvaro
2011-01-01
In time-to-event analyses, artificial censoring with correction for induced selection bias using inverse probability-of-censoring weights can be used to 1) examine the natural history of a disease after effective interventions are widely available, 2) correct bias due to noncompliance with fixed or dynamic treatment regimens, and 3) estimate survival in the presence of competing risks. Artificial censoring entails censoring participants when they meet a predefined study criterion, such as exp...
Comments on “Estimating Income Variances by Probability Sampling: A Case Study by Shah and Aleem”
Jamal Abdul Nasir
2012-06-01
In this article, we wish to comment on the recently published article "Shah, A.A. and Aleem, M. (2010). Estimating income variances by probability sampling: a case study. Pakistan Journal of Commerce and Social Sciences, 4(2), 194-201", offering suggestions for improvement as well as criticism of the paper, and thereby contributing to the journal's repute and ranking.
Probability Density Estimation for Non-flat Functions
汪洪桥; 蔡艳宁; 付光远; 王仕成
2016-01-01
Aiming at the probability density estimation problem for non-flat functions, this paper constructs a single-slack-factor multi-scale kernel support vector machine (SVM) probability density estimation model by improving the form of the constraint conditions of the traditional SVM model and introducing the multi-scale kernel method. In the model, a single slack factor instead of two types of slack factors is used to control the learning error of the SVM, which reduces the computational complexity of the model. At the same time, by introducing the multi-scale kernel method, the model can fit well both regions where the function changes sharply and regions where it changes smoothly. Several probability density estimation experiments with typical non-flat functions show that the single-slack-factor probability density estimation model learns faster than the common SVM model, and, compared with the single-kernel method, the multi-scale kernel SVM probability density estimation model achieves better estimation precision.
Probability-based Clustering and Its Application to WLAN Location Estimation
ZHANG Ming-hua; ZHANG Shen-sheng; CAO Jian
2008-01-01
Wireless local area networks (WLAN) localization based on received signal strength is becoming an important enabler of location based services. Limited efficiency and accuracy are disadvantages of deterministic location estimation techniques, while probabilistic techniques achieve good accuracy but incur more computational overhead. A clustering technique based on a Gaussian mixture model was presented to improve location determination efficiency. The proposed clustering algorithm reduces the number of candidate locations from the whole area to a single cluster. Within a cluster, an improved nearest neighbor algorithm was used to estimate user location from the signal strengths of additional access points. Experiments show that the location estimation time is greatly decreased while high accuracy can still be achieved.
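A minimal sketch of the two-stage idea (cluster the radio map, then locate within the most likely cluster) is given below; it uses scikit-learn's GaussianMixture and a weighted nearest-neighbour rule on synthetic fingerprints, so the path-loss model, the number of access points, and all numbers are illustrative assumptions rather than the authors' system.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic radio map: 200 calibration points, RSS from 6 access points,
# with the true (x, y) location of each fingerprint.
locations = rng.uniform(0, 50, size=(200, 2))
ap_xy = rng.uniform(0, 50, size=(6, 2))
def rss(points):                      # simple log-distance path-loss model
    d = np.linalg.norm(points[:, None, :] - ap_xy[None, :, :], axis=2)
    return -30 - 25 * np.log10(d + 1) + rng.normal(0, 2, size=(points.shape[0], 6))
fingerprints = rss(locations)

# Stage 1: cluster the radio map so that an online query is matched against
# one cluster instead of the whole area.
gmm = GaussianMixture(n_components=5, random_state=0).fit(fingerprints)
labels = gmm.predict(fingerprints)

# Stage 2: within the most likely cluster, a weighted nearest-neighbour
# estimate from the k closest fingerprints in signal space.
def locate(query, k=3):
    c = gmm.predict(query[None, :])[0]
    idx = np.where(labels == c)[0]
    d = np.linalg.norm(fingerprints[idx] - query, axis=1)
    nearest = idx[np.argsort(d)[:k]]
    w = 1.0 / (np.sort(d)[:k] + 1e-6)
    return (w[:, None] * locations[nearest]).sum(axis=0) / w.sum()

true_xy = np.array([[20.0, 35.0]])
print("estimated location:", locate(rss(true_xy)[0]))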
Åkerblom, Staffan; Nilsson, Mats; Yu, Jun; Ranneby, Bo; Johansson, Kjell
2012-02-01
Adequate temporal trend analysis of mercury (Hg) in freshwater ecosystems is critical to evaluate whether actions taken by society have affected Hg concentrations ([Hg]) in freshwater biota. This study examined temporal change in [Hg] in Northern pike (Esox lucius L.) in Swedish freshwater lakes between 1994 and 2006. To achieve this, lake-specific multiple-linear-regression models were used to estimate pike [Hg], including indicator variables representing time and fish weight and their interactions. This approach permitted estimation of the direction and magnitude of temporal changes in 25 lakes selected from the Swedish national database on Hg in freshwater biota. A significant increase was found in 36% of the studied lakes, with an average increase in pike [Hg] of 3.7±6.7% per year, which was positively correlated with total organic carbon. For lakes with a significant temporal change the dataset was based on a mean of 30 fish, while for lakes with no temporal change it was based on a mean of 13 fish.
Estimation of Temporal Gait Parameters Using a Wearable Microphone-Sensor-Based System.
Wang, Cheng; Wang, Xiangdong; Long, Zhou; Yuan, Jing; Qian, Yueliang; Li, Jintao
2016-12-17
Most existing wearable gait analysis methods focus on the analysis of data obtained from inertial sensors. This paper proposes a novel, low-cost, wireless and wearable gait analysis system which uses microphone sensors to collect footstep sound signals during walking. To the best of our knowledge, this is the first time a microphone sensor has been used as a wearable gait analysis device. Based on this system, a gait analysis algorithm for estimating the temporal parameters of gait is presented. The algorithm fully uses the fusion of the footstep sound signals from both feet and includes three stages: footstep detection, heel-strike and toe-on event detection, and calculation of gait temporal parameters. Experimental results show that, with a total of 240 data sequences and 1732 steps collected using three different gait data collection strategies from 15 healthy subjects, the proposed system achieves an average 0.955 F1-measure for footstep detection, an average 94.52% accuracy rate for heel-strike detection and a 94.25% accuracy rate for toe-on detection. Using these detection results, nine temporal gait parameters are calculated, and these parameters are consistent with normal gait temporal parameters and with results calculated from labeled data. The results verify the effectiveness of the proposed system and algorithm for temporal gait parameter estimation.
Estimating Effect Sizes and Expected Replication Probabilities from GWAS Summary Statistics
Holland, Dominic; Wang, Yunpeng; Thompson, Wesley K;
2016-01-01
for estimating the degree of polygenicity of the phenotype and predicting the proportion of chip heritability explainable by genome-wide significant SNPs in future studies with larger sample sizes. We apply the model to recent GWAS of schizophrenia (N = 82,315) and putamen volume (N = 12,596), with approximately...... based on linear-regression association coefficients. We estimate the polygenicity of schizophrenia to be 0.037 and the putamen to be 0.001, while the respective sample sizes required to approach fully explaining the chip heritability are 10^6 and 10^5. The model can be extended to incorporate prior...
Jalayer, Fatemeh; Ebrahimian, Hossein
2014-05-01
Introduction The first few days elapsed after the occurrence of a strong earthquake and in the presence of an ongoing aftershock sequence are quite critical for emergency decision-making purposes. Epidemic Type Aftershock Sequence (ETAS) models are used frequently for forecasting the spatio-temporal evolution of seismicity in the short-term (Ogata, 1988). The ETAS models are epidemic stochastic point process models in which every earthquake is a potential triggering event for subsequent earthquakes. The ETAS model parameters are usually calibrated a priori and based on a set of events that do not belong to the on-going seismic sequence (Marzocchi and Lombardi 2009). However, adaptive model parameter estimation, based on the events in the on-going sequence, may have several advantages such as, tuning the model to the specific sequence characteristics, and capturing possible variations in time of the model parameters. Simulation-based methods can be employed in order to provide a robust estimate for the spatio-temporal seismicity forecasts in a prescribed forecasting time interval (i.e., a day) within a post-main shock environment. This robust estimate takes into account the uncertainty in the model parameters expressed as the posterior joint probability distribution for the model parameters conditioned on the events that have already occurred (i.e., before the beginning of the forecasting interval) in the on-going seismic sequence. The Markov Chain Monte Carlo simulation scheme is used herein in order to sample directly from the posterior probability distribution for ETAS model parameters. Moreover, the sequence of events that is going to occur during the forecasting interval (and hence affecting the seismicity in an epidemic type model like ETAS) is also generated through a stochastic procedure. The procedure leads to two spatio-temporal outcomes: (1) the probability distribution for the forecasted number of events, and (2) the uncertainty in estimating the
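To make the ingredients concrete, the following sketch evaluates a purely temporal ETAS log-likelihood and draws posterior samples for two of its parameters with a random-walk Metropolis sampler. The synthetic catalogue, the fixed Omori parameters (c, p, alpha), and the flat priors are illustrative assumptions, not the authors' full spatio-temporal implementation.

import numpy as np

rng = np.random.default_rng(5)

# Illustrative aftershock catalogue: occurrence times (days after the main
# shock) and magnitudes; a real application would use the ongoing sequence.
times = np.sort(rng.uniform(0, 10, 60))
mags = 3.0 + rng.exponential(0.4, size=60)
m0, c, p, alpha, T = 3.0, 0.01, 1.1, 1.0, 10.0

def log_likelihood(mu, K):
    """Temporal ETAS log-likelihood with fixed (c, p, alpha):
    lambda(t) = mu + sum_{t_i < t} K exp(alpha (m_i - m0)) (t - t_i + c)^(-p)."""
    if mu <= 0 or K <= 0:
        return -np.inf
    ll = 0.0
    for j, t in enumerate(times):
        dt = t - times[:j]
        lam = mu + np.sum(K * np.exp(alpha * (mags[:j] - m0)) * (dt + c) ** (-p))
        ll += np.log(lam)
    # Integral of the intensity over [0, T] (closed form for the Omori kernel).
    kernel_int = ((T - times + c) ** (1 - p) - c ** (1 - p)) / (1 - p)
    ll -= mu * T + np.sum(K * np.exp(alpha * (mags - m0)) * kernel_int)
    return ll

# Random-walk Metropolis over (mu, K), flat priors on the positive axis.
theta = np.array([1.0, 0.05])
samples = []
for _ in range(3000):
    prop = theta + rng.normal(0, [0.1, 0.01])
    if np.log(rng.uniform()) < log_likelihood(*prop) - log_likelihood(*theta):
        theta = prop
    samples.append(theta.copy())
post = np.array(samples[1000:])
print("posterior means (mu, K):", post.mean(axis=0).round(3))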
Edmonds, L. D.
2016-01-01
Since advancing technology has been producing smaller structures in electronic circuits, the floating gates in modern flash memories are becoming susceptible to prompt charge loss from ionizing radiation environments found in space. A method for estimating the risk of a charge-loss event is given.
Estimating the benefits of single value and probability forecasting for flood warning
J. S. Verkade
2011-12-01
Full Text Available Flood risk can be reduced by means of flood forecasting, warning and response systems (FFWRS). These systems include a forecasting sub-system which is imperfect, meaning that inherent uncertainties in hydrological forecasts may result in false alarms and missed events. This forecasting uncertainty decreases the potential reduction of flood risk, but is seldom accounted for in estimates of the benefits of FFWRSs. In the present paper, a method to estimate the benefits of (imperfect) FFWRSs in reducing flood risk is presented. The method is based on a hydro-economic model of expected annual damage (EAD) due to flooding, combined with the concept of Relative Economic Value (REV). The estimated benefits include not only the reduction of flood losses due to a warning response, but also consider the costs of the warning response itself, as well as the costs associated with forecasting uncertainty. The method allows for estimation of the benefits of FFWRSs that use either deterministic or probabilistic forecasts. Through application to a case study, it is shown that FFWRSs using a probabilistic forecast have the potential to realise higher benefits at all lead-times. However, it is also shown that provision of warning at increasing lead-time does not necessarily lead to an increasing reduction of flood risk, but rather that an optimal lead-time at which warnings are provided can be established as a function of forecast uncertainty and the cost-loss ratio of the user receiving and responding to the warning.
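A compact sketch of the underlying cost-loss calculation is given below, using the standard relative-economic-value formulation (expense under the warning system compared with climatology and a perfect forecast). The hit rate, false-alarm rate, cost-loss ratio and event frequency are illustrative numbers, not values from the case study.

def relative_economic_value(hit_rate, false_alarm_rate, r, s):
    """Relative economic value of a warning system under the standard
    cost-loss model: r = cost/loss ratio of the user, s = climatological
    frequency of the flood event, H/F from the warning contingency table."""
    e_climate = min(r, s)                       # best action without forecasts
    e_perfect = s * r                           # expense with perfect forecasts
    e_forecast = (hit_rate * s * r              # hits: protection cost paid
                  + (1 - hit_rate) * s          # misses: full loss
                  + false_alarm_rate * (1 - s) * r)  # false alarms: wasted cost
    return (e_climate - e_forecast) / (e_climate - e_perfect)

# Illustrative numbers only: a warning system with H = 0.8, F = 0.1,
# an event frequency of 5% and a user cost-loss ratio of 0.02.
print(round(relative_economic_value(0.8, 0.1, r=0.02, s=0.05), 3))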
D. L. Bricker
1997-01-01
Full Text Available The problem of assigning cell probabilities to maximize a multinomial likelihood with order restrictions on the probabilities and/or restrictions on the local odds ratios is modeled as a posynomial geometric program (GP), a class of nonlinear optimization problems with a well-developed duality theory and collection of algorithms. (Local odds ratios provide a measure of association between categorical random variables.) A constrained multinomial MLE example from the literature is solved, and the quality of the solution is compared with that obtained by the iterative method of El Barmi and Dykstra, which is based upon Fenchel duality. Exploiting the proximity of the GP model of MLE problems to linear programming (LP) problems, we also describe as an alternative, in the absence of special-purpose GP software, an easily implemented successive LP approximation method for solving this class of MLE problems using one of the readily available LP solvers.
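The sketch below solves a small order-restricted multinomial MLE of the kind discussed above with a general-purpose nonlinear solver rather than the geometric-programming or successive-LP machinery of the paper; the cell counts and the particular order restriction are illustrative assumptions.

import numpy as np
from scipy.optimize import minimize

counts = np.array([12, 18, 25, 30, 15])          # illustrative cell counts
k = counts.size

def neg_loglik(p):
    return -np.sum(counts * np.log(np.clip(p, 1e-12, None)))

# Order restriction: the first four cell probabilities are non-decreasing;
# probabilities must also sum to one.  (The paper treats such problems as a
# posynomial geometric program; here a general-purpose solver stands in.)
constraints = [{"type": "eq",   "fun": lambda p: p.sum() - 1.0},
               {"type": "ineq", "fun": lambda p: np.diff(p[:4])}]

res = minimize(neg_loglik, x0=np.full(k, 1.0 / k), method="SLSQP",
               bounds=[(1e-9, 1.0)] * k, constraints=constraints)
print(np.round(res.x, 4))   # order-restricted maximum likelihood estimate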
Estimated Probability of Traumatic Abdominal Injury During an International Space Station Mission
Lewandowski, Beth E.; Brooker, John E.; Weavr, Aaron S.; Myers, Jerry G., Jr.; McRae, Michael P.
2013-01-01
The Integrated Medical Model (IMM) is a decision support tool that is useful to spaceflight mission planners and medical system designers when assessing risks and optimizing medical systems. The IMM project maintains a database of medical conditions that could occur during a spaceflight. The IMM project is in the process of assigning an incidence rate, the associated functional impairment, and a best and a worst case end state for each condition. The purpose of this work was to develop the IMM Abdominal Injury Module (AIM). The AIM calculates an incidence rate of traumatic abdominal injury per person-year of spaceflight on the International Space Station (ISS). The AIM was built so that the probability of traumatic abdominal injury during one year on ISS could be predicted. This result will be incorporated into the IMM Abdominal Injury Clinical Finding Form and used within the parent IMM model.
Anonymous
2007-01-01
Normalized difference vegetation index (NDVI) data, obtained from remote sensing, are essential in the Shuttleworth-Wallace (S-W) model for estimation of evapotranspiration. In order to study the effect of the temporal resolution of NDVI on potential evapotranspiration (PET) estimation and hydrological model performance, monthly and 10-day NDVI data sets were used to estimate potential evapotranspiration from January 1985 to December 1987 in the Huangnizhuang catchment, Anhui Province, China. The differences between the two calculation results were analyzed and used to drive the block-wise use of the TOPMODEL with the Muskingum-Cunge routing (BTOPMC) model to test the effect on model performance. The results show that both annual and monthly PETs estimated by 10-day NDVI are lower than those estimated by monthly NDVI. Annual PET from the vegetation root zone (PETr) is 9.77%-13.64% lower and monthly PETr is 3.28%-17.44% lower over the whole basin. PET from the vegetation interception (PETi) shows the same trend as PETr. In addition, the temporal resolution of NDVI has more effect on PETr in summer and on PETi in winter. The correlation between PETr estimated by 10-day NDVI and pan measurement (R² = 0.835) is better than that between monthly NDVI and pan measurement (R² = 0.775). The two potential evapotranspiration estimates were used to drive the BTOPMC model and calibrate parameters, and model performance was found to be similar. In summary, the effect of the temporal resolution of NDVI on potential evapotranspiration estimation is significant, but its effect on hydrological model performance is trivial.
Inverse Probability of Censoring Weighted Estimates of Kendall’s τ for Gap Time Analyses
Lakhal-Chaieb, Lajmi; Cook, Richard J.; Lin, Xihong
2010-01-01
In life history studies interest often lies in the analysis of the inter-event, or gap times and the association between event times. Gap time analyses are challenging however, even when the length of follow-up is determined independently of the event process, since associations between gap times induce dependent censoring for second and subsequent gap times. This paper discusses nonparametric estimation of the association between consecutive gap times based on Kendall’s τ in the presence of ...
Lagroix, Hayley E P; Yanko, Matthew R; Spalek, Thomas M
2012-07-01
Many cognitive and perceptual phenomena, such as iconic memory and temporal integration, require brief displays. A critical requirement is that the image not remain visible after its offset. It is commonly believed that liquid crystal displays (LCD) are unsuitable because of their poor temporal response characteristics relative to cathode-ray-tube (CRT) screens. Remarkably, no psychophysical estimates of visible persistence are available to verify this belief. A series of experiments in which white stimuli on a black background produced discernible persistence on CRT but not on LCD screens, during both dark- and light-adapted viewing, falsified this belief. Similar estimates using black stimuli on a white background produced no visible persistence on either screen. That said, photometric measurements are available that seem to confirm the poor temporal characteristics of LCD screens, but they were obtained before recent advances in LCD technology. Using current LCD screens, we obtained photometric estimates of rise time far shorter (1-6 ms) than earlier estimates (20-150 ms), and approaching those of CRTs (<1 ms). We conclude that LCDs are preferable to CRTs when visible persistence is a concern, except when black-on-white displays are used.
Scheltens, P; Leys, D; Barkhof, F; Huglo, D; Weinstein, H C; Vermersch, P; Kuiper, M; Steinling, M; Wolters, E C; Valk, J
1992-10-01
Magnetic resonance imaging (MRI) has shown a great reduction in medial temporal lobe and hippocampal volume of patients with Alzheimer's disease as compared to controls. Quantitative volumetric measurements are not yet available for routine clinical use. We investigated whether visual assessment of medial temporal lobe atrophy (MTA) on plain MRI films could distinguish patients with Alzheimer's disease (n = 21) from age matched controls (n = 21). The degree of MTA was ascertained with a ranking procedure and validated by linear measurements of the medial temporal lobe including the hippocampal formation and surrounding spaces occupied by cerebrospinal fluid. Patients with Alzheimer's disease showed a significantly higher degree of subjectively assessed MTA than controls (p = 0.0005). Linear measurements correlated highly with subjective assessment of MTA and also showed significant differences between groups. Ventricular indices did not differ significantly between groups. In Alzheimer's disease patients the degree of MTA correlated significantly with scores on the mini-mental state examination and memory tests, but poorly with mental speed tests. This study shows that MTA may be assessed quickly and easily with plain MRI films. MTA shown on MRI strongly supports the clinical diagnosis of Alzheimer's disease, is related to memory function, and seems to occur earlier in the disease process than does generalised brain atrophy.
Vacca, Alessandro; Prato, Carlo Giacomo; Meloni, Italo
2015-01-01
is the dependency of the parameter estimates from the choice set generation technique. Bias introduced in model estimation has been corrected only for the random walk algorithm, which has problematic applicability to large-scale networks. This study proposes a correction term for the sampling probability of routes...... extracted with stochastic route generation. The term is easily applicable to large-scale networks and various environments, given its dependence only on a random number generator and the Dijkstra shortest path algorithm. The implementation for revealed preferences data, which consist of actual route choices...... collected in Cagliari, Italy, shows the feasibility of generating routes stochastically in a high-resolution network and calculating the correction factor. The model estimation with and without correction illustrates how the correction not only improves the goodness of fit but also turns illogical signs...
Howe, Chanelle J; Cole, Stephen R; Chmiel, Joan S; Muñoz, Alvaro
2011-03-01
In time-to-event analyses, artificial censoring with correction for induced selection bias using inverse probability-of-censoring weights can be used to 1) examine the natural history of a disease after effective interventions are widely available, 2) correct bias due to noncompliance with fixed or dynamic treatment regimens, and 3) estimate survival in the presence of competing risks. Artificial censoring entails censoring participants when they meet a predefined study criterion, such as exposure to an intervention, failure to comply, or the occurrence of a competing outcome. Inverse probability-of-censoring weights use measured common predictors of the artificial censoring mechanism and the outcome of interest to determine what the survival experience of the artificially censored participants would be had they never been exposed to the intervention, complied with their treatment regimen, or not developed the competing outcome. Even if all common predictors are appropriately measured and taken into account, in the context of small sample size and strong selection bias, inverse probability-of-censoring weights could fail because of violations in assumptions necessary to correct selection bias. The authors used an example from the Multicenter AIDS Cohort Study, 1984-2008, regarding estimation of long-term acquired immunodeficiency syndrome-free survival to demonstrate the impact of violations in necessary assumptions. Approaches to improve correction methods are discussed.
Skousgaard, Søren Glud; Hjelmborg, Jacob; Skytthe, Axel;
2015-01-01
INTRODUCTION: Primary hip osteoarthritis, radiographic as well as symptomatic, is highly associated with increasing age in both genders. However, little is known about the mechanisms behind this, in particular if this increase is caused by genetic factors. This study examined the risk and heritability of primary osteoarthritis of the hip leading to a total hip arthroplasty, and if this heritability increased with increasing age. METHODS: In a nationwide population-based follow-up study 118,788 twins from the Danish Twin Register and 90,007 individuals from the Danish Hip Arthroplasty Register...... not have had a total hip arthroplasty at the time of follow-up. RESULTS: There were 94,063 twins eligible for analyses, comprising 835 cases of 36 concordant and 763 discordant twin pairs. The probability increased particularly from 50 years of age. After sex and age adjustment a significant additive...
Probable maximum precipitation 24 hours estimation: A case study of Zanjan province of Iran
Azim Shirdeli
2012-10-01
Full Text Available One of the primary concerns in designing civil structures such as water storage dams and irrigation and drainage networks is to find an economical scale based on the possibility of natural incidents such as floods, earthquakes, etc. Probable maximum precipitation (PMP) is one of the well-known methods that helps design a civil structure properly. In this paper, we study the maximum one-day precipitation using 17 to 50 years of records at 13 stations located in the province of Zanjan, Iran. The study uses two Hershfield methods, where the first one yields values of 18.17 to 18.48, with PMP24 between 170.14 mm and 255.28 mm. The second method yields values between 2.29 and 4.95, with PMP24 between 62.33 mm and 92.08 mm. In addition, when the out-of-range data were removed from the second method, values between 2.29 and 4.31 were obtained, with PMP24 between 76.08 mm and 117.28 mm. The preliminary results indicate that the second Hershfield method provides more stable results than the first one.
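For reference, the core of Hershfield's statistical method is the frequency-factor relation PMP = mean + Km * (standard deviation), with Km obtained from the largest observation after excluding it from the mean and standard deviation. The sketch below applies this relation to an invented annual-maximum series; the station data are illustrative, and the adjustment factors for record length and observation interval used in practice are omitted.

import numpy as np

# Annual maximum one-day precipitation series for one station (mm),
# illustrative values only.
x = np.array([41, 55, 38, 62, 47, 71, 53, 44, 66, 58, 49, 80, 52, 45, 60])

# Hershfield's statistical method: PMP = mean + Km * std, where the
# frequency factor Km is obtained from the largest observation after
# removing it from the mean and standard deviation.
x_max = x.max()
rest = np.delete(x, x.argmax())
km = (x_max - rest.mean()) / rest.std(ddof=1)
pmp_24h = x.mean() + km * x.std(ddof=1)
print(f"Km = {km:.2f}, 24-hour PMP approx. {pmp_24h:.1f} mm")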
A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities
Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.
1999-01-01
A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to failure threshold as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M -0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is approximately 2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
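The sketch below evaluates the BPT density for a given mean recurrence time and aperiodicity and integrates it to obtain the conditional probability of an event in a coming interval, given the time already elapsed; the recurrence parameters and elapsed time are illustrative numbers, not the Parkfield calculation itself.

import numpy as np
from scipy.integrate import quad

def bpt_pdf(t, mu, alpha):
    """Brownian passage time density with mean mu and aperiodicity alpha."""
    return np.sqrt(mu / (2 * np.pi * alpha**2 * t**3)) * \
           np.exp(-(t - mu)**2 / (2 * mu * alpha**2 * t))

def cond_prob(t_elapsed, dt, mu, alpha):
    """P(event in (t_elapsed, t_elapsed + dt] | no event by t_elapsed)."""
    num, _ = quad(bpt_pdf, t_elapsed, t_elapsed + dt, args=(mu, alpha))
    surv, _ = quad(bpt_pdf, t_elapsed, np.inf, args=(mu, alpha))
    return num / surv

# Illustrative numbers only (not the Parkfield analysis itself): mean
# recurrence 22 years, generic aperiodicity 0.5, 30 years already elapsed.
print(f"P(event in the next year) = {cond_prob(30, 1, mu=22, alpha=0.5):.3f}")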
Estimating probabilities of recession in real time using GDP and GDI
Nalewaik, Jeremy J.
2006-01-01
This work estimates Markov switching models on real time data and shows that the growth rate of gross domestic income (GDI), deflated by the GDP deflator, has done a better job recognizing the start of recessions than has the growth rate of real GDP. This result suggests that placing an increased focus on GDI may be useful in assessing the current state of the economy. In addition, the paper shows that the definition of a low-growth phase in the Markov switching models has changed over the pa...
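A minimal sketch of how filtered recession probabilities are produced by such a model is given below (the Hamilton filter for a two-state Gaussian Markov-switching model). The growth series, state means and variances, and transition matrix are assumed illustrative values, whereas the paper estimates them from real-time GDP and GDI data.

import numpy as np

# Quarterly growth rates (annualized %, illustrative series only).
growth = np.array([3.1, 2.4, 2.8, 0.5, -1.2, -2.0, -0.8, 1.5, 2.6, 3.3])

# Assumed two-state Gaussian Markov-switching parameters:
# state 0 = expansion, state 1 = recession (low-growth phase).
mu = np.array([3.0, -1.5])
sigma = np.array([1.5, 1.5])
P = np.array([[0.95, 0.05],     # transition matrix, rows = current state
              [0.20, 0.80]])

def normal_pdf(y, m, s):
    return np.exp(-0.5 * ((y - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Hamilton filter: recursively update P(state_t | data up to t).
prob = np.array([0.9, 0.1])              # initial state probabilities
recession_prob = []
for y in growth:
    predicted = P.T @ prob                # one-step-ahead state probabilities
    joint = predicted * normal_pdf(y, mu, sigma)
    prob = joint / joint.sum()            # filtered probabilities
    recession_prob.append(prob[1])

print(np.round(recession_prob, 3))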
Cheek, Kim A.
2016-09-01
Ideas about temporal (and spatial) scale impact students' understanding across science disciplines. Learners have difficulty comprehending the long time periods associated with natural processes because they have no referent for the magnitudes involved. When people have a good "feel" for quantity, they estimate cardinal number magnitude linearly. Magnitude estimation errors can be explained by confusion about the structure of the decimal number system, particularly in terms of how powers of ten are related to one another. Indonesian children regularly use large currency units. This study investigated if they estimate long time periods accurately and if they estimate those time periods the same way they estimate analogous currency units. Thirty-nine children from a private International Baccalaureate school estimated temporal magnitudes up to 10,000,000,000 years in a two-part study. Artifacts children created were compared to theoretical model predictions previously used in number magnitude estimation studies as reported by Landy et al. (Cognitive Science 37:775-799, 2013). Over one third estimated the magnitude of time periods up to 10,000,000,000 years linearly, exceeding what would be expected based upon prior research with children this age who lack daily experience with large quantities. About half treated successive powers of ten as a count sequence instead of multiplicatively related when estimating magnitudes of time periods. Children generally estimated the magnitudes of long time periods and familiar, analogous currency units the same way. Implications for ways to improve the teaching and learning of this crosscutting concept/overarching idea are discussed.
Courey, Karim; Wright, Clara; Asfour, Shihab; Bayliss, Jon; Ludwig, Larry
2008-01-01
Existing risk simulations make the assumption that when a free tin whisker has bridged two adjacent exposed electrical conductors, the result is an electrical short circuit. This conservative assumption is made because shorting is a random event that has a currently unknown probability associated with it. Due to contact resistance, electrical shorts may not occur at lower voltage levels. In this experiment, we study the effect of varying voltage on the breakdown of the contact resistance which leads to a short circuit. From this data, we can estimate the probability of an electrical short, as a function of voltage, given that a free tin whisker has bridged two adjacent exposed electrical conductors. In addition, three tin whiskers grown from the same Space Shuttle Orbiter card guide used in the aforementioned experiment were cross sectioned and studied using a focused ion beam (FIB).
Tillery, Anne C.; Matherne, Anne Marie; Verdin, Kristine L.
2012-01-01
In May and June 2012, the Whitewater-Baldy Fire burned approximately 1,200 square kilometers (300,000 acres) of the Gila National Forest, in southwestern New Mexico. The burned landscape is now at risk of damage from postwildfire erosion, such as that caused by debris flows and flash floods. This report presents a preliminary hazard assessment of the debris-flow potential from 128 basins burned by the Whitewater-Baldy Fire. A pair of empirical hazard-assessment models developed by using data from recently burned basins throughout the intermountain Western United States was used to estimate the probability of debris-flow occurrence and volume of debris flows along the burned area drainage network and for selected drainage basins within the burned area. The models incorporate measures of areal burned extent and severity, topography, soils, and storm rainfall intensity to estimate the probability and volume of debris flows following the fire. In response to the 2-year-recurrence, 30-minute-duration rainfall, modeling indicated that four basins have high probabilities of debris-flow occurrence (greater than or equal to 80 percent). For the 10-year-recurrence, 30-minute-duration rainfall, an additional 14 basins are included, and for the 25-year-recurrence, 30-minute-duration rainfall, an additional eight basins, 20 percent of the total, have high probabilities of debris-flow occurrence. In addition, probability analysis along the stream segments can identify specific reaches of greatest concern for debris flows within a basin. Basins with a high probability of debris-flow occurrence were concentrated in the west and central parts of the burned area, including tributaries to Whitewater Creek, Mineral Creek, and Willow Creek. Estimated debris-flow volumes ranged from about 3,000-4,000 cubic meters (m3) to greater than 500,000 m3 for all design storms modeled. Drainage basins with estimated volumes greater than 500,000 m3 included tributaries to Whitewater Creek, Willow
Lu, Dan; Zhang, Guannan; Webster, Clayton; Barbier, Charlotte
2016-12-01
In this work, we develop an improved multilevel Monte Carlo (MLMC) method for estimating cumulative distribution functions (CDFs) of a quantity of interest, coming from numerical approximation of large-scale stochastic subsurface simulations. Compared with Monte Carlo (MC) methods, that require a significantly large number of high-fidelity model executions to achieve a prescribed accuracy when computing statistical expectations, MLMC methods were originally proposed to significantly reduce the computational cost with the use of multifidelity approximations. The improved performance of the MLMC methods depends strongly on the decay of the variance of the integrand as the level increases. However, the main challenge in estimating CDFs is that the integrand is a discontinuous indicator function whose variance decays slowly. To address this difficult task, we approximate the integrand using a smoothing function that accelerates the decay of the variance. In addition, we design a novel a posteriori optimization strategy to calibrate the smoothing function, so as to balance the computational gain and the approximation error. The combined proposed techniques are integrated into a very general and practical algorithm that can be applied to a wide range of subsurface problems for high-dimensional uncertainty quantification, such as a fine-grid oil reservoir model considered in this effort. The numerical results reveal that with the use of the calibrated smoothing function, the improved MLMC technique significantly reduces the computational complexity compared to the standard MC approach. Finally, we discuss several factors that affect the performance of the MLMC method and provide guidance for effective and efficient usage in practice.
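The sketch below shows the structure of an MLMC estimator for a CDF value with a smoothed indicator: a logistic function of fixed width replaces the discontinuous indicator, and level differences are estimated on coupled samples. The toy forward model, the fixed smoothing width, and the sample-size schedule are illustrative assumptions; in particular, the paper calibrates the smoothing function a posteriori rather than fixing it.

import numpy as np

rng = np.random.default_rng(0)

def model(z, level):
    """Toy 'forward model' whose discretization error shrinks like 2^-level.
    A real application would be a level-dependent reservoir/PDE solver."""
    return (z**3 + z) * (1.0 + 2.0 ** (-(level + 1)))

def smooth_indicator(q, x0, delta=0.05):
    """Logistic smoothing of the indicator 1{q <= x0}; delta sets the width."""
    return 1.0 / (1.0 + np.exp((q - x0) / delta))

def mlmc_cdf(x0, n_levels=5, n0=100_000):
    """MLMC estimate of F(x0) = P(Q <= x0) with decaying sample sizes per level."""
    estimate = 0.0
    for level in range(n_levels):
        n = max(n0 // 2**level, 1000)
        z = rng.standard_normal(n)          # shared inputs couple the two levels
        fine = smooth_indicator(model(z, level), x0)
        if level == 0:
            estimate += fine.mean()
        else:
            coarse = smooth_indicator(model(z, level - 1), x0)
            estimate += (fine - coarse).mean()
    return estimate

x0 = 1.0
print(f"MLMC estimate of F({x0}) = {mlmc_cdf(x0):.4f}")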
Berrino, Jacopo; Berrino, Franco; Francisci, Silvia; Peissel, Bernard; Azzollini, Jacopo; Pensotti, Valeria; Radice, Paolo; Pasanisi, Patrizia; Manoukian, Siranoush
2015-03-01
We have designed the user-friendly COS software with the intent to improve estimation of the probability of a family carrying a deleterious BRCA gene mutation. The COS software is similar to the widely-used Bayesian-based BRCAPRO software, but it incorporates improved assumptions on cancer incidence in women with and without a deleterious mutation, takes into account relatives up to the fourth degree, and allows researchers to consider a hypothetical third gene or a polygenic model of inheritance. Since breast cancer incidence and penetrance increase over generations, we estimated birth-cohort-specific incidence and penetrance curves. We estimated breast and ovarian cancer penetrance in 384 BRCA1 and 229 BRCA2 mutated families. We tested the COS performance in 436 Italian breast/ovarian cancer families including 79 with BRCA1 and 27 with BRCA2 mutations. The area under the receiver operating characteristic curve (AUROC) was 84.4%. The best probability threshold for offering the test was 22.9%, with sensitivity 80.2% and specificity 80.3%. Notwithstanding very different assumptions, COS results were similar to those of BRCAPRO v6.0.
Embedded Vehicle Speed Estimation System Using an Asynchronous Temporal Contrast Vision Sensor
D. Bauer
2007-01-01
Full Text Available This article presents an embedded multilane traffic data acquisition system based on an asynchronous temporal contrast vision sensor, and algorithms for vehicle speed estimation developed to make efficient use of the asynchronous high-precision timing information delivered by this sensor. The vision sensor features high temporal resolution with a latency of less than 100 μs, wide dynamic range of 120 dB of illumination, and zero-redundancy, asynchronous data output. For data collection, processing and interfacing, a low-cost digital signal processor is used. The speed of the detected vehicles is calculated from the vision sensor's asynchronous temporal contrast event data. We present three different algorithms for velocity estimation and evaluate their accuracy by means of calibrated reference measurements. The error of the speed estimation of all algorithms is near zero mean and has a standard deviation better than 3% for both traffic flow directions. The results and the accuracy limitations as well as the combined use of the algorithms in the system are discussed.
Gardi, J E; Nyengaard, J R; Gundersen, H J G
2008-03-01
The proportionator is a novel and radically different approach to sampling with microscopes based on the well-known statistical theory (probability proportional to size-PPS sampling). It uses automatic image analysis, with a large range of options, to assign to every field of view in the section a weight proportional to some characteristic of the structure under study. A typical and very simple example, examined here, is the amount of color characteristic for the structure, marked with a stain with known properties. The color may be specific or not. In the recorded list of weights in all fields, the desired number of fields is sampled automatically with probability proportional to the weight and presented to the expert observer. Using any known stereological probe and estimator, the correct count in these fields leads to a simple, unbiased estimate of the total amount of structure in the sections examined, which in turn leads to any of the known stereological estimates including size distributions and spatial distributions. The unbiasedness is not a function of the assumed relation between the weight and the structure, which is in practice always a biased relation from a stereological (integral geometric) point of view. The efficiency of the proportionator depends, however, directly on this relation to be positive. The sampling and estimation procedure is simulated in sections with characteristics and various kinds of noises in possibly realistic ranges. In all cases examined, the proportionator is 2-15-fold more efficient than the common systematic, uniformly random sampling. The simulations also indicate that the lack of a simple predictor of the coefficient of error (CE) due to field-to-field variation is a more severe problem for uniform sampling strategies than anticipated. Because of its entirely different sampling strategy, based on known but non-uniform sampling probabilities, the proportionator for the first time allows the real CE at the section level to
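A stripped-down sketch of the sampling idea follows: fields are drawn with probability proportional to a cheap automatic weight, the expensive counting is done only in the sampled fields, and each count is divided by its sampling probability (a Hansen-Hurwitz-type estimator with replacement). The synthetic weights and counts are illustrative, and the real proportionator works on microscope fields with stereological probes rather than raw counts.

import numpy as np

rng = np.random.default_rng(2)

# Synthetic section: 400 fields of view.  'weight' is the cheap automatic
# image-analysis signal (e.g. amount of specific colour); 'count' is the
# expensive expert count, here correlated with the weight but noisy.
n_fields = 400
weight = rng.gamma(shape=2.0, scale=1.0, size=n_fields) + 0.1
count = rng.poisson(3.0 * weight)
true_total = count.sum()

# Proportionator-style sampling: draw fields with probability proportional
# to their weight, count only the sampled fields, and correct each count by
# its sampling probability.
n_sample = 20
p = weight / weight.sum()
sampled = rng.choice(n_fields, size=n_sample, replace=True, p=p)
pps_estimate = np.mean(count[sampled] / p[sampled])

# Comparison: simple uniform random sampling of the same number of fields.
uniform = rng.choice(n_fields, size=n_sample, replace=True)
srs_estimate = n_fields * count[uniform].mean()

print(true_total, round(pps_estimate), round(srs_estimate))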
Kronbak, Lone Grønbæk; Jensen, Frank
Within the EU Sixth Framework Programme an ongoing research project, COBECOS, has developed a theory of enforcement and a software code for computer modeling of different fisheries with fisheries enforcement cases. The case of the Danish fishery for Nephrops faces problems with landings...... on fishery enforcement from the COBECOS project to a specific case. It is done by estimating functional relationships describing 1) the fisheries benefit function, 2) the shadow value of biomass, and 3) the connection between the probability of being detected and apprehended for different enforcement...
Zamir, Ehud; Kong, George Y.X.; Kowalski, Tanya; Coote, Michael; Ang, Ghee Soon
2016-01-01
Purpose We hypothesize that: (1) Anterior chamber depth (ACD) is correlated with the relative anteroposterior position of the pupillary image, as viewed from the temporal side. (2) Such a correlation may be used as a simple quantitative tool for estimation of ACD. Methods Two hundred sixty-six phakic eyes had lateral digital photographs taken from the temporal side, perpendicular to the visual axis, and underwent optical biometry (Nidek AL scanner). The relative anteroposterior position of the pupillary image was expressed using the ratio between: (1) lateral photographic temporal limbus to pupil distance (“E”) and (2) lateral photographic temporal limbus to cornea distance (“Z”). In the first chronological half of patients (Correlation Series), E:Z ratio (EZR) was correlated with optical biometric ACD. The correlation equation was then used to predict ACD in the second half of patients (Prediction Series) and compared to their biometric ACD for agreement analysis. Results A strong linear correlation was found between EZR and ACD, R = −0.91, R2 = 0.81. Bland-Altman analysis showed good agreement between predicted ACD using this method and the optical biometric ACD. The mean error was −0.013 mm (range −0.377 to 0.336 mm), standard deviation 0.166 mm. The 95% limits of agreement were ±0.33 mm. Conclusions Lateral digital photography and EZR calculation is a novel method to quantitatively estimate ACD, requiring minimal equipment and training. Translational Relevance EZ ratio may be employed in screening for angle closure glaucoma. It may also be helpful in outpatient medical clinic settings, where doctors need to judge the safety of topical or systemic pupil-dilating medications versus their risk of triggering acute angle closure glaucoma. Similarly, non ophthalmologists may use it to estimate the likelihood of acute angle closure glaucoma in emergency presentations. PMID:27540496
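A minimal sketch of the calibration-then-prediction workflow is shown below: fit a linear relation between the photographic EZ ratio and biometric ACD on one series, then predict ACD and summarize agreement Bland-Altman style on a second series. The synthetic EZR/ACD values and the fitted coefficients are illustrative assumptions, not the published correlation equation.

import numpy as np

rng = np.random.default_rng(7)

# Illustrative calibration data only: EZ ratios and biometric ACDs (mm) for a
# "Correlation Series", used to predict ACD for new eyes from photographs.
ezr = rng.uniform(0.25, 0.65, size=100)
acd = 4.6 - 3.6 * ezr + rng.normal(0, 0.15, size=100)   # synthetic negative relation

slope, intercept = np.polyfit(ezr, acd, 1)               # linear correlation equation

def predict_acd(ez_ratio):
    """Predicted anterior chamber depth (mm) from the photographic EZ ratio."""
    return intercept + slope * ez_ratio

# Agreement check in the style of Bland-Altman on a "Prediction Series".
ezr_new = rng.uniform(0.25, 0.65, size=50)
acd_new = 4.6 - 3.6 * ezr_new + rng.normal(0, 0.15, size=50)
errors = predict_acd(ezr_new) - acd_new
print(f"mean error {errors.mean():.3f} mm, 95% limits +/-{1.96 * errors.std():.2f} mm")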
Courey, Karim J.; Asfour, Shihab S.; Onar, Arzu; Bayliss, Jon A.; Ludwig, Larry L.; Wright, Maria C.
2009-01-01
To comply with lead-free legislation, many manufacturers have converted from tin-lead to pure tin finishes of electronic components. However, pure tin finishes have a greater propensity to grow tin whiskers than tin-lead finishes. Since tin whiskers present an electrical short circuit hazard in electronic components, simulations have been developed to quantify the risk of said short circuits occurring. Existing risk simulations make the assumption that when a free tin whisker has bridged two adjacent exposed electrical conductors, the result is an electrical short circuit. This conservative assumption is made because shorting is a random event that had an unknown probability associated with it. Note however that due to contact resistance electrical shorts may not occur at lower voltage levels. In our first article we developed an empirical probability model for tin whisker shorting. In this paper, we develop a more comprehensive empirical model using a refined experiment with a larger sample size, in which we studied the effect of varying voltage on the breakdown of the contact resistance which leads to a short circuit. From the resulting data we estimated the probability distribution of an electrical short, as a function of voltage. In addition, the unexpected polycrystalline structure seen in the focused ion beam (FIB) cross section in the first experiment was confirmed in this experiment using transmission electron microscopy (TEM). The FIB was also used to cross section two card guides to facilitate the measurement of the grain size of each card guide's tin plating to determine its finish.
Wagner, Daniel M.; Krieger, Joshua D.; Veilleux, Andrea G.
2016-08-04
In 2013, the U.S. Geological Survey initiated a study to update regional skew, annual exceedance probability discharges, and regional regression equations used to estimate annual exceedance probability discharges for ungaged locations on streams in the study area with the use of recent geospatial data, new analytical methods, and available annual peak-discharge data through the 2013 water year. An analysis of regional skew using Bayesian weighted least-squares/Bayesian generalized-least squares regression was performed for Arkansas, Louisiana, and parts of Missouri and Oklahoma. The newly developed constant regional skew of -0.17 was used in the computation of annual exceedance probability discharges for 281 streamgages used in the regional regression analysis. Based on analysis of covariance, four flood regions were identified for use in the generation of regional regression models. Thirty-nine basin characteristics were considered as potential explanatory variables, and ordinary least-squares regression techniques were used to determine the optimum combinations of basin characteristics for each of the four regions. Basin characteristics in candidate models were evaluated based on multicollinearity with other basin characteristics (variance inflation factor equations apply only to locations on streams in Arkansas where annual peak discharges are not substantially affected by regulation, diversion, channelization, backwater, or urbanization. The applicability and accuracy of the regional regression equations depend on the basin characteristics measured for an ungaged location on a stream being within range of those used to develop the equations.
Lee, Jong Kyeom; Kim, Tae Yun; Kim, Hyun Su; Chai, Jang Bom; Lee, Jin Woo [Div. of Mechanical Engineering, Ajou University, Suwon (Korea, Republic of)
2016-10-15
This paper presents an advanced estimation method for obtaining the probability density functions of a damage parameter for valve leakage detection in a reciprocating pump. The estimation method is based on a comparison of model data which are simulated by using a mathematical model, and experimental data which are measured on the inside and outside of the reciprocating pump in operation. The mathematical model, which is simplified and extended on the basis of previous models, describes not only the normal state of the pump, but also its abnormal state caused by valve leakage. The pressure in the cylinder is expressed as a function of the crankshaft angle, and an additional volume flow rate due to the valve leakage is quantified by a damage parameter in the mathematical model. The change in the cylinder pressure profiles due to the suction valve leakage is noticeable in the compression and expansion modes of the pump. The damage parameter value over 300 cycles is calculated in two ways, considering advance or delay in the opening and closing angles of the discharge valves. The probability density functions of the damage parameter are compared for diagnosis and prognosis on the basis of the probabilistic features of valve leakage.
A Streaming Algorithm for Online Estimation of Temporal and Spatial Extent of Delays
Kittipong Hiriotappa
2017-01-01
Full Text Available Knowing traffic congestion and its impact on travel time in advance is vital for proactive travel planning as well as advanced traffic management. This paper proposes a streaming algorithm to estimate temporal and spatial extent of delays online which can be deployed with roadside sensors. First, the proposed algorithm uses streaming input from individual sensors to detect a deviation from normal traffic patterns, referred to as anomalies, which is used as an early indication of delay occurrence. Then, a group of consecutive sensors that detect anomalies are used to temporally and spatially estimate extent of delay associated with the detected anomalies. Performance evaluations are conducted using a real-world data set collected by roadside sensors in Bangkok, Thailand, and the NGSIM data set collected in California, USA. Using NGSIM data, it is shown qualitatively that the proposed algorithm can detect consecutive occurrences of shockwaves and estimate their associated delays. Then, using a data set from Thailand, it is shown quantitatively that the proposed algorithm can detect and estimate delays associated with both recurring congestion and incident-induced nonrecurring congestion. The proposed algorithm also outperforms the previously proposed streaming algorithm.
Ali, Hussain; Ahmed, Sajid; Al-Naffouri, Tareq Y.; Sharawi, Mohammad S.; Alouini, Mohamed-S.
2017-01-01
Conventional algorithms used for parameter estimation in colocated multiple-input-multiple-output (MIMO) radars require the inversion of the covariance matrix of the received spatial samples. In these algorithms, the number of received snapshots should be at least equal to the size of the covariance matrix. For large size MIMO antenna arrays, the inversion of the covariance matrix becomes computationally very expensive. Compressive sensing (CS) algorithms which do not require the inversion of the complete covariance matrix can be used for parameter estimation with fewer number of received snapshots. In this work, it is shown that the spatial formulation is best suitable for large MIMO arrays when CS algorithms are used. A temporal formulation is proposed which fits the CS algorithms framework, especially for small size MIMO arrays. A recently proposed low-complexity CS algorithm named support agnostic Bayesian matching pursuit (SABMP) is used to estimate target parameters for both spatial and temporal formulations for the unknown number of targets. The simulation results show the advantage of SABMP algorithm utilizing low number of snapshots and better parameter estimation for both small and large number of antenna elements. Moreover, it is shown by simulations that SABMP is more effective than other existing algorithms at high signal-to-noise ratio.
Bilgel, Murat; Jedynak, Bruno; Wong, Dean F.; Resnick, Susan M.; Prince, Jerry L.
2015-01-01
Cortical β-amyloid deposition begins in Alzheimer’s disease (AD) years before the onset of any clinical symptoms. It is therefore important to determine the temporal trajectories of amyloid deposition in these earliest stages in order to better understand their associations with progression to AD. A method for estimating the temporal trajectories of voxelwise amyloid as measured using longitudinal positron emission tomography (PET) imaging is presented. The method involves the estimation of a score for each subject visit based on the PET data that reflects their amyloid progression. This amyloid progression score allows subjects with similar progressions to be aligned and analyzed together. The estimation of the progression scores and the amyloid trajectory parameters are performed using an expectation-maximization algorithm. The correlations among the voxel measures of amyloid are modeled to reflect the spatial nature of PET images. Simulation results show that model parameters are captured well at a variety of noise and spatial correlation levels. The method is applied to longitudinal amyloid imaging data considering each cerebral hemisphere separately. The results are consistent across the hemispheres and agree with a global index of brain amyloid known as mean cortical DVR. Unlike mean cortical DVR, which depends on a priori defined regions, the progression score extracted by the method is data-driven and does not make assumptions about regional longitudinal changes. Compared to regressing on age at each voxel, the longitudinal trajectory slopes estimated using the proposed method show better localized longitudinal changes. PMID:26221692
Vio, R.; Andreani, P.
2016-05-01
The reliable detection of weak signals is a critical issue in many astronomical contexts and may have severe consequences for determining number counts and luminosity functions, but also for optimizing the use of telescope time in follow-up observations. Because of its optimal properties, one of the most popular and widely-used detection techniques is the matched filter (MF). This is a linear filter designed to maximise the detectability of a signal of known structure that is buried in additive Gaussian random noise. In this work we show that in the very common situation where the number and position of the searched signals within a data sequence (e.g. an emission line in a spectrum) or an image (e.g. a point-source in an interferometric map) are unknown, this technique, when applied in its standard form, may severely underestimate the probability of false detection. This is because the correct use of the MF relies upon a priori knowledge of the position of the signal of interest. In the absence of this information, the statistical significance of features that are actually noise is overestimated and detections that are actually spurious are claimed. For this reason, we present an alternative method of computing the probability of false detection that is based on the probability density function (PDF) of the peaks of a random field. It is able to provide a correct estimate of the probability of false detection for the one-, two- and three-dimensional cases. We apply this technique to a real two-dimensional interferometric map obtained with ALMA.
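The effect can be reproduced with a small Monte Carlo experiment: under pure noise, the matched-filter output at a known position follows the nominal distribution, while the peak over all (unknown) positions exceeds the same threshold far more often. The template shape, the threshold, and the data length below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(3)

n, n_trials = 1000, 2000
x = np.arange(n)
template = np.exp(-0.5 * ((x - n // 2) / 3.0) ** 2)
template /= np.linalg.norm(template)        # unit-norm matched-filter template

threshold = 3.0                              # "3 sigma" in the standard analysis
single_hits, max_hits = 0, 0
for _ in range(n_trials):
    noise = rng.standard_normal(n)           # pure noise: any detection is false
    # Matched-filter output at every trial position (circular correlation).
    mf = np.real(np.fft.ifft(np.fft.fft(noise) * np.conj(np.fft.fft(template))))
    single_hits += mf[0] > threshold         # known position: nominal p-value
    max_hits += mf.max() > threshold         # position unknown: take the peak

print(f"P(false detection), known position : {single_hits / n_trials:.4f}")
print(f"P(false detection), unknown position: {max_hits / n_trials:.4f}")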
Gronewold, Andrew D; Wolpert, Robert L
2008-07-01
Most probable number (MPN) and colony-forming-unit (CFU) estimates of fecal coliform bacteria concentration are common measures of water quality in coastal shellfish harvesting and recreational waters. Estimating procedures for MPN and CFU have intrinsic variability and are subject to additional uncertainty arising from minor variations in experimental protocol. It has been observed empirically that the standard multiple-tube fermentation (MTF) decimal dilution analysis MPN procedure is more variable than the membrane filtration CFU procedure, and that MTF-derived MPN estimates are somewhat higher on average than CFU estimates, on split samples from the same water bodies. We construct a probabilistic model that provides a clear theoretical explanation for the variability in, and discrepancy between, MPN and CFU measurements. We then compare our model to water quality samples analyzed using both MPN and CFU procedures, and find that the (often large) observed differences between MPN and CFU values for the same water body are well within the ranges predicted by our probabilistic model. Our results indicate that MPN and CFU intra-sample variability does not stem from human error or laboratory procedure variability, but is instead a simple consequence of the probabilistic basis for calculating the MPN. These results demonstrate how probabilistic models can be used to compare samples from different analytical procedures, and to determine whether transitions from one procedure to another are likely to cause a change in quality-based management decisions.
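For concreteness, the sketch below computes the MPN as the maximum-likelihood concentration from a multiple-tube fermentation design, assuming each tube is positive with probability 1 - exp(-c*v) for concentration c and tube volume v. The dilution design and tube counts are illustrative, and the full model in the paper also accounts for the additional sources of variability discussed above.

import numpy as np
from scipy.optimize import brentq

# Standard 5-tube multiple-tube fermentation design: volumes in mL and the
# number of positive tubes observed at each decimal dilution (illustrative).
volumes   = np.array([10.0, 1.0, 0.1])      # sample volume per tube
n_tubes   = np.array([5, 5, 5])
positives = np.array([5, 3, 1])

def score(c):
    """Derivative of the MPN log-likelihood in the concentration c (per mL);
    each tube is positive with probability 1 - exp(-c * v)."""
    p_neg = np.exp(-c * volumes)
    return np.sum(positives * volumes * p_neg / (1 - p_neg)
                  - (n_tubes - positives) * volumes)

mpn = brentq(score, 1e-6, 1e3)               # root of the score = MLE of c
print(f"MPN approx. {100 * mpn:.0f} per 100 mL")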
Park, Dong-Uk; Colt, Joanne S; Baris, Dalsu; Schwenn, Molly; Karagas, Margaret R; Armenti, Karla R; Johnson, Alison; Silverman, Debra T; Stewart, Patricia A
2014-01-01
We describe an approach for estimating the probability that study subjects were exposed to metalworking fluids (MWFs) in a population-based case-control study of bladder cancer. Study subject reports on the frequency of machining and use of specific MWFs (straight, soluble, and synthetic/semi-synthetic) were used to estimate exposure probability when available. Those reports also were used to develop estimates for job groups, which were then applied to jobs without MWF reports. Estimates using both cases and controls and controls only were developed. The prevalence of machining varied substantially across job groups (0.1->0.9%), with the greatest percentage of jobs that machined being reported by machinists and tool and die workers. Reports of straight and soluble MWF use were fairly consistent across job groups (generally 50-70%). Synthetic MWF use was lower (13-45%). There was little difference in reports by cases and controls vs. controls only. Approximately, 1% of the entire study population was assessed as definitely exposed to straight or soluble fluids in contrast to 0.2% definitely exposed to synthetic/semi-synthetics. A comparison between the reported use of the MWFs and U.S. production levels found high correlations (r generally >0.7). Overall, the method described here is likely to have provided a systematic and reliable ranking that better reflects the variability of exposure to three types of MWFs than approaches applied in the past. [Supplementary materials are available for this article. Go to the publisher's online edition of Journal of Occupational and Environmental Hygiene for the following free supplemental resources: a list of keywords in the occupational histories that were used to link study subjects to the metalworking fluids (MWFs) modules; recommendations from the literature on selection of MWFs based on type of machining operation, the metal being machined and decade; popular additives to MWFs; the number and proportion of controls who
Heuristics-Based Trust Estimation in Multiagent Systems Using Temporal Difference Learning.
Rishwaraj, G; Ponnambalam, S G; Loo, Chu Kiong
2016-12-20
The application of multiagent systems (MAS) is becoming increasingly popular as it allows agents in a system to pool resources to achieve a common objective. A vital part of the MAS is teamwork cooperation through the sharing of information and resources among the agents to optimize their efforts in accomplishing given objectives. A critical part of the teamwork effort is the ability to trust each other when executing any task to ensure efficient and successful cooperation. This paper presents the development of a trust estimation model that can empirically evaluate the trust of an agent in a MAS. The proposed model is developed using temporal difference learning by incorporating the concept of Markov games and heuristics to estimate trust. Simulation experiments are conducted to test and evaluate the performance of the developed model against some recently reported models in the literature. The simulation experiments indicate that the developed model performs better in terms of accuracy and efficiency in estimating trust.
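A minimal sketch of the temporal-difference ingredient is given below: the trust placed in a partner agent is treated as a learned value of delegating to it, updated with TD(0) errors after each interaction. The two-agent setting, the success probabilities, and the learning parameters are illustrative assumptions and omit the Markov-game and heuristic components of the proposed model.

import random

# Minimal sketch: trust in a partner agent is learned as the discounted
# expected outcome of delegating tasks to it, updated by TD errors.
ALPHA, GAMMA = 0.1, 0.9
trust = {"agent_A": 0.5, "agent_B": 0.5}       # initial (neutral) trust values

def interact(agent):
    """Stand-in for a real interaction: returns 1 on success, 0 on failure.
    agent_A is assumed more reliable than agent_B in this toy example."""
    return 1 if random.random() < (0.9 if agent == "agent_A" else 0.4) else 0

random.seed(0)
for _ in range(500):
    agent = random.choice(list(trust))
    reward = interact(agent)
    # TD(0) update: the next "state" is a repeat interaction with the same
    # partner, so its own value bootstraps the target.
    td_target = reward + GAMMA * trust[agent]
    trust[agent] += ALPHA * (td_target - trust[agent])

# The value converges to p / (1 - GAMMA) for a success probability p, so
# rescaling by (1 - GAMMA) recovers a trust score on a [0, 1] scale.
print({k: round(v * (1 - GAMMA), 2) for k, v in trust.items()})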
Haiganoush K. Preisler
2002-01-01
Statistical approaches for modeling spatially and temporally explicit data are discussed for 79 passive sampler sites and 9 active monitors distributed across the Sierra Nevada, California. A generalized additive regression model was used to estimate spatial patterns and relationships between predicted ozone exposure and explanatory variables, and to predict exposure at nonmonitored sites. The fitted model was also used to estimate probability maps for season-average ozone levels exceeding critical (or subcritical) levels in the Sierra Nevada region. The explanatory variables (elevation, maximum daily temperature, precipitation, and ozone level at the closest active monitor) were significant in the model. There was also a significant, mostly east-west, spatial trend. The between-site variability had the same magnitude as the error variability. This seems to indicate that there still exist important site features not captured by the variables used in the analysis; including them may improve the accuracy of the predictive model in future studies. The fitted model using robust techniques had an overall R2 value of 0.58. The mean standard deviation for a predicted value was 6.68 ppb.
Abdelkader Mokkadem
2011-01-01
We study the location and the size of the mode of a probability density, and the joint convergence rates of semirecursive kernel estimators of these two quantities. We show how the estimation of the size of the mode allows measuring the relevance of the estimation of its location. We also show that, beyond their computational advantage over nonrecursive estimators, the semirecursive estimators are preferable for the construction of confidence regions.
Estimating temporal independence of radio-telemetry data on animal activity
Salvatori; Skidmore; Corsi; van der Meer F
1999-06-21
Radio-telemetry is an excellent tool for gathering data on the biology of animals and their interactions with the environment they inhabit. Many methods have been developed for analyses of spatial information, on home range size and utilization density. Activity patterns are often described using radio-tracking data, but no generally accepted method is currently available specifically for determining the temporal independence of this type of data for statistical inference. Activity rhythms have generally been analysed by ecologists with the assumption that data are temporally independent, or by subjectively fixing an independence interval, based on attributes of their ranging behaviour. Although some good approximations of activity patterns can be obtained in these ways, we underline the need for a functionally correct method of estimating independence interval. Here we use semi-variograms to estimate the minimum interval required for the readings to be sequentially independent. This geostatistical tool is applied to the analysis of data on activity of Chilean foxes (Pseudalopex culpaeus) and Chacoan peccaries (Catagonus wagneri). Data were collected in the field by radio-tracking over 24-hr periods, with readings on activity state taken every 15 min. The spatial dimension in which the theory of geostatistics lies has been transferred into the time dimension, so that the correlation interval is expressed in time units (min). Time of independence as estimated by the variogram was 110 min for foxes, while data on peccaries indicated that they have long periods of activity, more suitable for time-series analysis. Copyright 1999 Academic Press.
Ettinger, Susanne; Mounaud, Loïc; Magill, Christina; Yao-Lafourcade, Anne-Françoise; Thouret, Jean-Claude; Manville, Vern; Negulescu, Caterina; Zuccaro, Giulio; De Gregorio, Daniela; Nardone, Stefano; Uchuchoque, Juan Alexis Luque; Arguedas, Anita; Macedo, Luisa; Manrique Llerena, Nélida
2016-10-01
The focus of this study is an analysis of building vulnerability through investigating impacts from the 8 February 2013 flash flood event along the Avenida Venezuela channel in the city of Arequipa, Peru. On this day, 124.5 mm of rain fell within 3 h (monthly mean: 29.3 mm) triggering a flash flood that inundated at least 0.4 km2 of urban settlements along the channel, affecting more than 280 buildings, 23 of a total of 53 bridges (pedestrian, vehicle and railway), and leading to the partial collapse of sections of the main road, paralyzing central parts of the city for more than one week. This study assesses the aspects of building design and site specific environmental characteristics that render a building vulnerable by considering the example of a flash flood event in February 2013. A statistical methodology is developed that enables estimation of damage probability for buildings. The applied method uses observed inundation height as a hazard proxy in areas where more detailed hydrodynamic modeling data is not available. Building design and site-specific environmental conditions determine the physical vulnerability. The mathematical approach considers both physical vulnerability and hazard related parameters and helps to reduce uncertainty in the determination of descriptive parameters, parameter interdependency and respective contributions to damage. This study aims to (1) enable the estimation of damage probability for a certain hazard intensity, and (2) obtain data to visualize variations in damage susceptibility for buildings in flood prone areas. Data collection is based on a post-flood event field survey and the analysis of high (sub-metric) spatial resolution images (Pléiades 2012, 2013). An inventory of 30 city blocks was collated in a GIS database in order to estimate the physical vulnerability of buildings. As many as 1103 buildings were surveyed along the affected drainage and 898 buildings were included in the statistical analysis. Univariate and
Probability Estimation Method Based on Truncated Estimation
李熔
2014-01-01
Whether a sparse signal can be correctly reconstructed with high probability is an important question in compressive sensing theory. The sparsity of the signal and the coherence properties of the atoms in the redundant dictionary are the key factors in this question. In this paper, using the concept of cumulative coherence, a method is proposed for estimating, based on truncated estimation, the probability that the cumulative coherence satisfies the constraint bound. With this method one can judge whether the selected measurement matrix can correctly reconstruct the original signal. Matlab simulations verify that, with a Gaussian random matrix as the measurement matrix, the original signal can be reconstructed with high probability under the OMP reconstruction algorithm, and also verify that the proposed method is reasonable.
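For readers unfamiliar with the recovery experiment described above, the following is a minimal sketch of OMP reconstruction with a Gaussian random measurement matrix; the dimensions and sparsity level are illustrative choices, not those used in the paper, and the coherence-based probability bound itself is not reproduced.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = A @ x."""
    m, n = A.shape
    residual = y.copy()
    support = []
    x_hat = np.zeros(n)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # atom most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 256, 80, 8                        # signal length, measurements, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)    # Gaussian random measurement matrix
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x
x_rec = omp(A, y, k)
print("relative error:", np.linalg.norm(x_rec - x) / np.linalg.norm(x))
```

Repeating such trials over many random matrices is one empirical way to observe the "high probability of correct reconstruction" that the probability estimate quantifies.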
Hook, E B; Regal, R R
1993-05-15
Capture-recapture methods in epidemiology analyze data from overlapping lists of cases from various sources of ascertainment to generate estimates of missing cases and the total affected. Applications of these methods usually recognize the possibility of, and attempt to adjust for, nonindependent ascertainment by the various sources used. However, separate from the issue of dependencies between sources is the complexity of within-source variation in probability of ascertainment of cases, e.g., variation in ascertainment by population subgroups, such as socioeconomic classes, races, or other subdivisions. The authors present a general approach to this issue for the two-source case that takes account of not only biases that arise from such "variable catchability" within sources but also the separate complexity of dependencies between sources. A general formula is derived that allows simultaneous calculation of the effects of variable catchability and of source dependencies upon the accuracy of the two-source estimate. The effect of variable catchability upon accuracy, and applications to data by race on the neurodegenerative disorder Huntington's disease, are presented. In the latter analysis, multiple different two-source estimates of prevalence were made, considering each source versus all others pooled. Most of the likely bias was found to be due to source dependencies; variable catchability contributed relatively little bias. Multiple poolings of all but one source may prove a generally efficient method for overcoming the problem of likely variable catchability, at least when there are data from many distinct sources.
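To make the two-source setting concrete, the sketch below computes the standard Chapman two-source capture-recapture estimate; it assumes independent sources and homogeneous catchability, so it illustrates the baseline estimator that the paper's corrections for variable catchability and dependence would adjust (those corrections are not reproduced here), and the counts are invented.

```python
def chapman_estimate(n1, n2, m):
    """Chapman's nearly unbiased two-source capture-recapture estimate.

    n1, n2 : cases ascertained by source 1 and by source 2
    m      : cases ascertained by both sources
    Assumes independent sources and homogeneous catchability.
    """
    n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)
           / ((m + 1) ** 2 * (m + 2)))
    return n_hat, var ** 0.5

n_hat, se = chapman_estimate(n1=120, n2=95, m=40)
print(f"estimated total cases: {n_hat:.0f} (SE {se:.0f})")
```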
Meister, Reinhard; Schaefer, Christof
2008-09-01
Spontaneous abortion rates are of general interest when investigating pregnancy outcome. In most studies observations are left truncated, as pregnant women enter with a delay of several weeks after conception. Apart from spontaneous abortion, pregnancy may end in induced abortion or live birth. These outcomes are considered as competing events (risks). Although statistical methods for handling this setting have been available for more than 10 years, studies on pregnancy outcome after drug exposure usually report crude rates of spontaneous abortions, ignoring left truncation and competing risks. The authors propose simple methods which remove the bias inherent to crude rates. The probability of spontaneous abortion is estimated using an event-history based approach for the subdistribution of competing risks that handles left truncation appropriately. Variance estimation enables the construction of approximate confidence intervals and of a simple test statistic for comparing rates between different cohorts. The proposed methods are applied to a comparative prospective study on the association of spontaneous abortion and exposure to coumarin derivatives. The naive analysis using crude rates gives substantially different results from those based on the proposed methods, with up to a twofold change. Correctly incorporating left truncation into the analysis may increase the variance of the estimators, relative to an ideal sample where all pregnancies are followed from the time of conception. The consequences of such truncation for study design are discussed. Combining corrections for left truncation and competing risks offers a powerful method for analyzing miscarriage risk.
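A minimal sketch of the kind of event-history estimate described above is given below: a cumulative-incidence calculation that handles delayed (left-truncated) entry and competing events. It is a generic Aalen-Johansen-style computation on invented gestational-week data, not the authors' implementation or their variance estimator.

```python
import numpy as np

def cumulative_incidence(entry, exit, event):
    """Cumulative incidence of spontaneous abortion (event == 1) under left
    truncation and competing risks (event == 2: live birth/induced abortion,
    event == 0: censored). Times are gestational weeks."""
    entry, exit, event = map(np.asarray, (entry, exit, event))
    times = np.sort(np.unique(exit[event > 0]))
    surv, cuminc = 1.0, 0.0
    for t in times:
        at_risk = np.sum((entry < t) & (exit >= t))   # delayed entry handled here
        if at_risk == 0:
            continue
        d_sab = np.sum((exit == t) & (event == 1))    # spontaneous abortions at t
        d_any = np.sum((exit == t) & (event > 0))     # any event at t
        cuminc += surv * d_sab / at_risk              # subdistribution increment
        surv *= 1.0 - d_any / at_risk                 # overall event-free survival
    return cuminc

# toy cohort: (entry week, exit week, event code)
entry = [8, 10, 6, 12, 9, 7]
exit_ = [14, 40, 11, 40, 22, 40]
event = [1, 2, 1, 2, 0, 2]
print("P(spontaneous abortion) =", round(cumulative_incidence(entry, exit_, event), 3))
```

Dropping the `entry < t` condition reproduces the biased "crude rate" behaviour the abstract warns against.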
Ewerlöf, Maria; Larsson, Marcus; Salerud, E. Göran
2017-02-01
Hyperspectral imaging (HSI) can estimate the spatial distribution of skin blood oxygenation, using visible to near-infrared light. HSI oximeters often use a liquid-crystal tunable filter, an acousto-optic tunable filter or mechanically adjustable filter wheels, which have response/switching times that are too long to monitor tissue hemodynamics. This work aims to evaluate a multispectral snapshot imaging system to estimate skin blood volume and oxygen saturation with high temporal and spatial resolution. We use a snapshot imager, the xiSpec camera (MQ022HG-IM-SM4X4-VIS, XIMEA), having 16 wavelength-specific Fabry-Perot filters overlaid on the custom CMOS chip. The spectral distributions of the bands, however, overlap substantially, which needs to be taken into account for an accurate analysis. An inverse Monte Carlo analysis is performed using a two-layered skin tissue model, defined by epidermal thickness, haemoglobin concentration and oxygen saturation, melanin concentration and a spectrally dependent reduced-scattering coefficient, all parameters relevant for human skin. The analysis takes into account the spectral detector response of the xiSpec camera. At each spatial location in the field-of-view, we compare the simulated output to the detected diffusively backscattered spectra to find the best fit. The imager is evaluated for spatial and temporal variations during arterial and venous occlusion protocols applied to the forearm. Estimated blood volume changes and oxygenation maps at 512x272 pixels show values that are comparable to reference measurements performed in contact with the skin tissue. We conclude that the snapshot xiSpec camera, paired with an inverse Monte Carlo algorithm, permits us to use this sensor for spatial and temporal measurement of varying physiological parameters, such as skin tissue blood volume and oxygenation.
Wilner, J.; Smith, B.; Moore, T.; Campbell, S. W.; Slavin, B. V.; Hollander, J.; Wolf, J.
2015-12-01
The redistribution of winter accumulation from surface melt into firn or deeper layers (i.e. internal accumulation) remains a poorly understood component of glacier mass balance. Winter accumulation is usually quantified prior to summer melt, however the time window between accumulation and the onset of melt is minimal so this is not always possible. Studies which are initiated following the onset of summer melt either neglect sources of internal accumulation or attempt to estimate melt (and therefore winter accumulation uncertainty) through a variety of modeling methods. Here, we used ground-penetrating radar (GPR) repeat common midpoint (CMP) surveys with supporting common offset surveys, mass balance snow pits, and probing to estimate temporal changes in water content within the winter accumulation and firn layers of the southern Juneau Icefield, Alaska. In temperate glaciers, radio-wave velocity is primarily dependent on water content and snow or firn density. We assume density changes are temporally slow relative to water flow through the snow and firn pack, and therefore infer that changing radio-wave velocities measured by successive CMP surveys result from flux in surface melt through deeper layers. Preliminary CMP data yield radio-wave velocities of 0.15 to 0.2 m/ns in snowpack densities averaging 0.56 g cm-3, indicating partially to fully saturated snowpack (4-9% water content). Further spatial-temporal analysis of CMP surveys is being conducted. We recommend that repeat CMP surveys be conducted over a longer time frame to estimate stratigraphic water redistribution between the end of winter accumulation and maximum melt season. This information could be incorporated into surface energy balance models to further understanding of the influence of internal accumulation on glacier mass balance.
Karwowski, Damian; Domański, Marek
2016-01-01
An improved context-based adaptive binary arithmetic coding (CABAC) is presented. The idea of the improvement is to use a more accurate mechanism for estimating symbol probabilities in the standard CABAC algorithm. The authors' proposal for such a mechanism is based on the context-tree weighting technique. In the framework of a high-efficiency video coding (HEVC) video encoder, the improved CABAC allows 0.7% to 4.5% bitrate savings compared to the original CABAC algorithm. The application of the proposed algorithm marginally affects the complexity of the HEVC video encoder, but the complexity of the video decoder increases by 32% to 38%. In order to decrease the complexity of video decoding, a new tool has been proposed for the improved CABAC that enables scaling of the decoder complexity. Experiments show that this tool gives a 5% to 7.5% reduction of the decoding time while still maintaining high efficiency in data compression.
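As a small, hedged illustration of the probability estimation that context-tree weighting builds on, the sketch below implements a Krichevsky-Trofimov adaptive estimate for a single binary context. The full CTW scheme of the paper weights such estimates over a whole tree of contexts; only one node is shown, and the bit stream is invented.

```python
class KTEstimator:
    """Krichevsky-Trofimov adaptive probability estimate for one binary context.
    Context-tree weighting combines many such node estimates; this sketch shows
    a single node only."""
    def __init__(self):
        self.counts = [0, 0]    # numbers of 0s and 1s seen so far in this context

    def prob(self, bit):
        # P(bit | past) = (n_bit + 1/2) / (n_0 + n_1 + 1)
        return (self.counts[bit] + 0.5) / (sum(self.counts) + 1.0)

    def update(self, bit):
        self.counts[bit] += 1

est = KTEstimator()
for b in [1, 1, 0, 1, 1, 1, 0, 1]:
    p = est.prob(b)           # probability the arithmetic coder would use for b
    est.update(b)
    print(f"bit={b}  P={p:.3f}")
```

The more closely such an estimate tracks the true conditional bit statistics, the fewer bits the arithmetic coder spends, which is the source of the bitrate savings reported above.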
Estimation of temporal gait parameters using Bayesian models on acceleration signals.
López-Nava, I H; Muñoz-Meléndez, A; Pérez Sanpablo, A I; Alessi Montero, A; Quiñones Urióstegui, I; Núñez Carrera, L
2016-01-01
The purpose of this study is to develop a system capable of calculating temporal gait parameters using two low-cost wireless accelerometers and artificial intelligence-based techniques, as part of a larger research project for conducting human gait analysis. Ten healthy subjects of different ages participated in this study and performed controlled walking tests. Two wireless accelerometers were placed on their ankles. Raw acceleration signals were processed in order to obtain gait patterns from characteristic peaks related to steps. A Bayesian model was implemented to classify the characteristic peaks into steps or nonsteps. The acceleration signals were segmented based on gait events, such as heel strike and toe-off, of actual steps. Temporal gait parameters, such as cadence, ambulation time, step time, gait cycle time, stance and swing phase time, and single and double support time, were estimated from the segmented acceleration signals. Gait data sets were divided into two age groups to test the Bayesian models used to classify the characteristic peaks. The mean error obtained from calculating the temporal gait parameters was 4.6%. Bayesian models are useful techniques that can be applied to the classification of gait data of subjects at different ages with promising results.
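The sketch below shows, in simplified form, how temporal gait parameters can be derived from classified acceleration peaks. The `peak_is_step` argument stands in for the paper's Bayesian peak classifier (here a trivial placeholder), and the synthetic signal, sampling rate and thresholds are illustrative assumptions only.

```python
import numpy as np
from scipy.signal import find_peaks

def temporal_gait_parameters(accel, fs, peak_is_step):
    """Estimate cadence and mean step/stride times from an ankle acceleration
    magnitude signal. `peak_is_step(signal, index)` is any classifier (a stand-in
    for the Bayesian model of the paper) returning True for genuine step peaks."""
    peaks, _ = find_peaks(accel, height=1.2, distance=int(0.3 * fs))
    step_peaks = [p for p in peaks if peak_is_step(accel, p)]
    step_times = np.diff(step_peaks) / fs             # seconds between steps
    return {
        "cadence_spm": 60.0 / step_times.mean(),      # steps per minute
        "mean_step_time_s": step_times.mean(),
        "mean_stride_time_s": 2 * step_times.mean(),  # two steps per gait cycle
    }

# synthetic stepping signal: ~1.8 steps/s sampled at 100 Hz
fs = 100
t = np.arange(0, 10, 1 / fs)
accel = 1.0 + 0.8 * np.maximum(0, np.sin(2 * np.pi * 1.8 * t)) ** 8
print(temporal_gait_parameters(accel, fs, peak_is_step=lambda a, i: True))
```

Stance, swing and support times additionally require the heel-strike and toe-off events mentioned in the abstract, which this simplified sketch does not detect.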
Zhang, Y J; Xue, F X; Bai, Z P
2017-03-06
The impact of maternal air pollution exposure on offspring health has received much attention. Precise and feasible exposure estimation is particularly important for clarifying exposure-response relationships and reducing heterogeneity among studies. Temporally-adjusted land use regression (LUR) models are exposure assessment methods developed in recent years that have the advantage of high spatial-temporal resolution. Studies on the health effects of outdoor air pollution exposure during pregnancy have increasingly been carried out using this model. In China, research applying LUR models has mostly remained at the model construction stage, and findings from related epidemiological studies have rarely been reported. In this paper, the sources of heterogeneity and the progress of meta-analyses on the associations between air pollution and adverse pregnancy outcomes are analyzed, the methodological characteristics of temporally-adjusted LUR models are introduced, and the current epidemiological studies on adverse pregnancy outcomes that applied this model are systematically summarized. Recommendations for the development and application of LUR models in China are presented. This will encourage the implementation of more valid exposure predictions during pregnancy in large-scale epidemiological studies on the health effects of air pollution in China.
Tourian, Mohammad J.; Sneeuw, Nico
2016-04-01
One of the main challenges of hydrological modeling is the poor spatio-temporal coverage of in situ discharge databases. The global network of in situ gauges is declining steadily over the past few decades. It has been demonstrated that altimetry-derived water height over rivers can sensibly be used to deal with the growing lack of in situ discharge data. However, the altimetric discharge is often estimated from a single virtual station with a coarse temporal resolution, dictated by the satellite repeat period (10 or 35 days). In this study, we implement an assimilation scheme that connects all virtual stations of several satellite altimeters along the main stream and tributaries distributed over a catchment. This helps to generate densified water level time series with temporal resolution of less than ~3 days at any given location in the catchment. We then propose a scheme that extends the current one-on-one relationship between a discharge gauge and a nearby (densified) virtual station towards a methodology which links multiple virtual stations to all available gauges. We assess our method over the Amazon river/basin/catchment, where we have access to in situ discharge data from GRDC, and where multiple altimetric water level time series from different missions are available.
Tapiador, Francisco J.; Sanchez, Enrique; Romera, Raquel (Inst. of Environmental Sciences, Univ. of Castilla-La Mancha (UCLM), 45071 Toledo (Spain)). e-mail: francisco.tapiador@uclm.es
2009-07-01
Regional climate models (RCMs) are dynamical downscaling tools aimed at improving the modelling of local physical processes. Ensembles of RCMs are widely used to improve the coarse-grain estimates of global climate models (GCMs), since the use of several RCMs helps to palliate uncertainties arising from different dynamical cores and numerical schemes. In this paper, we analyse the differences and similarities in the climate change response for an ensemble of heterogeneous RCMs forced by one GCM (HadAM3H) and one emissions scenario (IPCC's SRES-A2 scenario). In contrast to previous approaches using the PRUDENCE database, the statistical description of climate characteristics is made through the spatial and temporal aggregation of the RCM outputs into probability distribution functions (PDF) of monthly values. This procedure is a complementary approach to conventional seasonal analyses. Our results provide new, stronger evidence on expected marked regional differences in Europe in the A2 scenario in terms of precipitation and temperature changes. While we found an overall increase in the mean temperature and extreme values, we also found mixed regional differences for precipitation
Schyska, Bruno U.; Couto, António; von Bremen, Lueder; Estanqueiro, Ana; Heinemann, Detlev
2017-05-01
Europe is facing the challenge of increasing shares of energy from variable renewable sources. Furthermore, it is heading towards a fully integrated electricity market, i.e. a Europe-wide electricity system. The stable operation of this large-scale renewable power system requires detailed information on the amount of electricity being transmitted now and in the future. To estimate the actual amount of electricity, upscaling algorithms are applied. Until now, however, such algorithms exist only for smaller regions (e.g. transmission zones and single wind farms). The aim of this study is to introduce a new approach to estimate Europe-wide wind power generation based on spatio-temporal clustering. We furthermore show that training the upscaling model for different prevailing weather situations allows the number of reference sites to be further reduced without losing accuracy.
Estimation of Spatial-Temporal Gait Parameters Using a Low-Cost Ultrasonic Motion Analysis System
Yongbin Qi
2014-08-01
In this paper, a low-cost motion analysis system using a wireless ultrasonic sensor network is proposed and investigated. A methodology has been developed to extract spatial-temporal gait parameters including stride length, stride duration, stride velocity, stride cadence, and stride symmetry from 3D foot displacements estimated by the combination of a spherical positioning technique and an unscented Kalman filter. The performance of this system is validated against a camera-based system in the laboratory with 10 healthy volunteers. Numerical results show the feasibility of the proposed system, with an average error of 2.7% for all the estimated gait parameters. The influence of walking speed on the measurement accuracy of the proposed system is also evaluated. Statistical analysis demonstrates its capability of being used as a gait assessment tool for some medical applications.
Mazzoleni, Maurizio; Brandimarte, Luigia; Barontini, Stefano; Ranzi, Roberto
2014-05-01
Over the centuries many societies have preferred to settle down near floodplain areas and take advantage of the favorable environmental conditions. Due to changing hydro-meteorological conditions, over time, levee systems along rivers have been raised to protect urbanized areas and reduce the impact of floods. As expressed by the so-called "levee paradox", many societies tend to trust these levee protection systems due to an induced sense of safety and, as a consequence, invest even more in urban development in levee-protected flood-prone areas. As a result, considering also the increasing world population, the number of people living in floodplains is growing. However, human settlements in floodplains are not totally safe and have been continuously endangered by the risk of flooding. In fact, failures of levee systems during flood events have also produced the most devastating disasters of the last two centuries, due to the exposure of the developed flood-prone areas to risk. In those cases, property damage is certain, but loss of life can vary dramatically with the extent of the inundation area, the size of the population at risk, and the amount of warning time available. The aim of this study is to propose an innovative methodology to estimate the reliability of a general river levee system in case of piping, considering different sources of uncertainty, and to analyze the influence of different discretizations of the river reach into sub-reaches on the evaluation of the probability of failure. The reliability analysis, expressed in terms of a fragility curve, was performed by evaluating the probability of failure, conditioned on a given hydraulic load for a certain levee failure mechanism, using Monte Carlo and First Order Reliability Methods. Knowing the fragility curve for each discrete levee reach, different fragility indexes were introduced. Using this information it was then possible to classify the river into sub
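To illustrate how a fragility-curve ordinate can be obtained by Monte Carlo simulation, the sketch below evaluates the conditional probability of piping failure at a set of water levels. The limit state, the lognormal resistance and all parameter values are illustrative assumptions, not the paper's levee model.

```python
import numpy as np

def piping_failure_probability(water_level, n_sim=100_000, seed=1):
    """Monte Carlo estimate of the conditional probability of levee failure by
    piping at a given hydraulic load (one ordinate of a fragility curve).
    Limit state: g = H_critical - H_acting, with an uncertain (lognormal)
    critical head difference. Numbers are illustrative only."""
    rng = np.random.default_rng(seed)
    h_critical = rng.lognormal(mean=np.log(3.0), sigma=0.35, size=n_sim)  # m
    return float(np.mean(h_critical < water_level))    # P(g < 0 | water level)

for h in np.linspace(1.0, 6.0, 11):
    print(f"water level {h:.1f} m  ->  P(failure) = {piping_failure_probability(h):.3f}")
```

Repeating this for each discretized sub-reach gives the per-reach fragility curves from which the study's fragility indexes are derived.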
Drummond, Alexei J; Nicholls, Geoff K; Rodrigo, Allen G; Solomon, Wiremu
2002-07-01
Molecular sequences obtained at different sampling times from populations of rapidly evolving pathogens and from ancient subfossil and fossil sources are increasingly available with modern sequencing technology. Here, we present a Bayesian statistical inference approach to the joint estimation of mutation rate and population size that incorporates the uncertainty in the genealogy of such temporally spaced sequences by using Markov chain Monte Carlo (MCMC) integration. The Kingman coalescent model is used to describe the time structure of the ancestral tree. We recover information about the unknown true ancestral coalescent tree, population size, and the overall mutation rate from temporally spaced data, that is, from nucleotide sequences gathered at different times, from different individuals, in an evolving haploid population. We briefly discuss the methodological implications and show what can be inferred, in various practically relevant states of prior knowledge. We develop extensions for exponentially growing population size and joint estimation of substitution model parameters. We illustrate some of the important features of this approach on a genealogy of HIV-1 envelope (env) partial sequences.
Impact of temporal resolution on estimating capillary RBC-flux with optical coherence tomography
Li, Baoqiang; Wang, Hui; Fu, Buyin; Wang, Ruopeng; Sakadžić, Sava; Boas, David A.
2017-01-01
Optical coherence tomography (OCT) has been used to measure capillary red blood cell (RBC) flux. However, one important technical issue is that the accuracy of this method is subject to the temporal resolution (Δt) of the repeated RBC-passage B-scans. A ceiling effect arises due to an insufficient Δt limiting the maximum RBC-flux that can be measured. In this letter, we first present simulations demonstrating that Δt=1.5 ms permits measuring RBC-flux up to 150 RBCs/s with an underestimation of 9%. The simulations further show that measurements with Δt=3 and 4.5 ms provide relatively less accurate estimates for typical physiological fluxes. We provide experimental data confirming the simulation results showing that reduced temporal resolution (i.e., a longer Δt) results in an underestimation of mean flux and compresses the distribution of measured fluxes, which potentially confounds physiological interpretation of the results. The results also apply to RBC-passage measurements made with confocal and two-photon microscopy for estimating capillary RBC-flux.
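A minimal simulation of the ceiling effect described above is sketched below: RBC passages are drawn as a Poisson process and at most one passage is counted per repeated B-scan of duration Δt (a simplifying assumption, not the authors' measurement pipeline). With these settings the underestimation at 150 RBCs/s and Δt = 1.5 ms comes out roughly consistent with the ~9% reported in the abstract.

```python
import numpy as np

def measured_flux(true_flux, dt_ms, duration_s=10.0, seed=0):
    """Simulated OCT capillary RBC-flux measurement with finite temporal
    resolution: at most one RBC passage is counted per frame of length dt_ms."""
    rng = np.random.default_rng(seed)
    n_frames = int(duration_s * 1000 / dt_ms)
    counts = rng.poisson(true_flux * dt_ms / 1000.0, size=n_frames)  # true passages per frame
    detected = np.minimum(counts, 1)                                  # ceiling: one per frame
    return detected.sum() / duration_s                                # measured RBCs per second

for dt in (1.5, 3.0, 4.5):
    measured = [round(measured_flux(f, dt), 1) for f in (50, 100, 150)]
    print(f"dt = {dt} ms : measured {measured} for true fluxes 50/100/150 RBC/s")
```

The longer the frame interval, the more the measured values compress toward the ceiling of one count per frame, which is the distribution compression the letter warns about.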
Hong, Ban Zhen; Keong, Lau Kok; Shariff, Azmi Mohd
2016-05-01
The employment of different mathematical models to address specifically for the bubble nucleation rates of water vapour and dissolved air molecules is essential as the physics for them to form bubble nuclei is different. The available methods to calculate bubble nucleation rate in binary mixture such as density functional theory are complicated to be coupled along with computational fluid dynamics (CFD) approach. In addition, effect of dissolved gas concentration was neglected in most study for the prediction of bubble nucleation rates. The most probable bubble nucleation rate for the water vapour and dissolved air mixture in a 2D quasi-stable flow across a cavitating nozzle in current work was estimated via the statistical mean of all possible bubble nucleation rates of the mixture (different mole fractions of water vapour and dissolved air) and the corresponding number of molecules in critical cluster. Theoretically, the bubble nucleation rate is greatly dependent on components' mole fraction in a critical cluster. Hence, the dissolved gas concentration effect was included in current work. Besides, the possible bubble nucleation rates were predicted based on the calculated number of molecules required to form a critical cluster. The estimation of components' mole fraction in critical cluster for water vapour and dissolved air mixture was obtained by coupling the enhanced classical nucleation theory and CFD approach. In addition, the distribution of bubble nuclei of water vapour and dissolved air mixture could be predicted via the utilisation of population balance model.
Annalisa Pezzi
2016-11-01
Background: Randomization procedures in randomized controlled trials (RCTs) permit an unbiased estimation of causal effects. However, in clinical practice, differential compliance between arms may cause a strong violation of randomization balance and a biased treatment effect among those who comply. We evaluated the effect of the consolidation phase on disease-free survival of patients with multiple myeloma in an RCT designed for another purpose, adjusting for potential selection bias due to different compliance with previous treatment phases. Methods: We computed two propensity scores (PS) to model two different selection processes: the first to undergo autologous stem cell transplantation, the second to begin consolidation therapy. Combined stabilized inverse probability treatment weights were then introduced in the Cox model to estimate the causal effect of consolidation therapy, mimicking an ad hoc RCT protocol. Results: We found that the effect of consolidation therapy was restricted to the first 18 months of the phase (HR: 0.40, robust 95 % CI: 0.17-0.96), after which it disappeared. Conclusions: PS-based methods could be a complementary approach within an RCT context to evaluate the effect of the last phase of a complex therapeutic strategy, adjusting for potential selection bias caused by different compliance with the previous phases of the therapeutic scheme, in order to simulate an ad hoc randomization procedure. Trial registration: ClinicalTrials.gov NCT01134484, May 28, 2010 (retrospectively registered); EudraCT 2005-003723-39, December 17, 2008 (retrospectively registered).
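A minimal sketch of stabilized inverse-probability-of-treatment weighting, the core of the approach above, is given below; the covariates, sample and propensity model are invented, and only one of the two propensity scores used in the study is shown. The resulting weights would then be supplied to a weighted Cox regression (for example, lifelines' CoxPHFitter accepts a weights column).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def stabilized_iptw(X, treated):
    """Stabilized inverse-probability-of-treatment weights:
    sw_i = P(T = t_i) / P(T = t_i | X_i), with the propensity score
    P(T = 1 | X) from a logistic model."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    p_treat = treated.mean()
    return np.where(treated == 1, p_treat / ps, (1 - p_treat) / (1 - ps))

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 3))                               # baseline covariates
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))       # compliance depends on X
w = stabilized_iptw(X, treated)
print("weights: mean %.2f, min %.2f, max %.2f" % (w.mean(), w.min(), w.max()))
```

Stabilization (multiplying by the marginal treatment probability) keeps the weights near one on average, which limits variance inflation relative to unstabilized weights.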
A new maximum likelihood blood velocity estimator incorporating spatial and temporal correlation
Schlaikjer, Malene; Jensen, Jørgen Arendt
2001-01-01
The blood flow in the human cardiovascular system obeys the laws of fluid mechanics. Investigation of the flow properties reveals that a correlation exists between the velocity in time and space. The possible changes in velocity are limited, since the blood velocity has a continuous profile in time...... of the observations gives a probability measure of the correlation between the velocities. Both the MLE and the STC-MLE have been evaluated on simulated and in-vivo RF-data obtained from the carotid artery. Using the MLE 4.1% of the estimates deviate significantly from the true velocities, when the performance...
Southard, Rodney E.; Veilleux, Andrea G.
2014-01-01
Regression analysis techniques were used to develop a set of equations for rural ungaged stream sites for estimating discharges with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities, which are equivalent to annual flood-frequency recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively. Basin and climatic characteristics were computed using geographic information software and digital geospatial data. A total of 35 characteristics were computed for use in preliminary statewide and regional regression analyses. Annual exceedance-probability discharge estimates were computed for 278 streamgages by using the expected moments algorithm to fit a log-Pearson Type III distribution to the logarithms of annual peak discharges for each streamgage using annual peak-discharge data from water year 1844 to 2012. Low-outlier and historic information were incorporated into the annual exceedance-probability analyses, and a generalized multiple Grubbs-Beck test was used to detect potentially influential low floods. Annual peak flows less than a minimum recordable discharge at a streamgage were incorporated into the at-site station analyses. An updated regional skew coefficient was determined for the State of Missouri using Bayesian weighted least-squares/generalized least squares regression analyses. At-site skew estimates for 108 long-term streamgages with 30 or more years of record and the 35 basin characteristics defined for this study were used to estimate the regional variability in skew. However, a constant generalized-skew value of -0.30 and a mean square error of 0.14 were determined in this study. Previous flood studies indicated that the distinct physical features of the three physiographic provinces have a pronounced effect on the magnitude of flood peaks. Trends in the magnitudes of the residuals from preliminary statewide regression analyses from previous studies confirmed that regional analyses in this study were
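For orientation, the sketch below fits a log-Pearson Type III distribution to an annual peak-flow series and evaluates annual exceedance probability (AEP) discharges. It uses a simple at-site method-of-moments fit on synthetic data; the study itself uses the expected moments algorithm, low-outlier screening and a weighted regional skew, none of which are reproduced here.

```python
import numpy as np
from scipy import stats

def aep_discharge(annual_peaks_cfs, aep, skew=None):
    """AEP discharge from a log-Pearson Type III fit to annual peak flows
    (method-of-moments on log10 peaks; an optional regional skew can be passed)."""
    logq = np.log10(np.asarray(annual_peaks_cfs, float))
    mean, std = logq.mean(), logq.std(ddof=1)
    g = stats.skew(logq, bias=False) if skew is None else skew
    # quantile of the fitted Pearson III distribution at non-exceedance 1 - aep
    return 10 ** stats.pearson3.ppf(1.0 - aep, g, loc=mean, scale=std)

rng = np.random.default_rng(7)
peaks = rng.lognormal(mean=8.0, sigma=0.6, size=60)     # synthetic 60-year record
for p in (0.5, 0.1, 0.01):                              # 2-, 10-, 100-year floods
    print(f"{p:>5.0%} AEP discharge: {aep_discharge(peaks, p):,.0f} cfs")
```

Regional regression equations like those in the study then relate such at-site AEP discharges to basin characteristics so that estimates can be made at ungaged sites.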
Emil Bayramov
2016-05-01
The main goal of this research was to detect oil spills, to determine the oil spill frequencies and to approximate oil leak sources around the Oil Rocks Settlement, the Chilov and Pirallahi Islands in the Caspian Sea using 136 multi-temporal ENVISAT Advanced Synthetic Aperture Radar Wide Swath Medium Resolution images acquired during 2006–2010. The following oil spill frequencies were observed around the Oil Rocks Settlement, the Chilov and Pirallahi Islands: 2–10 (3471.04 sq km), 11–20 (971.66 sq km), 21–50 (692.44 sq km), 51–128 (191.38 sq km). The most critical oil leak sources, with a frequency range of 41–128, were observed at the Oil Rocks Settlement. The exponential regression analysis between wind speeds and oil slick areas detected from the 136 multi-temporal ENVISAT images revealed a regression coefficient equal to 63%. The regression model showed that larger oil spill areas were observed with decreasing wind speeds. The spatiotemporal patterns of currents in the Caspian Sea explained the multi-directional spatial distribution of oil spills around the Oil Rocks Settlement, the Chilov and Pirallahi Islands. The linear regression analysis between the detected oil spill frequencies and the oil contamination probability predicted by the stochastic model showed a positive trend with a regression coefficient of 30%.
Eichmann, Cordula; Berger, Burkhard; Steinlechner, Martin; Parson, Walther
2005-06-30
Dog DNA profiling is becoming an important supplementary technology for the investigation of accidents and crime, as dogs are intensely integrated in human social life. We investigated 15 highly polymorphic canine STR markers and two sex-related markers of 131 randomly selected dogs from the area around Innsbruck, Tyrol, Austria, which were co-amplified in three PCR multiplex reactions (ZUBECA6, FH2132, FH2087Ua, ZUBECA4, WILMSTF, PEZ15, PEZ6, FH2611, FH2087Ub, FH2054, PEZ12, PEZ2, FH2010, FH2079 and VWF.X). Linkage testing for our set of markers suggested no evidence for linkage between the loci. Heterozygosity (HET), polymorphism information content (PIC) and the probability of identity (P(ID)theoretical, P(ID)unbiased, P(ID)sib) were calculated for each marker. The HET(exp) values of the 15 markers lie between 0.6 (VWF.X) and 0.9 (ZUBECA6), and the P(ID)sib values were found to range between 0.49 (VWF.X) and 0.28 (ZUBECA6). Moreover, the P(ID)sib was computed for sets of loci by sequentially adding single loci to estimate the information content and the usefulness of the selected marker sets for the identification of dogs. The estimated P(ID)sib value of all 15 markers amounted to 8.5 x 10^-8. The presented estimations turned out to be a helpful approach for a reasonable choice of markers for the individualisation of dogs.
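The probability-of-identity quantities referred to above follow standard population-genetic formulas; the sketch below evaluates the per-locus theoretical P(ID) and the sibling variant P(ID)sib and combines independent loci by multiplication. The allele frequencies shown are invented, not those of the Innsbruck sample, and the unbiased (sample-size-corrected) variant is not included.

```python
import numpy as np

def pid_per_locus(freqs):
    """Theoretical probability of identity for one locus (unrelated individuals):
    P(ID) = 2*(sum p_i^2)^2 - sum p_i^4."""
    p = np.asarray(freqs, float)
    s2, s4 = np.sum(p ** 2), np.sum(p ** 4)
    return 2 * s2 ** 2 - s4

def pid_sib_per_locus(freqs):
    """Probability of identity among siblings for one locus:
    P(ID)sib = 0.25 + 0.5*sum p_i^2 + 0.5*(sum p_i^2)^2 - 0.25*sum p_i^4."""
    p = np.asarray(freqs, float)
    s2, s4 = np.sum(p ** 2), np.sum(p ** 4)
    return 0.25 + 0.5 * s2 + 0.5 * s2 ** 2 - 0.25 * s4

def multilocus_pid(list_of_freqs, sib=False):
    """Combined P(ID) over independent (unlinked) loci is the product of per-locus values."""
    f = pid_sib_per_locus if sib else pid_per_locus
    return float(np.prod([f(fr) for fr in list_of_freqs]))

# illustrative allele frequencies for three hypothetical STR loci
loci = [[0.4, 0.3, 0.2, 0.1], [0.25] * 4, [0.5, 0.3, 0.2]]
print("P(ID)    :", multilocus_pid(loci))
print("P(ID)sib :", multilocus_pid(loci, sib=True))
```

Sequentially multiplying in one locus at a time, as in the study, shows how quickly the combined P(ID)sib drops as markers are added.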
Estimating temporal causal interaction between spike trains with permutation and transfer entropy.
Zhaohui Li
Estimating the causal interaction between neurons is very important for better understanding the functional connectivity in neuronal networks. We propose a method called normalized permutation transfer entropy (NPTE) to evaluate the temporal causal interaction between spike trains, which quantifies the fraction of ordinal information in a neuron that has been presented in another one. The performance of this method is evaluated with spike trains generated by an Izhikevich neuronal model. Results show that the NPTE method can effectively estimate the causal interaction between two neurons without influence of data length. Considering both the precision of the estimated time delay and the robustness of the estimated information flow against neuronal firing rate, the NPTE method is superior to other information-theoretic methods including normalized transfer entropy, symbolic transfer entropy and permutation conditional mutual information. To test the performance of NPTE on analyzing simulated biophysically realistic synapses, an Izhikevich cortical network based on the neuronal model is employed. It is found that the NPTE method is able to exactly characterize mutual interactions and identify spurious causality in a network of three neurons. We conclude that the proposed method can obtain more reliable comparison of interactions between different pairs of neurons and is a promising tool to uncover more details on the neural coding.
Wagner, Daniel M.; Krieger, Joshua D.; Veilleux, Andrea G.
2016-08-04
In 2013, the U.S. Geological Survey initiated a study to update regional skew, annual exceedance probability discharges, and regional regression equations used to estimate annual exceedance probability discharges for ungaged locations on streams in the study area with the use of recent geospatial data, new analytical methods, and available annual peak-discharge data through the 2013 water year. An analysis of regional skew using Bayesian weighted least-squares/Bayesian generalized-least squares regression was performed for Arkansas, Louisiana, and parts of Missouri and Oklahoma. The newly developed constant regional skew of -0.17 was used in the computation of annual exceedance probability discharges for 281 streamgages used in the regional regression analysis. Based on analysis of covariance, four flood regions were identified for use in the generation of regional regression models. Thirty-nine basin characteristics were considered as potential explanatory variables, and ordinary least-squares regression techniques were used to determine the optimum combinations of basin characteristics for each of the four regions. Basin characteristics in candidate models were evaluated based on multicollinearity with other basin characteristics (variance inflation factor < 2.5) and statistical significance at the 95-percent confidence level (p ≤ 0.05). Generalized least-squares regression was used to develop the final regression models for each flood region. Average standard errors of prediction of the generalized least-squares models ranged from 32.76 to 59.53 percent, with the largest range in flood region D. Pseudo coefficients of determination of the generalized least-squares models ranged from 90.29 to 97.28 percent, with the largest range also in flood region D. The regional regression equations apply only to locations on streams in Arkansas where annual peak discharges are not substantially affected by regulation, diversion, channelization, backwater, or urbanization
Warren B. Cohen; Hans-Erik Andersen; Sean P. Healey; Gretchen G. Moisen; Todd A. Schroeder; Christopher W. Woodall; Grant M. Domke; Zhiqiang Yang; Robert E. Kennedy; Stephen V. Stehman; Curtis Woodcock; Jim Vogelmann; Zhe Zhu; Chengquan. Huang
2015-01-01
We are developing a system that provides temporally consistent biomass estimates for national greenhouse gas inventory reporting to the United Nations Framework Convention on Climate Change. Our model-assisted estimation framework relies on remote sensing to scale from plot measurements to lidar strip samples, to Landsat time series-based maps. As a demonstration, new...
O’Shea, Tuathan P., E-mail: tuathan.oshea@icr.ac.uk; Bamber, Jeffrey C.; Harris, Emma J. [Joint Department of Physics, The Institute of Cancer Research and The Royal Marsden NHS foundation Trust, Sutton, London SM2 5PT (United Kingdom)
2016-01-15
Purpose: Ultrasound-based motion estimation is an expanding subfield of image-guided radiation therapy. Although ultrasound can detect tissue motion that is a fraction of a millimeter, its accuracy is variable. For controlling linear accelerator tracking and gating, ultrasound motion estimates must remain highly accurate throughout the imaging sequence. This study presents a temporal regularization method for correlation-based template matching which aims to improve the accuracy of motion estimates. Methods: Liver ultrasound sequences (15–23 Hz imaging rate, 2.5–5.5 min length) from ten healthy volunteers under free breathing were used. Anatomical features (blood vessels) in each sequence were manually annotated for comparison with normalized cross-correlation based template matching. Five sequences from a Siemens Acuson™ scanner were used for algorithm development (training set). Results from incremental tracking (IT) were compared with a temporal regularization method, which included a highly specific similarity metric and state observer, known as the α–β filter/similarity threshold (ABST). A further five sequences from an Elekta Clarity™ system were used for validation, without alteration of the tracking algorithm (validation set). Results: Overall, the ABST method produced marked improvements in vessel tracking accuracy. For the training set, the mean and 95th percentile (95%) errors (defined as the difference from manual annotations) were 1.6 and 1.4 mm, respectively (compared to 6.2 and 9.1 mm, respectively, for IT). For each sequence, the use of the state observer leads to improvement in the 95% error. For the validation set, the mean and 95% errors for the ABST method were 0.8 and 1.5 mm, respectively. Conclusions: Ultrasound-based motion estimation has potential to monitor liver translation over long time periods with high accuracy. Nonrigid motion (strain) and the quality of the ultrasound data are likely to have an impact on tracking
Justine Ringard
2015-12-01
Satellite precipitation products are a means of estimating rainfall, particularly in areas that are sparsely equipped with rain gauges. The Guiana Shield is a region vulnerable to high water episodes. Flood risk is enhanced by the concentration of population living along the main rivers. A good understanding of the regional hydro-climatic regime, as well as an accurate estimation of precipitation, is therefore of great importance. Unfortunately, there are very few rain gauges available in the region. The objective of the study is then to compare satellite rainfall estimation products in order to complement the information available in situ and to perform a regional analysis of four operational precipitation estimates, by partitioning the whole area under study into homogeneous hydro-climatic regions. In this study, four satellite products have been tested against daily rain gauge data: TRMM TMPA (Tropical Rainfall Measuring Mission Multisatellite Precipitation Analysis) V7 (Version 7) and RT (real time), CMORPH (Climate Prediction Center (CPC) MORPHing technique) and PERSIANN (Precipitation Estimation from Remotely-Sensed Information using Artificial Neural Network). Product performance is evaluated at daily and monthly scales based on various intensities and hydro-climatic regimes from 1 January 2001 to 30 December 2012, using quantitative statistical criteria (correlation coefficient, bias, relative bias and root mean square error) and quantitative error metrics (probability of detection for rainy days and for no-rain days, and the false alarm ratio). Over the entire study period, all products underestimate precipitation. The results obtained in terms of the hydro-climate show that for areas with intense convective precipitation, TMPA V7 shows a better performance than the other products, especially in the estimation of extreme precipitation events. In regions along the Amazon, the use of PERSIANN is better. Finally, in the driest areas, TMPA V7 and
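The verification scores listed above are standard; the sketch below computes them for a pair of daily series. The rain/no-rain threshold, the synthetic gauge series and the synthetic satellite series are illustrative assumptions, not the study's data.

```python
import numpy as np

def verification_stats(sat, gauge, rain_threshold=1.0):
    """Continuous and categorical scores comparing satellite estimates with daily
    rain gauge data (threshold in mm/day separates rain from no-rain days)."""
    sat, gauge = np.asarray(sat, float), np.asarray(gauge, float)
    r = np.corrcoef(sat, gauge)[0, 1]
    bias = np.mean(sat - gauge)
    rel_bias = np.sum(sat - gauge) / np.sum(gauge)
    rmse = np.sqrt(np.mean((sat - gauge) ** 2))
    hits = np.sum((sat >= rain_threshold) & (gauge >= rain_threshold))
    misses = np.sum((sat < rain_threshold) & (gauge >= rain_threshold))
    false_alarms = np.sum((sat >= rain_threshold) & (gauge < rain_threshold))
    pod = hits / (hits + misses)                 # probability of detection (rainy days)
    far = false_alarms / (hits + false_alarms)   # false alarm ratio
    return dict(r=r, bias=bias, rel_bias=rel_bias, rmse=rmse, POD=pod, FAR=far)

rng = np.random.default_rng(11)
gauge = rng.gamma(0.4, 8.0, size=365)                          # synthetic daily rainfall
sat = np.clip(gauge * 0.8 + rng.normal(0, 3, 365), 0, None)    # biased, noisy estimate
print(verification_stats(sat, gauge))
```

A negative relative bias in such a comparison corresponds to the systematic underestimation of precipitation that the study reports for all four products.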
A temporal window for estimating surface brightness in the Craik-O’Brien-Cornsweet effect
Ayako Masuda
2014-11-01
The central edge of an opposing pair of luminance gradients (COC edge) makes adjoining regions with identical luminance appear to be different. This brightness illusion, called the Craik-O'Brien-Cornsweet effect (COCe), can be explained by low-level spatial filtering mechanisms (Dakin & Bex, 2003). Also, the COCe is greatly reduced when the stimulus lacks a frame element surrounding the COC edge (Purves et al., 1999). This indicates that the COCe can be modulated by extra contextual cues that are related to ideas about lighting priors. In this study, we examined whether processing for contextual modulation could be independent of the main COCe processing mediated by the filtering mechanism. We displayed the COC edge and frame element at physically different times. Then, while varying the onset asynchrony between them and changing the luminance contrast of the frame element, we measured the size of the COCe. We found that the COCe was observed in a temporal range of around 600-800 ms centered at 0 ms (from around -400 to 400 ms in stimulus onset asynchrony), which was much larger than the range of typical visual persistence. More importantly, this temporal range did not change significantly regardless of differences in the luminance contrast of the frame element (5-100 %), in the durations of the COC edge and/or the frame element (50 or 200 ms), in the display condition (interocular or binocular), and in the type of lines constituting the frame element (solid or illusory lines). Results suggest that the visual system can bind the COC edge and frame element within a temporal window of ~1 sec to estimate surface brightness. Information from the basic filtering mechanism and information from contextual cues are separately processed and are linked afterwards.
The Iterative Reweighted Mixed-Norm Estimate for Spatio-Temporal MEG/EEG Source Reconstruction.
Strohmeier, Daniel; Bekhti, Yousra; Haueisen, Jens; Gramfort, Alexandre
2016-10-01
Source imaging based on magnetoencephalography (MEG) and electroencephalography (EEG) allows for the non-invasive analysis of brain activity with high temporal and good spatial resolution. As the bioelectromagnetic inverse problem is ill-posed, constraints are required. For the analysis of evoked brain activity, spatial sparsity of the neuronal activation is a common assumption. It is often taken into account using convex constraints based on the l1-norm. The resulting source estimates are however biased in amplitude and often suboptimal in terms of source selection due to high correlations in the forward model. In this work, we demonstrate that an inverse solver based on a block-separable penalty with a Frobenius norm per block and a l0.5-quasinorm over blocks addresses both of these issues. For solving the resulting non-convex optimization problem, we propose the iterative reweighted Mixed Norm Estimate (irMxNE), an optimization scheme based on iterative reweighted convex surrogate optimization problems, which are solved efficiently using a block coordinate descent scheme and an active set strategy. We compare the proposed sparse imaging method to the dSPM and the RAP-MUSIC approach based on two MEG data sets. We provide empirical evidence based on simulations and analysis of MEG data that the proposed method improves on the standard Mixed Norm Estimate (MxNE) in terms of amplitude bias, support recovery, and stability.
Estimation of Remote Sensing and Analysis of Temporal and Spatial Distribution of Grassland LAI
Chengming SUN; Tao LIU; Doudou GUO; Ting TIAN; Lijian WANG; Yingying CHEN; Jianlong LI
2013-01-01
To estimate the leaf area index (LAI) in large areas, this paper analyzes the relationships between the normalized difference vegetation index (NDVI) and grassland LAI based on MODIS data in the southern grassy mountains and slopes of China. Using a nonlinear fitting equation, we constructed a basic estimation model of grassland LAI with NDVI as the independent variable and introduced precipitation and temperature as regulatory factors. The model was validated with observed data from different years, and the results showed a good correlation between the simulated and observed LAI values, with a statistically significant R2; the RMSE was 0.302 and the RRMSE was 0.154. It was also found that the spatial distribution of grassland LAI in south China showed a remarkable zonal characterization, and the temporal distribution showed a single-peak curve. These results provide a theoretical basis for the effective management of southern grassland resources and the carbon sink estimation of the nationwide grasslands.
Sato, Tatsuhiko; Hamada, Nobuyuki
2014-01-01
We here propose a new model assembly for estimating the surviving fraction of cells irradiated with various types of ionizing radiation, considering both targeted and nontargeted effects in the same framework. The probability densities of specific energies in two scales, which are the cell nucleus and its substructure called a domain, were employed as the physical index for characterizing the radiation fields. In the model assembly, our previously established double stochastic microdosimetric kinetic (DSMK) model was used to express the targeted effect, whereas a newly developed model was used to express the nontargeted effect. The radioresistance caused by overexpression of anti-apoptotic protein Bcl-2 known to frequently occur in human cancer was also considered by introducing the concept of the adaptive response in the DSMK model. The accuracy of the model assembly was examined by comparing the computationally and experimentally determined surviving fraction of Bcl-2 cells (Bcl-2 overexpressing HeLa cells) and Neo cells (neomycin resistant gene-expressing HeLa cells) irradiated with microbeam or broadbeam of energetic heavy ions, as well as the WI-38 normal human fibroblasts irradiated with X-ray microbeam. The model assembly reproduced very well the experimentally determined surviving fraction over a wide range of dose and linear energy transfer (LET) values. Our newly established model assembly will be worth being incorporated into treatment planning systems for heavy-ion therapy, brachytherapy, and boron neutron capture therapy, given critical roles of the frequent Bcl-2 overexpression and the nontargeted effect in estimating therapeutic outcomes and harmful effects of such advanced therapeutic modalities.
Mahfouz, Zaher; Verloock, Leen; Joseph, Wout; Tanghe, Emmeric; Gati, Azeddine; Wiart, Joe; Lautru, David; Hanna, Victor Fouad; Martens, Luc
2013-12-01
The influence of temporal daily exposure to global system for mobile communications (GSM) and universal mobile telecommunications systems and high speed downlink packet access (UMTS-HSDPA) is investigated using spectrum analyser measurements in two countries, France and Belgium. Temporal variations and traffic distributions are investigated. Three different methods to estimate maximal electric-field exposure are compared. The maximal realistic (99 %) and the maximal theoretical extrapolation factor used to extrapolate the measured broadcast control channel (BCCH) for GSM and the common pilot channel (CPICH) for UMTS are presented and compared for the first time in the two countries. Similar conclusions are found in the two countries for both urban and rural areas: worst-case exposure assessment overestimates realistic maximal exposure up to 5.7 dB for the considered example. In France, the values are the highest, because of the higher population density. The results for the maximal realistic extrapolation factor at the weekdays are similar to those from weekend days.
Ebrahimian, Hossein; Jalayer, Fatemeh
2017-08-29
In the immediate aftermath of a strong earthquake and in the presence of an ongoing aftershock sequence, scientific advisories in terms of seismicity forecasts play a crucial role in emergency decision-making and risk mitigation. Epidemic Type Aftershock Sequence (ETAS) models are frequently used for forecasting the spatio-temporal evolution of seismicity in the short term. We propose robust forecasting of seismicity based on the ETAS model, exploiting the link between Bayesian inference and Markov chain Monte Carlo simulation. The methodology considers not only the uncertainty in the model parameters, conditioned on the available catalogue of events that occurred before the forecasting interval, but also the uncertainty in the sequence of events that are going to happen during the forecasting interval. We demonstrate the methodology by retrospective early forecasting of seismicity associated with the 2016 Amatrice seismic sequence in central Italy. We provide robust spatio-temporal short-term seismicity forecasts for various time intervals in the first few days elapsed after each of the three main events within the sequence, which can predict the seismicity within plus/minus two standard deviations from the mean estimate within the few hours elapsed after the main event.
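As a small illustration of the model underlying such forecasts, the sketch below evaluates the conditional intensity of a purely temporal ETAS model for a toy catalogue; the parameter values and events are invented, not calibrated to the Amatrice sequence, and the spatial kernel is omitted.

```python
import numpy as np

def etas_intensity(t, event_times, event_mags, mu, K, alpha, c, p, m0):
    """Conditional intensity of the temporal ETAS model at time t (days):
    lambda(t) = mu + sum_{t_i < t} K * exp(alpha*(m_i - m0)) / (t - t_i + c)^p.
    All parameter values passed in below are illustrative only."""
    past = event_times < t
    trig = (K * np.exp(alpha * (event_mags[past] - m0))
            / (t - event_times[past] + c) ** p)
    return mu + trig.sum()

# toy catalogue: occurrence times (days) and magnitudes of past events
times = np.array([0.0, 0.3, 1.2, 2.5])
mags = np.array([6.0, 4.5, 5.4, 4.8])
params = dict(mu=0.2, K=0.05, alpha=1.5, c=0.01, p=1.1, m0=3.0)

for t in (3.0, 5.0, 10.0):
    lam = etas_intensity(t, times, mags, **params)
    print(f"t = {t:>4.1f} d : expected rate = {lam:.2f} events/day above M{params['m0']}")
```

In a Bayesian setting such as the one described above, this intensity would be integrated over the forecasting interval and averaged over posterior parameter samples (and over simulated future event sequences) to obtain the robust forecast with its uncertainty bounds.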
Wiegmann, Douglas A.
2005-01-01
The NASA Aviation Safety Program (AvSP) has defined several products that will potentially modify airline and/or ATC operations, enhance aircraft systems, and improve the identification of potential hazardous situations within the National Airspace System (NAS). Consequently, there is a need to develop methods for evaluating the potential safety benefit of each of these intervention products so that resources can be effectively invested. One approach uses expert judgments to develop Bayesian Belief Networks (BBN's) that model the potential impact that specific interventions may have. Specifically, the present report summarizes methodologies for improving the elicitation of probability estimates during expert evaluations of AvSP products for use in BBN's. The work involved joint efforts between Professor James Luxhoj of Rutgers University and researchers at the University of Illinois. The Rutgers project to develop BBN's received funding from NASA under the title "Probabilistic Decision Support for Evaluating Technology Insertion and Assessing Aviation Safety System Risk." The proposed project was funded separately but supported the existing Rutgers program.
Painter, Colin C.; Heimann, David C.; Lanning-Rush, Jennifer L.
2017-08-14
A study was done by the U.S. Geological Survey in cooperation with the Kansas Department of Transportation and the Federal Emergency Management Agency to develop regression models to estimate peak streamflows of annual exceedance probabilities of 50, 20, 10, 4, 2, 1, 0.5, and 0.2 percent at ungaged locations in Kansas. Peak streamflow frequency statistics from selected streamgages were related to contributing drainage area and average precipitation using generalized least-squares regression analysis. The peak streamflow statistics were derived from 151 streamgages with at least 25 years of streamflow data through 2015. The developed equations can be used to predict peak streamflow magnitude and frequency within two hydrologic regions that were defined based on the effects of irrigation. The equations developed in this report are applicable to streams in Kansas that are not substantially affected by regulation, surface-water diversions, or urbanization. The equations are intended for use for streams with contributing drainage areas ranging from 0.17 to 14,901 square miles in the nonirrigation effects region and 1.02 to 3,555 square miles in the irrigation-affected region, corresponding to the range of drainage areas of the streamgages used in the development of the regional equations.
Estimation of probable maximum typhoon wave for coastal nuclear power plant
丁赟
2011-01-01
The third-generation wave model SWAN (Simulating Waves Nearshore) was employed to estimate the probable maximum typhoon wave at a coastal nuclear power engineering site. The relationship between the development of the probable maximum typhoon wave and that of the accompanying probable maximum storm surge was investigated. The analysis shows that the probable maximum typhoon wave usually lags the peak of the probable maximum storm surge, and the estimated probable maximum typhoon wave is higher than the maximum wave height observed at the Zhelang ocean station. The approach used in this study can provide a valuable reference for estimating the probable maximum typhoon wave in the design of coastal nuclear power engineering.
Ennis, Erin J; Foley, Joe P
2016-07-15
A stochastic approach was utilized to estimate the probability of a successful isocratic or gradient separation in conventional chromatography for numbers of sample components, peak capacities, and saturation factors ranging from 2 to 30, 20 to 300, and 0.017 to 1, respectively. The stochastic probabilities were obtained under conditions of (i) constant peak width ("gradient" conditions) and (ii) peak width increasing linearly with time ("isocratic/constant N" conditions). The isocratic and gradient probabilities obtained stochastically were compared with the probabilities predicted by Martin et al. [Anal. Chem., 58 (1986) 2200-2207] and Davis and Stoll [J. Chromatogr. A, (2014) 128-142]; for a given number of components and peak capacity the same trend is always observed, with the probability obtained with the isocratic stochastic approach less than or equal to the probability predicted by Davis and Stoll. When the stochastic results are applied to conventional HPLC and sequential elution liquid chromatography (SE-LC), the latter is shown to provide much greater probabilities of success for moderately complex samples (e.g., PHPLC = 31.2% versus PSE-LC = 69.1% for 12 components and the same analysis time). For a given number of components, the density of probability data provided over the range of peak capacities is sufficient to allow accurate interpolation of probabilities for peak capacities not reported. Further applications of the stochastic approach include isothermal and programmed-temperature gas chromatography.
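To make the stochastic idea concrete, the minimal Monte Carlo sketch below estimates the probability of complete separation under the constant-peak-width ("gradient") condition only: retention times are treated as uniform random positions on a unit window, and a run succeeds if every adjacent pair is at least one peak width (1/n_c) apart. This is an illustrative reading of the approach, not the authors' exact procedure, and it ignores the isocratic case of linearly increasing peak width.

```python
# Minimal Monte Carlo sketch: probability that m randomly placed components are all
# separated by at least 1/n_c (constant peak width, "gradient" conditions).
import numpy as np

def p_success(m, n_c, trials=20000, seed=1):
    rng = np.random.default_rng(seed)
    wins = 0
    for _ in range(trials):
        t = np.sort(rng.uniform(0.0, 1.0, m))        # random retention times
        if np.all(np.diff(t) >= 1.0 / n_c):          # all adjacent gaps resolvable
            wins += 1
    return wins / trials

print(p_success(m=12, n_c=100))   # chance that all 12 components are resolved
```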
Matthias Deliano
Estimation of learning curves is ubiquitously based on proportions of correct responses within moving trial windows. Thereby, it is tacitly assumed that learning performance is constant within the moving windows, which, however, is often not the case. In the present study we demonstrate that violations of this assumption lead to systematic errors in the analysis of learning curves, and we explore the dependency of these errors on window size, the statistical model, and learning phase. To reduce these errors in the analysis of single-subject data as well as on the population level, we propose adequate statistical methods for the estimation of learning curves and the construction of confidence intervals, trial by trial. Applied to data from an avoidance learning experiment with rodents, these methods revealed performance changes occurring at multiple time scales within and across training sessions which were otherwise obscured in the conventional analysis. Our work shows that the proper assessment of the behavioral dynamics of learning at high temporal resolution can shed new light on specific learning processes and thus allows existing learning concepts to be refined. It further disambiguates the interpretation of neurophysiological signal changes recorded during training in relation to learning.
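For context, here is a minimal sketch of the conventional moving-window estimate that the study starts from: the proportion of correct responses in a sliding window of trials, with a Wilson score interval attached to each window. The window length, z value, and simulated data are illustrative assumptions; the trial-by-trial methods proposed by the authors are not reproduced here.

```python
# Minimal sketch of a moving-window learning curve with Wilson score intervals.
import numpy as np

def windowed_learning_curve(correct, window=20, z=1.96):
    correct = np.asarray(correct, dtype=float)
    est, lo, hi = [], [], []
    for i in range(len(correct) - window + 1):
        k = correct[i:i + window].sum()
        n = window
        p = k / n
        centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)                          # Wilson centre
        half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
        est.append(p); lo.append(centre - half); hi.append(centre + half)
    return np.array(est), np.array(lo), np.array(hi)

rng = np.random.default_rng(0)
trials = rng.random(200) < np.linspace(0.3, 0.9, 200)    # simulated improving performance
curve, low, high = windowed_learning_curve(trials)
```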
Jonathan Van Beek
2015-08-01
Yield and quality estimations provide vital information to fruit growers, yet require accurate monitoring throughout the growing season. To this end, the temporal dependency of fruit yield and quality estimations through spectral vegetation indices was investigated in irrigated and rainfed pear orchards. Both orchards were monitored throughout three consecutive growing seasons, including spectral measurements (i.e., hyperspectral canopy reflectance) as well as yield determination (i.e., total yield and number of fruits per tree) and quality assessment (i.e., fruit firmness, total soluble solids, and fruit color). The results illustrated a clear association between spectral vegetation indices and both fruit yield and fruit quality (|r| > 0.75; p < 0.001). However, the correlations between vegetation indices and production variables varied throughout the growing season, depending on the phenological stage of fruit development. In the irrigated orchard, index values showed a strong association with production variables near the time of harvest (|r| > 0.6; p < 0.001), while in the rainfed orchard, index values acquired during vegetative growth periods presented stronger correlations with fruit parameters (|r| > 0.6; p < 0.001). The improved planning of remote sensing missions during (rainfed orchards) and after (irrigated orchards) vegetative growth periods could enable growers to more accurately predict production outcomes and improve the production process.
Valero, Antonio; Pasquali, Frédérique; De Cesare, Alessandra; Manfreda, Gerardo
2014-08-01
Current sampling plans assume a random distribution of microorganisms in food. However, food-borne pathogens are estimated to be heterogeneously distributed in powdered foods. This spatial distribution, together with very low levels of contamination, raises concerns about the efficiency of current sampling plans for the detection of food-borne pathogens like Cronobacter and Salmonella in powdered foods such as powdered infant formula or powdered eggs. An alternative approach based on a Poisson distribution of the contaminated part of the lot (Habraken approach) was used to evaluate the probability of falsely accepting a contaminated lot of powdered food when different sampling strategies were simulated, considering variables such as lot size, sample size, microbial concentration in the contaminated part of the lot, and proportion of the lot that is contaminated. The simulated results suggest that a sample size of 100 g or more corresponds to the lowest number of samples to be tested in comparison with sample sizes of 10 or 1 g. Moreover, the number of samples to be tested decreases greatly if the microbial concentration is 1 CFU/g instead of 0.1 CFU/g or if the proportion of contamination is 0.05 instead of 0.01. Mean contaminations higher than 1 CFU/g or proportions higher than 0.05 did not affect the number of samples. The Habraken approach represents a useful tool for risk management in order to design a fit-for-purpose sampling plan for the detection of low levels of food-borne pathogens in heterogeneously contaminated powdered food. However, it must be noted that although effective in detecting pathogens, these sampling plans are difficult to apply because of the huge number of samples that need to be tested. Sampling does not seem to be an effective measure to control pathogens in powdered food.
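A minimal sketch of the false-acceptance arithmetic implied by this kind of plan is shown below. It assumes that only a fraction p_contaminated of the lot carries the pathogen, that cells in that fraction are Poisson distributed with mean conc_cfu_per_g, and that the lot is falsely accepted when all n samples test negative; this is a simplified reading of the Habraken-style calculation, not the authors' exact model.

```python
# Minimal sketch: probability of falsely accepting a heterogeneously contaminated lot.
import numpy as np

def p_false_acceptance(n_samples, sample_size_g, conc_cfu_per_g, p_contaminated):
    # probability that a single sample contains zero cells:
    # either it comes from the clean part of the lot, or it comes from the
    # contaminated part and the Poisson draw happens to be zero
    p_negative = (1 - p_contaminated) + p_contaminated * np.exp(-conc_cfu_per_g * sample_size_g)
    return p_negative ** n_samples   # all n samples negative -> lot accepted

# e.g. 60 samples of 100 g, 1% of the lot contaminated at 0.1 CFU/g
print(p_false_acceptance(60, 100, 0.1, 0.01))
```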
Oh, Won Sup; Kim, Yeon-Sook; Yeom, Joon Sup; Choi, Hee Kyoung; Kwak, Yee Gyung; Jun, Jae-Bum; Park, Seong Yeon; Chung, Jin-Won; Rhee, Ji-Young; Kim, Baek-Nam
2016-11-24
Among patients with urinary tract infection (UTI), bacteremic cases show higher mortality rates than do nonbacteremic cases. Early identification of bacteremic cases is crucial for severity assessment of patients with febrile UTI. This study aimed to identify predictors associated with bacteremia in women with community-onset febrile UTI and to develop a prediction model to estimate the probability of bacteremic cases. This cross-sectional study included women consecutively hospitalized with community-onset febrile UTI at 10 hospitals in Korea. Multiple logistic regression identified predictors associated with bacteremia among candidate variables chosen from univariate analysis. A prediction model was developed using all predictors weighted by their regression coefficients. From July to September 2014, 383 women with febrile UTI were included: 115 (30.0%) bacteremic and 268 (70.0%) nonbacteremic cases. A prediction model consisted of diabetes mellitus (1 point), urinary tract obstruction by stone (2), costovertebral angle tenderness (2), a fraction of segmented neutrophils of > 90% (2), thrombocytopenia (2), azotemia (2), and the fulfillment of all criteria for systemic inflammatory response syndrome (2). The c statistic for the model was 0.807 (95% confidence interval [CI], 0.757-0.856). At a cutoff value of ≥ 3, the model had a sensitivity of 86.1% (95% CI, 78.1-91.6%) and a specificity of 54.9% (95% CI, 48.7-91.6%). Our model showed a good discriminatory power for early identification of bacteremic cases in women with community-onset febrile UTI. In addition, our model can be used to identify patients at low risk for bacteremia because of its relatively high sensitivity.
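For illustration, the sketch below applies the reported scoring model: predictor names and point values follow the abstract, and a total of 3 or more points flags a patient as being at elevated risk of bacteremia. This demonstrates the arithmetic only and is not a validated clinical tool.

```python
# Minimal sketch of applying the point-based prediction model described above.
POINTS = {
    "diabetes_mellitus": 1,
    "urinary_tract_obstruction_by_stone": 2,
    "costovertebral_angle_tenderness": 2,
    "segmented_neutrophils_over_90_percent": 2,
    "thrombocytopenia": 2,
    "azotemia": 2,
    "meets_all_sirs_criteria": 2,
}

def bacteremia_risk(findings):
    """findings: dict mapping predictor name -> bool."""
    score = sum(pts for name, pts in POINTS.items() if findings.get(name, False))
    return score, score >= 3        # cutoff of >= 3 points from the reported model

score, flagged = bacteremia_risk({"azotemia": True, "thrombocytopenia": True})
print(score, flagged)               # 4, True
```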
Andersen, H.
2013-12-01
There is increasing interest in the development of statistical sampling designs for aboveground biomass (and carbon) inventory and monitoring programs that can make efficient use of a variety of available data sources, including field plots, airborne lidar sampling, and satellite imagery. While the use of multiple sources, or levels, of remote sensing data can significantly increase the precision of biomass change estimates, especially in remote areas (such as interior Alaska) where it is extremely expensive to establish field plots, it can be challenging to accurately characterize the uncertainty (i.e. variance and bias) of the estimates obtained from these complex multi-level designs. In this study we evaluate a model-based approach to estimate changes in biomass over the western lowlands of the Kenai Peninsula of Alaska during the period 2004-2009 using a combination of field plots, lidar sampling, and satellite imagery. The model-based approach -- where all inferences are conditioned on the model relating the remote-sensing measurements to the inventory parameter of interest (e.g. biomass) -- is appropriate for cases where it is cost-prohibitive, or infeasible, to establish a probability sample of field plots that are both spatially and temporally coincident with each remote sensing data set. For example, a model-based approach can be used to obtain biomass estimates over a period of time, even when field data is only available for the current time period. In this study, lidar data were collected in 2004 and 2009 over single swaths that covered 130 Forest Inventory and Analysis (FIA) plots distributed on a regular grid over the entire western Kenai. Field measurements on FIA plots were initially acquired over the period 1999-2003, and fifty percent of these plots were remeasured in the period 2004-2009. In addition, high-accuracy plot coordinates were obtained with GPS equipment. Changes in biomass (and associated uncertainty) estimated from field remeasurements alone were compared to
Rainfall erosivity estimation based on rainfall data collected over a range of temporal resolutions
S. Yin
2015-05-01
Rainfall erosivity is the power of rainfall to cause soil erosion by water. The rainfall erosivity index for a rainfall event, EI30, is calculated from the total kinetic energy and maximum 30 min intensity of individual events. However, these data are often unavailable in many areas of the world. The purpose of this study was to develop models that relate more commonly available rainfall data resolutions, such as daily or monthly totals, to rainfall erosivity. Eleven stations with one-minute temporal resolution rainfall data collected from 1961 through 2000 in the eastern water-erosion areas of China were used to develop and calibrate 21 models. Seven independent stations, also with one-minute data, were utilized to validate those models, together with 20 previously published equations. Results showed that models in this study performed as well as or better than models from previous research in estimating rainfall erosivity for these data. Prediction capabilities, as determined using symmetric mean absolute percentage errors and Nash–Sutcliffe model efficiency coefficients, were demonstrated for the 41 models, including those for estimating erosivity at event, daily, monthly, yearly, average monthly and average annual time scales. Prediction capabilities were generally better using higher resolution rainfall data as inputs. For example, models with rainfall amount and maximum 60 min rainfall amount as inputs performed better than models with rainfall amount and maximum daily rainfall amount, which performed better than those with only rainfall amount. Recommendations are made for choosing the appropriate estimation equation, which depends on objectives and data availability.
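As a rough illustration of what an event EI30 calculation involves, the sketch below works from a fixed-interval hyetograph and assumes the widely used unit kinetic energy relation e = 0.29 * (1 - 0.72 * exp(-0.05 * i)) in MJ ha^-1 mm^-1; the energy equation, time step, and units actually used in the study may differ.

```python
# Minimal sketch of event erosivity EI30 from an equal-interval hyetograph.
import numpy as np

def event_ei30(rain_mm, minutes_per_step=1):
    """rain_mm: rainfall depth per time step (mm) for a single event."""
    rain_mm = np.asarray(rain_mm, dtype=float)
    intensity = rain_mm * (60.0 / minutes_per_step)               # mm/h in each step
    unit_energy = 0.29 * (1 - 0.72 * np.exp(-0.05 * intensity))   # MJ/(ha*mm), assumed relation
    total_energy = np.sum(unit_energy * rain_mm)                  # MJ/ha
    steps = max(1, round(30 / minutes_per_step))                  # steps per 30 min
    window = np.convolve(rain_mm, np.ones(steps), mode="valid")   # 30-min depths
    if window.size == 0:                                          # event shorter than 30 min
        window = np.array([rain_mm.sum()])
    i30 = window.max() * 2.0                                      # max 30-min depth -> mm/h
    return total_energy * i30                                     # MJ mm ha^-1 h^-1

print(event_ei30(np.full(60, 0.5)))   # a 60-minute event at a steady 30 mm/h
```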
Wathen, Steve; Thorne, James H; Holguin, Andrew; Schwartz, Mark W
2014-01-01
Evidence for significant losses of species richness or biodiversity, even within protected natural areas, is mounting. Managers are increasingly being asked to monitor biodiversity, yet estimating biodiversity is often prohibitively expensive. As a cost-effective option, we estimated the spatial and temporal distribution of species richness for four taxonomic groups (birds, mammals, herpetofauna (reptiles and amphibians), and plants) within Sequoia and Kings Canyon National Parks using only existing biological studies undertaken within the Parks and the Parks' long-term wildlife observation database. We used a rarefaction approach to model species richness for the four taxonomic groups and analyzed those groups by habitat type, elevation zone, and time period. We then mapped the spatial distributions of species richness values for the four taxonomic groups, as well as total species richness, for the Parks. We also estimated changes in species richness for birds, mammals, and herpetofauna since 1980. The modeled patterns of species richness either peaked at mid elevations (mammals, plants, and total species richness) or declined consistently with increasing elevation (herpetofauna and birds). Plants reached maximum species richness values at much higher elevations than did vertebrate taxa, and non-flying mammals reached maximum species richness values at higher elevations than did birds. Alpine plant communities, including sagebrush, had higher species richness values than did subalpine plant communities located below them in elevation. These results are supported by other papers published in the scientific literature. Perhaps reflecting climate change, birds and herpetofauna displayed declines in species richness since 1980 at low and middle elevations and mammals displayed declines in species richness since 1980 at all elevations.
Benndorf, Matthias; Neubauer, Jakob; Langer, Mathias; Kotter, Elmar
2017-03-01
In the diagnostic process of primary bone tumors, patient age, tumor localization and to a lesser extent sex affect the differential diagnosis. We therefore aim to develop a pretest probability calculator for primary malignant bone tumors based on population data taking these variables into account. We access the SEER (Surveillance, Epidemiology and End Results Program of the National Cancer Institute, 2015 release) database and analyze data of all primary malignant bone tumors diagnosed between 1973 and 2012. We record age at diagnosis, tumor localization according to the International Classification of Diseases (ICD-O-3) and sex. We take relative probability of the single tumor entity as a surrogate parameter for unadjusted pretest probability. We build a probabilistic (naïve Bayes) classifier to calculate pretest probabilities adjusted for age, tumor localization and sex. We analyze data from 12,931 patients (647 chondroblastic osteosarcomas, 3659 chondrosarcomas, 1080 chordomas, 185 dedifferentiated chondrosarcomas, 2006 Ewing's sarcomas, 281 fibroblastic osteosarcomas, 129 fibrosarcomas, 291 fibrous malignant histiocytomas, 289 malignant giant cell tumors, 238 myxoid chondrosarcomas, 3730 osteosarcomas, 252 parosteal osteosarcomas, 144 telangiectatic osteosarcomas). We make our probability calculator accessible at http://ebm-radiology.com/bayesbone/index.html . We provide exhaustive tables for age and localization data. Results from tenfold cross-validation show that in 79.8 % of cases the pretest probability is correctly raised. Our approach employs population data to calculate relative pretest probabilities for primary malignant bone tumors. The calculator is not diagnostic in nature. However, resulting probabilities might serve as an initial evaluation of probabilities of tumors on the differential diagnosis list.
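A minimal sketch of the naive Bayes idea behind such a calculator is given below: the relative frequency of each tumor entity acts as the prior, and age band, localization, and sex are treated as conditionally independent features. The priors roughly follow the relative counts quoted in the abstract, but the likelihood tables are invented placeholders, not the SEER-derived tables behind the published calculator.

```python
# Minimal naive Bayes sketch for adjusted pretest probabilities.
def posterior(priors, likelihoods, features):
    """priors: {entity: P(entity)}; likelihoods: {entity: {feature_value: P(value | entity)}}."""
    scores = {}
    for entity, prior in priors.items():
        s = prior
        for value in features:
            s *= likelihoods[entity].get(value, 1e-6)   # small floor for unseen combinations
        scores[entity] = s
    total = sum(scores.values())
    return {entity: s / total for entity, s in scores.items()}

priors = {"osteosarcoma": 0.29, "chondrosarcoma": 0.28, "ewing_sarcoma": 0.16}   # ~ relative counts
likelihoods = {                                                                  # placeholder values
    "osteosarcoma":   {"age_10_19": 0.45, "long_bone_lower_limb": 0.40, "male": 0.55},
    "chondrosarcoma": {"age_10_19": 0.03, "long_bone_lower_limb": 0.25, "male": 0.55},
    "ewing_sarcoma":  {"age_10_19": 0.55, "long_bone_lower_limb": 0.20, "male": 0.60},
}
print(posterior(priors, likelihoods, ["age_10_19", "long_bone_lower_limb", "male"]))
```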
Haigh, Ivan D.; Wijeratne, E. M. S.; MacPherson, Leigh R.; Pattiaratchi, Charitha B.; Mason, Matthew S.; Crompton, Ryan P.; George, Steve
2014-01-01
The occurrence of extreme water levels along low-lying, highly populated and/or developed coastlines can lead to considerable loss of life and billions of dollars of damage to coastal infrastructure. Therefore it is vitally important that the exceedance probabilities of extreme water levels are accurately evaluated to inform risk-based flood management, engineering and future land-use planning. This ensures the risk of catastrophic structural failures due to under-design or expensive wastes due to over-design is minimised. This paper estimates for the first time present day extreme water level exceedance probabilities around the whole coastline of Australia. A high-resolution depth averaged hydrodynamic model has been configured for the Australian continental shelf region and has been forced with tidal levels from a global tidal model and meteorological fields from a global reanalysis to generate a 61-year hindcast of water levels. Output from this model has been successfully validated against measurements from 30 tide gauge sites. At each numeric coastal grid point, extreme value distributions have been fitted to the derived time series of annual maxima and the several largest water levels each year to estimate exceedance probabilities. This provides a reliable estimate of water level probabilities around southern Australia; a region mainly impacted by extra-tropical cyclones. However, as the meteorological forcing used only weakly includes the effects of tropical cyclones, extreme water level probabilities are underestimated around the western, northern and north-eastern Australian coastline. In a companion paper we build on the work presented here and more accurately include tropical cyclone-induced surges in the estimation of extreme water level. The multi-decadal hindcast generated here has been used primarily to estimate extreme water level exceedance probabilities but could be used more widely in the future for a variety of other research and practical applications.
Weng, Q.
2007-12-01
Impervious surface is a key indicator of urban environmental quality and urbanization degree. Therefore, estimation and mapping of impervious surfaces in urban areas has attracted more and more attention recently, using remote sensing digital images. In this paper, satellite images with various spectral, spatial, and temporal resolutions are employed to examine the effects of these remote sensing data characteristics on mapping accuracy of urban impervious surfaces. The study area was the city proper of Indianapolis (Marion County), Indiana, United States. Linear spectral mixture analysis was applied to generate high albedo, low albedo, vegetation, and soil fraction images (endmembers) from the satellite images, and impervious surfaces were then estimated by adding the high albedo and low albedo fraction images. A comparison of EO-1 ALI (multispectral) and Hyperion (hyperspectral) images indicates that the Hyperion image was more effective in discerning low albedo surface materials, especially the spectral bands in the mid-infrared region. Linear spectral mixing modeling was found more useful for medium spatial resolution images, such as Landsat TM/ETM+ and ASTER images, due to the existence of a large amount of mixed pixels in the urban areas. The model, however, may not be suitable for high spatial resolution images, such as IKONOS images, because mixed pixels have less influence there. The shadow problem in the high spatial resolution images, caused by tall buildings and large tree crowns, is a challenge in impervious surface extraction. Alternative image processing algorithms such as a decision tree classifier may be more appropriate to achieve high mapping accuracy. For mid-latitude cities, seasonal vegetation phenology has a significant effect on the spectral response of terrestrial features, and therefore image analysis must take this environmental characteristic into account. Three ASTER images, acquired on April 5, 2004, June 16, 2001, and October 3, 2000
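A minimal sketch of the fraction-image logic described above: each pixel spectrum is unmixed into four endmember fractions by non-negative least squares, and the impervious fraction is approximated by the sum of the high- and low-albedo fractions. The endmember spectra and pixel values are invented for illustration, and the constrained unmixing actually used in the study may differ.

```python
# Minimal sketch of linear spectral mixture analysis for impervious surface estimation.
import numpy as np
from scipy.optimize import nnls

# rows: 6 spectral bands; columns: high albedo, low albedo, vegetation, soil (made-up values)
E = np.array([
    [0.60, 0.05, 0.04, 0.20],
    [0.62, 0.06, 0.07, 0.25],
    [0.65, 0.07, 0.05, 0.30],
    [0.70, 0.08, 0.45, 0.35],
    [0.72, 0.09, 0.30, 0.40],
    [0.75, 0.10, 0.20, 0.45],
])

def impervious_fraction(pixel_spectrum):
    fractions, _ = nnls(E, pixel_spectrum)     # non-negative least-squares unmixing
    fractions = fractions / fractions.sum()    # approximate sum-to-one constraint
    high_albedo, low_albedo, vegetation, soil = fractions
    return high_albedo + low_albedo            # impervious = high albedo + low albedo

print(impervious_fraction(np.array([0.30, 0.32, 0.35, 0.45, 0.45, 0.48])))
```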
Huddleston, Lisa L.; Roeder, William; Merceret, Francis J.
2010-01-01
A technique has been developed to calculate the probability that any nearby lightning stroke is within any radius of any point of interest. In practice, this provides the probability that a nearby lightning stroke was within a key distance of a facility, rather than the error ellipses centered on the stroke. This process takes the current bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to get the probability that the stroke is inside any specified radius. This new facility-centric technique will be much more useful to the space launch customers and may supersede the lightning error ellipse approach discussed in [5], [6].
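The calculation described above amounts to integrating a bivariate Gaussian over a disk centred on the point of interest. The sketch below approximates that integral by Monte Carlo sampling rather than the numerical integration used in the technique itself, and the error-ellipse covariance, locations, and radius are illustrative.

```python
# Minimal Monte Carlo sketch: probability that a stroke lies within a radius of a facility.
import numpy as np

def prob_within_radius(stroke_xy, cov, facility_xy, radius_km, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(stroke_xy, cov, size=n)    # plausible stroke locations
    dist = np.linalg.norm(samples - np.asarray(facility_xy), axis=1)
    return np.mean(dist <= radius_km)

cov = np.array([[0.8**2, 0.0],     # 0.8 km and 0.4 km one-sigma axes, no rotation (illustrative)
                [0.0, 0.4**2]])
print(prob_within_radius(stroke_xy=(0.0, 0.0), cov=cov, facility_xy=(1.0, 0.5), radius_km=1.0))
```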
Zheng, Yuanfan; Weng, Qihao
2017-07-22
Anthropogenic heat flux (Qf), which originates from energy consumption by buildings, industrial plants, vehicle exhausts, and human metabolism, is an important component of the urban Surface Energy Balance (SEB) system and is key to understanding many urban environmental issues. The present study provided a hybrid Qf modeling approach, which combined inventory and GIS methods to create a 365-day hourly Qf profile at 120 m spatial resolution in Los Angeles County, California, USA. Qf was estimated by separately calculating heat release from buildings, traffic, and human metabolism. The results indicated that Qf showed different magnitudes and diurnal patterns between workdays (dual-peak shape) and weekends/holidays, and also varied with seasons and land use types. Qf yielded the highest values on summer workdays, with a maximum value of 7.76 W/m2. Qf on hot summer workdays was markedly higher than on average summer workdays, caused by higher demands for space cooling in buildings, and reached 8.14 W/m2 at maximum. Building energy consumption was identified as the dominant contributor to Qf in Downtown Los Angeles, which was found to have the largest mean Qf throughout the year among all neighborhoods. Qf in the downtown was more significant on workdays than on non-workdays, and its maximum value can reach 100 W/m2. Our approach may have wider applicability for Qf estimation in large areas compared with existing studies, as all the data used are available to the public. A high spatial and temporal resolution Qf profile, which can readily be incorporated into urban energy balance and Urban Heat Island (UHI) studies, provides valuable data and information for pertinent government agencies and researchers.
Lee, Duncan; Mukhopadhyay, Sabyasachi; Rushworth, Alastair; Sahu, Sujit K
2017-04-01
In the United Kingdom, air pollution is linked to around 40,000 premature deaths each year, but estimating its health effects is challenging in a spatio-temporal study. The challenges include spatial misalignment between the pollution and disease data; uncertainty in the estimated pollution surface; and complex residual spatio-temporal autocorrelation in the disease data. This article develops a two-stage model that addresses these issues. The first stage is a spatio-temporal fusion model linking modeled and measured pollution data, while the second stage links these predictions to the disease data. The methodology is motivated by a new five-year study investigating the effects of multiple pollutants on respiratory hospitalizations in England between 2007 and 2011, using pollution and disease data relating to local and unitary authorities on a monthly time scale.
Identifying the spatial and temporal distribution of crop water requirements is key to successful management of water resources in the dry areas. Climatic data were obtained from three automated weather stations to estimate reference evapotranspiration (ETO) in the Jordan Valley according to the...
Hofmann, K.M.; Gavrila, D.M.
2009-01-01
We present a system for the estimation of unconstrained 3D human upper body movement from multiple cameras. Its main novelty lies in the integration of three components: single-frame pose recovery, temporal integration and model adaptation. Single-frame pose recovery consists of a hypothesis generation ...
Brus, D.J.; Gruijter, de J.J.
2012-01-01
This paper launches a hybrid sampling approach, entailing a design-based approach in space followed by a model-based approach in time, for estimating temporal trends of spatial means or totals. The underlying space–time process that generated the soil data is only partly described, viz. by a linear
Ssematimba, A.; Elbers, A.R.W.; Hagenaars, T.H.J.; Jong, de M.C.M.
2012-01-01
Estimates of the per-contact probability of transmission between farms of Highly Pathogenic Avian Influenza virus of H7N7 subtype during the 2003 epidemic in the Netherlands are important for the design of better control and biosecurity strategies. We used standardized data collected during the epidemic ...
Analyzing Spatial and Temporal Variation in Precipitation Estimates in a Coupled Model
Tomkins, C. D.; Springer, E. P.; Costigan, K. R.
2001-12-01
the LADHS and RAMS cumulative precipitation reveal a disassociation over time, with R equal to 0.74 at day eight and R equal to 0.52 at day 31. Linear correlation coefficients (Pearson) returned a stronger initial correlation of 0.97, decreasing to 0.68. The standard deviations for the 2500 LADHS cells underlying each 5km RAMS cell range from 8 mm to 695 mm in the Sangre de Cristo Mountains and 2 mm to 112 mm in the San Luis Valley. Comparatively, the standard deviations of the RAMS estimates in these regions are 247 mm and 30 mm respectively. The LADHS standard deviations provide a measure of the variability introduced through the downscaling routine, which exceeds RAMS regional variability by a factor of 2 to 4. The coefficient of variation for the average LADHS grid cell values and the RAMS cell values in the Sangre de Cristo Mountains are 0.66 and 0.27, respectively, and 0.79 and 0.75 in the San Luis Valley. The coefficients of variation evidence the uniformity of the higher precipitation estimates in the mountains, especially for RAMS, and also the lower means and variability found in the valley. Additionally, Kolmogorov-Smirnov tests indicate clear spatial and temporal differences in mean simulated precipitation across the grid.
Statistical model estimating the occurrence of otitis media from temporal bone pneumatization
Homøe, P; Lynnerup, N; Rasmussen, N
1994-01-01
In order to investigate the relationship between the pneumatization of temporal bones and the occurrence of otitis media in Greenlandic Inuit, 36 Greenlandic Inuit were examined by radiography of the temporal bones. The pneumatized cell area was measured planimetrically. All subjects answered a questionnaire ...
Reutter, Bryan W.; Gullberg, Grant T.; Huesman, Ronald H.
2001-04-30
Artifacts can result when reconstructing a dynamic image sequence from inconsistent single photon emission computed tomography (SPECT) projections acquired by a slowly rotating gantry. The artifacts can lead to biases in kinetic parameters estimated from time-activity curves generated by overlaying volumes of interest on the images. To overcome these biases in conventional image based dynamic data analysis, we have been investigating the estimation of time-activity curves and kinetic model parameters directly from dynamic SPECT projection data by modeling the spatial and temporal distribution of the radiopharmaceutical throughout the projected field of view. In previous work we developed computationally efficient methods for fully four-dimensional (4-D) direct estimation of spatiotemporal distributions [1] and their statistical uncertainties [2] from dynamic SPECT projection data, using a spatial segmentation and temporal B-splines. In addition, we studied the bias that results from modeling various orders of temporal continuity and using various time samplings [1]. In the present work, we use the methods developed in [1, 2] and Monte Carlo simulations to study the effects of the temporal modeling on the statistical variability of the reconstructed distributions.
John R. Squires; Lucretia E. Olson; David L. Turner; Nicholas J. DeCesare; Jay A. Kolbe
2012-01-01
We used snow-tracking surveys to determine the probability of detecting Canada lynx Lynx canadensis in known areas of lynx presence in the northern Rocky Mountains, Montana, USA during the winters of 2006 and 2007. We used this information to determine the minimum number of survey replicates necessary to infer the presence and absence of lynx in areas of similar lynx...
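The survey-design question behind this kind of study reduces to simple arithmetic on the per-replicate detection probability: the sketch below computes the minimum number of replicates needed for the cumulative detection probability 1 - (1 - p)^n to reach a target level. The per-visit probability and target used here are purely illustrative, not values from the study.

```python
# Minimal sketch: minimum survey replicates for a target cumulative detection probability.
import math

def min_replicates(p_single_visit, target=0.95):
    return math.ceil(math.log(1 - target) / math.log(1 - p_single_visit))

print(min_replicates(0.6))   # e.g. 4 replicates if one survey detects lynx 60% of the time
```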
Aalto, Juha; Pirinen, Pentti; Jylhä, Kirsti
2016-04-01
Long-term time series of key climate variables with a relevant spatiotemporal resolution are essential for environmental science. Moreover, such spatially continuous data, based on weather observations, are commonly used in, e.g., downscaling and bias correcting of climate model simulations. Here we conducted a comprehensive spatial interpolation scheme where seven climate variables (daily mean, maximum, and minimum surface air temperatures, daily precipitation sum, relative humidity, sea level air pressure, and snow depth) were interpolated over Finland at the spatial resolution of 10 × 10 km2. More precisely, (1) we produced daily gridded time series (FMI_ClimGrid) of the variables covering the period of 1961-2010, with a special focus on evaluation and permutation-based uncertainty estimates, and (2) we investigated temporal trends in the climate variables based on the gridded data. National climate station observations were supplemented by records from the surrounding countries, and kriging interpolation was applied to account for topography and water bodies. For daily precipitation sum and snow depth, a two-stage interpolation with a binary classifier was deployed for an accurate delineation of areas with no precipitation or snow. A robust cross-validation indicated a good agreement between the observed and interpolated values especially for the temperature variables and air pressure, although the effect of seasons was evident. Permutation-based analysis suggested increased uncertainty toward northern areas, thus identifying regions with suboptimal station density. Finally, several variables had a statistically significant trend indicating a clear but locally varying signal of climate change during the last five decades.
V. Wirz
2014-02-01
Knowledge of processes and factors affecting slope instability is essential for detecting and monitoring potentially hazardous slopes. Knowing the timing of acceleration or deceleration of slope movements can help to identify important controls and hence to increase our process understanding. For this, methods to derive reliable velocity estimates are important. The aim of this study was to develop and test a method to derive velocities based on noisy GPS data of various movement patterns and variable signal-to-noise ratio (SNR). Derived velocities represent reliable averages for a given period. The applied smoothing window directly depends on the SNR of the data, which is modeled using Monte Carlo simulation. Hence, all obtained velocities have an SNR above a predefined threshold, and for each velocity period the SNR is known, which helps to interpret the temporal variability. In sensitivity tests with synthetic time series, the method was compared to established methods for deriving velocities from GPS positions, including spline and kernel regression smoothing. These sensitivity tests clearly demonstrated that methods are required that adapt the time window to the underlying error of the position data. The presented method performs well, even for high noise levels and variable SNR. Different methods were further applied to investigate the inter-annual variability of permafrost slope movements based on daily GPS and inclinometer data. In the framework of the new method, we further analyzed the error caused by a rotation of the GPS mast (mast height = 1.5 m). If the tilting is larger than its uncertainty, the rotational movement can be separated and the direction of movement becomes more uniform. At one GPS station, more than 12% of the measured displacement at the antenna was caused by the rotation of the station.
Ishiyama, Gail; Geiger, Christopher; Lopez, Ivan A; Ishiyama, Akira
2011-03-15
The objective of this study was to make direct comparisons of the estimates of spiral and vestibular neuronal number in human archival temporal bone specimens using design-based stereology with those using the assumption-based Abercrombie method. Archival human temporal bone specimens from subjects ranging in age from 16 to 80 years old were used. The number of spiral and vestibular ganglia neurons within the counting areas was estimated using the stereology-optical disector technique and compared with estimates obtained using the assumption-based Abercrombie method on the same specimens. Using the optical disector method, there was an average of 41,480 (coefficient of variation=0.12) spiral ganglia neurons and 28,930 (coefficient of variation=0.15) vestibular ganglia neurons. The mean coefficient of error was 0.076 for the spiral ganglion estimates, and 0.091 for the vestibular ganglion estimates. Using the Abercrombie correction method of two-dimensional analysis, an average of 23,110 (coefficient of variation of 0.08) spiral ganglia neurons, and 16,225 vestibular ganglia neurons (coefficient of variation of 0.15) was obtained. We found that there was a large disparity between the estimates with a significant 44% underestimation of the spiral and vestibular ganglion counts obtained using the Abercrombie method when compared with estimates using the optical disector method.
Bloch, Sune Land; Sørensen, Mads Sølvsten
2014-01-01
BACKGROUND: It has been suggested that Paget's disease of bone and otosclerosis may share a myxoviral etiology. However, the association between virus infection and pathologic bone remodeling is still controversial. The aim of this study was to estimate the spatial distribution of pagetic bone remodeling around the inner ear space and to compare it with that of otosclerosis in a contemporary context of temporal bone dynamics. MATERIALS AND METHODS: From the temporal bone collection of Massachusetts Eye and Ear Infirmary, 15 of 29 temporal bones with Paget's disease were selected to obtain … is similar to the normal distribution of perilabyrinthine bone remodeling but entirely different from the spatial location of otosclerosis, which is focal and centripetally distributed around the inner ear space. CONCLUSION: In Paget's disease, the antiresorptive barrier around the inner ear space becomes…
Sakurai, Takeo; Serizawa, Shigeko; Kobayashi, Jun; Kodama, Keita; Lee, Jeong-Hoon; Maki, Hideaki; Zushi, Yasuyuki; Sevilla-Nastor, Janice Beltran; Imaizumi, Yoshitaka; Suzuki, Noriyuki; Horiguchi, Toshihiro; Shiraishi, Hiroaki
2016-01-01
We estimated inflow rates of perfluorooctanesulfonate (PFOS) and perfluorooctanoate (PFOA) to Tokyo Bay, Japan, between February 2004 and February 2011 by a receptor-oriented approach based on quarterly samplings of the bay water. Temporal trends in these inflow rates are an important basis for evaluating changes in PFOS and PFOA emissions in the Tokyo Bay catchment basin. A mixing model estimated the average concentrations of these compounds in the freshwater inflow to the bay, which were then multiplied by estimated freshwater inflow rates to obtain the inflow rates of these compounds. The receptor-oriented approach enabled us to comprehensively cover inflow to the bay, including inflow via direct discharge to the bay. On a logarithmic basis, the rate of inflow for PFOS decreased gradually, particularly after 2006, whereas that for PFOA exhibited a marked stepwise decrease from 2006 to 2007. The rate of inflow for PFOS decreased from 730 kg/y during 2004-2006 to 160 kg/y in 2010, whereas that for PFOA decreased from 2000 kg/y during 2004-2006 to 290 kg/y in 2010. These reductions probably reflected reductions in the use and emission of these compounds and their precursors in the Tokyo Bay catchment basin. Our estimated per-person inflow rates (i.e., inflow rates divided by the estimated population in the basin) for PFOS were generally comparable to previously reported per-person waterborne emission rates in Japan and other countries, whereas those for PFOA were generally higher than previously reported per-person waterborne emission rates. A comparison with previous estimates of household emission rates of these compounds suggested that our inflow estimates included a considerable contribution from point industrial sources.
Ben Issaid, Chaouki
2017-07-28
When assessing the performance of free space optical (FSO) communication systems, the outage probability encountered is generally very small, and thereby the use of naive Monte Carlo simulations becomes prohibitively expensive. To estimate these rare event probabilities, we propose in this work an importance sampling approach which is based on the exponential twisting technique to offer fast and accurate results. In fact, we consider a variety of turbulence regimes, and we investigate the outage probability of FSO communication systems, under a generalized pointing error model based on the Beckmann distribution, for both single and multihop scenarios. Selected numerical simulations are presented to show the accuracy and the efficiency of our approach compared to naive Monte Carlo.
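To show the exponential-twisting idea in its simplest form, the sketch below estimates a rare Gaussian tail probability P(X > gamma) by sampling from the tilted density N(theta, 1) and reweighting by the likelihood ratio exp(-theta*x + theta^2/2). This toy model stands in for the paper's Beckmann pointing-error and turbulence models, and the choice theta = gamma is a common heuristic rather than the authors' optimized tilt.

```python
# Minimal importance-sampling sketch with exponential twisting for a rare tail probability.
import numpy as np

def tail_prob_is(gamma, theta=None, n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    theta = gamma if theta is None else theta         # simple tilt choice for N(0, 1)
    x = rng.normal(theta, 1.0, size=n)                # draws from the twisted density N(theta, 1)
    weights = np.exp(-theta * x + 0.5 * theta**2)     # likelihood ratio dP/dQ
    return np.mean(weights * (x > gamma))

print(tail_prob_is(5.0))   # ~2.9e-7; naive Monte Carlo would need billions of samples
```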
Ezure, Hideo
1988-09-01
Effective combination of measured data with theoretical analysis has permitted deriving a method for more accurately estimating the power distribution in BWRs. A least squares method is used to combine the relationship between the power distribution and the measured values with the model used in FLARE or in the three-dimensional two-group diffusion code. Trial application of the new method to estimating the power distribution in JPDR-1 has proved the method to provide reliable results.
Siettos, Constantinos I.; Anastassopoulou, Cleo; Russo, Lucia; Grigoras, Christos; Mylonakis, Eleftherios
2016-06-01
Based on multiscale agent-based computations we estimated the per-contact probability of transmission by age of the Ebola virus disease (EVD) that swept through Liberia from May 2014 to March 2015. For the approximation of the epidemic dynamics we have developed a detailed agent-based model with small-world interactions between individuals categorized by age. For the estimation of the structure of the evolving contact network as well as the per-contact transmission probabilities by age group we exploited the so called Equation-Free framework. Model parameters were fitted to official case counts reported by the World Health Organization (WHO) as well as to recently published data of key epidemiological variables, such as the mean time to death, recovery and the case fatality rate.
Cho, Hyunyi; Shen, Lijiang; Wilson, Kari M
2013-03-01
Perceived lack of realism in alcohol advertising messages promising positive outcomes and antialcohol and antidrug messages portraying negative outcomes of alcohol consumption has been a cause for public health concern. This study examined the effects of perceived realism dimensions on personal probability estimation through identification and message minimization. Data collected from college students in U.S. Midwest in 2010 (N = 315) were analyzed with multilevel structural equation modeling. Plausibility and narrative consistency mitigated message minimization, but they did not influence identification. Factuality and perceptual quality influenced both message minimization and identification, but their effects were smaller than those of typicality. Typicality was the strongest predictor of probability estimation. Implications of the results and suggestions for future research are provided.
Fetterly, Kenneth A.; Favazza, Christopher P.
2016-08-01
Channelized Hotelling model observer (CHO) methods were developed to assess performance of an x-ray angiography system. The analytical methods included correction for known bias error due to finite sampling. Detectability indices (d′) corresponding to disk-shaped objects with diameters in the range 0.5-4 mm were calculated. Application of the CHO for variable detector target dose (DTD) in the range 6-240 nGy frame⁻¹ resulted in d′ estimates which were as much as 2.9× greater than expected of a quantum-limited system. Over-estimation of d′ was attributed to temporally variable non-stationary noise; methods are presented to identify this bias in Hotelling model observers and to correct it when the temporally variable non-stationary noise is independent and additive with respect to the test object signal.
Parsons, T.
2009-12-01
After a large earthquake, our concern immediately moves to the likelihood that another large shock could be triggered, threatening an already weakened building stock. A key question is whether it is best to map out Coulomb stress change calculations shortly after mainshocks to potentially highlight the most likely aftershock locations, or whether it is more prudent to wait until the best information is available. It has been shown repeatedly that spatial aftershock patterns can be matched with Coulomb stress change calculations a year or more after mainshocks. However, with the onset of rapid source slip model determinations, the method has produced encouraging results, such as the M=8.7 earthquake that was forecast using stress change calculations from the 2004 great Sumatra earthquake by McCloskey et al. [2005]. Here, I look back at two additional prospective calculations published shortly after the 2005 M=7.6 Kashmir and 2008 M=8.0 Wenchuan earthquakes. With the benefit of 1.5-4 years of additional seismicity, it is possible to assess the performance of rapid Coulomb stress change calculations. In the second part of the talk, within the context of the ongoing Working Group on California Earthquake Probabilities (WGCEP) assessments, uncertainties associated with time-dependent probability calculations are convolved with uncertainties inherent to Coulomb stress change calculations to assess the strength of signal necessary for a physics-based calculation to merit consideration in a formal earthquake forecast. Conclusions are as follows: (1) subsequent aftershock occurrence shows that prospective static stress change calculations for both the Kashmir and Wenchuan examples failed to adequately predict the spatial post-mainshock earthquake distributions. (2) For a San Andreas fault example with relatively well-understood recurrence, a static stress change on the order of 30 to 40 times the annual stressing rate would be required to cause a significant (90%) perturbation to the
Prah, Philip; Hickson, Ford; Bonell, Chris; McDaid, Lisa M; Johnson, Anne M; Wayal, Sonali; Clifton, Soazig; Sonnenberg, Pam; Nardone, Anthony; Erens, Bob; Copas, Andrew J; Riddell, Julie; Weatherburn, Peter; Mercer, Catherine H
2016-09-01
To examine sociodemographic and behavioural differences between men who have sex with men (MSM) participating in recent UK convenience surveys and a national probability sample survey. We compared 148 MSM aged 18-64 years interviewed for Britain's third National Survey of Sexual Attitudes and Lifestyles (Natsal-3) undertaken in 2010-2012, with men in the same age range participating in contemporaneous convenience surveys of MSM: 15 500 British resident men in the European MSM Internet Survey (EMIS); 797 in the London Gay Men's Sexual Health Survey; and 1234 in Scotland's Gay Men's Sexual Health Survey. Analyses compared men reporting at least one male sexual partner (past year) on similarly worded questions and multivariable analyses accounted for sociodemographic differences between the surveys. MSM in convenience surveys were younger and better educated than MSM in Natsal-3, and a larger proportion identified as gay (85%-95% vs 62%). Partner numbers were higher and same-sex anal sex more common in convenience surveys. Unprotected anal intercourse was more commonly reported in EMIS. Compared with Natsal-3, MSM in convenience surveys were more likely to report gonorrhoea diagnoses and HIV testing (both past year). Differences between the samples were reduced when restricting analysis to gay-identifying MSM. National probability surveys better reflect the population of MSM but are limited by their smaller samples of MSM. Convenience surveys recruit larger samples of MSM but tend to over-represent MSM identifying as gay and reporting more sexual risk behaviours. Because both sampling strategies have strengths and weaknesses, methods are needed to triangulate data from probability and convenience surveys.
Li-wen ZHANG; Jing-feng HUANG; Rui-fang GUO; Xin-xing LI; Wen-bo SUN; Xiu-zhen WANG
2013-01-01
The accumulation of thermal time usually represents the local heat resources that drive crop growth. Maps of temperature-based agro-meteorological indices are commonly generated by the spatial interpolation of data collected from meteorological stations with coarse geographic continuity. To solve the critical problems of estimating air temperature (Ta) and filling in missing pixels due to cloudy and low-quality images in growing degree day (GDD) calculation from remotely sensed data, a novel spatio-temporal algorithm for Ta estimation from Terra and Aqua moderate resolution imaging spectroradiometer (MODIS) data was proposed. This is a preliminary study to calculate heat accumulation, expressed in accumulative growing degree days (AGDD) above 10 °C, from reconstructed Ta based on MODIS land surface temperature (LST) data. The verification of maximum Ta, minimum Ta, GDD, and AGDD from MODIS-derived data against meteorological calculations showed high correlations, all significant at the 0.01 level. Overall, MODIS-derived AGDD was slightly underestimated, with almost 10% relative error. However, the feasibility of employing AGDD anomaly maps to characterize the 2001-2010 spatio-temporal variability of heat accumulation and of estimating the 2011 heat accumulation distribution using only MODIS data was finally demonstrated in the current paper. Our study may supply a novel way to calculate AGDD in heat-related studies concerning crop growth monitoring, agricultural climatic regionalization, and agro-meteorological disaster detection at the regional scale.
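For reference, the degree-day arithmetic behind the abstract is straightforward once daily maximum and minimum air temperatures are available; the sketch below computes daily GDD above a 10 °C base and the running AGDD. The temperature values are illustrative, and the study's reconstruction of Ta from MODIS LST is not reproduced here.

```python
# Minimal sketch of daily GDD above a 10 degC base and accumulated AGDD.
import numpy as np

def daily_gdd(t_max, t_min, base=10.0):
    t_mean = (np.asarray(t_max, dtype=float) + np.asarray(t_min, dtype=float)) / 2.0
    return np.maximum(t_mean - base, 0.0)

t_max = np.array([18.0, 22.0, 25.0, 27.0, 30.0])   # illustrative daily maxima (degC)
t_min = np.array([ 8.0, 11.0, 13.0, 15.0, 17.0])   # illustrative daily minima (degC)
gdd = daily_gdd(t_max, t_min)                      # growing degree days per day
agdd = np.cumsum(gdd)                              # accumulative growing degree days
print(gdd, agdd)
```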
Enserink, Scott Warren
2012-01-01
The first part of this work covers the effect of the troposphere on Ka band (20-30 GHz) satellite signals. The second part deals with the estimation of the capacity and outage probability for terrestrial links when constrained to quadrature amplitude modulations. The desire for higher data rates and the need for available bandwidth has pushed satellite communications into the Ka band (20-30 GHz). At these higher carrier frequencies the effects of scintillation and rain attenuation are increased. In...
Eash, David A.
2015-01-01
Traditionally, the Iowa Department of Transportation has used the Iowa Runoff Chart and single-variable regional-regression equations (RREs) from a U.S. Geological Survey report published in 1987 as the primary methods to estimate annual exceedance-probability discharge (AEPD) for small (20 square miles or less) drainage basins in Iowa. With the publication of new multi- and single-variable RREs by the U.S. Geological Survey in 2013, the Iowa Department of Transportation needs to determine which methods of AEPD estimation provide the best accuracy and the least bias for small drainage basins in Iowa.
Ferrari, Alberto; Ginis, Pieter; Hardegger, Michael; Casamassima, Filippo; Rocchi, Laura; Chiari, Lorenzo
2016-01-01
Gait impairments are among the most disabling symptoms in several musculoskeletal and neurological conditions, severely limiting personal autonomy. Wearable gait sensors have been attracting attention as diagnostic tools for gait and are emerging as promising tools for tutoring and guiding gait execution. Although their popularity is growing continuously, there is still room for improvement, especially towards more accurate solutions for spatio-temporal gait parameter estimation. We present an implementation ...
Chen, Chu-Chih; Wu, Chang-Fu; Yu, Hwa-Lung; Chan, Chang-Chuan; Cheng, Tsun-Jen
2012-07-01
Short-term exposure estimation of daily air pollution levels incorporating geographic information system (GIS) into spatiotemporal modeling remains a great challenge for assessing corresponding acute adverse health effects. Due to daily meteorological effects on the dispersion of pollutants, explanatory spatial covariables and their coefficients may not be the same as in classical land-use regression (LUR) modeling for long-term exposure. In this paper, we propose a two-stage spatiotemporal model for daily fine particulate matter (PM2.5) concentration prediction: first, daily nonlinear temporal trends are estimated through a generalized additive model, and second, GIS covariates are used to predict spatial variation in the temporal trend-removed residuals. To account for spatial dependence on meteorological conditions, the dates of the study period are divided by the sill of the daily empirical variogram into approximately temporal-invariant subgroups. Within each subgroup, daily PM2.5 estimations are obtained by combining the temporal and spatial parts of the estimations from the two stages. The proposed method is applied to the modeling of spatiotemporal PM2.5 concentrations observed at 18 ambient air monitoring stations in Taipei metropolitan area during 2006-2008. The results showed that the PM2.5 concentrations decreased whereas the relative humidity and wind speed increased with the sill subgroups, which may be due to the effects of daily meteorological conditions on the dispersions of the particles. Also, the covariates and their coefficients of the LUR models varied with subgroups and had in general higher adjusted R-squares and smaller root mean square errors in prediction than those of a single overall LUR model.
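A minimal sketch of the two-stage logic is given below, with simplifications: a centred moving average stands in for the stage-one generalized additive model of the temporal trend, and an ordinary least-squares fit on synthetic GIS covariates stands in for the stage-two land-use regression. Data, covariates, and window length are all illustrative assumptions.

```python
# Minimal two-stage sketch: remove a temporal trend per station, then regress the
# trend-removed residuals on GIS/land-use covariates across stations.
import numpy as np
from sklearn.linear_model import LinearRegression

def remove_temporal_trend(series, window=15):
    """Stage 1 stand-in: centred moving-average trend for one station's series."""
    kernel = np.ones(window) / window
    return series - np.convolve(series, kernel, mode="same")

rng = np.random.default_rng(0)
days, stations = 365, 18
pm25 = 20 + 10 * np.sin(np.arange(days) / 30.0)[:, None] + rng.normal(0, 3, (days, stations))

residuals = np.apply_along_axis(remove_temporal_trend, 0, pm25)            # stage 1, per station
gis_covariates = rng.normal(size=(stations, 4))                            # e.g. road length, land use, ...
stage2 = LinearRegression().fit(gis_covariates, residuals.mean(axis=0))    # stage 2 (LUR stand-in)
print(stage2.coef_)
```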
Arun R Antony
This project aimed to determine if a correlation-based measure of functional connectivity can identify epileptogenic zones from intracranial EEG signals, as well as to investigate the prognostic significance of such a measure on seizure outcome following temporal lobectomy. To this end, we retrospectively analyzed 23 adult patients with intractable temporal lobe epilepsy (TLE) who underwent an invasive stereo-EEG (SEEG) evaluation between January 2009 and January 2012. A follow-up of at least one year was required. The primary outcome measure was complete seizure-freedom at last follow-up. Functional connectivity between two areas in the temporal lobe that were sampled by two SEEG electrode contacts was defined as Pearson's correlation coefficient of interictal activity between those areas. SEEG signals were filtered between 5 and 50 Hz prior to computing this correlation. The mean and standard deviation of the off-diagonal elements in the connectivity matrix were also calculated. Analysis of the mean and standard deviation of the functional connections for each patient reveals that 90% of the patients who had weak and homogeneous connections were seizure free one year after temporal lobectomy, whereas 85% of the patients who had stronger and more heterogeneous connections within the temporal lobe had recurrence of seizures. This suggests that temporal lobectomy is ineffective in preventing seizure recurrence for patients in whom the temporal lobe is characterized by strongly connected, heterogeneous networks. This pilot study shows promising potential of a simple measure of functional brain connectivity to identify epileptogenicity and predict the outcome of epilepsy surgery.
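A minimal sketch of the connectivity measure described above: band-pass the interictal SEEG channels to 5-50 Hz, compute Pearson correlations between all contact pairs, and summarize the off-diagonal elements of the connectivity matrix by their mean and standard deviation. The sampling rate, filter order, and data here are illustrative, and no claim is made about the exact preprocessing used in the study.

```python
# Minimal sketch of correlation-based SEEG functional connectivity (5-50 Hz band).
import numpy as np
from scipy.signal import butter, filtfilt

def connectivity_summary(seeg, fs=500.0, band=(5.0, 50.0)):
    """seeg: array of shape (n_channels, n_samples)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
    filtered = filtfilt(b, a, seeg, axis=1)
    corr = np.corrcoef(filtered)                                  # Pearson correlation matrix
    off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]
    return off_diag.mean(), off_diag.std()                        # strength and heterogeneity

rng = np.random.default_rng(0)
demo = rng.normal(size=(12, 10_000))                              # 12 contacts, 20 s at 500 Hz
print(connectivity_summary(demo))
```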
Trofimov, Vyacheslav A.; Peskov, Nikolay V.; Kirillov, Dmitry A.
2012-10-01
One of the problems arising in time-domain THz spectroscopy for security applications is developing criteria for assessing the probability of detecting and identifying explosives and drugs. We analyze the efficiency of using the correlation function and another functional (more precisely, a spectral norm) for this purpose. These criteria are applied to the dynamics of spectral lines. To increase the reliability of the assessment, we subtract the averaged value of the THz signal over the analysis interval, which removes the constant component from that part of the signal and thereby increases the contrast of the assessment. For extracting the spectral line dynamics, we compare the Fourier-Gabor transform with an unbounded (for example, Gaussian) window sliding along the signal with the Fourier transform in a short time interval (FTST), in which the Fourier transform is applied to successive parts of the signal. These methods are closely related; nevertheless, they differ in the series of frequencies they use. It is important for practice that the optimal window shape depends on the method chosen for obtaining the spectral dynamics. The detection probability is enhanced if we can find a train of pulses with different frequencies that follow sequentially. We show that it is possible to obtain clean spectral line dynamics even when the spectrum of the substance response to the THz pulse is distorted.
Yang, Tao; Wang, Chao; Yu, Zhongbo; Xu, Feng
2013-10-01
Since its launch in March 2002, the Gravity Recovery and Climate Experiment (GRACE) satellite mission has provided a new method to estimate terrestrial water storage (TWS) variations by measuring Earth's gravity change with unprecedented accuracy. Thus far, a number of standardized GRACE-born TWS products have been published by different international research teams. However, no characterization of the spatio-temporal patterns of the different GRACE hydrology products from a global perspective could be found. It remains a major challenge for the science community to identify reliable global measurements of TWS anomalies, given our limited knowledge of the true values. Hence, it is urgently necessary to evaluate the uncertainty of the various global estimates of GRACE-born TWS changes produced by different international research organizations. Toward this end, this article presents an in-depth analysis of various GRACE-born and GLDAS-based estimates of changes in global terrestrial water storage. The work characterizes the inter-annual and intra-annual variability, probability density variations, and spatial patterns among different GRACE-born TWS estimates over six major continents, and compares them with results from GLDAS simulations. The underlying causes of inconsistency between GRACE- and GLDAS-born TWS estimates are thoroughly analyzed with the aim of improving our current knowledge in monitoring global TWS change. With a comprehensive consideration of the advantages and disadvantages of GRACE- and GLDAS-born TWS anomalies, a summary is recommended as a rapid reference for scientists, end-users, and policy-makers in the practice of global TWS change research. To the best of our knowledge, this work is the first attempt to characterize the differences and uncertainty among the various GRACE-born terrestrial water storage changes over the major continents estimated by a number of international research organizations. The results can provide beneficial reference to usage of
Barrett, Kirsten
2016-04-01
Reliable estimates of biomass combusted during wildfires can be obtained from satellite observations of fire radiative power (FRP). Total fire radiative energy (FRE) is typically estimated by integrating instantaneous measurements of FRP at the time of orbital satellite overpass or geostationary observation. Remotely sensed FRP products from orbital satellites are usually global in extent, requiring several thresholding and filtering operations to reduce the number of false fire detections. Some filters required for a global product may not be appropriate for fire detection in the boreal forest, resulting in errors of omission and increased data processing times. We evaluate the effect of a boreal-specific active fire detection algorithm on estimates of FRP/FRE. Boreal fires are more likely to escape detection due to lower-intensity smouldering combustion and sub-canopy fires; therefore, improvements in boreal fire detection could substantially reduce the uncertainty of emissions from biomass combustion in the region. High temporal resolution data from geostationary satellites have led to improvements in FRE estimation in tropical and temperate forests, but such a perspective is not possible for high-latitude ecosystems given the equatorial orbit of geostationary observation. The increased density of overpasses at high latitudes from polar-orbiting satellites, however, may provide adequate temporal sampling for estimating FRE.
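A minimal sketch of the FRE estimate described above: instantaneous FRP observations are integrated over time (trapezoidal rule); the overpass times and FRP values below are hypothetical:

    import numpy as np

    # Hypothetical overpass times (hours) and FRP retrievals (MW) for one fire pixel
    t_hours = np.array([0.0, 1.6, 3.2, 8.0, 9.6, 22.4])
    frp_mw = np.array([120.0, 310.0, 450.0, 90.0, 60.0, 15.0])

    # FRE as the time integral of FRP; 1 MW*h = 3600 MJ
    fre_mj = np.trapz(frp_mw, t_hours) * 3600.0
    print(f"Estimated FRE: {fre_mj:.0f} MJ")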
Kim, Seung-Woo; Suh, Kyung-Duck; Burcharth, Hans F.
2010-01-01
Breakwaters are usually designed by considering cost optimization, because human risk is seldom a factor. Most existing breakwaters, however, were constructed without considering cost optimization. In this study, the optimum return period, target failure probability and partial safety factors...... were evaluated by applying cost optimization to the rubble mound breakwaters in Korea. The applied method was developed by Hans F. Burcharth and John D. Sorensen in relation to the PIANC Working Group 47. The optimum return period was determined as 50 years in many cases and was found as 100 years...... of the national design standard, and the overall safety factor is then calculated as 1.09. It is required that the nominal diameter and weight of armor be respectively 9% and 30% larger than those of the existing design method. Moreover, partial safety factors considering the cost optimization were compared...
Sichani, Mahdi Teimouri
order statistical moments. The results obtained by extrapolation of the extreme values to the stipulated design period of the wind turbine depend strongly on the relevance of the adopted extreme value distributions. The problem is that this relevance cannot be decided from the data obtained...... The solution of the Fokker-Planck-Kolmogorov (FPK) equation for systems governed by a stochastic differential equation driven by Gaussian white noise gives the sought time variation of the probability density function. However, the analytical solution of the FPK equation is available for only a few dynamic systems...... and the numerical solution is difficult for dynamic problems with more than 2-3 degrees of freedom. This confines the applicability of the FPK equation to a very narrow range of problems. On the other hand, the recently introduced Generalized Density Evolution Method (GDEM) has opened a new way toward realization...
Stevens, Michael R.; Flynn, Jennifer L.; Stephens, Verlin C.; Verdin, Kristine L.
2011-01-01
During 2009, the U.S. Geological Survey, in cooperation with Gunnison County, initiated a study to estimate the potential for postwildfire debris flows to occur in the drainage basins occupied by Carbonate, Slate, Raspberry, and Milton Creeks near Marble, Colorado. Currently (2010), these drainage basins are unburned but could be burned by a future wildfire. Empirical models derived from statistical evaluation of data collected from recently burned basins throughout the intermountain western United States were used to estimate the probability of postwildfire debris-flow occurrence and debris-flow volumes for drainage basins occupied by Carbonate, Slate, Raspberry, and Milton Creeks near Marble. Data for the postwildfire debris-flow models included drainage basin area; area burned and burn severity; percentage of burned area; soil properties; rainfall total and intensity for the 5- and 25-year-recurrence, 1-hour-duration-rainfall; and topographic and soil property characteristics of the drainage basins occupied by the four creeks. A quasi-two-dimensional floodplain computer model (FLO-2D) was used to estimate the spatial distribution and the maximum instantaneous depth of the postwildfire debris-flow material during debris flow on the existing debris-flow fans that issue from the outlets of the four major drainage basins. The postwildfire debris-flow probabilities at the outlet of each drainage basin range from 1 to 19 percent for the 5-year-recurrence, 1-hour-duration rainfall, and from 3 to 35 percent for 25-year-recurrence, 1-hour-duration rainfall. The largest probabilities for postwildfire debris flow are estimated for Raspberry Creek (19 and 35 percent), whereas estimated debris-flow probabilities for the three other creeks range from 1 to 6 percent. The estimated postwildfire debris-flow volumes at the outlet of each creek range from 7,500 to 101,000 cubic meters for the 5-year-recurrence, 1-hour-duration rainfall, and from 9,400 to 126,000 cubic meters for
Sergeant, E.S.G.; Nielsen, Søren S.; Toft, Nils
2008-01-01
. Using this model, five herd-testing strategies were evaluated: (1) milk-ELISA on all lactating cows; (2) milk-ELISA on lactating cows 4 years old; (4) faecal culture on all lactating cows; and (5) milk-ELISA plus faecal culture in series on all lactating cows. The five testing strategies were evaluated...... using observed milk-ELISA results from 19 Danish dairy herds as well as for simulated results from the same herds assuming that they were uninfected. Whole-herd milk-ELISA was the preferred strategy, and considered the most cost-effective strategy of the five alternatives. The five strategies were all...... efficient in detecting infection, i.e. estimating a low Pr-Low in infected herds, however, Pr-Low estimates for milk-ELISA on age-cohorts were too low in simulated uninfected herds and the strategies involving faecal culture were too expensive to be of practical interest. For simulated uninfected herds...
Spatial and Temporal Dynamics and Value of Nature-Based Recreation, Estimated via Social Media.
Sonter, Laura J; Watson, Keri B; Wood, Spencer A; Ricketts, Taylor H
2016-01-01
Conserved lands provide multiple ecosystem services, including opportunities for nature-based recreation. Managing this service requires understanding the landscape attributes underpinning its provision, and how changes in land management affect its contribution to human wellbeing over time. However, evidence from both spatially explicit and temporally dynamic analyses is scarce, often due to data limitations. In this study, we investigated nature-based recreation within conserved lands in Vermont, USA. We used geotagged photographs uploaded to the photo-sharing website Flickr to quantify visits by in-state and out-of-state visitors, and we multiplied visits by mean trip expenditures to show that conserved lands contributed US $1.8 billion (US $0.18-20.2 at 95% confidence) to Vermont's tourism industry between 2007 and 2014. We found eight landscape attributes explained the pattern of visits to conserved lands; visits were higher in larger conserved lands, with less forest cover, greater trail density and more opportunities for snow sports. Some of these attributes differed from those found in other locations, but all aligned with our understanding of recreation in Vermont. We also found that using temporally static models to inform conservation decisions may have perverse outcomes for nature-based recreation. For example, static models suggest conserved land with less forest cover receive more visits, but temporally dynamic models suggest clearing forests decreases, rather than increases, visits to these sites. Our results illustrate the importance of understanding both the spatial and temporal dynamics of ecosystem services for conservation decision-making.
Ee, R. van; Erkelens, Casper J.
2001-01-01
We investigated temporal aspects of stereoscopically perceived slant produced by the following transformations: horizontal scale, horizontal shear, vertical scale, vertical shear, divergence and rotation, between the half-images of a stereogram. Six subjects viewed large field stimuli (70 deg diamet
P. Ala-aho
2014-07-01
Climate change and land use are rapidly changing the amount and temporal distribution of recharge in northern aquifers. This paper presents a novel method for distributing Monte Carlo simulations of a 1-D soil profile spatially to estimate transient recharge in an unconfined esker aquifer. The modeling approach uses data-based estimates for the most important parameters controlling the total amount (canopy cover) and timing (depth of the unsaturated zone) of groundwater recharge. Scots pine canopy was parameterized to leaf area index (LAI) using forestry inventory data. Uncertainty in the parameters controlling soil hydraulic properties and evapotranspiration was carried over from the Monte Carlo runs to the final recharge estimates. Different mechanisms for lake, soil, and snow evaporation and for transpiration were used in the model set-up. Finally, the model output was validated with independent recharge estimates using the water table fluctuation method and baseflow estimation. The results indicated that LAI is important in controlling the total recharge amount, and the modeling approach successfully reduced model uncertainty by allocating the LAI parameter spatially in the model. Soil evaporation compensated for transpiration in areas with low LAI values, which may be significant for the optimal management of forestry and recharge. Different forest management scenarios tested with the model showed differences in annual recharge of up to 100 mm. The uncertainty in recharge estimates arising from the simulation parameters was lower than the interannual variation caused by climate conditions. It proved important to take unsaturated depth and vegetation cover into account when estimating spatially and temporally distributed recharge in sandy unconfined aquifers.
Elen, An; Loeckx, Dirk; Choi, Hon Fai; Gao, Hang; Claus, Piet; Maes, Frederik; Suetens, Paul; D'hooge, Jan
2008-03-01
Current ultrasound methods for measuring myocardial strain are often limited to measurements in one or two dimensions. Spatio-temporal elastic registration of 3D cardiac ultrasound data can, however, be used to estimate the 3D motion and the full 3D strain tensor. In this work, the spatio-temporal elastic registration method was validated for both non-scanconverted and scanconverted images. This was done using simulated 3D pyramidal ultrasound data sets based on a thick-walled deforming ellipsoid and an adapted convolution model. A B-spline based frame-to-frame elastic registration method was applied to both the scanconverted and non-scanconverted data sets, and the accuracy of the resulting deformation fields was quantified. The mean accuracy of the estimated displacement was very similar for the scanconverted and non-scanconverted data sets; thus, it was shown that 3D elastic registration to estimate the cardiac deformation from ultrasound images can be performed on non-scanconverted images, but that avoiding the scanconversion step does not significantly improve the results of the displacement estimation.
Vio, Roberto
2016-01-01
The detection reliability of weak signals is a critical issue in many astronomical contexts and may have severe consequences for determining number counts and luminosity functions, but also for optimising the use of telescope time in follow-up observations. Because of its optimal properties, one of the most popular and widely used detection techniques is the matched filter (MF). This is a linear filter designed to maximise the detectability of a signal of known structure that is buried in additive Gaussian random noise. In this work we show that in the very common situation where the number and position of the searched signals within a data sequence (e.g. an emission line in a spectrum) or an image (e.g. a point source in an interferometric map) are unknown, this technique, when applied in its standard form, may severely underestimate the probability of false detection. This is because the correct use of the MF relies upon a-priori knowledge of the position of the signal of interest. In the absence of this inf...
Huddleston, Lisa; Roeder, WIlliam P.; Merceret, Francis J.
2011-01-01
A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force station. Future applications could include forensic meteorology.
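A minimal Monte Carlo sketch of the calculation described above: a bivariate Gaussian stroke-location density defined by an error ellipse is integrated over a circle around a point of interest. The ellipse parameters, point of interest, and radius below are hypothetical, and the paper's own method integrates the density directly rather than by sampling:

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical error ellipse: most likely stroke location (km), semi-axis sigmas (km), orientation
    mu = np.array([1.2, -0.4])
    sigma_major, sigma_minor, theta = 0.8, 0.3, np.deg2rad(30.0)
    R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    cov = R @ np.diag([sigma_major**2, sigma_minor**2]) @ R.T

    point_of_interest = np.array([0.0, 0.0])     # e.g. a launch pad, not necessarily inside the ellipse
    radius_km = 1.0

    samples = rng.multivariate_normal(mu, cov, size=200_000)
    dist = np.linalg.norm(samples - point_of_interest, axis=1)
    print("P(stroke within radius) ~", np.mean(dist <= radius_km))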
Kuznetsov, N. V.; Popov, V. D.; Khamidullina, N. M.
2005-05-01
When designing the radio-electronic equipment for long-term operation in a space environment, one of the most important problems is a correct estimation of radiation stability of its electric and radio components (ERC) against radiation-stimulated doze failures and one-particle effects (upsets). These problems are solved in this paper for the integrated microcircuits (IMC) of various types that are to be installed onboard the Fobos-Grunt spacecraft designed at the Federal State Unitary Enterprise “Lavochkin Research and Production Association.” The launching of this spacecraft is planned for 2009.
Julio Cesar de Oliveira
2014-04-01
MODerate resolution Imaging Spectroradiometer (MODIS) data are widely used in multitemporal analysis of various Earth-related phenomena, such as vegetation phenology, land use/land cover change, deforestation monitoring, and time series analysis. In general, the MODIS products used to undertake multitemporal analysis are composite mosaics of the best pixels over a certain period of time. However, it is common to find bad pixels in the composition that affect the time series analysis. We present a filtering methodology that considers the pixel position (location in space) and time (position in the temporal data series) to define a new value for the bad pixel. This methodology, called Window Regression (WR), estimates the value of the point of interest based on a regression analysis of the data selected by a spatial-temporal window. The spatial window is represented by the eight pixels neighboring the pixel under evaluation, and the temporal window selects a set of dates close to the date of interest (either earlier or later). Intensities of noise were simulated over time and space using the MOD13Q1 product. The method presented and other techniques (4253H twice, Mean Value Iteration (MVI) and Savitzky-Golay) were evaluated using the Mean Absolute Percentage Error (MAPE) and Akaike Information Criterion (AIC). The tests revealed the consistently superior performance of the Window Regression approach in estimating new Normalized Difference Vegetation Index (NDVI) values, irrespective of the intensity of the simulated noise.
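A simplified sketch of the Window Regression idea described above, assuming a hypothetical NDVI data cube (time, row, col): the flagged pixel is re-estimated from a linear fit over its spatio-temporal window. The exact regression specification in the paper may differ; this is only the general idea:

    import numpy as np

    def window_regression(cube, t, i, j, t_win=2):
        """Re-estimate cube[t, i, j] from a linear time regression over its spatio-temporal window."""
        xs, ys = [], []
        for k in range(max(0, t - t_win), min(cube.shape[0], t + t_win + 1)):
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if di == 0 and dj == 0 and k == t:
                        continue                      # skip the suspect pixel itself
                    ii, jj = i + di, j + dj
                    if 0 <= ii < cube.shape[1] and 0 <= jj < cube.shape[2]:
                        xs.append(k)
                        ys.append(cube[k, ii, jj])
        slope, intercept = np.polyfit(np.array(xs, float), np.array(ys, float), 1)
        return slope * t + intercept

    # Hypothetical 10-date, 5 x 5 NDVI cube with one simulated bad pixel
    cube = np.tile(np.linspace(0.3, 0.7, 10)[:, None, None], (1, 5, 5))
    cube[5, 2, 2] = -0.2
    print(round(window_regression(cube, 5, 2, 2), 3))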
Improving Ranking Using Quantum Probability
Melucci, Massimo
2011-01-01
The paper shows that ranking information units by quantum probability differs from ranking them by classical probability, provided the same data are used for parameter estimation. As probability of detection (also known as recall or power) and probability of false alarm (also known as fallout or size) measure the quality of ranking, we point out and show that ranking by quantum probability yields a higher probability of detection than ranking by classical probability for a given probability of false alarm and the same parameter estimation data. As quantum probability has provided more effective detectors than classical probability within domains other than data management, we conjecture that a system that can implement subspace-based detectors will be more effective than a system that implements set-based detectors, the effectiveness being calculated as expected recall estimated over the probability of detection and expected fallout estimated over the probability of false alarm.
Chisholm, Ryan A; Condit, Richard; Rahman, K Abd; Baker, Patrick J; Bunyavejchewin, Sarayudh; Chen, Yu-Yun; Chuyong, George; Dattaraja, H S; Davies, Stuart; Ewango, Corneille E N; Gunatilleke, C V S; Nimal Gunatilleke, I A U; Hubbell, Stephen; Kenfack, David; Kiratiprayoon, Somboon; Lin, Yiching; Makana, Jean-Remy; Pongpattananurak, Nantachai; Pulla, Sandeep; Punchi-Manage, Ruwan; Sukumar, Raman; Su, Sheng-Hsin; Sun, I-Fang; Suresh, H S; Tan, Sylvester; Thomas, Duncan; Yap, Sandra
2014-07-01
Long-term surveys of entire communities of species are needed to measure fluctuations in natural populations and elucidate the mechanisms driving population dynamics and community assembly. We analysed changes in abundance of over 4000 tree species in 12 forests across the world over periods of 6-28 years. Abundance fluctuations in all forests are large and consistent with population dynamics models in which temporal environmental variance plays a central role. At some sites we identify clear environmental drivers, such as fire and drought, that could underlie these patterns, but at other sites there is a need for further research to identify drivers. In addition, cross-site comparisons showed that abundance fluctuations were smaller at species-rich sites, consistent with the idea that stable environmental conditions promote higher diversity. Much community ecology theory emphasises demographic variance and niche stabilisation; we encourage the development of theory in which temporal environmental variance plays a central role.
Abrunhosa, Luís; Morales, Héctor; Soares, Célia; Calado, Thalita; Vila-Chã, Ana Sofia; Pereira, Martinha; Venâncio, Armando
2016-01-01
Mycotoxins are toxic secondary metabolites produced by filamentous fungi that occur naturally in agricultural commodities worldwide. Aflatoxins, ochratoxin A, patulin, fumonisins, zearalenone, trichothecenes, and ergot alkaloids are presently the most important for food and feed safety. These compounds are produced by several species that belong to the Aspergillus, Penicillium, Fusarium, and Claviceps genera and can be carcinogenic, mutagenic, teratogenic, cytotoxic, neurotoxic, nephrotoxic, estrogenic, and immunosuppressant. Human and animal exposure to mycotoxins is generally assessed by taking into account data on the occurrence of mycotoxins in food and feed as well as data on the consumption patterns of the concerned population. This evaluation is crucial to support measures to reduce consumer exposure to mycotoxins. This work reviews the occurrence and levels of mycotoxins in Portuguese food and feed to provide a global overview of this issue in Portugal. With the information collected, the exposure of the Portuguese population to those mycotoxins is assessed, and the estimated dietary intakes are presented.
A Simple Fusion Method for Image Time Series Based on the Estimation of Image Temporal Validity
Mar Bisquert
2015-01-01
High-spatial-resolution satellites usually have the constraint of a low temporal frequency, which leads to long periods without information in cloudy areas. Conversely, low-spatial-resolution satellites have higher revisit frequencies. Combining information from high- and low-spatial-resolution satellites is thought to be a key factor for studies that require dense time series of high-resolution images, e.g., crop monitoring. Several fusion methods exist in the literature, but they are time-consuming and complicated to implement. Moreover, the local evaluation of the fused images is rarely analyzed. In this paper, we present a simple and fast fusion method based on a weighted average of two input images (H and L), which are weighted by their temporal validity with respect to the image to be fused. The method was applied to two years (2009-2010) of Landsat and MODIS (MODerate Imaging Spectroradiometer) images acquired over a cropped area in Brazil. The fusion method was evaluated at global and local scales. The results show that the fused images reproduced reliable crop temporal profiles and correctly delineated the boundaries between two neighboring fields. The greatest advantages of the proposed method are its execution time and ease of use, which allow a fused image to be obtained in less than five minutes.
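A minimal sketch of the fusion rule described above, assuming hypothetical, already co-registered high-resolution (H) and low-resolution (L) inputs; the temporal-validity weighting shown here (inverse of the time gap to the target date) is an assumed stand-in for the paper's definition:

    import numpy as np

    def fuse(img_h, img_l, dt_h, dt_l, eps=1e-6):
        """Weighted average of two images; weights decay with the time gap to the prediction date."""
        w_h = 1.0 / (abs(dt_h) + eps)
        w_l = 1.0 / (abs(dt_l) + eps)
        return (w_h * img_h + w_l * img_l) / (w_h + w_l)

    # Hypothetical 4 x 4 reflectance images acquired 10 and 2 days from the target date
    h = np.full((4, 4), 0.25)
    l = np.full((4, 4), 0.40)
    print(fuse(h, l, dt_h=10, dt_l=2))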
Probability-summation model of multiple laser-exposure effects.
Menendez, A R; Cheney, F E; Zuclich, J A; Crump, P
1993-11-01
A probability-summation model is introduced to provide quantitative criteria for discriminating independent from interactive effects of multiple laser exposures on biological tissue. Data that differ statistically from predictions of the probability-summation model indicate the action of sensitizing (synergistic/positive) or desensitizing (hardening/negative) biophysical interactions. Interactions are indicated when response probabilities vary with changes in the spatial or temporal separation of exposures. In the absence of interactions, probability-summation parsimoniously accounts for "cumulative" effects. Data analyzed using the probability-summation model show instances of both sensitization and desensitization of retinal tissue by laser exposures. Other results are shown to be consistent with probability-summation. The relevance of the probability-summation model to previous laser-bioeffects studies, models, and safety standards is discussed and an appeal is made for improved empirical estimates of response probabilities for single exposures.
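A minimal sketch of the probability-summation prediction described above: under independence, the probability of at least one response to multiple exposures follows from the single-exposure response probabilities, 1 - prod(1 - p_i); the probabilities below are hypothetical:

    import numpy as np

    def probability_summation(p_single):
        """P(response to multiple exposures) assuming independent effects: 1 - prod(1 - p_i)."""
        p_single = np.asarray(p_single, dtype=float)
        return 1.0 - np.prod(1.0 - p_single)

    # Hypothetical single-exposure response probabilities for three laser pulses
    print(probability_summation([0.10, 0.10, 0.10]))   # ~0.271; systematic deviations suggest interaction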
Spatial-temporal models for improved county-level annual estimates
Francis Roesch
2009-01-01
The consumers of data derived from extensive forest inventories often seek annual estimates at a finer spatial scale than that which the inventory was designed to provide. This paper discusses a few model-based and model-assisted estimators to consider for county level attributes that can be applied when the sample would otherwise be inadequate for producing low-...
Blood velocity estimation using spatio-temporal encoding based on frequency division approach
Gran, Fredrik; Nikolov, Svetoslav; Jensen, Jørgen Arendt
2005-01-01
spectral support. By assigning one band to one virtual source, all virtual sources can be excited simultaneously. The received echoes are beamformed using Synthetic Transmit Aperture beamforming. The velocity of the moving blood is estimated using a cross-correlation estimator. The simulation tool Field...
Cross, Robert
2005-01-01
Until Solid Rocket Motor ignition, the Space Shuttle is mated to the Mobile Launch Platform in part via eight (8) Solid Rocket Booster (SRB) hold-down bolts. The bolts are fractured using redundant pyrotechnics and are designed to drop through a hold-down post on the Mobile Launch Platform before the Space Shuttle begins movement. The Space Shuttle program has experienced numerous failures where a bolt has hung up; that is, it did not clear the hold-down post before liftoff and was caught by the SRBs. This places an additional structural load on the vehicle that was not included in the original certification requirements. The Space Shuttle is currently being certified to withstand the loads induced by up to three (3) of the eight (8) SRB hold-down studs experiencing a "hang-up". The results of loads analyses performed for four (4) stud hang-ups indicate that the internal vehicle loads exceed current structural certification limits at several locations. To determine the risk to the vehicle from four (4) stud hang-ups, the likelihood of the scenario occurring must first be evaluated. Prior to the analysis discussed in this paper, the likelihood of occurrence had been estimated assuming that the stud hang-ups were completely independent events; that is, it was assumed that no common causes or factors existed between the individual stud hang-up events. A review of the data associated with the hang-up events showed that a common factor (timing skew) was present. This paper summarizes a revised likelihood evaluation performed for the four (4) stud hang-up case considering that there are common factors associated with the stud hang-ups. The results show that explicitly (i.e., not using standard common cause methodologies such as beta factor or Multiple Greek Letter modeling) taking into account the common factor of timing skew results in an increase in the estimated likelihood of four (4) stud hang-ups of an order of magnitude over the independent failure case.
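For the baseline independence assumption mentioned above, the likelihood of exactly four of eight studs hanging up is a simple binomial calculation; the per-stud hang-up probability below is hypothetical, and the paper's revised estimate then inflates this baseline by accounting for the common timing-skew factor:

    from math import comb

    p_single = 0.01          # hypothetical per-stud hang-up probability
    n_studs, n_hangups = 8, 4

    # Probability of exactly four of eight stud hang-ups under full independence
    p_independent = comb(n_studs, n_hangups) * p_single**n_hangups * (1 - p_single)**(n_studs - n_hangups)
    print(f"Independent-case likelihood: {p_independent:.2e}")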
Spatio-Temporal Audio Enhancement Based on IAA Noise Covariance Matrix Estimates
Nørholm, Sidsel Marie; Jensen, Jesper Rindom; Christensen, Mads Græsbøll
2014-01-01
to an amplitude and phase estimation (APES) based filter. For a fixed number of samples, the performance in terms of signal-to-noise ratio can be increased by using the IAA method, whereas if the filter size is fixed and the number of samples in the APES based filter is increased, the APES based filter performs......A method for estimating the noise covariance matrix in a multichannel setup is proposed. The method is based on the iterative adaptive approach (IAA), which only needs short segments of data to estimate the covariance matrix. Therefore, the method can be used for fast varying signals...... The method is based on an assumption of the desired signal being harmonic, which is used for estimating the noise covariance matrix from the covariance matrix of the observed signal. The noise covariance estimate is used in the linearly constrained minimum variance (LCMV) filter and compared...
潘晓春
2012-01-01
It is necessary to describe the statistical properties of wind speed using a three-parameter Weibull distribution for offshore wind energy resource assessment and utilization. Based on the functional relations between the distribution parameters and the probability-weighted moments (PWM), the relation between the shape parameter and the PWMs was fitted with a logistic curve, and two parameter-estimation formulae were derived based on low-order non-exceedance and exceedance PWMs. Accuracy tests show that these formulae have high precision over a wide range. Through a comparative analysis against a high-order PWM method on a worked example, the authors conclude that the low-order PWM methods in this paper are worth popularizing.
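A minimal sketch of the sample probability-weighted moments b_r that parameter estimation of this kind starts from; the wind-speed sample is hypothetical, and the mapping from these moments to the three Weibull parameters uses the fitted logistic relations of the paper, which are not reproduced here:

    import numpy as np

    def sample_pwm(x, r):
        """Unbiased sample estimate of the probability-weighted moment b_r = E[X * F(X)^r]."""
        x = np.sort(np.asarray(x, dtype=float))
        n = x.size
        j = np.arange(1, n + 1)
        weights = np.ones(n)
        for k in range(1, r + 1):
            weights *= (j - k) / (n - k)          # (j-1)(j-2)...(j-r) / ((n-1)(n-2)...(n-r))
        return np.mean(weights * x)

    wind = np.array([4.2, 5.1, 6.3, 7.0, 7.8, 8.4, 9.1, 10.2, 11.5, 13.0])  # hypothetical speeds (m/s)
    b0, b1, b2 = (sample_pwm(wind, r) for r in range(3))
    print(b0, b1, b2)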
Zo, Il-Sung; Jee, Joon-Bum; Lee, Kyu-Tae; Kim, Bu-Yo
2016-08-01
Preliminary analysis with a solar radiation model is generally performed for photovoltaic power generation projects; therefore, model accuracy is extremely important. The spatial and temporal resolutions used in previous studies of the Korean Peninsula were 1 km × 1 km and 1 h, respectively. However, calculating surface solar radiation at 1-h intervals does not ensure that geographical effects are accurately captured, and this parameter changes owing to atmospheric elements (clouds, aerosol, ozone, etc.). Thus, a change in temporal resolution is required. In this study, a one-year (2013) analysis was conducted using Chollian geostationary meteorological satellite data from observations recorded at 15-min intervals. Observation data from the intensive solar site at Gangneung-Wonju National University (GWNU) showed that the coefficient of determination (R²), estimated for each month and season, increased, whereas the standard error (SE) decreased, when estimates were made at 15-min rather than 1-h intervals in 2013. When compared with observational data from 22 solar sites of the Korea Meteorological Administration (KMA), R² was 0.9 or higher on average, and no more than 3 sites were over- or under-simulated. The model and the 22 solar sites showed similar values of annual accumulated solar irradiation, with a similar annual mean of 4,998 MJ m-2 (3.87 kWh m-2). These results show a difference of approximately ±70 MJ m-2 (±0.05 kWh m-2) from the distribution over the Korean Peninsula estimated at 1-h intervals, and a higher correlation at higher temporal resolution.
Burr, Tom L; Gattiker, James R; Gerrish, Philip J
2003-05-15
This is an investigation of significant error sources and their impact in estimating the time to the most recent common ancestor (MRCA) of spatially and temporally distributed human immunodeficiency virus (HIV) sequences. We simulate an HIV epidemic under a range of assumptions with known time to the MRCA (tMRCA). We then apply a range of baseline (known) evolutionary models to generate sequence data. We next estimate or assume one of several misspecified models and use the chosen model to estimate the time to the MRCA. Random effects and the extent of model misspecification determine the magnitude of error sources that could include: neglected heterogeneity in substitution rates across lineages and DNA sites; uncertainty in HIV isolation times; uncertain magnitude and type of population subdivision; uncertain impacts of host/viral transmission dynamics, and unavoidable model estimation errors. Our results suggest that confidence intervals will rarely have the nominal coverage probability for tMRCA. Neglected effects lead to errors that are unaccounted for in most analyses, resulting in optimistically narrow confidence intervals (CI). Using real HIV sequences having approximately known isolation times and locations, we present possible confidence intervals for several sets of assumptions. In general, we cannot be certain how much to broaden a stated confidence interval for tMRCA. However, we describe the impact of candidate error sources on CI width. We also determine which error sources have the most impact on CI width and demonstrate that the standard bootstrap method will underestimate the CI width. Copyright 2003 John Wiley & Sons, Ltd.
Kim, Sun Mo; Haider, Masoom A.; Jaffray, David A.; Yeung, Ivan W. T.
2016-01-15
Purpose: A previously proposed method to reduce radiation dose to patient in dynamic contrast-enhanced (DCE) CT is enhanced by principal component analysis (PCA) filtering which improves the signal-to-noise ratio (SNR) of time-concentration curves in the DCE-CT study. The efficacy of the combined method to maintain the accuracy of kinetic parameter estimates at low temporal resolution is investigated with pixel-by-pixel kinetic analysis of DCE-CT data. Methods: The method is based on DCE-CT scanning performed with low temporal resolution to reduce the radiation dose to the patient. The arterial input function (AIF) with high temporal resolution can be generated with a coarsely sampled AIF through a previously published method of AIF estimation. To increase the SNR of time-concentration curves (tissue curves), first, a region-of-interest is segmented into squares composed of 3 × 3 pixels in size. Subsequently, the PCA filtering combined with a fraction of residual information criterion is applied to all the segmented squares for further improvement of their SNRs. The proposed method was applied to each DCE-CT data set of a cohort of 14 patients at varying levels of down-sampling. The kinetic analyses using the modified Tofts’ model and singular value decomposition method, then, were carried out for each of the down-sampling schemes between the intervals from 2 to 15 s. The results were compared with analyses done with the measured data in high temporal resolution (i.e., original scanning frequency) as the reference. Results: The patients’ AIFs were estimated to high accuracy based on the 11 orthonormal bases of arterial impulse responses established in the previous paper. In addition, noise in the images was effectively reduced by using five principal components of the tissue curves for filtering. Kinetic analyses using the proposed method showed superior results compared to those with down-sampling alone; they were able to maintain the accuracy in the
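A minimal sketch of the PCA filtering step described above: time-concentration curves from a 3 x 3 block of pixels are denoised by keeping only the leading principal components. The synthetic curves and the fixed number of retained components are hypothetical; the paper selects components with a fraction-of-residual-information criterion:

    import numpy as np

    def pca_filter(curves, n_components):
        """curves: (n_pixels, n_timepoints). Reconstruct using only the leading principal components."""
        mean = curves.mean(axis=0, keepdims=True)
        centered = curves - mean
        u, s, vt = np.linalg.svd(centered, full_matrices=False)
        s[n_components:] = 0.0                       # discard the low-variance (noisy) components
        return mean + (u * s) @ vt

    # Hypothetical 9 noisy tissue curves sampled every 2 s over 120 s
    t = np.arange(0, 120, 2.0)
    clean = np.exp(-((t - 40.0) / 25.0) ** 2)        # common enhancement shape
    rng = np.random.default_rng(2)
    curves = clean + 0.05 * rng.standard_normal((9, t.size))
    filtered = pca_filter(curves, n_components=5)
    print(np.abs(filtered - clean).mean(), np.abs(curves - clean).mean())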
Casas-Castillo, M. Carmen; Rodríguez-Solà, Raúl; Navarro, Xavier; Russo, Beniamino; Lastra, Antonio; González, Paula; Redaño, Angel
2016-11-01
The fractal behavior of extreme rainfall intensities registered between 1940 and 2012 by the Retiro Observatory of Madrid (Spain) has been examined, and a simple scaling regime ranging from 25 min to 3 days of duration has been identified. Thus, an intensity-duration-frequency (IDF) master equation of the location has been constructed in terms of the simple scaling formulation. The scaling behavior of probable maximum precipitation (PMP) for durations between 5 min and 24 h has also been verified. For the statistical estimation of the PMP, an envelope curve of the frequency factor (k_m) based on a total of 10,194 station-years of annual maximum rainfall from 258 stations in Spain has been developed. This curve could be useful to estimate suitable values of PMP at any point of the Iberian Peninsula from basic statistical parameters (mean and standard deviation) of its rainfall series.
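A minimal sketch of the statistical PMP estimate implied above, in the Hershfield form PMP = mean + k_m * standard deviation of the annual-maximum rainfall series; the series and the frequency factor read off the envelope curve are hypothetical:

    import numpy as np

    annual_max_mm = np.array([42.0, 55.0, 38.0, 61.0, 47.0, 73.0, 50.0, 44.0, 66.0, 58.0])  # hypothetical
    k_m = 15.0                                   # hypothetical frequency factor from the envelope curve

    pmp_mm = annual_max_mm.mean() + k_m * annual_max_mm.std(ddof=1)
    print(f"Statistical PMP estimate: {pmp_mm:.1f} mm")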
B. Forte
2013-04-01
The impact of space weather events on satellite-based technologies (e.g. satellite navigation and precise positioning) is typically quantified on the basis of the total electron content (TEC) and the temporal fluctuations associated with it. GNSS (global navigation satellite systems) TEC measurements are integrated over a long distance and thus may include contributions from different regions of the ionised atmosphere, which may prevent the resolution of the mechanisms ultimately responsible for given observations. The purpose of the experiment presented here was to compare TEC estimates from EISCAT and GPS measurements. The EISCAT measurements were obtained along the same line of sight as a given GPS satellite observed from Tromsø. The present analyses focussed on the comparison of temporal fluctuations in the TEC between aligned GPS and EISCAT measurements. A reasonably good agreement was found between temporal fluctuations in TEC observed by EISCAT and those observed by a co-located GPS ionospheric monitor along the same line of sight, indicating a contribution to the total TEC mainly from structures at E and F altitudes in the presence of ionisation enhancements possibly caused by particle precipitation in the nighttime sector. The experiment suggests the great potential of the measurements to be performed by the future EISCAT_3D system, limited only by the localised geographic region to be covered.
Elmore, Stacey A; Huyvaert, Kathryn P; Bailey, Larissa L; Iqbal, Asma; Su, Chunlei; Dixon, Brent R; Alisauskas, Ray T; Gajadhar, Alvin A; Jenkins, Emily J
2016-08-01
Increasingly, birds are recognised as important hosts for the ubiquitous parasite Toxoplasma gondii, although little experimental evidence exists to determine which tissues should be tested to maximise the detection probability of T. gondii. Also, Arctic-nesting geese are suspected to be important sources of T. gondii in terrestrial Arctic ecosystems, but the parasite has not previously been reported in the tissues of these geese. Using a domestic goose model, we applied a multi-scale occupancy framework to demonstrate that the probability of detection of T. gondii was highest in the brain (0.689, 95% confidence interval=0.486, 0.839) and the heart (0.809, 95% confidence interval=0.693, 0.888). Inoculated geese had an estimated T. gondii infection probability of 0.849, (95% confidence interval=0.643, 0.946), highlighting uncertainty in the system, even under experimental conditions. Guided by these results, we tested the brains and hearts of wild Ross's Geese (Chen rossii, n=50) and Lesser Snow Geese (Chen caerulescens, n=50) from Karrak Lake, Nunavut, Canada. We detected 51 suspected positive tissue samples from 33 wild geese using real-time PCR with melt-curve analysis. The wild goose prevalence estimates generated by our multi-scale occupancy analysis were higher than the naïve estimates of prevalence, indicating that multiple PCR repetitions on the same organs and testing more than one organ could improve T. gondii detection. Genetic characterisation revealed Type III T. gondii alleles in six wild geese and Sarcocystis spp. in 25 samples. Our study demonstrates that Arctic nesting geese are capable of harbouring T. gondii in their tissues and could transport the parasite from their southern overwintering grounds into the Arctic region. We demonstrate how a multi-scale occupancy framework can be used in a domestic animal model to guide resource-limited sample collection and tissue analysis in wildlife. Secondly, we confirm the value of traditional occupancy in
Imprecise Estimation for Conditional Outage Probabilities of Power Components
刁浩然; 杨明; 韩学山; 马世英; 刘道伟; 王剑辉
2016-01-01
With limited outage samples, it is difficult to obtain the operational reliability indexes of power components accurately. Therefore, estimating the fluctuation ranges of the indexes can provide a more objective decision-making basis for power system operational risk control. In this paper, a novel approach is proposed to estimate the imprecise conditional outage probabilities of power components based on the credal network (CN), which can evaluate the interval ranges of conditional probabilities. Based on the historical outage statistics and the component operating conditions at the target period, a credal network, which uses the imprecise Dirichlet model (IDM) to obtain the imprecise probabilistic dependencies, is set up to estimate the imprecise conditional outage probabilities. The imprecise outage probabilities of components can then be obtained by using the reasoning algorithm of the credal network. The proposed approach can reflect the variation of the conditional outage probabilities with respect to the operating conditions of the power components, and opens a new way for the reliability assessment of power components with limited outage observations. Results on estimating the imprecise conditional outage probabilities of LGJ-300 transmission lines located in Shandong province illustrate the effectiveness of the proposed approach.
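A minimal sketch of the imprecise Dirichlet model (IDM) interval referred to above: with n_i outages observed under a given operating condition out of N comparable observations and prior strength s, the outage probability is bracketed by [n_i/(N+s), (n_i+s)/(N+s)]; the counts below are hypothetical:

    def idm_interval(n_events, n_total, s=1.0):
        """Imprecise Dirichlet model bounds for an event probability (prior strength s)."""
        lower = n_events / (n_total + s)
        upper = (n_events + s) / (n_total + s)
        return lower, upper

    # Hypothetical: 3 line outages observed in 40 comparable high-wind periods
    print(idm_interval(3, 40, s=2.0))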
Holbrook, Christopher M.; Johnson, Nicholas S.; Steibel, Juan P.; Twohey, Michael B.; Binder, Thomas R.; Krueger, Charles C.; Jones, Michael L.
2014-01-01
Improved methods are needed to evaluate barriers and traps for control and assessment of invasive sea lamprey (Petromyzon marinus) in the Great Lakes. A Bayesian state-space model provided reach-specific probabilities of movement, including trap capture and dam passage, for 148 acoustic tagged invasive sea lamprey in the lower Cheboygan River, Michigan, a tributary to Lake Huron. Reach-specific movement probabilities were combined to obtain estimates of spatial distribution and abundance needed to evaluate a barrier and trap complex for sea lamprey control and assessment. Of an estimated 21 828 – 29 300 adult sea lampreys in the river, 0%–2%, or 0–514 untagged lampreys, could have passed upstream of the dam, and 46%–61% were caught in the trap. Although no tagged lampreys passed above the dam (0/148), our sample size was not sufficient to consider the lock and dam a complete barrier to sea lamprey. Results also showed that existing traps are in good locations because 83%–96% of the population was vulnerable to existing traps. However, only 52%–69% of lampreys vulnerable to traps were caught, suggesting that traps can be improved. The approach used in this study was a novel use of Bayesian state-space models that may have broader applications, including evaluation of barriers for other invasive species (e.g., Asian carp (Hypophthalmichthys spp.)) and fish passage structures for other diadromous fishes.
Annambhotla, Pallavi D; Gurbaxani, Brian M; Kuehnert, Matthew J; Basavaraju, Sridhar V
2017-04-01
In 2013, guidelines were released for reducing the risk of viral bloodborne pathogen transmission through organ transplantation. Eleven criteria were described that result in a donor being designated at increased infectious risk. Human immunodeficiency virus (HIV) and hepatitis C virus (HCV) transmission risk from an increased-risk donor (IRD), despite negative nucleic acid testing (NAT), likely varies based on behavior type and timing. We developed a Monte Carlo risk model to quantify probability of HIV among IRDs. The model included NAT performance, viral load dynamics, and per-act risk of acquiring HIV by each behavior. The model also quantifies the probability of HCV among IRDs by non-medical intravenous drug use (IVDU). Highest risk is among donors with history of unprotected, receptive anal male-to-male intercourse with partner of unknown HIV status (MSM), followed by sex with an HIV-infected partner, IVDU, and sex with a commercial sex worker. With NAT screening, the estimated risk of undetected HIV remains small even at 1 day following a risk behavior. The estimated risk for HCV transmission through IVDU is likewise small and decreases quicker with time owing to the faster viral growth dynamics of HCV compared with HIV. These findings may allow for improved organ allocation, utilization, and recipient informed consent. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
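A minimal Monte Carlo sketch in the spirit of the risk model described above: the probability that a donor acquired HIV from a single risk act and is still NAT-negative at screening, given a per-act acquisition probability, an exponential early viral-load growth after an eclipse phase, and a NAT detection limit. All parameter values are illustrative assumptions, not the paper's estimates:

    import numpy as np

    rng = np.random.default_rng(3)
    n = 1_000_000

    # Hypothetical, illustrative parameters
    p_per_act = 0.0138                                   # per-act acquisition probability
    days_since_act = 7.0                                 # time from risk behaviour to donor screening
    eclipse_days = rng.uniform(3.0, 7.0, n)              # delay before detectable replication begins
    doubling_days = rng.normal(0.85, 0.15, n).clip(0.4, 2.0)
    v0, nat_limit = 1.0, 50.0                            # initial and NAT-detectable viral loads (copies/mL)

    infected = rng.random(n) < p_per_act
    days_replicating = np.clip(days_since_act - eclipse_days, 0.0, None)
    viral_load = v0 * 2.0 ** (days_replicating / doubling_days)
    undetected = infected & (viral_load < nat_limit)
    print("P(infected and still NAT-negative) ~", undetected.mean())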
Jang, Cheng-Shin
2016-01-01
The Tamsui River watershed situated in Northern Taiwan provides a variety of water recreational opportunities such as riverbank park activities, fishing, cruising, rowing, sailing, and swimming. However, river water quality strongly affects water recreational quality. Moreover, the health of recreationists who are partially or fully exposed to polluted river water may be jeopardized. A river pollution index (RPI) composed of dissolved oxygen, biochemical oxygen demand, suspended solids, and ammonia nitrogen is typically used to gauge the river water quality and regulate the water body use in Taiwan. The purpose of this study was to probabilistically determine the RPI categories in the Tamsui River watershed and to assess the urban water recreational quality on the basis of the estimated RPI categories. First, according to various RPI categories, one-dimensional indicator kriging (IK) was adopted to estimate the occurrence probabilities of the RPI categories. The maximum occurrence probability among the categories was then employed to determine the most suitable RPI category. Finally, the most serious categories and seasonal variations of RPI were adopted to evaluate the quality of current water recreational opportunities in the Tamsui River watershed. The results revealed that the midstream and downstream sections of the Tamsui River and its tributaries with poor river water quality afford low water recreational quality, and water recreationists should avoid full or limited exposure to these bodies of water. However, the upstream sections of the Tamsui River watershed with high river water quality are suitable for all water recreational activities.
Popov, V. D.; Khamidullina, N. M.
2006-10-01
In developing radio-electronic devices (RED) of spacecraft operating in the fields of ionizing radiation in space, one of the most important problems is the correct estimation of their radiation tolerance. The “weakest link” in the element base of onboard microelectronic devices under radiation effect is the integrated microcircuits (IMC), especially of large scale (LSI) and very large scale (VLSI) degree of integration. The main characteristic of IMC, which is taken into account when making decisions on using some particular type of IMC in the onboard RED, is the probability of non-failure operation (NFO) at the end of the spacecraft’s lifetime. It should be noted that, until now, the NFO has been calculated only from the reliability characteristics, disregarding the radiation effect. This paper presents the so-called “reliability” approach to determination of radiation tolerance of IMC, which allows one to estimate the probability of non-failure operation of various types of IMC with due account of radiation-stimulated dose failures. The described technique is applied to RED onboard the Spektr-R spacecraft to be launched in 2007.
Probability estimation for semantic association on domain ontology
田萱; 李冬梅
2011-01-01
A probability model based on Bayesian principles is given to measure the semantic association from a concept to a directly related concept in a domain ontology. The model is defined over different semantic relationships and is estimated by maximum likelihood estimation, with semantic distance used to estimate the semantic relationships during estimation. Based on the proposed model, a method to measure the semantic association of any two concepts in an ontology is given. Experimental results for semantic retrieval on open data show that the resulting semantic query expansion performs better than classic semantic query expansion.
Furuya, Hiroyuki
2015-11-01
The first autochthonous case of dengue fever in Japan since 1945 was reported on August 27, 2014. Infection was transmitted by Aedes albopictus mosquitoes in Tokyo's Yoyogi Park. A total of 65 cases with no history of overseas travel and who may have been infected around the park were reported as of September 5, 2014. To quantify infection risk of the local epidemic, the reproduction number and vector density per person at the onset of the epidemic were estimated. The estimated probability distribution and the number of female mosquitoes per person (MPP) were determined from the data of the initial epidemic. The estimated distribution R(0i) for the initial epidemic was fitted to a Gamma distribution using location parameter 4.25, scale parameter 0.19, and shape parameter 7.76 with median 7.78 and IQR (7.21-8.40). The MPP was fitted to a normal distribution with mean 5.71 and standard deviation 0.53. Both estimated reproduction number and vector density per person at the onset of the epidemic were higher than previously reported values. These results indicate the potential for dengue outbreaks in places with elevated vector density per person, even in dengue non-endemic countries. To investigate the cause of this outbreak, further studies will be needed, including assessments of social, behavioral, and environmental factors that may have contributed to this epidemic by altering host and vector conditions in the park.
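A minimal sketch of the distribution fitting reported above, assuming a hypothetical bootstrap sample of reproduction-number estimates; scipy's gamma fit returns shape, location, and scale parameters analogous to those quoted in the abstract, whose values are used here only to generate the synthetic sample:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    r0_samples = stats.gamma.rvs(a=7.76, loc=4.25, scale=0.19, size=2000, random_state=rng)  # hypothetical

    shape, loc, scale = stats.gamma.fit(r0_samples)
    median = np.median(r0_samples)
    q1, q3 = np.percentile(r0_samples, [25, 75])
    print(shape, loc, scale, median, (q1, q3))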
Alfieri, Joseph G.; Anderson, Martha C.; Kustas, William P.; Cammalleri, Carmelo
2017-01-01
Accurate spatially distributed estimates of actual evapotranspiration (ET) derived from remotely sensed data are critical to a broad range of practical and operational applications. However, due to lengthy return intervals and cloud cover, data acquisition is not continuous over time, particularly for satellite sensors operating at medium (~100 m) or finer resolutions. To fill the data gaps between clear-sky data acquisitions, interpolation methods that take advantage of the relationship between ET and other environmental properties that can be continuously monitored are often used. This study sought to evaluate the accuracy of this approach, which is commonly referred to as temporal upscaling, as a function of satellite revisit interval. Using data collected at 20 Ameriflux sites distributed throughout the contiguous United States and representing four distinct land cover types (cropland, grassland, forest, and open-canopy) as a proxy for perfect retrievals on satellite overpass dates, this study assesses daily ET estimates derived using five different reference quantities (incident solar radiation, net radiation, available energy, reference ET, and equilibrium latent heat flux) and three different interpolation methods (linear, cubic spline, and Hermite spline). Not only did the analyses find that the temporal autocorrelation, i.e., persistence, of all of the reference quantities was short, they also found that the land cover types with the greatest ET exhibited the least persistence. This carries over to the error associated with both the various scaled quantities and the flux estimates. In terms of both the root mean square error (RMSE) and mean absolute error (MAE), the errors increased rapidly with increasing return interval, following a logarithmic relationship. Again, the land cover types with the greatest ET showed the largest errors. Moreover, using a threshold of 20 % relative error, this study indicates that a return interval of no more than 5 days is
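A minimal sketch of the temporal upscaling described above: the ratio of ET to a continuously monitored reference quantity (here reference ET) is computed on clear-sky overpass days, interpolated between overpasses (linear interpolation shown), and multiplied back by the daily reference series; all values are hypothetical:

    import numpy as np

    days = np.arange(16)                                    # day index over one revisit cycle
    ref_et = 5.0 + np.sin(days / 3.0)                       # hypothetical daily reference ET (mm/day)
    overpass_days = np.array([0, 8, 15])                    # clear-sky acquisition days
    et_retrieved = np.array([3.1, 2.6, 3.4])                # hypothetical remotely sensed ET (mm/day)

    fraction = et_retrieved / ref_et[overpass_days]         # ET fraction on overpass days
    fraction_daily = np.interp(days, overpass_days, fraction)
    et_daily = fraction_daily * ref_et                      # gap-filled daily ET estimate
    print(et_daily.round(2))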
Cantuaria, Manuella Lech; Løfstrøm, Per; Blanes-Vidal, Victoria
2017-12-15
The assessment of air pollution exposures in epidemiological studies does not always account for spatio-temporal variability of pollutant concentrations. In the case of odor studies, a common approach is to use yearly averaged odorant exposure estimates with low spatial resolution, which may not capture the spatio-temporal variability of emissions and therefore distort the epidemiological results. This study explores the use of different exposure assessment methods for time-variant ammonia exposures with high spatial resolution, in rural communities exposed to odors from agricultural and livestock farming activities. Exposure estimations were based on monthly ammonia concentrations from emission-dispersion models. Seven time-dependent residential NH3 exposure variables were investigated: 1) Annual mean of NH3 exposures; 2) Maximum annual NH3 exposure; 3) Area under the exposure curve; 4) Peak area; 5) Peak-to-mean ratio; 6) Area above the baseline (annual mean of NH3 exposures); and 7) Maximum positive slope of the exposure curve. We developed binomial and multinomial logistic regression models for frequency of odor perception and odor annoyance responses based on each temporal exposure variable. Odor response estimates, goodness of fit and predictive abilities derived from each model were compared. All time-dependent NH3 exposure variables, except peak-to-mean ratio, were positively associated with odor perception and odor annoyance, although the results differ considerably in terms of magnitude and precision. The best goodness of fit of the predictive binomial models was obtained when using maximum monthly NH3 exposure as the exposure assessment variable, both for odor perception and annoyance. The best predictive performance for odor perception was found when annual mean was used as exposure variable (accuracy=71.82%, Cohen's Kappa=0.298) whereas odor annoyance was better predicted when using peak area (accuracy=68.07%, Cohen's Kappa=0.290). Our study highlights
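A hedged sketch of how a few of the listed exposure variables could be derived from a monthly NH3 series and related to a binary annoyance response; the data, effect size, and the use of scikit-learn's LogisticRegression are assumptions for illustration, not the study's models.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_homes, n_months = 200, 12
nh3 = rng.gamma(shape=2.0, scale=1.5, size=(n_homes, n_months))   # simulated monthly NH3 exposures

annual_mean = nh3.mean(axis=1)                        # 1) annual mean of exposures
annual_max = nh3.max(axis=1)                          # 2) maximum monthly exposure
area_under_curve = nh3.sum(axis=1)                    # 3) area under the exposure curve (monthly sum)
max_pos_slope = np.diff(nh3, axis=1).max(axis=1)      # 7) maximum positive slope of the exposure curve

# Simulated binary annoyance responses, then a binomial logistic model on one exposure variable
p_annoy = 1.0 / (1.0 + np.exp(-(0.4 * annual_max - 2.0)))
annoyed = rng.binomial(1, p_annoy)
model = LogisticRegression().fit(annual_max.reshape(-1, 1), annoyed)
print("odds ratio per unit NH3:", float(np.exp(model.coef_[0, 0])))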
A study of temporal estimation from the perspective of the Mental Clock Model.
Carmeci, Floriana; Misuraca, Raffaella; Cardaci, Maurizio
2009-04-01
M. Cardaci's (2000) Mental Clock Model maintains that a task requiring a low mental workload is associated with an acceleration of perceived time, whereas a task requiring a high mental workload is associated with a deceleration. The authors examined the predictions of this model in a music-listening condition in which musical pieces were presented at several levels of structural complexity. To measure the effects of musical complexity on time estimation, the authors used retrospective and prospective time-estimation paradigms. For the retrospective paradigm, the authors invited participants to listen to a musical piece and then estimate its duration. For the prospective paradigm, the authors invited participants to stop the musical reproduction after a certain interval of time. Results show that variations in musical complexity yielded the empirical effects that the Mental Clock Model predicted for both paradigms.
Vsevolozhskaya, Olga A; Anthony, James C
2016-06-29
Measured as elapsed time from first use to dependence syndrome onset, the estimated "induction interval" for cocaine is thought to be short relative to the cannabis interval, but little is known about risk of becoming dependent during the first months after onset of use. Virtually all published estimates for this facet of drug dependence epidemiology are from life histories elicited years after first use. To improve estimation, we turn to new month-wise data from nationally representative samples of newly incident drug users identified via probability sampling and confidential computer-assisted self-interviews for the United States National Surveys on Drug Use and Health, 2004-2013. Standardized modules assessed first and most recent use, and dependence syndromes, for each drug subtype. A four-parameter Hill function depicts the drug dependence transition for subgroups defined by units of elapsed time from first to most recent use, with an expectation of greater dependence transitions for cocaine than for cannabis. This study's novel estimates for cocaine users one month after first use show 2-4% with cocaine dependence; 12-17% are dependent when use has persisted. Corresponding cannabis estimates are 0-1% after one month, but 10-23% when use persists. Duration or persistence of cannabis smoking beyond an initial interval of a few months of use seems to be a signal of noteworthy risk for, or co-occurrence of, rapid-onset cannabis dependence, not too distant from cocaine estimates, when we sort newly incident users into subgroups defined by elapsed time from first to most recent use. Copyright © 2016 John Wiley & Sons, Ltd.
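A minimal sketch, assuming a standard four-parameter Hill curve of the form bottom + (top - bottom)·t^n/(t50^n + t^n), of how such a function can be fitted to the proportion dependent versus months of persisted use; the data points and starting values are invented.

import numpy as np
from scipy.optimize import curve_fit

def hill(t, bottom, top, t50, n):
    """Four-parameter Hill function: bottom + (top - bottom) * t**n / (t50**n + t**n)."""
    return bottom + (top - bottom) * t**n / (t50**n + t**n)

months = np.array([1, 2, 3, 6, 9, 12, 18, 24], dtype=float)
prop_dependent = np.array([0.03, 0.05, 0.07, 0.10, 0.12, 0.14, 0.15, 0.16])  # illustrative proportions

params, _ = curve_fit(hill, months, prop_dependent, p0=[0.02, 0.17, 5.0, 1.5], maxfev=5000)
print("bottom, top, t50, Hill slope:", np.round(params, 3))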
Diwan, Vishal; Stålsby Lundborg, Cecilia; Tamhankar, Ashok J.
2013-01-01
The presence of antibiotics in the environment and their subsequent impact on resistance development has raised concerns globally. Hospitals are a major source of antibiotics released into the environment. To reduce these residues, research to improve knowledge of the dynamics of antibiotic release from hospitals is essential. Therefore, we undertook a study to estimate seasonal and temporal variation in antibiotic release from two hospitals in India over a period of two years. For this, 6 sampling sessions of 24 hours each were conducted in the three prominent seasons of India, at all wastewater outlets of the two hospitals, using continuous and grab sampling methods. An in-house wastewater sampler was designed for continuous sampling. Eight antibiotics from four major antibiotic groups were selected for the study. To understand the temporal pattern of antibiotic release, each of the 24-hour sessions were divided in three sub-sampling sessions of 8 hours each. Solid phase extraction followed by liquid chromatography/tandem mass spectrometry (LC-MS/MS) was used to determine the antibiotic residues. Six of the eight antibiotics studied were detected in the wastewater samples. Both continuous and grab sampling methods indicated that the highest quantities of fluoroquinolones were released in winter followed by the rainy season and the summer. No temporal pattern in antibiotic release was detected. In general, in a common timeframe, continuous sampling showed less concentration of antibiotics in wastewater as compared to grab sampling. It is suggested that continuous sampling should be the method of choice as grab sampling gives erroneous results, it being indicative of the quantities of antibiotics present in wastewater only at the time of sampling. Based on our studies, calculations indicate that from hospitals in India, an estimated 89, 1 and 25 ng/L/day of fluroquinolones, metronidazole and sulfamethoxazole respectively, might be getting released into the
Bianchi, Federica; Santurette, Sébastien; Fereczkowski, Michal
2015-01-01
Recent physiological studies in animals showed that noise-induced sensorineural hearing loss (SNHL) increased the amplitude of envelope coding in single auditory-nerve fibers. The present study investigated whether SNHL in human listeners was associated with enhanced temporal envelope coding......, whether this enhancement affected pitch discrimination performance, and whether loss of compression following SNHL was a potential factor in envelope coding enhancement. Envelope processing was assessed in normal-hearing (NH) and hearing-impaired (HI) listeners in a behavioral amplitude...... resolvability. For the unresolved conditions, all five HI listeners performed as well as or better than NH listeners with matching musical experience. Two HI listeners showed lower amplitude-modulation detection thresholds than NH listeners for low modulation rates, and one of these listeners also showed a loss...
Estimation of 2N(e)s from temporal allele frequency data
Bollback, Jonathan Paul; York, Thomas L.; Nielsen, Rasmus
2008-01-01
, and assuming independent binomial sampling from this diffusion process at each time point. We apply the method in two example applications. First, we estimate selection coefficients acting on the CCR5-Δ32 mutation on the basis of published samples of contemporary and ancient human DNA. We show...... that the data are compatible with the assumption of s = 0, although moderate amounts of selection acting on this mutation cannot be excluded. In our second example, we estimate the selection coefficient acting on a mutation segregating in an experimental phage population. We show that the selection coefficient...... acting on this mutation is ~0.43....
Stegmann, Mikkel Bille; Pedersen, Dorthe
2005-01-01
Rapid and unsupervised quantitative analysis is of utmost importance to ensure clinical acceptance of many examinations using cardiac magnetic resonance imaging (MRI). We present a framework that aims at fulfilling these goals for the application of left ventricular ejection fraction estimation i...
Estimation of Corrosion Probability for Steel Tank Floor-plates
杨廷鸿; 何超; 吴松林; 王春林
2013-01-01
This paper focuses on floor-plate corrosion, the main corrosion issue of steel oil tanks. First, it analyses the characteristics of engineering inspection data on tank floor-plate corrosion and the statistical theory for analysing corrosion test data, and finds that the information contained in engineering inspection data on floor-plate corrosion is incomplete for corrosion probability estimation. Based on this analysis and theory, a corrosion probability estimation model is then established that corrects the probability estimate using an estimate of the slightly corroded area, with prediction of the maximum corrosion depth as the goal. Finally, the correctness and feasibility of the model, and the predicted maximum corrosion depth of tank floor-plates, are tested against approximately 900 engineering inspection records from 27 tanks in Guangzhou and other areas. The maximum relative error is less than 45%, and about 80% of the relative errors are below 30%.
Olson, Scott A.; with a section by Veilleux, Andrea G.
2014-01-01
This report provides estimates of flood discharges at selected annual exceedance probabilities (AEPs) for streamgages in and adjacent to Vermont and equations for estimating flood discharges at AEPs of 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent (recurrence intervals of 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-years, respectively) for ungaged, unregulated, rural streams in Vermont. The equations were developed using generalized least-squares regression. Flood-frequency and drainage-basin characteristics from 145 streamgages were used in developing the equations. The drainage-basin characteristics used as explanatory variables in the regression equations include drainage area, percentage of wetland area, and the basin-wide mean of the average annual precipitation. The average standard errors of prediction for estimating the flood discharges at the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent AEP with these equations are 34.9, 36.0, 38.7, 42.4, 44.9, 47.3, 50.7, and 55.1 percent, respectively. Flood discharges at selected AEPs for streamgages were computed by using the Expected Moments Algorithm. To improve estimates of the flood discharges for given exceedance probabilities at streamgages in Vermont, a new generalized skew coefficient was developed. The new generalized skew for the region is a constant, 0.44. The mean square error of the generalized skew coefficient is 0.078. This report describes a technique for using results from the regression equations to adjust an AEP discharge computed from a streamgage record. This report also describes a technique for using a drainage-area adjustment to estimate flood discharge at a selected AEP for an ungaged site upstream or downstream from a streamgage. The final regression equations and the flood-discharge frequency data used in this study will be available in StreamStats. StreamStats is a World Wide Web application providing automated regression-equation solutions for user-selected sites on streams.
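The drainage-area adjustment mentioned above can be illustrated with a simple scaling rule; the exponent and discharges below are hypothetical and are not the report's regression coefficients.

def area_adjusted_discharge(q_gage, area_gage_km2, area_ungaged_km2, exponent=0.8):
    """Scale a flood discharge by the ratio of drainage areas raised to an exponent (illustrative)."""
    return q_gage * (area_ungaged_km2 / area_gage_km2) ** exponent

q100_gage = 250.0   # 1-percent AEP discharge at the gage, m^3/s (made-up value)
print(area_adjusted_discharge(q100_gage, area_gage_km2=310.0, area_ungaged_km2=260.0))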
On ARMA Probability Density Estimation.
1981-12-01
definitions of the constants b_k (k = 0, 1, ..., q) and a_k (k = 1, ..., p) will be given which, for a given function f(·), uniquely define an approximator f_{p,q}(·) for each... satisfied. When using f_{p,q}(·) for approximation purposes it is thus important to always verify whether or not this condition is met. In concluding
E. Delogu
2012-08-01
Evapotranspiration estimates can be derived from remote sensing data and ancillary, mostly meteorological, information. For this purpose, two types of methods are classically used: the first type estimates a potential evapotranspiration rate from vegetation indices, and adjusts this rate according to water availability derived from either a surface temperature index or a first guess obtained from a rough estimate of the water budget, while the second family of methods relies on the link between the surface temperature and the latent heat flux through the surface energy budget. The latter provides an instantaneous estimate at the time of satellite overpass. In order to compute daily evapotranspiration, one needs an extrapolation algorithm. Since no image is acquired during cloudy conditions, these methods can only be applied during clear sky days. In order to derive seasonal evapotranspiration, one needs an interpolation method. Two combined interpolation/extrapolation methods based on the self-preservation of evaporative fraction and the stress factor are compared to reconstruct seasonal evapotranspiration from instantaneous measurements acquired in clear sky conditions. Those measurements are taken from instantaneous latent heat flux from 11 datasets in Southern France and Morocco. Results show that both methods have comparable performances with a clear advantage for the evaporative fraction for datasets with several water stress events. Both interpolation algorithms tend to underestimate evapotranspiration due to the energy limiting conditions that prevail during cloudy days. Taking into account the diurnal variations of the evaporative fraction according to an empirical relationship derived from a previous study improved the performance of the extrapolation algorithm and therefore the retrieval of the seasonal evapotranspiration for all but one dataset.
Grinand, C.; Maire, G. Le; Vieilledent, G.; Razakamanarivo, H.; Razafimbelo, T.; Bernoux, M.
2017-02-01
Soil organic carbon (SOC) plays an important role in climate change regulation, notably through release of CO2 following land use change such as deforestation, but data on stock change levels are lacking. This study aims to empirically assess SOC stock change between 1991 and 2011 at the landscape scale using easy-to-access spatially-explicit environmental factors. The study area was located in southeast Madagascar, in a region that exhibits a very high rate of deforestation and which is characterized by both humid and dry climates. We estimated SOC stock on 0.1 ha plots for 95 different locations in a 43,000 ha reference area covering both dry and humid conditions and representing different land cover including natural forest, cropland, pasture and fallows. We used the Random Forest algorithm to identify the environmental factors explaining the spatial distribution of SOC. We then predicted SOC stocks for two soil layers at 30 cm and 100 cm over a wider area of 395,000 ha. By changing the soil and vegetation indices derived from remote sensing images we were able to produce SOC maps for 1991 and 2011. Those estimates and their related uncertainties were combined in a post-processing step to map estimates of significant SOC variations, and we finally compared the SOC change map with published deforestation maps. Results show that the geologic variables, precipitation, temperature, and soil-vegetation status were strong predictors of SOC distribution at regional scale. We estimated an average net loss of 10.7% and 5.2% for the 30 cm and the 100 cm layers respectively for deforested areas in the humid area. Our results also suggest that these losses occur within the first five years following deforestation. No significant variations were observed for the dry region. This study provides new solutions and knowledge for a better integration of soil threats and opportunities in land management policies.
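A minimal sketch of the mapping workflow, assuming scikit-learn's RandomForestRegressor as a stand-in for the Random Forest algorithm; the covariates, plot data, and relationships are simulated for illustration only.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_plots = 95
X = np.column_stack([
    rng.uniform(800, 2500, n_plots),    # annual precipitation (mm), hypothetical covariate
    rng.uniform(18, 26, n_plots),       # mean annual temperature (deg C)
    rng.uniform(0.1, 0.9, n_plots),     # vegetation index from remote sensing
])
y = 20 + 0.02 * X[:, 0] + 30 * X[:, 2] + rng.normal(0, 5, n_plots)   # SOC stock (t/ha), simulated

rf = RandomForestRegressor(n_estimators=500, random_state=0)
print("cross-validated R^2:", cross_val_score(rf, X, y, cv=5).mean())
rf.fit(X, y)
print("variable importances:", np.round(rf.feature_importances_, 2))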
Effect of Temporal Residual Correlation on Estimation of Model Averaging Weights
Ye, M.; Lu, D.; Curtis, G. P.; Meyer, P. D.; Yabusaki, S.
2010-12-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are always calculated using model selection criteria such as AIC, AICc, BIC, and KIC. However, this method sometimes leads to an unrealistic situation in which one model receives overwhelmingly high averaging weight (even 100%), which cannot be justified by available data and knowledge. It is found in this study that the unrealistic situation is due partly, if not solely, to neglect of residual correlation when estimating the negative log-likelihood function common to all the model selection criteria. In the context of maximum-likelihood or least-squares inverse modeling, the residual correlation is accounted for in the full covariance matrix; when the full covariance matrix is replaced by its diagonal counterpart, it assumes data independence and ignores the correlation. As a result, treating the correlated residuals as independent distorts the distance between observations and simulations of alternative models, which may lead to incorrect estimation of model selection criteria and model averaging weights. This is illustrated for a set of surface complexation models developed to simulate uranium transport based on a series of column experiments. The residuals are correlated in time, and the time correlation is addressed using a second-order autoregressive model. The modeling results reveal the importance of considering residual correlation in the estimation of model averaging weights.
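A hedged numerical illustration of the point about residual correlation: Gaussian negative log-likelihoods and AIC-type averaging weights are computed for two sets of simulated residuals, using either a diagonal covariance or an AR(1) (time-correlated) covariance. The models, residuals, and parameter counts are invented, and an AR(1) structure is used here in place of the study's second-order autoregressive model for brevity.

import numpy as np

def neg_log_like(residuals, cov):
    """Gaussian negative log-likelihood 0.5*(n*ln(2*pi) + ln|C| + r' C^-1 r)."""
    n = residuals.size
    _, logdet = np.linalg.slogdet(cov)
    quad = residuals @ np.linalg.solve(cov, residuals)
    return 0.5 * (n * np.log(2 * np.pi) + logdet + quad)

def akaike_weights(nlls, n_params):
    """AIC-based model averaging weights from negative log-likelihoods and parameter counts."""
    aic = 2 * np.asarray(n_params) + 2 * np.asarray(nlls)
    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

rng = np.random.default_rng(3)
n, rho, sigma = 80, 0.7, 1.0
ar1_cov = sigma**2 * rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
resid_m1 = rng.multivariate_normal(np.zeros(n), ar1_cov)   # residuals of a better model
resid_m2 = resid_m1 + rng.normal(0, 0.3, n)                # residuals of a slightly worse model

for label, cov in [("diagonal", np.eye(n) * sigma**2), ("AR(1)", ar1_cov)]:
    nlls = [neg_log_like(resid_m1, cov), neg_log_like(resid_m2, cov)]
    print(label, "weights:", np.round(akaike_weights(nlls, [4, 5]), 3))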
Slow potentials in time estimation: The role of temporal accumulation and habituation
Tadeusz W. Kononowicz
2011-09-01
Numerous studies have shown that contingent negative variation (CNV) measured at fronto-central and parietal-central areas is closely related to interval timing. However, the exact nature of the relation between CNV and the underlying timing mechanisms is still a topic of discussion. On the one hand, it has been proposed that the CNV measured at the supplementary motor area (SMA) is a direct reflection of the unfolding of time since a perceived onset, whereas other work has suggested that the increased amplitude reflects decision processes involved in interval timing. Strong evidence for the first view has been reported by Macar, Vidal and Casini (1999), who showed that variations in temporal performance were reflected in the measured CNV amplitude. If the CNV measured at SMA is a direct function of the passing of time, habituation effects are not expected. Here we report two replication studies, which both failed to replicate the expected performance-dependent variations. Even more powerful linear mixed-effects analyses failed to find any performance-related effects on the CNV amplitude, whereas habituation effects were found. These studies therefore suggest that the CNV amplitude does not directly reflect the unfolding of time.
Ganju, N.K.; Knowles, N.; Schoellhamer, D.H.
2008-01-01
In this study we used hydrologic proxies to develop a daily sediment load time-series, which agrees with decadal sediment load estimates, when integrated. Hindcast simulations of bathymetric change in estuaries require daily sediment loads from major tributary rivers, to capture the episodic delivery of sediment during multi-day freshwater flow pulses. Two independent decadal sediment load estimates are available for the Sacramento/San Joaquin River Delta, California prior to 1959, but they must be downscaled to a daily interval for use in hindcast models. Daily flow and sediment load data to the Delta are available after 1930 and 1959, respectively, but bathymetric change simulations for San Francisco Bay prior to this require a method to generate daily sediment load estimates into the Delta. We used two historical proxies, monthly rainfall and unimpaired flow magnitudes, to generate monthly unimpaired flows to the Sacramento/San Joaquin Delta for the 1851-1929 period. This step generated the shape of the monthly hydrograph. These historical monthly flows were compared to unimpaired monthly flows from the modern era (1967-1987), and a least-squares metric selected a modern water year analogue for each historical water year. The daily hydrograph for the modern analogue was then assigned to the historical year and scaled to match the flow volume estimated by dendrochronology methods, providing the correct total flow for the year. We applied a sediment rating curve to this time-series of daily flows, to generate daily sediment loads for 1851-1958. The rating curve was calibrated with the two independent decadal sediment load estimates, over two distinct periods. This novel technique retained the timing and magnitude of freshwater flows and sediment loads, without damping variability or net sediment loads to San Francisco Bay. The time-series represents the hydraulic mining period with sustained periods of increased sediment loads, and a dramatic decrease after 1910
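A minimal sketch of the downscaling recipe described above, with all data and rating-curve coefficients hypothetical: select a modern analogue water year by least squares on monthly flows, rescale its daily hydrograph to the historical annual volume, and apply a power-law sediment rating curve.

import numpy as np

rng = np.random.default_rng(4)
modern_daily = {yr: rng.gamma(2.0, 300.0, 365) for yr in range(1967, 1988)}                 # daily flows, m^3/s
modern_monthly = {yr: q[:360].reshape(12, 30).mean(axis=1) for yr, q in modern_daily.items()}

hist_monthly = rng.gamma(2.0, 280.0, 12)   # proxy-based shape of the historical year's monthly hydrograph
hist_annual_volume = 1.1e10                # m^3, e.g. from dendrochronology (made-up value)

# Least-squares selection of the modern analogue year, then rescale its daily hydrograph
analogue = min(modern_monthly, key=lambda yr: np.sum((modern_monthly[yr] - hist_monthly) ** 2))
daily = modern_daily[analogue]
daily_scaled = daily * hist_annual_volume / (daily.sum() * 86400.0)   # match the historical annual volume

a, b = 1e-4, 1.8                           # sediment rating curve Qs = a * Q**b (hypothetical coefficients)
daily_sediment_load = a * daily_scaled ** b
print(analogue, daily_sediment_load.sum())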
孙道德
2003-01-01
Based on the multinomial distribution and its properties, the paper analyses a method of unequal-probability classified cluster sample surveys and the unbiased estimation of the mean, and further derives the variance of this estimator together with an unbiased estimator of that variance.
Boiselet, Aurelien; Scotti, Oona; Lyon-Caen, Hélène
2014-05-01
-SISCOR Working Group. On the basis of this consensual logic tree, median probability of occurrences of M>=6 events were computed for the region of study. Time-dependent models (Brownian Passage time and Weibull probability distributions) were also explored. The probability of a M>=6.0 event is found to be greater in the western region compared to the eastern part of the Corinth rift, whether a fault-based or a classical seismotectonic approach is used. Percentile probability estimates are also provided to represent the range of uncertainties in the results. The percentile results show that, in general, probability estimates following the classical approach (based on the definition of seismotectonic source zones), cover the median values estimated following the fault-based approach. On the contrary, the fault-based approach in this region is still affected by a high degree of uncertainty, because of the poor constraints on the 3D geometries of the faults and the high uncertainties in their slip rates.
Hjertqvist Marika
2011-02-01
Background The fox tapeworm Echinococcus multilocularis has foxes and other canids as definitive host and rodents as intermediate hosts. However, most mammals can be accidental intermediate hosts and the larval stage may cause serious disease in humans. The parasite has never been detected in Sweden, Finland and mainland Norway. All three countries currently require an anthelminthic treatment for dogs and cats prior to entry in order to prevent introduction of the parasite. Documentation of freedom from E. multilocularis is necessary for justification of the present import requirements. Methods The probability that Sweden, Finland and mainland Norway were free from E. multilocularis and the sensitivity of the surveillance systems were estimated using scenario trees. Surveillance data from five animal species were included in the study: red fox (Vulpes vulpes), raccoon dog (Nyctereutes procyonoides), domestic pig, wild boar (Sus scrofa) and voles and lemmings (Arvicolinae). Results The cumulative probability of freedom from EM in December 2009 was high in all three countries: 0.98 (95% CI 0.96-0.99) in Finland, 0.99 (0.97-0.995) in Sweden and 0.98 (0.95-0.99) in Norway. Conclusions Results from the model confirm that there is a high probability that in 2009 the countries were free from E. multilocularis. The sensitivity analyses showed that the choice of the design prevalences in different infected populations was influential. Therefore more knowledge on expected prevalences for E. multilocularis in infected populations of different species is desirable to reduce residual uncertainty of the results.
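The core probability-of-freedom update behind such scenario-tree analyses can be sketched as a simple Bayesian recursion; the yearly surveillance-system sensitivities below are illustrative, and the scenario-tree computation of those sensitivities (and any discounting for the yearly risk of introduction) is not shown.

def prob_free_given_negative(prior_free, surveillance_sensitivity):
    """Posterior probability of freedom after one year of negative surveillance results."""
    p_infected = 1.0 - prior_free
    return prior_free / (prior_free + p_infected * (1.0 - surveillance_sensitivity))

p_free = 0.5   # non-informative starting prior
for year, sse in enumerate([0.6, 0.7, 0.65, 0.8], start=1):   # hypothetical yearly sensitivities
    p_free = prob_free_given_negative(p_free, sse)
    print(year, round(p_free, 3))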
Gudder, Stanley P
2014-01-01
Quantum probability is a subtle blend of quantum mechanics and classical probability theory. Its important ideas can be traced to the pioneering work of Richard Feynman in his path integral formalism. Only recently have the concept and ideas of quantum probability been presented in a rigorous axiomatic framework, and this book provides a coherent and comprehensive exposition of this approach. It gives a unified treatment of operational statistics, generalized measure theory and the path integral formalism that can only be found in scattered research articles. The first two chapters survey the ne
Asmussen, Søren; Albrecher, Hansjörg
The book gives a comprehensive treatment of the classical and modern ruin probability theory. Some of the topics are Lundberg's inequality, the Cramér-Lundberg approximation, exact solutions, other approximations (e.g., for heavy-tailed claim size distributions), finite horizon ruin probabilities......, extensions of the classical compound Poisson model to allow for reserve-dependent premiums, Markov-modulation, periodicity, change of measure techniques, phase-type distributions as a computational vehicle and the connection to other applied probability areas, like queueing theory. In this substantially...
Gastón S. Milanesi
2016-11-01
probabilities of financial distress. Exotic barrier options offer an alternative approach for predicting financial distress, and their structure fits the firm value-volatility relationship better. The paper proposes a "naive" barrier option model, because it simplifies the estimation of the unobservable variables, such as the firm's asset value and risk. First, simple call and barrier option models are developed in order to value the firm's capital and estimate the financial distress probability. Using a hypothetical case, a sensitivity exercise over maturity and volatility is carried out. A similar exercise is applied to estimate the capital value and financial distress probability for two Argentinian firms with different degrees of leverage, confirming the consistency of the volatility-value-financial distress relationship implied by the proposed model. Finally, the main conclusions are presented.
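As a hedged illustration of the barrier idea (not the paper's model), the probability that firm value, modeled as a geometric Brownian motion, touches a lower barrier before a horizon T has a standard closed-form first-passage expression; all inputs below are invented.

from math import log, sqrt, exp
from statistics import NormalDist

def distress_probability(v0, barrier, mu, sigma, horizon):
    """P(min V_t <= barrier over [0, T]) for GBM dV = mu*V dt + sigma*V dW (first-passage formula)."""
    phi = NormalDist().cdf
    nu = mu - 0.5 * sigma**2
    b = log(barrier / v0)                  # negative when the barrier lies below current firm value
    s = sigma * sqrt(horizon)
    return phi((b - nu * horizon) / s) + exp(2 * nu * b / sigma**2) * phi((b + nu * horizon) / s)

print(distress_probability(v0=100.0, barrier=60.0, mu=0.05, sigma=0.35, horizon=2.0))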
Temporal scaling in information propagation.
Huang, Junming; Li, Chao; Wang, Wen-Qiang; Shen, Hua-Wei; Li, Guojie; Cheng, Xue-Qi
2014-06-18
For the study of information propagation, one fundamental problem is uncovering universal laws governing the dynamics of information propagation. This problem, from the microscopic perspective, is formulated as estimating the propagation probability that a piece of information propagates from one individual to another. Such a propagation probability generally depends on two major classes of factors: the intrinsic attractiveness of information and the interactions between individuals. Despite the fact that the temporal effect of attractiveness is widely studied, temporal laws underlying individual interactions remain unclear, causing inaccurate prediction of information propagation on evolving social networks. In this report, we empirically study the dynamics of information propagation, using the dataset from a population-scale social media website. We discover a temporal scaling in information propagation: the probability a message propagates between two individuals decays with the length of time latency since their latest interaction, obeying a power-law rule. Leveraging the scaling law, we further propose a temporal model to estimate future propagation probabilities between individuals, reducing the error rate of information propagation prediction from 6.7% to 2.6% and improving viral marketing with 9.7% incremental customers.
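A minimal sketch of how the reported scaling law could be estimated and used for prediction: regress log propagation probability on log time latency; the latency bins and probabilities below are illustrative, not the paper's data.

import numpy as np

latency_days = np.array([1, 2, 4, 8, 16, 32, 64], dtype=float)
prop_prob = np.array([0.080, 0.055, 0.038, 0.026, 0.018, 0.012, 0.008])   # illustrative propagation probabilities

# Power-law decay: fit a straight line in log-log space
slope, intercept = np.polyfit(np.log(latency_days), np.log(prop_prob), 1)
print(f"power-law exponent ~ {slope:.2f}")

def predicted_probability(latency):
    """Predict a future propagation probability from the fitted scaling law."""
    return np.exp(intercept) * latency ** slope

print(predicted_probability(10.0))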
Dittrich, Eva; Riklin Raviv, Tammy; Kasprian, Gregor; Donner, René; Brugger, Peter C; Prayer, Daniela; Langs, Georg
2014-01-01
Prenatal neuroimaging requires reference models that reflect the normal spectrum of fetal brain development, and summarize observations from a representative sample of individuals. Collecting a sufficiently large data set of manually annotated data to construct a comprehensive in vivo atlas of rapidly developing structures is challenging but necessary for large population studies and clinical application. We propose a method for the semi-supervised learning of a spatio-temporal latent atlas of fetal brain development, and corresponding segmentations of emerging cerebral structures, such as the ventricles or cortex. The atlas is based on the annotation of a few examples, and a large number of imaging data without annotation. It models the morphological and developmental variability across the population. Furthermore, it serves as a basis for the estimation of a structure's morphological age, and its deviation from the nominal gestational age during the assessment of pathologies. Experimental results covering the period of 20-30 gestational weeks demonstrate segmentation accuracy achievable with minimal annotation, and precision of morphological age estimation. Age estimation results on fetuses suffering from lissencephaly demonstrate that the method detects significant differences in the age offset compared to a control group. Copyright © 2013. Published by Elsevier B.V.
Ferrari, Alberto; Ginis, Pieter; Hardegger, Michael; Casamassima, Filippo; Rocchi, Laura; Chiari, Lorenzo
2016-07-01
Gait impairments are among the most disabling symptoms in several musculoskeletal and neurological conditions, severely limiting personal autonomy. Wearable gait sensors have been attracting attention as a diagnostic tool for gait and are emerging as a promising tool for tutoring and guiding gait execution. Although their popularity is continuously growing, there is still room for improvement, especially towards more accurate solutions for spatio-temporal gait parameter estimation. We present an implementation of a zero-velocity-update gait analysis system based on a Kalman filter and off-the-shelf shoe-worn inertial sensors. The algorithms for gait events and step length estimation were specifically designed to comply with pathological gait patterns. Moreover, an Android app was deployed to support fully wearable and stand-alone real-time gait analysis. Twelve healthy subjects were enrolled to preliminarily tune the algorithms; afterwards sixteen persons with Parkinson's disease were enrolled for a validation study. Over the 1314 strides collected on patients at three different speeds, the total root mean square difference on step length estimation between this system and a gold standard was 2.9%. This shows that the proposed method allows for an accurate gait analysis and paves the way to a new generation of mobile devices usable anywhere for monitoring and intervention.
Yunxiang Jin
2014-02-01
Grassland biomass is essential for maintaining grassland ecosystems. Moreover, biomass is an important characteristic of grassland. In this study, we combined field sampling with remote sensing data and calculated five vegetation indices (VIs). Using this combined information, we quantified a remote sensing estimation model and estimated biomass in a temperate grassland of northern China. We also explored the dynamic spatio-temporal variation of biomass from 2006 to 2012. Our results indicated that all VIs investigated in the study were strongly correlated with biomass (α < 0.01). The precision of the model for estimating biomass based on ground data and remote sensing was greater than 73%. Additionally, the results of our analysis indicated that the annual average biomass was 11.86 million tons and that the average yield was 604.5 kg/ha. The distribution of biomass exhibited substantial spatial heterogeneity, and the biomass decreased from the eastern portion of the study area to the western portion. The interannual biomass exhibited strong fluctuations during 2006–2012, with a coefficient of variation of 26.95%. The coefficient of variation of biomass differed among the grassland types. The highest coefficient of variation was found for the desert steppe, followed by the typical steppe and the meadow steppe.
Mass estimation of MAXI J1659-152 during spectral and temporal analysis with TCAF and POS models
Molla, Aslam Ali; Debnath, Dipak; Chakrabarti, Sandip Kumar; Mondal, Santanu; Jana, Arghajit; Chatterjee, Debjit
2016-07-01
The Galactic transient black hole candidate (BHC) MAXI J1659-152 showed its first X-ray outburst on 25 September 2010. We make a detailed spectral and temporal study of this outburst with RXTE/PCA data. The spectral analysis was made with the Two Component Advective Flow (TCAF) model FITS file as an additive table model in XSPEC. While fitting spectra with TCAF, we note that the model-fitted normalization (N) remains almost constant (129.7 - 146.3), which led us to calculate the mass of the black hole (BH). We then refitted all the spectra with a fixed normalization value of 139 (calculated from weighted averaging of the N values), and found that the mass of the BH lies in the range of 4.69-7.75 M_Sun. It is to be noted that in the TCAF model FITS file, mass is an input parameter. We also calculated the mass of the BH from our study of the QPO frequency evolution during the declining phase of the outburst with the Propagating Oscillatory Shock (POS) model. We observe that in the declining phase of the outburst the shock moves away from the black hole as the QPO frequency decreases. We obtain our best fit of the QPO evolution using a BH mass of 6 M_Sun, and acceptable fits (reduced chi-square < 1.5) for the mass range of 5.08-7.38 M_Sun, which lies within the range of mass obtained from our spectral fit. So, from the study of spectral and temporal variability of this source we conclude the probable mass range of the black hole to be 4.69 - 7.75 M_Sun.
Earth Data Analysis Center, University of New Mexico — USFS, State Forestry, BLM, and DOI fire occurrence point locations from 1987 to 2008 were combined and converted into a fire occurrence probability or density grid...
Shiryaev, Albert N
2016-01-01
This book contains a systematic treatment of probability from the ground up, starting with intuitive ideas and gradually developing more sophisticated subjects, such as random walks, martingales, Markov chains, the measure-theoretic foundations of probability theory, weak convergence of probability measures, and the central limit theorem. Many examples are discussed in detail, and there are a large number of exercises. The book is accessible to advanced undergraduates and can be used as a text for independent study. To accommodate the greatly expanded material in the third edition of Probability, the book is now divided into two volumes. This first volume contains updated references and substantial revisions of the first three chapters of the second edition. In particular, new material has been added on generating functions, the inclusion-exclusion principle, theorems on monotonic classes (relying on a detailed treatment of “π-λ” systems), and the fundamental theorems of mathematical statistics.
Verma, S.; Reddy, D. Manigopal; Ghosh, S.; Kumar, D. Bharath; Chowdhury, A. Kundu
2017-10-01
We estimated the latest spatially and temporally resolved gridded constrained black carbon (BC) emissions over the Indian region using a strategic integrated modelling approach. This was done by extracting information on initial bottom-up emissions and atmospheric BC concentration from a general circulation model (GCM) simulation in conjunction with the receptor modelling approach. Monthly BC emission (83-364 Gg) obtained from the present study exhibited a spatial and temporal variability, with this being the highest (lowest) during February (July). Monthly BC emission flux was considerably high (> 100 kg km⁻²) over the entire Indo-Gangetic plain (IGP) and the east and west coasts during winter months, and was relatively higher over central and western India than over the IGP during summer months. The annual BC emission rate was 2534 Gg y⁻¹, with the IGP and central India respectively comprising 50% and 40% of the total annual BC emissions over India. A high relative increase was observed in modified BC emissions (more than five times the initial emissions) over most of the IGP, the east coast, and central/northwestern India. The relative predominance of monthly BC emission flux over a region (as depicted from z-score distribution maps) was inferred to be consistent with the prevalence of region- and season-specific anthropogenic activity.
Rao, Xiongbin; Lau, Vincent K. N.
2015-09-01
In this paper, we consider the problem of compressive sensing (CS) recovery with a prior support and the prior support quality information available. Different from classical works which exploit prior support blindly, we shall propose novel CS recovery algorithms to exploit the prior support adaptively based on the quality information. We analyze the distortion bound of the recovered signal from the proposed algorithm and we show that a better quality prior support can lead to better CS recovery performance. We also show that the proposed algorithm would converge in $\mathcal{O}(\log \mathrm{SNR})$ steps. To tolerate possible model mismatch, we further propose some robustness designs to combat incorrect prior support quality information. Finally, we apply the proposed framework to sparse channel estimation in massive MIMO systems with temporal correlation to further reduce the required pilot training overhead.
Gran, Fredrik; Jensen, Jørgen Arendt
2006-01-01
This paper investigates the possibility of flow estimation using spatio-temporal encoding of the transmissions in synthetic transmit aperture imaging (STA). The spatial encoding is based on a frequency division approach. In STA, a major disadvantage is that only a single transmitter (denoting....... In receive, the signals are separated using a simple filtering operation. To attain high axial resolution, broadband spectra must be synthesized for each of the transmitters. By multiplexing the different waveforms on different transmitters over a number of transmissions, this can be accomplished. To further...... were simulated for each angle at 0.10, 0.25, 0.50, and 1.00 m/s. The mean relative bias with respect to the peak flow for the three angles was less than 2%, 2%, and 4%, respectively.
宋广文; 夏星星; 李承宗; 何云凤
2012-01-01
Study 1 examined the effects of temporal distance and cover story on cognitive relevance; the results showed significant main effects of temporal distance and cover story as well as a significant interaction between them. Under the bad cover story, cognitive relevance was higher for events occurring 1 year later than for events 1 week, 5 years, or 50 years later, i.e., such events received the most weight; under the good cover story, differences in cognitive relevance among the four temporal distances were not significant. Study 2 examined the effects of cover story, framing, and probability on risk preference at four temporal distances (1 week, 1 year, 5 years, and 50 years). The results showed that only at a temporal distance of 1 week did risk propensity differ significantly across probability levels under the bad cover story. In addition, at high probability levels, the effects of cover story and of the cover story by framing interaction on risk preference were significant: under the bad cover story, the proportion of risk aversion was higher under positive framing than under negative framing, consistent with the framing effect. Taken together, Studies 1 and 2 indicate that the framing effect arises precisely when cognitive relevance is lowest. When making choices, people are sensitive to the way in which the problem is presented. This sensitivity was well exemplified by the framing effect initially described by Tversky and Kahneman (1984). In the well-known "Asian disease problem", they found that the majority were risk averse when the options of the problem were framed positively, yet turned to be risk seeking when the options were framed negatively. The first study investigated 625 Chinese college students' cognitive relativity under different temporal distances and cover stories. The results of the questionnaire survey showed the main effects of temporal distances and cover stories were both significant, and the interaction was also significant. Cognitive relevance was greatest when the temporal distance was 1 year under the bad cover story. The second study used scenarios that were similar to the Asian disease problem to research how the characteristics of the cover stories, framing effects and probability levels influence risk preference at different temporal distances. In order to examine whether
Pu, H. C.; Lin, C. H.
2016-05-01
To investigate the seismic behavior of crustal deformation, we deployed a dense seismic network in the Hsinchu area of northwestern Taiwan between 2004 and 2006. Based on the abundant local micro-earthquakes recorded by this network, we successfully determined 274 focal mechanisms among ∼1300 seismic events. Interestingly, the dominant energy of strike-slip and normal faulting mechanisms repeatedly alternated over the two years. Moreover, the strike-slip and normal faulting earthquakes were largely accompanied, respectively, by surface slip along N60°E and by uplift obtained from the continuous GPS data. These phenomena probably resulted from slow uplift in the mid-crust beneath the northwestern Taiwan area. When the deep slow uplift was active below 10 km depth along either the boundary fault or a blind fault, the push of the uplifting material would simultaneously produce normal faulting earthquakes at shallow depths (0-10 km) and slight surface uplift. When the deep slow uplift stopped, strike-slip faulting earthquakes would instead dominate as usual, due to the strong horizontal plate convergence in Taiwan. Since normal faulting earthquakes repeatedly dominated every 6 or 7 months between 2004 and 2006, we may conclude that slow slip events in the mid-crust frequently released accumulated tectonic stress in the Hsinchu area.
Catoire, Laurent; Naudet, Valérie
2004-12-01
A simple empirical equation is presented for the estimation of closed-cup flash points for pure organic liquids. Data needed for the estimation of a flash point (FP) are the normal boiling point (Teb), the standard enthalpy of vaporization at 298.15 K [ΔvapH°(298.15 K)] of the compound, and the number of carbon atoms (n) in the molecule. The bounds for this equation are: -100⩽FP(°C)⩽+200; 250⩽Teb(K)⩽650; 20⩽ΔvapH°(298.15 K)/(kJ mol-1)⩽110; 1⩽n⩽21. Compared to other methods (empirical equations, structural group contribution methods, and neural network quantitative structure-property relationships), this simple equation is shown to predict accurately the flash points for a variety of compounds, whatever their chemical groups (monofunctional compounds and polyfunctional compounds) and whatever their structure (linear, branched, cyclic). The same equation is shown to be valid for hydrocarbons, organic nitrogen compounds, organic oxygen compounds, organic sulfur compounds, organic halogen compounds, and organic silicone compounds. It seems that the flash points of organic deuterium compounds, organic tin compounds, organic nickel compounds, organic phosphorus compounds, organic boron compounds, and organic germanium compounds can also be predicted accurately by this equation. A mean absolute deviation of about 3 °C, a standard deviation of about 2 °C, and a maximum absolute deviation of 10 °C are obtained when predictions are compared to experimental data for more than 600 compounds. For all these compounds, the absolute deviation is equal to or lower than the reproducibility expected at a 95% confidence level for closed-cup flash point measurement. This estimation technique has its limitations concerning the polyhalogenated compounds for which the equation should be used with caution. The mean absolute deviation and maximum absolute deviation observed and the fact that the equation provides unbiased predictions lead to the conclusion that
Lexicographic Probability, Conditional Probability, and Nonstandard Probability
2009-11-11
the following conditions: CP1. µ(U | U) = 1 if U ∈ F′. CP2. µ(V1 ∪ V2 | U) = µ(V1 | U) + µ(V2 | U) if V1 ∩ V2 = ∅, U ∈ F′, and V1, V2 ∈ F. CP3. µ(V | U) = µ(V | X) × µ(X | U) if V ⊆ X ⊆ U, U, X ∈ F′, V ∈ F. Note that it follows from CP1 and CP2 that µ(· | U) is a probability measure on (W, F) (and, in... CP2 hold. This is easily seen to determine µ. Moreover, µ vacuously satisfies CP3, since there do not exist distinct sets U and X in F′ such that U
Vargas-Melendez, Leandro; Boada, Beatriz L; Boada, Maria Jesus L; Gauchia, Antonio; Diaz, Vicente
2017-04-29
Vehicles with a high center of gravity (COG), such as light trucks and heavy vehicles, are prone to rollover. This kind of accident causes nearly 33% of all deaths from passenger vehicle crashes. Nowadays, these vehicles are incorporating roll stability control (RSC) systems to improve their safety. Most of the RSC systems require the vehicle roll angle as a known input variable to predict the lateral load transfer. The vehicle roll angle can be directly measured by a dual antenna global positioning system (GPS), but it is expensive. For this reason, it is important to estimate the vehicle roll angle from sensors already installed onboard current vehicles. On the other hand, knowledge of the vehicle's parameter values is essential to obtain an accurate vehicle response. Some vehicle parameters cannot be easily obtained and they can vary over time. In this paper, an algorithm for the simultaneous on-line estimation of the vehicle's roll angle and parameters is proposed. This algorithm uses a probability density function (PDF)-based truncation method in combination with a dual Kalman filter (DKF) to guarantee that both the vehicle's states and parameters remain within bounds that have a physical meaning, using the information obtained from sensors mounted on vehicles. Experimental results show the effectiveness of the proposed algorithm.
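A scalar illustration of the PDF-truncation idea (not the paper's dual Kalman filter): after an update, the Gaussian posterior is replaced by its truncation to a physically meaningful interval and the truncated mean and variance are used as the constrained estimate. The bounds and posterior values below are invented.

from scipy import stats

def truncate_estimate(mean, std, lower, upper):
    """Mean and variance of a Gaussian estimate truncated to the physical interval [lower, upper]."""
    a, b = (lower - mean) / std, (upper - mean) / std   # standardized bounds
    t_mean = stats.truncnorm.mean(a, b, loc=mean, scale=std)
    t_var = stats.truncnorm.var(a, b, loc=mean, scale=std)
    return t_mean, t_var

# Posterior from the filter says 0.62 rad +/- 0.15, but the roll angle must lie in [-0.5, 0.5] rad
print(truncate_estimate(mean=0.62, std=0.15, lower=-0.5, upper=0.5))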
SHA Yun-dong; GUO Xiao-peng; LIAO Lian-fang; XIE Li-juan
2011-01-01
Addressing the sonic fatigue problem of an aero-engine combustor liner structure under random acoustic loadings, an effective method for predicting the fatigue life of a structure under random loadings was studied. First, the probability distribution of the Von Mises stress of a thin-walled structure under random loadings was examined; the analysis suggested that the probability density function of the Von Mises stress process accords approximately with a two-parameter Weibull distribution, and formulas for calculating the Weibull parameters were given. Based on Miner's linear damage theory, a method for predicting the random sonic fatigue life from the stress probability density was developed, and a fatigue life prediction model was constructed. As an example, an aero-engine combustor liner structure was considered. The power spectral density (PSD) of the vibrational stress response was calculated using a coupled FEM/BEM (finite element method/boundary element method) model, and the fatigue life was estimated using the constructed model. Considering the influence of the wide frequency band, the calculated results were then modified. Comparative analysis shows that the sonic fatigue estimates for the combustor liner structure obtained using the Weibull distribution of Von Mises stress are somewhat more conservative than those obtained using the Dirlik distribution. The results show that the methods presented in this paper are practical for the random fatigue life analysis of aeronautical thin-walled structures.
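A hedged back-of-the-envelope version of the Weibull-plus-Miner estimate: with a Basquin S-N curve N(S) = C·S^(-m) and a two-parameter Weibull distribution of stress amplitudes, the expected Miner damage per cycle is E[S^m]/C, which gives a closed-form life estimate. All material constants and Weibull parameters below are illustrative, and the paper's wide-band correction is not included.

from math import gamma

def fatigue_life_seconds(weibull_scale, weibull_shape, basquin_C, basquin_m, cycle_rate_hz):
    # E[S**m] for a Weibull(scale, shape) stress-amplitude distribution
    expected_S_m = weibull_scale**basquin_m * gamma(1.0 + basquin_m / weibull_shape)
    damage_per_cycle = expected_S_m / basquin_C          # expected Miner damage per stress cycle
    return 1.0 / (damage_per_cycle * cycle_rate_hz)      # life = time until cumulative damage reaches 1

life = fatigue_life_seconds(weibull_scale=40.0, weibull_shape=1.9,   # MPa scale, shape (made-up values)
                            basquin_C=1.0e17, basquin_m=5.0, cycle_rate_hz=350.0)
print(f"estimated life ~ {life / 3600.0:.1f} hours")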
An estimate of the temporal fraction of cloud cover at San Pedro Mártir Observatory
Carrasco, E; Sánchez, L J; Avila, R; Cruz-González, I
2011-01-01
San Pedro M\\'artir in the Northwest of Mexico is the site of the Observatorio Astron\\'omico Nacional. It was one of the five candidates sites for the Thirty Meter Telescope, whose site-testing team spent four years measuring the atmospheric properties on site with a very complete array of instrumentation. Using the public database created by this team, we apply a novel method to solar radiation data to estimate the daytime fraction of time when the sky is clear of clouds. We analyse the diurnal, seasonal and annual cycles of cloud cover. We find that 82.4 per cent of the time the sky is clear of clouds. Our results are consistent with those obtained by other authors, using different methods, adding support to this value and proving the potential of the applied method. The clear conditions at the site are particularly good showing that San Pedro M\\'artir is an excellent site for optical and infrared observations.
Vasco, D.W.; Ferretti, Alessandro; Novali, Fabrizio
2008-05-01
Transient pressure variations within a reservoir can be treated as a propagating front and analyzed using an asymptotic formulation. From this perspective one can define a pressure 'arrival time' and formulate solutions along trajectories, in the manner of ray theory. We combine this methodology and a technique for mapping overburden deformation into reservoir volume change as a means to estimate reservoir flow properties, such as permeability. Given the entire 'travel time' or phase field, obtained from the deformation data, we can construct the trajectories directly, thereby linearizing the inverse problem. A numerical study indicates that, using this approach, we can infer large-scale variations in flow properties. In an application to Interferometric Synthetic Aperture Radar (InSAR) observations associated with a CO2 injection at the Krechba field, Algeria, we image pressure propagation to the northwest. An inversion for flow properties indicates a linear trend of high permeability. The high permeability correlates with a northwest trending fault on the flank of the anticline which defines the field.
Wilson, James D; Woodall, William H
2016-01-01
In many applications it is of interest to identify anomalous behavior within a dynamic interacting system. Such anomalous interactions are reflected by structural changes in the network representation of the system. We propose and investigate the use of a dynamic version of the degree corrected stochastic block model (DCSBM) as a means to model and monitor dynamic networks that undergo a significant structural change. Our model provides a means to simulate a variety of local and global changes in a time-varying network. Furthermore, one can efficiently detect such changes using the maximum likelihood estimates of the parameters that characterize the DCSBM. We assess the utility of the dynamic DCSBM on both simulated and real networks. Using a simple monitoring strategy on the DCSBM, we are able to detect significant changes in the U.S. Senate co-voting network that reflects both times of cohesion and times of polarization among Republican and Democratic members. Our analysis suggests that the dynamic DCSBM pr...
An estimate of the temporal fraction of cloud cover at San Pedro Mártir Observatory
Carrasco, E.; Carramiñana, A.; Sánchez, L. J.; Avila, R.; Cruz-González, I.
2012-02-01
San Pedro Mártir in the north-west of Mexico is the site of the Observatorio Astronómico Nacional. It was one of the five candidate sites for the Thirty Meter Telescope, whose site-testing team spent four years measuring the atmospheric properties on site with a very complete array of instrumentation. Using the public data base created by this team, we apply a novel method to solar radiation data to estimate the daytime fraction of time when the sky is clear of clouds. We analyse the diurnal, seasonal and annual cycles of cloud cover. We find that 82.4 per cent of the time the sky is clear of clouds. Our results are consistent with those obtained by other authors, using different methods, adding support to this value and proving the potential of the applied method. The clear conditions at the site are particularly good showing that San Pedro Mártir is an excellent site for optical and infrared observations.
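A minimal sketch of the solar-radiation screening idea, assuming a simplistic clear-sky stand-in and threshold; the paper's actual method, data, and threshold are not reproduced here.

import numpy as np

rng = np.random.default_rng(6)
n = 10_000                                            # daytime samples
clear_sky = 600 + 400 * rng.random(n)                 # stand-in for a modeled clear-sky irradiance, W/m^2
cloudy = rng.random(n) < 0.18                         # "true" cloudy flags used only to simulate data
measured = clear_sky * np.where(cloudy, rng.uniform(0.2, 0.7, n), rng.uniform(0.9, 1.0, n))

clear_flag = measured >= 0.8 * clear_sky              # flag samples whose clear-sky index exceeds a threshold
print(f"estimated clear fraction: {clear_flag.mean():.1%}")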
Spatio-temporal-based joint range and angle estimation for wideband signals
Villemin, Guilhem; Fossati, Caroline; Bourennane, Salah
2013-12-01
Object localization using an active sensor network that exploits the scattering of waves emitted by a transmitter has drawn considerable research interest in recent years. For most applications, the environment leads to the arrival of multiple signals corresponding to the emitted signal, signals scattered by the objects, and noise. In practical systems, the signals impinging on an array are frequently correlated, and the number of objects rapidly exceeds the number of sensors, making most high-resolution methods used in array processing unsuitable. We propose a solution to overcome these two experimental constraints. Firstly, frequential smoothing is used to decorrelate the scattered signals, enabling the estimation of their time delays of arrival (TDOA) using subspace-based methods. Secondly, an efficient algorithm for source localization using the TDOA is proposed. The advantage of the developed method is its efficiency even if the number of sources is larger than the number of sensors, in the presence of correlated signals. The performance of the proposed method is assessed on simulated signals. Results on real-world data are also presented and analyzed.
Martin, E. R.; Dou, S.; Lindsey, N.; Chang, J. P.; Biondi, B. C.; Ajo Franklin, J. B.; Wagner, A. M.; Bjella, K.; Daley, T. M.; Freifeld, B. M.; Robertson, M.; Ulrich, C.; Williams, E. F.
2016-12-01
Localized strong sources of noise in an array have been shown to cause artifacts in Green's function estimates obtained via cross-correlation. Their effect is often reduced through the use of cross-coherence. Beyond independent localized sources, temporally or spatially correlated sources of noise frequently occur in practice but violate basic assumptions of much of the theory behind ambient noise Green's function retrieval. These correlated noise sources can occur in urban environments due to transportation infrastructure, or in areas around industrial operations like pumps running at CO2 sequestration sites or oil and gas drilling sites. Better understanding of these artifacts should help us develop and justify methods for their automatic removal from Green's function estimates. We derive expected artifacts in cross-correlations from several distributions of correlated noise sources including point sources that are exact time-lagged repeats of each other and Gaussian-distributed in space and time with covariance that exponentially decays. Assuming the noise distribution stays stationary over time, the artifacts become more coherent as more ambient noise is included in the Green's function estimates. We support our results with simple computational models. We observed these artifacts in Green's function estimates from a 2015 ambient noise study in Fairbanks, AK where a trenched distributed acoustic sensing (DAS) array was deployed to collect ambient noise alongside a road with the goal of developing a permafrost thaw monitoring system. We found that joints in the road repeatedly being hit by cars travelling at roughly the speed limit led to artifacts similar to those expected when several points are time-lagged copies of each other. We also show test results of attenuating the effects of these sources during time-lapse monitoring of an active thaw test in the same location with noise detected by a 2D trenched DAS array.
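The contrast between cross-correlation and cross-coherence described above can be illustrated with a small synthetic experiment; the trace lengths, amplitudes and lag below are arbitrary assumptions, not data from the Fairbanks deployment:

```python
# Two receivers record diffuse noise plus one strong localized source that arrives
# with a fixed lag. Raw cross-correlation is dominated by the source's amplitude;
# cross-coherence normalizes the spectra before correlating, reducing that dominance.
import numpy as np

rng = np.random.default_rng(1)
n = 4096
diffuse_a, diffuse_b = rng.standard_normal(n), rng.standard_normal(n)

burst = np.zeros(n)
burst[1000:1100] = 20.0 * rng.standard_normal(100)   # strong localized source
lag = 50
trace_a = diffuse_a + burst
trace_b = diffuse_b + np.roll(burst, lag)

A, B = np.fft.rfft(trace_a), np.fft.rfft(trace_b)
xcorr = np.fft.irfft(np.conj(A) * B)                                   # cross-correlation
xcoh = np.fft.irfft(np.conj(A) * B / (np.abs(A) * np.abs(B) + 1e-12))  # cross-coherence

def peak_to_background(x):
    return np.max(np.abs(x)) / np.median(np.abs(x))

print("cross-correlation: peak lag", int(np.argmax(np.abs(xcorr))),
      "peak/background", round(peak_to_background(xcorr), 1))
print("cross-coherence:   peak lag", int(np.argmax(np.abs(xcoh))),
      "peak/background", round(peak_to_background(xcoh), 1))
```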
Sibois, Aurore E.; Desai, Shailen D.; Bertiger, Willy; Haines, Bruce J.
2017-02-01
We present results from the generation of 10-year-long continuous time series of the Earth's polar motion at 15-min temporal resolution using Global Positioning System ground data. From our results, we infer an overall noise level in our high-rate polar motion time series of 60 μas (RMS). However, a spectral decomposition of our estimates indicates a noise floor of 4 μas at periods shorter than 2 days, which enables recovery of diurnal and semidiurnal tidally induced polar motion. We deliberately place no constraints on retrograde diurnal polar motion despite its inherent ambiguity with long-period nutation. With this approach, we are able to resolve damped manifestations of the effects of the diurnal ocean tides on retrograde polar motion. As such, our approach is at least capable of discriminating between a historical background nutation model that excludes the effects of the diurnal ocean tides and modern models that include those effects. To assess the quality of our polar motion solution outside of the retrograde diurnal frequency band, we focus on its capability to recover tidally driven and non-tidal variations manifesting at the ultra-rapid (intra-daily) and rapid (characterized by periods ranging from 2 to 20 days) periods. We find that our best estimates of diurnal and semidiurnal tidally induced polar motion result from an approach that adopts, at the observation level, a reasonable background model of these effects. We also demonstrate that our high-rate polar motion estimates yield similar results to daily-resolved polar motion estimates, and therefore do not compromise the ability to resolve polar motion at periods of 2-20 days.
Rojas-Nandayapa, Leonardo
Tail probabilities of sums of heavy-tailed random variables are of a major importance in various branches of Applied Probability, such as Risk Theory, Queueing Theory, Financial Management, and are subject to intense research nowadays. To understand their relevance one just needs to think...... of insurance companies facing losses due to natural disasters, banks seeking protection against huge losses, failures in expensive and sophisticated systems or loss of valuable information in electronic systems. The main difficulty when dealing with this kind of problems is the unavailability of a closed...
Varadhan, S. R. S.
2001-01-01
This volume presents topics in probability theory covered during a first-year graduate course given at the Courant Institute of Mathematical Sciences. The necessary background material in measure theory is developed, including the standard topics, such as extension theorem, construction of measures, integration, product spaces, Radon-Nikodym theorem, and conditional expectation. In the first part of the book, characteristic functions are introduced, followed by the study of weak convergence of probability distributions. Then both the weak and strong limit theorems for sums of independent rando
Nakamura, Toshio; Koike, Hiroko; Aizawa, Jun; Okuno, Mitsuru
2015-10-01
In this study, 14C analysis by accelerator mass spectrometry (AMS) was applied to age estimation based on temporal variations in bomb-produced 14C contents of a full elephant tusk registered at Kyushu University. The tusk measured 175 cm long and 13.8 cm in diameter at the root. Thirty tusk-fragment samples were used for 14C analysis with AMS to estimate the formation ages of different positions according to catalogued global 14C contents (F14C). The F14C value of the tip of the tusk suggested that the elephant was born around 1980, while that of the root suggested death around 1994, a lifespan of at least 14 years, a rather shorter period than the average lifetime of an elephant (ca. 80 years). In addition, the F14C values of fragments collected along a cross-sectional line suggested that the outer part of the tusk formed first, with inner parts being deposited gradually with growth.
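A hedged sketch of the dating step: a measured F14C value is mapped to a formation year by interpolating the falling limb of the post-bomb atmospheric curve. The curve points below are rough, illustrative numbers, not a published calibration data set:

```python
import numpy as np

# Hypothetical points on the falling limb of the bomb curve (year, F14C).
years = np.array([1965, 1970, 1975, 1980, 1985, 1990, 1995, 2000])
f14c  = np.array([1.70, 1.55, 1.40, 1.28, 1.20, 1.15, 1.11, 1.08])

def year_from_f14c(value, years=years, curve=f14c):
    """Invert the monotonically decreasing falling limb by linear interpolation."""
    return float(np.interp(value, curve[::-1], years[::-1]))

print("tusk tip  (F14C ~ 1.28) -> formed around", round(year_from_f14c(1.28)))
print("tusk root (F14C ~ 1.13) -> formed around", round(year_from_f14c(1.13)))
```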
Gennai, S; Rallo, A; Keil, D; Seigneurin, A; Germi, R; Epaulard, O
2016-06-01
Herpes simplex virus (HSV) encephalitis is associated with a high risk of mortality and sequelae, and early diagnosis and treatment in the emergency department are necessary. However, most patients present with non-specific febrile, acute neurologic impairment; this may lead clinicians to overlook the diagnosis of HSV encephalitis. We aimed to identify which data collected in the first hours in a medical setting were associated with the diagnosis of HSV encephalitis. We conducted a multicenter retrospective case-control study in four French public hospitals from 2007 to 2013. The cases were the adult patients who received a confirmed diagnosis of HSV encephalitis. The controls were all the patients who attended the emergency department of Grenoble hospital with a febrile acute neurologic impairment, without HSV detection by polymerase chain reaction (PCR) in the cerebrospinal fluid (CSF), in 2012 and 2013. A multivariable logistic model was elaborated to estimate factors significantly associated with HSV encephalitis. Finally, an HSV probability score was derived from the logistic model. We identified 36 cases and 103 controls. Factors independently associated with HSV encephalitis were the absence of past neurological history (odds ratio [OR] 6.25 [95 % confidence interval (CI): 2.22-16.7]), the occurrence of seizure (OR 8.09 [95 % CI: 2.73-23.94]), a systolic blood pressure ≥140 mmHg (OR 5.11 [95 % CI: 1.77-14.77]), and a C-reactive protein probability score was calculated summing the value attributed to each independent factor. HSV encephalitis diagnosis may benefit from the use of this score based upon some easily accessible data. However, diagnostic evocation and probabilistic treatment must remain the rule.
Estimation of the Failure Probability of Brittle Fracture in Cracked Structures
薛红军; 吕国志
2001-01-01
We extend the probabilistic finite element method to probabilistic fracture mechanics using a singular element that embeds the near-crack-tip singular strain field. The procedure for computing derivatives of the stress intensity factors with respect to the various random variables is simplified through the adjoint approach. Statistical moments of the stress intensity factors are calculated for the various uncertain variables, the reliability index is determined using an optimization procedure, and the probability of brittle fracture is obtained with a first-order reliability method. A mode I fracture example demonstrates the effectiveness of the computational model and procedure, making the method useful for estimating the safety and the probability of fracture of a flawed structure.
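A simplified illustration of the failure-probability idea (plain Monte Carlo on a mode I limit state rather than the paper's stochastic finite element model); all distribution parameters below are assumptions:

```python
# Limit state g = K_IC - Y * sigma * sqrt(pi * a); failure when g < 0.
# A first-order reliability method would approximate the same probability as Phi(-beta).
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
sigma = rng.normal(150.0, 15.0, n)                 # applied stress, MPa (assumed)
a = rng.lognormal(np.log(0.02), 0.25, n)           # crack depth, m (assumed)
k_ic = rng.normal(60.0, 6.0, n)                    # fracture toughness, MPa*sqrt(m) (assumed)
Y = 1.12                                           # geometry factor for a surface crack

k_i = Y * sigma * np.sqrt(np.pi * a)               # mode I stress intensity factor
p_f = np.mean(k_i > k_ic)                          # probability of brittle fracture
print(f"estimated failure probability: {p_f:.2e}")
```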
People's conditional probability judgments follow probability theory (plus noise).
Costello, Fintan; Watts, Paul
2016-09-01
A common view in current psychology is that people estimate probabilities using various 'heuristics' or rules of thumb that do not follow the normative rules of probability theory. We present a model where people estimate conditional probabilities such as P(A|B) (the probability of A given that B has occurred) by a process that follows standard frequentist probability theory but is subject to random noise. This model accounts for various results from previous studies of conditional probability judgment. This model predicts that people's conditional probability judgments will agree with a series of fundamental identities in probability theory whose form cancels the effect of noise, while deviating from probability theory in other expressions whose form does not allow such cancellation. Two experiments strongly confirm these predictions, with people's estimates on average agreeing with probability theory for the noise-cancelling identities, but deviating from probability theory (in just the way predicted by the model) for other identities. This new model subsumes an earlier model of unconditional or 'direct' probability judgment which explains a number of systematic biases seen in direct probability judgment (Costello & Watts, 2014). This model may thus provide a fully general account of the mechanisms by which people estimate probabilities.
APJE-SLIM Based Method for Marine Human Error Probability Estimation
席永涛; 陈伟炯; 夏少生; 张晓东
2011-01-01
Safety is the eternal theme of the shipping industry. Research shows that human error is the main cause of maritime accidents. In order to study marine human errors, the performance shaping factors (PSF) are discussed and the human error probability (HEP) is estimated under the influence of PSF. Based on a detailed investigation of human errors in collision avoidance behaviour, the most critical task in navigation, and of the associated PSF, the human reliability of mariners during collision avoidance is analysed using a combination of APJE and SLIM. Results show that PSF such as fatigue and health status, knowledge, experience and training, task complexity, and safety management and organizational effectiveness have varying degrees of influence on the HEP; improving the level of these PSF can greatly reduce the HEP. Using APJE to determine the absolute human error probabilities at the end points solves the problem that reference-point probabilities are hard to obtain in the SLIM method, and yields marine HEPs under different types and levels of PSF influence.
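A hedged sketch of the SLIM calculation with APJE-style anchor points; the PSF names, weights, ratings and anchor probabilities are invented for illustration, not the paper's elicited values:

```python
import math

def sli(weights, ratings):
    """Success Likelihood Index: weighted sum of normalized PSF ratings in [0, 1]."""
    total = sum(weights.values())
    return sum(weights[k] * ratings[k] for k in weights) / total

def slim_calibration(sli_good, hep_good, sli_bad, hep_bad):
    """Fit log10(HEP) = a * SLI + b from two anchor tasks (e.g. APJE estimates)."""
    a = (math.log10(hep_good) - math.log10(hep_bad)) / (sli_good - sli_bad)
    b = math.log10(hep_good) - a * sli_good
    return a, b

# Hypothetical PSF weights and ratings for one collision-avoidance task.
weights = {"fatigue": 0.30, "experience": 0.25, "task_complexity": 0.25, "safety_mgmt": 0.20}
ratings = {"fatigue": 0.4, "experience": 0.7, "task_complexity": 0.5, "safety_mgmt": 0.6}

a, b = slim_calibration(sli_good=0.9, hep_good=1e-4, sli_bad=0.1, hep_bad=1e-1)
s = sli(weights, ratings)
hep = 10 ** (a * s + b)
print(f"SLI = {s:.2f}, estimated HEP = {hep:.2e}")
```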
Laode M Golok Jaya
2017-07-01
This paper analyses the effect of temporal decorrelation on carbon stock estimation. Estimation of carbon stocks plays an important role in understanding the global carbon cycle in the atmosphere in the context of climate change mitigation. The PolInSAR technique combines the advantages of Polarimetric Synthetic Aperture Radar (PolSAR) and Interferometric Synthetic Aperture Radar (InSAR), which have made significant contributions to radar mapping technology in recent years. In carbon stock estimation, PolInSAR provides information about vertical vegetation structure to estimate carbon stocks in the forest layers. Two coherent Synthetic Aperture Radar (SAR) images from ALOS PALSAR full-polarimetric data with a 46-day temporal baseline were used in this research. The study was carried out in Southeast Sulawesi tropical forest. The research method compares three interferometric coherence images affected by temporal decorrelation and their impacts on the Random Volume over Ground (RVoG) model. This research showed that the 46-day temporal baseline has a significant impact on estimating tree heights of the forest cover, where the accuracy decreases from R2=0.7525 (standard deviation of tree heights 2.75 meters) to R2=0.4435 (standard deviation 4.68 meters) and R2=0.3772 (standard deviation 3.15 meters), respectively. However, coherence optimisation can provide the best coherence image to produce a good accuracy of carbon stocks.
Muths, Erin L.; Scherer, R. D.; Amburgey, S. M.; Matthews, T.; Spencer, A. W.; Corn, P.S.
2016-01-01
In an era of shrinking budgets yet increasing demands for conservation, the value of existing (i.e., historical) data are elevated. Lengthy time series on common, or previously common, species are particularly valuable and may be available only through the use of historical information. We provide first estimates of the probability of survival and longevity (0.67–0.79 and 5–7 years, respectively) for a subalpine population of a small-bodied, ostensibly common amphibian, the Boreal Chorus Frog (Pseudacris maculata (Agassiz, 1850)), using historical data and contemporary, hypothesis-driven information–theoretic analyses. We also test a priori hypotheses about the effects of color morph (as suggested by early reports) and of drought (as suggested by recent climate predictions) on survival. Using robust mark–recapture models, we find some support for early hypotheses regarding the effect of color on survival, but we find no effect of drought. The congruence between early findings and our analyses highlights the usefulness of historical information in providing raw data for contemporary analyses and context for conservation and management decisions.
Livingston, Richard A.; Jin, Shuang
2005-05-01
Bridges and other civil structures can exhibit nonlinear and/or chaotic behavior under ambient traffic or wind loadings. The probability density function (pdf) of the observed structural responses thus plays an important role for long-term structural health monitoring, LRFR and fatigue life analysis. However, the actual pdf of such structural response data often has a very complicated shape due to its fractal nature. Various conventional methods to approximate it can often lead to biased estimates. This paper presents recent research progress at the Turner-Fairbank Highway Research Center of the FHWA in applying a novel probabilistic scaling scheme for enhanced maximum entropy evaluation to find the most unbiased pdf. The maximum entropy method is applied with a fractal interpolation formulation based on contraction mappings through an iterated function system (IFS). Based on a fractal dimension determined from the entire response data set by an algorithm involving the information dimension, a characteristic uncertainty parameter, called the probabilistic scaling factor, can be introduced. This allows significantly enhanced maximum entropy evaluation through the added inferences about the fine scale fluctuations in the response data. Case studies using the dynamic response data sets collected from a real world bridge (Commodore Barry Bridge, PA) and from the simulation of a classical nonlinear chaotic system (the Lorenz system) are presented in this paper. The results illustrate the advantages of the probabilistic scaling method over conventional approaches for finding the unbiased pdf especially in the critical tail region that contains the larger structural responses.
Rønjom, Marianne Feen; Brink, Carsten; Laugaard Lorenzen, Ebbe
2015-01-01
Background. To examine the variations of risk-estimates of radiation-induced hypothyroidism (HT) from our previously developed normal tissue complication probability (NTCP) model in patients with head and neck squamous cell carcinoma (HNSCC) in relation to variability of delineation of the thyroid...... gland. Patients and methods. In a previous study for development of an NTCP model for HT, the thyroid gland was delineated in 246 treatment plans of patients with HNSCC. Fifty of these plans were randomly chosen for re-delineation for a study of the intra- and inter-observer variability of thyroid......-observer variability resulted in a mean difference in thyroid volume and Dmean of 0.4 cm(3) (SD ± 1.6) and -0.5 Gy (SD ± 1.0), respectively, and 0.3 cm(3) (SD ± 1.8) and 0.0 Gy (SD ± 1.3) for inter-observer variability. The corresponding mean differences of NTCP values for radiation-induced HT due to intra- and inter...
Atwell, William; Tylka, Allan J.; Dietrich, William; Rojdev, Kristina; Matzkind, Courtney
2016-01-01
In an earlier paper (Atwell, et al., 2015), we investigated solar particle event (SPE) radiation exposures (absorbed dose) to small, thinly-shielded spacecraft during a period when the sunspot number (SSN) was less than 30. These SPEs contain Ground Level Events (GLE), sub-GLEs, and sub-sub-GLEs (Tylka and Dietrich, 2009, Tylka and Dietrich, 2008, and Atwell, et al., 2008). GLEs are extremely energetic solar particle events having proton energies extending into the several GeV range and producing secondary particles in the atmosphere, mostly neutrons, observed with ground station neutron monitors. Sub-GLE events are less energetic, extending into the several hundred MeV range, but do not produce secondary atmospheric particles. Sub-sub GLEs are even less energetic with an observable increase in protons at energies greater than 30 MeV, but no observable proton flux above 300 MeV. In this paper, we consider those SPEs that occurred during 1973-2010 when the SSN was greater than 30 but less than 50. In addition, we provide probability estimates of absorbed dose based on mission duration with a 95% confidence level (CL). We also discuss the implications of these data and provide some recommendations that may be useful to spacecraft designers of these smaller spacecraft.
The perception of probability.
Gallistel, C R; Krishan, Monika; Liu, Ye; Miller, Reilly; Latham, Peter E
2014-01-01
We present a computational model to explain the results from experiments in which subjects estimate the hidden probability parameter of a stepwise nonstationary Bernoulli process outcome by outcome. The model captures the following results qualitatively and quantitatively, with only 2 free parameters: (a) Subjects do not update their estimate after each outcome; they step from one estimate to another at irregular intervals. (b) The joint distribution of step widths and heights cannot be explained on the assumption that a threshold amount of change must be exceeded in order for them to indicate a change in their perception. (c) The mapping of observed probability to the median perceived probability is the identity function over the full range of probabilities. (d) Precision (how close estimates are to the best possible estimate) is good and constant over the full range. (e) Subjects quickly detect substantial changes in the hidden probability parameter. (f) The perceived probability sometimes changes dramatically from one observation to the next. (g) Subjects sometimes have second thoughts about a previous change perception, after observing further outcomes. (h) The frequency with which they perceive changes moves in the direction of the true frequency over sessions. (Explaining this finding requires 2 additional parametric assumptions.) The model treats the perception of the current probability as a by-product of the construction of a compact encoding of the experienced sequence in terms of its change points. It illustrates the why and the how of intermittent Bayesian belief updating and retrospective revision in simple perception. It suggests a reinterpretation of findings in the recent literature on the neurobiology of decision making.
Murdoch, M Elizabeth; Reif, John S; Mazzoil, Marilyn; McCulloch, Stephen D; Fair, Patricia A; Bossart, Gregory D
2008-09-01
Lobomycosis (lacaziosis) is a chronic fungal disease of the skin that affects only dolphins and humans. Previous studies have shown a high prevalence of lobomycosis in bottlenose dolphins (Tursiops truncatus) from the Indian River Lagoon, Florida (IRL). We studied the occurrence and distribution of lobomycosis in the IRL using photo-identification survey data collected between 1996 and 2006. Our objectives were to (1) determine the sensitivity and specificity of photo-identification for diagnosis of lobomycosis in free-ranging dolphins; (2) determine the spatial distribution of lobomycosis in the IRL; and (3) assess temporal patterns of occurrence. Photographs from 704 distinctly marked dolphins were reviewed for skin lesions compatible with lobomycosis. The presumptive diagnosis was validated by comparing the results of photographic analysis with physical examination and histologic examination of lesion biopsies in 102 dolphins captured and released during a health assessment and 3 stranded dolphins. Twelve of 16 confirmed cases were identified previously by photography, a sensitivity of 75%. Among 89 dolphins without disease, all 89 were considered negative, a specificity of 100%. The prevalence of lobomycosis estimated from photographic data was 6.8% (48/704). Spatial distribution was determined by dividing the IRL into six segments based on hydrodynamics and geographic features. The prevalence ranged from River. The incidence of the disease did not increase during the study period, indicating that the disease is endemic, rather than emerging. In summary, photo-identification is a useful tool to monitor the course of individual and population health for this enigmatic disease.
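The diagnostic performance figures quoted above can be reproduced directly from the reported counts:

```python
# Worked check of the quoted figures: 12/16 confirmed cases were photo-identified,
# all 89 disease-free dolphins were negative, and 48/704 surveyed dolphins were positive.
true_positives, confirmed_cases = 12, 16
true_negatives, disease_free = 89, 89
photo_positive, surveyed = 48, 704

sensitivity = true_positives / confirmed_cases
specificity = true_negatives / disease_free
prevalence = photo_positive / surveyed
print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}, "
      f"photo-based prevalence = {prevalence:.1%}")
```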
Janssen, Ronald J; Jylänki, Pasi; van Gerven, Marcel A J
2016-01-01
We have proposed a Bayesian approach for functional parcellation of whole-brain FMRI measurements which we call Clustered Activity Estimation with Spatial Adjacency Restrictions (CAESAR). We use distance-dependent Chinese restaurant processes (dd-CRPs) to define a flexible prior which partitions the voxel measurements into clusters whose number and shapes are unknown a priori. With dd-CRPs we can conveniently implement spatial constraints to ensure that our parcellations remain spatially contiguous and thereby physiologically meaningful. In the present work, we extend CAESAR by using Gaussian process (GP) priors to model the temporally smooth haemodynamic signals that give rise to the measured FMRI data. A challenge for GP inference in our setting is the cubic scaling with respect to the number of time points, which can become computationally prohibitive with FMRI measurements, potentially consisting of long time series. As a solution we describe an efficient implementation that is practically as fast as the corresponding time-independent non-GP model with typically-sized FMRI data sets. We also employ a population Monte-Carlo algorithm that can significantly speed up convergence compared to traditional single-chain methods. First we illustrate the benefits of CAESAR and the GP priors with simulated experiments. Next, we demonstrate our approach by parcellating resting state FMRI data measured from twenty participants as taken from the Human Connectome Project data repository. Results show that CAESAR affords highly robust and scalable whole-brain clustering of FMRI timecourses.
S.A. Margulis
2001-01-01
Global estimates of precipitation can now be made using data from a combination of geosynchronous and low earth-orbit satellites. However, revisit patterns of polar-orbiting satellites and the need to sample mixed-cloud scenes from geosynchronous satellites lead to a coarsening of the temporal resolution to the monthly scale. There are prohibitive limitations to the applicability of monthly-scale aggregated precipitation estimates in many hydrological applications. The nonlinear and threshold dependencies of surface hydrological processes on precipitation may cause the hydrological response of the surface to vary considerably based on the intermittent temporal structure of the forcing. Therefore, to make the monthly satellite data useful for hydrological applications (i.e. water balance studies, rainfall-runoff modelling, etc.), it is necessary to disaggregate the monthly precipitation estimates into shorter time intervals so that they may be used in surface hydrology models. In this study, two simple statistical disaggregation schemes are developed for use with monthly precipitation estimates provided by satellites. The two techniques are shown to perform relatively well in introducing a reasonable temporal structure into the disaggregated time series. An ensemble of disaggregated realisations was routed through two land surface models of varying complexity so that the error propagation that takes place over the course of the month could be characterised. Results suggest that one of the proposed disaggregation schemes can be used in hydrological applications without introducing significant error. Keywords: precipitation, temporal disaggregation, hydrological modelling, error propagation
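One simple disaggregation scheme of the general kind discussed above (not necessarily either of the paper's two) places wet days stochastically, draws exponential daily amounts, and rescales them to match the monthly satellite total; all parameters below are assumptions:

```python
import numpy as np

def disaggregate_month(monthly_total_mm, n_days=30, wet_day_prob=0.3,
                       mean_wet_amount_mm=8.0, rng=None):
    """Distribute a monthly precipitation total over daily values (illustrative only)."""
    rng = rng or np.random.default_rng()
    wet = rng.random(n_days) < wet_day_prob
    amounts = np.where(wet, rng.exponential(mean_wet_amount_mm, n_days), 0.0)
    if amounts.sum() == 0:                      # guarantee at least one wet day
        amounts[rng.integers(n_days)] = 1.0
    return amounts * (monthly_total_mm / amounts.sum())  # exact monthly closure

daily = disaggregate_month(monthly_total_mm=120.0, rng=np.random.default_rng(3))
print(f"days with rain: {(daily > 0).sum()}, monthly total = {daily.sum():.1f} mm")
```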
Rejani, R; Rao, K V; Osman, M; Srinivasa Rao, Ch; Reddy, K Sammi; Chary, G R; Pushpanjali; Samuel, Josily
2016-03-01
The ungauged wet semi-arid watershed cluster, Seethagondi, lies in the Adilabad district of Telangana in India and is prone to severe erosion and water scarcity. Runoff and soil loss data at watershed, catchment, and field level are necessary for planning soil and water conservation interventions. In this study, an attempt was made to develop a spatial soil loss estimation model for the Seethagondi cluster using RUSLE coupled with ArcGIS, and it was used to estimate the soil loss spatially and temporally. Daily rainfall data from APHRODITE for the period from 1951 to 2007 were used; the annual rainfall varied from 508 to 1351 mm, with a mean annual rainfall of 950 mm and a mean erosivity of 6789 MJ mm ha(-1) h(-1) year(-1). Considerable variation in land use and land cover, especially in crop land and fallow land, was observed during normal and drought years, with corresponding variation in the erosivity, C factor, and soil loss. The mean value of the C factor derived from NDVI for crop land was 0.42 and 0.22 in normal and drought years, respectively. The topography is undulating, a major portion of the cluster has slopes of less than 10°, and 85.3% of the cluster has soil loss below 20 t ha(-1) year(-1). The soil loss from crop land varied from 2.9 to 3.6 t ha(-1) year(-1) in low rainfall years to 31.8 to 34.7 t ha(-1) year(-1) in high rainfall years, with a mean annual soil loss of 12.2 t ha(-1) year(-1). The soil loss from crop land was higher in the month of August, with an annual soil loss of 13.1 and 2.9 t ha(-1) year(-1) in normal and drought years, respectively. Based on the soil loss in a normal year, the interventions recommended for 85.3% of the area of the watershed include agronomic measures such as contour cultivation, graded bunds, strip cropping, mixed cropping, crop rotations, mulching, summer plowing, vegetative bunds, agri-horticultural systems, and management practices such as broad bed furrow, raised sunken beds, and harvesting available water
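A minimal RUSLE sketch, A = R * K * LS * C * P, evaluated on a toy raster; apart from the mean erosivity and the C-factor values quoted above, the inputs are placeholders rather than the Seethagondi data:

```python
import numpy as np

R = 6789.0                                  # rainfall erosivity, MJ mm ha^-1 h^-1 yr^-1 (mean quoted above)
K = 0.01                                    # soil erodibility (placeholder)
LS = np.array([[0.5, 0.8, 1.2],
               [0.6, 1.0, 1.5],
               [0.4, 0.7, 1.1]])            # slope length-steepness factor per grid cell (placeholder)
P = 1.0                                     # support practice factor (none assumed)

# C factors for crop land as quoted above: 0.42 in a normal year, 0.22 in a drought year.
for label, C in [("normal year", 0.42), ("drought year", 0.22)]:
    A = R * K * LS * C * P                  # soil loss, t ha^-1 yr^-1
    print(f"{label}: mean soil loss {A.mean():.1f} t/ha/yr (max {A.max():.1f})")
```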
Soenario, Ivan; Helbich, Marco; Schmitz, Oliver; Strak, Maciek; Hoek, Gerard; Karssenberg, Derek
2017-04-01
Air pollution has been associated with adverse health effects (e.g., cardiovascular and respiratory diseases) in urban environments. Therefore, the assessment of people's exposure to air pollution is central in epidemiological studies. Exposure can be estimated at the individual level by combining location information across space and over time with spatio-temporal data on air pollution concentrations. When detailed information on people's space-time paths (e.g. commuting patterns calculated by means of spatial routing algorithms or tracked through GPS) and people's major activity locations (e.g. home location, work location) is available, it is possible to calculate more precise personal exposure levels depending on people's individual space-time mobility patterns. This requires air pollution values not only at a high level of spatial accuracy and high temporal granularity, but such data also need to be available on a nation-wide scale. As current data are seriously limited in this respect, we introduce a novel data set of NO2 levels across the Netherlands. The provided NO2 concentrations are accessible at hourly timestamps on a 5-meter grid cell resolution for weekdays and weekends, and for each month of the year. We built a single Land Use Regression (LUR) model using a five-year average of NO2 data from the Dutch NO2 measurement network consisting of N=46 sampling locations distributed over the country. Predictor variables for this model were selected in a data-driven manner using an Elastic Net and a Best Subset Selection procedure from 70 candidate predictors including traffic, industry, infrastructure and population-based variables. Subsequently, to model NO2 for each time scale (hour, week, month), the LUR coefficients were fitted using the NO2 data, aggregated per time scale. Model validation was grounded on independent data collected in an ad hoc measurement campaign. Our results show a considerable difference in urban concentrations between
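A hedged sketch of the elastic-net variable-selection step, using synthetic stand-ins for the 46 monitoring sites and 70 candidate predictors (not the Dutch network data):

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(7)
n_sites, n_candidates = 46, 70
X = rng.standard_normal((n_sites, n_candidates))     # traffic, industry, population, ... (synthetic)
true_coefs = np.zeros(n_candidates)
true_coefs[:5] = [4.0, -2.5, 3.0, 1.5, -1.0]         # only a few predictors actually matter
y = 25.0 + X @ true_coefs + rng.normal(0, 2.0, n_sites)   # five-year mean NO2, ug/m3 (synthetic)

# Cross-validated elastic net shrinks most coefficients to zero, acting as a selector.
model = ElasticNetCV(l1_ratio=[0.2, 0.5, 0.8, 1.0], cv=5, random_state=0).fit(X, y)
selected = np.flatnonzero(model.coef_)
print(f"selected {selected.size} of {n_candidates} candidate predictors: {selected[:10]}")
```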
von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo
2014-06-01
Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramér-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.
Ettema, Jehan Frans; Østergaard, Søren; Kristensen, Anders Ringgaard
2009-01-01
Cross sectional data on the prevalence of claw and (inter) digital skin diseases on 4854 Holstein Friesian cows in 50 Danish dairy herds was used in a Bayesian network to create herd specific probability distributions for the presence of lameness causing diseases. Parity and lactation stage...... probabilities and random herd effects are used to formulate cow-level probability distributions of disease presence in a specific Danish dairy herd. By step-wise inclusion of information on cow- and herd-level risk factors, lameness prevalence and clinical diagnosis of diseases on cows in the herd, the Bayesian...... network systematically adjusts the probability distributions for disease presence in the specific herd. Information on population-, herd- and cow-level is combined and the uncertainty in inference on disease probability is quantified....
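The step-wise updating idea can be sketched with a plain Bayes-rule calculation: a parity-specific prior for a lameness-causing disease is updated with an observed clinical sign. All probabilities below are invented for illustration and are not the fitted network's parameters:

```python
def posterior(prior, p_obs_given_disease, p_obs_given_healthy):
    """P(disease | observation) via Bayes' rule."""
    num = p_obs_given_disease * prior
    return num / (num + p_obs_given_healthy * (1.0 - prior))

prior_by_parity = {1: 0.10, 2: 0.18, 3: 0.25}        # assumed baseline prevalences by parity
p_lame_if_diseased, p_lame_if_healthy = 0.60, 0.05   # assumed "diagnostic" performance of lameness

for parity, prior in prior_by_parity.items():
    post = posterior(prior, p_lame_if_diseased, p_lame_if_healthy)
    print(f"parity {parity}: prior {prior:.2f} -> posterior after observed lameness {post:.2f}")
```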
Zhang, Zulin, E-mail: zulin.zhang@hutton.ac.uk [The James Hutton Institute, Craigiebuckler, Aberdeen AB15 8QH (United Kingdom); Le Velly, Morgane [The James Hutton Institute, Craigiebuckler, Aberdeen AB15 8QH (United Kingdom); Robert Gordon University, Institute for Innovation Design and Sustainability (IDEAS), Riverside East, Garthdee, Aberdeen AB10 7GJ (United Kingdom); Rhind, Stewart M.; Kyle, Carol E.; Hough, Rupert L. [The James Hutton Institute, Craigiebuckler, Aberdeen AB15 8QH (United Kingdom); Duff, Elizabeth I. [Biomathematics and Statistics Scotland, Craigiebuckler, Aberdeen AB15 8QH (United Kingdom); McKenzie, Craig [Robert Gordon University, Institute for Innovation Design and Sustainability (IDEAS), Riverside East, Garthdee, Aberdeen AB10 7GJ (United Kingdom)
2015-05-15
Temporal concentration trends of BPA in soils were investigated following sewage sludge application to pasture (study 1: short term sludge application; study 2: long term multiple applications over 13 years). The background levels of BPA in control soils were similar, ranging between 0.67–10.57 ng g{sup −1} (mean: 3.02 ng g{sup −1}) and 0.51–6.58 ng g{sup −1} (mean: 3.22 ng g{sup −1}) for studies 1 and 2, respectively. Concentrations in both treated and control plots increased over the earlier sampling times of the study to a maximum and then decreased over later sampling times, suggesting other sources of BPA to both the treated and control soils over the study period. In study 1 there was a significant treatment effect of sludge application in the autumn (p = 0.002) although no significant difference was observed between treatment and control soils in the spring. In study 2 treated soils contained considerably higher BPA concentrations than controls ranging between 12.89–167.9 ng g{sup −1} (mean: 63.15 ng g{sup −1}). This and earlier studies indicate the long-term accumulation of multiple contaminants by multiple sewage sludge applications over a prolonged period although the effects of the presence of such contaminant mixtures have not yet been elucidated. Fugacity modelling was undertaken to estimate partitioning of Bisphenol A (soil plus sewage: pore water: soil air partitioning) and potential uptake into a range of food crops. While Bisphenol A sorbs strongly to the sewage-amended soil, 4% by mass was predicted to enter soil pore water resulting in significant uptake by crops particularly leafy vegetables (3.12–75.5 ng g{sup −1}), but also for root crops (1.28–31.0 ng g{sup −1}) with much lower uptake into cereal grains (0.62–15.0 ng g{sup −1}). This work forms part of a larger programme of research aimed at assessing the risks associated with the long-term application of sewage sludge to agricultural soils. - Highlights:
Sleep level estimation based on conditional probability for naps
王蓓; 张俊民; 张涛; 王行愚
2015-01-01
Objective: Based on the characteristics of electroencephalograph (EEG) signals, to propose a real-time sleep state estimation method based on conditional probability, providing an objective basis for sleep monitoring that reflects the continuous change of sleep state. Methods: During daytime naps, four sleep-related EEG channels (C3-A2, C4-A1, O1-A2, O2-A1) were recorded synchronously. A Fourier transform was applied to every 5 seconds of recorded data, and the energy ratios of the 8-13 Hz and 2-7 Hz EEG rhythms were calculated as characteristic parameters. The method comprises two stages, learning and testing: during the learning stage, the probability density distributions of the EEG feature parameters are obtained from training data; during the testing stage, the conditional probability of each sleep stage is obtained from the current features and used to compute an estimate of the sleep state. Results: Nap data from 12 subjects were analysed and tested. Compared with manual scoring of sleep stages, the estimated sleep state exhibited continuous changes in sleep depth. The significant difference was 2.94 for the awake stage, and 1.78 and 1.62 for sleep stages one and two, respectively, consistent with expected behaviour. Conclusion: The sleep state estimate defined in this paper embodies the characteristics of the sleep stages and reflects the continuous change of sleep state during both sustained stages and transitions, providing an objective basis for real-time monitoring and analysis of daytime nap states.
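A hedged sketch of the two-stage scheme described above (band-power ratios from 5-second epochs, per-stage densities learned from training data, then a conditional-probability-weighted sleep level); the sampling rate, stage statistics and test epoch are synthetic assumptions:

```python
import numpy as np

FS = 100  # sampling rate, Hz (assumed)

def band_ratio(epoch, band, fs=FS):
    """Fraction of total spectral power in a frequency band for one epoch."""
    freqs = np.fft.rfftfreq(epoch.size, 1.0 / fs)
    power = np.abs(np.fft.rfft(epoch)) ** 2
    return power[(freqs >= band[0]) & (freqs <= band[1])].sum() / power[1:].sum()

def features(epoch):
    return np.array([band_ratio(epoch, (8, 13)), band_ratio(epoch, (2, 7))])

def gaussian_pdf(x, mean, std):
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

# Per-stage feature statistics "learned" from training data (invented numbers):
# stage 0 = awake, 1 = stage-1 sleep, 2 = stage-2 sleep.
stage_means = {0: np.array([0.45, 0.25]), 1: np.array([0.30, 0.40]), 2: np.array([0.15, 0.55])}
stage_stds = {s: np.array([0.08, 0.08]) for s in stage_means}

def sleep_level(epoch):
    f = features(epoch)
    likelihood = {s: np.prod(gaussian_pdf(f, stage_means[s], stage_stds[s])) for s in stage_means}
    total = sum(likelihood.values())
    cond_prob = {s: v / total for s, v in likelihood.items()}       # P(stage | features)
    return sum(s * p for s, p in cond_prob.items())                 # probability-weighted depth

rng = np.random.default_rng(5)
test_epoch = rng.standard_normal(5 * FS)       # one 5-second epoch of synthetic "EEG"
print(f"estimated sleep level: {sleep_level(test_epoch):.2f}")
```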
Griot C
2008-10-01
Background: The design of veterinary and public health surveillance systems has been improved by the ability to combine Geographical Information Systems (GIS), mathematical models and up-to-date epidemiological knowledge. In Switzerland, an early warning system was developed for detecting the incursion of the bluetongue disease virus (BT) and to monitor the frequency of its vectors. Based on data generated by this surveillance system, GIS and transmission models were used to determine suitable seasonal vector habitat locations and risk periods for a larger and more targeted surveillance program. Results: Combined thematic maps of temperature, humidity and altitude were created to visualize the association with Culicoides vector habitat locations. Additional monthly maps of estimated basic reproduction number transmission rates (R0) were created in order to highlight areas of Switzerland prone to higher BT outbreaks in relation to both vector activity and transmission levels. The maps revealed several foci of higher-risk areas, especially in northern parts of Switzerland, suitable for both vector presence and vector activity for 2006. Results showed a variation of R0 values between 2005 and 2006 yet suggested that Switzerland was at risk of an outbreak of BT, especially if the incursion arrived during a suitable vector activity period. Since the time of conducting these analyses, this suitability has proved to be the case with the recent outbreaks of BT in northern Switzerland. Conclusion: Our results stress the importance of environmental factors and their effect on the dynamics of a vector-borne disease. In this case, results of this model were used as input parameters for creating a national targeted surveillance program tailored to both the spatial and the temporal aspect of the disease and its vectors. In this manner, financial and logistic resources can be used in an optimal way through seasonally and geographically adjusted
U.S. Geological Survey, Department of the Interior — This dataset is one of eight datasets produced by this study. Four of the datasets predict the probability of detecting atrazine and(or) desethyl-atrazine (a...
Hevesi, Joseph A.; Johnson, Tyler D.
2016-10-17
A daily precipitation-runoff model, referred to as the Los Angeles Basin watershed model (LABWM), was used to estimate recharge and runoff for a 5,047 square kilometer study area that included the greater Los Angeles area and all surface-water drainages potentially contributing recharge to a 1,450 square kilometer groundwater-study area underlying the greater Los Angeles area, referred to as the Los Angeles groundwater-study area. The recharge estimates for the Los Angeles groundwater-study area included spatially distributed recharge in response to the infiltration of precipitation, runoff, and urban irrigation, as well as mountain-front recharge from surface-water drainages bordering the groundwater-study area. The recharge and runoff estimates incorporated a new method for estimating urban irrigation, consisting of residential and commercial landscape watering, based on land use and the percentage of pervious land area.The LABWM used a 201.17-meter gridded discretization of the study area to represent spatially distributed climate and watershed characteristics affecting the surface and shallow sub-surface hydrology for the Los Angeles groundwater study area. Climate data from a local network of 201 monitoring sites and published maps of 30-year-average monthly precipitation and maximum and minimum air temperature were used to develop the climate inputs for the LABWM. Published maps of land use, land cover, soils, vegetation, and surficial geology were used to represent the physical characteristics of the LABWM area. The LABWM was calibrated to available streamflow records at six streamflow-gaging stations.Model results for a 100-year target-simulation period, from water years 1915 through 2014, were used to quantify and evaluate the spatial and temporal variability of water-budget components, including evapotranspiration (ET), recharge, and runoff. The largest outflow of water from the LABWM was ET; the 100-year average ET rate of 362 millimeters per year (mm
Zambrano-Bigiarini, Mauricio; Nauditt, Alexandra; Birkel, Christian; Verbist, Koen; Ribbe, Lars
2017-03-01
Accurate representation of the real spatio-temporal variability of catchment rainfall inputs is currently severely limited. Moreover, spatially interpolated catchment precipitation is subject to large uncertainties, particularly in developing countries and regions which are difficult to access. Recently, satellite-based rainfall estimates (SREs) have provided an unprecedented opportunity for a wide range of hydrological applications, from water resources modelling to monitoring of extreme events such as droughts and floods. This study attempts to exhaustively evaluate - for the first time - the suitability of seven state-of-the-art SRE products (TMPA 3B42v7, CHIRPSv2, CMORPH, PERSIANN-CDR, PERSIANN-CCS-Adj, MSWEPv1.1, and PGFv3) over the complex topography and diverse climatic gradients of Chile. Different temporal scales (daily, monthly, seasonal, annual) are used in a point-to-pixel comparison between precipitation time series measured at 366 stations (from sea level to 4600 m a.s.l. in the Andean Plateau) and the corresponding grid cell of each SRE (rescaled to a 0.25° grid if necessary). The modified Kling-Gupta efficiency was used to identify possible sources of systematic errors in each SRE. In addition, five categorical indices (PC, POD, FAR, ETS, fBIAS) were used to assess the ability of each SRE to correctly identify different precipitation intensities. Results revealed that most SRE products performed better for the humid South (36.4-43.7° S) and Central Chile (32.18-36.4° S), in particular at low- and mid-elevation zones (0-1000 m a.s.l.), compared to the arid northern regions and the Far South. Seasonally, all products performed best during the wet seasons (autumn and winter; MAM-JJA) compared to summer (DJF) and spring (SON). In addition, all SREs were able to correctly identify the occurrence of no-rain events, but they presented low skill in classifying precipitation intensities during rainy days. Overall, PGFv3 exhibited the best performance everywhere
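The headline metrics named above can be written down compactly; the sketch below computes the modified Kling-Gupta efficiency and two of the categorical scores (POD, FAR) on synthetic gauge/satellite series, with an assumed 1 mm rain/no-rain threshold:

```python
import numpy as np

def kge_prime(sim, obs):
    """Modified KGE: 1 - sqrt((r-1)^2 + (beta-1)^2 + (gamma-1)^2)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]                               # correlation
    beta = sim.mean() / obs.mean()                                # bias ratio
    gamma = (sim.std() / sim.mean()) / (obs.std() / obs.mean())   # variability ratio (CV)
    return 1.0 - np.sqrt((r - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)

def pod_far(sim, obs, threshold=1.0):
    hits = np.sum((sim >= threshold) & (obs >= threshold))
    misses = np.sum((sim < threshold) & (obs >= threshold))
    false_alarms = np.sum((sim >= threshold) & (obs < threshold))
    return hits / (hits + misses), false_alarms / (hits + false_alarms)

rng = np.random.default_rng(11)
obs = rng.gamma(0.4, 8.0, 365)                                     # synthetic daily gauge rainfall, mm
sim = np.clip(obs * rng.normal(1.0, 0.3, 365) + rng.normal(0, 1.0, 365), 0, None)
pod, far = pod_far(sim, obs)
print(f"KGE' = {kge_prime(sim, obs):.2f}, POD = {pod:.2f}, FAR = {far:.2f}")
```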
Penaloza, Andrea; Verschuren, Franck; Meyer, Guy; Quentin-Georget, Sybille; Soulie, Caroline; Thys, Frédéric; Roy, Pierre-Marie
2013-08-01
The assessment of clinical probability (as low, moderate, or high) with clinical decision rules has become a cornerstone of diagnostic strategy for patients with suspected pulmonary embolism, but little is known about the use of physician gestalt assessment of clinical probability. We evaluate the performance of gestalt assessment for diagnosing pulmonary embolism. We conducted a retrospective analysis of a prospective observational cohort of consecutive suspected pulmonary embolism patients in emergency departments. Accuracy of gestalt assessment was compared with the Wells score and the revised Geneva score by the area under the curve (AUC) of receiver operating characteristic curves. Agreement between the 3 methods was determined by κ test. The study population was 1,038 patients, with a pulmonary embolism prevalence of 31.3%. AUC differed significantly between the 3 methods and was 0.81 (95% confidence interval [CI] 0.78 to 0.84) for gestalt assessment, 0.71 (95% CI 0.68 to 0.75) for Wells, and 0.66 (95% CI 0.63 to 0.70) for the revised Geneva score. The proportion of patients categorized as having low clinical probability was statistically higher with gestalt than with revised Geneva score (43% versus 26%; 95% CI for the difference of 17%=13% to 21%). Proportion of patients categorized as having high clinical probability was higher with gestalt than with Wells (24% versus 7%; 95% CI for the difference of 17%=14% to 20%) or revised Geneva score (24% versus 10%; 95% CI for the difference of 15%=13% to 21%). Pulmonary embolism prevalence was significantly lower with gestalt versus clinical decision rules in low clinical probability (7.6% for gestalt versus 13.0% for revised Geneva score and 12.6% for Wells score) and non-high clinical probability groups (18.3% for gestalt versus 29.3% for Wells and 27.4% for revised Geneva score) and was significantly higher with gestalt versus Wells score in high clinical probability groups (72.1% versus 58.1%). Agreement
Mohammed Hussni O
2010-06-01
Background: Cryptosporidium parvum is one of the most important biological contaminants in drinking water, producing life-threatening infection in people with compromised immune systems. Dairy calves are thought to be the primary source of C. parvum contamination in watersheds. Understanding the spatial and temporal variation in the risk of C. parvum infection in dairy cattle is essential for designing cost-effective watershed management strategies to protect drinking water sources. Crude and Bayesian seasonal risk estimates for Cryptosporidium in dairy calves were used to investigate the spatio-temporal dynamics of C. parvum infection on dairy farms in the New York City watershed. Results: Both global (Global Moran's I) and specific (SaTScan) cluster analysis methods revealed a significant (p C. parvum infection in all herds in the summer (p = 0.002), compared to the rest of the year. Bayesian estimates did not show significant spatial autocorrelation in any season. Conclusions: Although we were not able to identify seasonal clusters using the Bayesian approach, crude estimates highlighted both temporal and spatial clusters of C. parvum infection in dairy herds in a major watershed. We recommend that further studies focus on the factors that may lead to the presence of C. parvum clusters within the watershed, so that monitoring and prevention practices such as stream monitoring, riparian buffers, fencing and manure management can be prioritized and improved, to protect drinking water supplies and public health.
Inferring Beliefs as Subjectively Imprecise Probabilities
Andersen, Steffen; Fountain, John; Harrison, Glenn W.;
2012-01-01
We propose a method for estimating subjective beliefs, viewed as a subjective probability distribution. The key insight is to characterize beliefs as a parameter to be estimated from observed choices in a well-defined experimental task and to estimate that parameter as a random coefficient. The e...... probabilities are indeed best characterized as probability distributions with non-zero variance....
Cikota, Aleksandar [European Southern Observatory, Karl-Schwarzschild-Strasse 2, D-85748 Garching b. München (Germany); Deustua, Susana [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Marleau, Francine, E-mail: acikota@eso.org [Institute for Astro- and Particle Physics, University of Innsbruck, Technikerstrasse 25/8, A-6020 Innsbruck (Austria)
2016-03-10
We investigate limits on the extinction values of Type Ia supernovae (SNe Ia) to statistically determine the most probable color excess, E(B – V), with galactocentric distance, and use these statistics to determine the absorption-to-reddening ratio, R{sub V}, for dust in the host galaxies. We determined pixel-based dust mass surface density maps for 59 galaxies from the Key Insight on Nearby Galaxies: a Far-infrared Survey with Herschel (KINGFISH). We use SN Ia spectral templates to develop a Monte Carlo simulation of color excess E(B – V) with R{sub V} = 3.1 and investigate the color excess probabilities E(B – V) with projected radial galaxy center distance. Additionally, we tested our model using observed spectra of SN 1989B, SN 2002bo, and SN 2006X, which occurred in three KINGFISH galaxies. Finally, we determined the most probable reddening for Sa–Sap, Sab–Sbp, Sbc–Scp, Scd–Sdm, S0, and irregular galaxy classes as a function of R/R{sub 25}. We find that the largest expected reddening probabilities are in Sab–Sb and Sbc–Sc galaxies, while S0 and irregular galaxies are very dust poor. We present a new approach for determining the absorption-to-reddening ratio R{sub V} using color excess probability functions and find values of R{sub V} = 2.71 ± 1.58 for 21 SNe Ia observed in Sab–Sbp galaxies, and R{sub V} = 1.70 ± 0.38, for 34 SNe Ia observed in Sbc–Scp galaxies.
PROBABILITY SURVEYS, CONDITIONAL PROBABILITIES AND ECOLOGICAL RISK ASSESSMENT
We show that probability-based environmental resource monitoring programs, such as the U.S. Environmental Protection Agency's (U.S. EPA) Environmental Monitoring and Assessment Program, and conditional probability analysis can serve as a basis for estimating ecological risk over ...
Cikota, Aleksandar; Marleau, Francine
2016-01-01
We investigate limits on the extinction values of Type Ia supernovae to statistically determine the most probable color excess, E(B-V), with galactocentric distance, and use these statistics to determine the absorption-to-reddening ratio, R_V, for dust in the host galaxies. We determined pixel-based dust mass surface density maps for 59 galaxies from the Key Insight on Nearby Galaxies: a Far-Infrared Survey with Herschel (KINGFISH, Kennicutt et al. (2011)). We use Type Ia supernova spectral templates (Hsiao et al. 2007) to develop a Monte Carlo simulation of color excess E(B-V) with R_V = 3.1 and investigate the color excess probabilities E(B-V) with projected radial galaxy center distance. Additionally, we tested our model using observed spectra of SN 1989B, SN 2002bo and SN 2006X, which occurred in three KINGFISH galaxies. Finally, we determined the most probable reddening for Sa-Sap, Sab-Sbp, Sbc-Scp, Scd-Sdm, S0 and Irregular galaxy classes as a function of R/R_25. We find that the larges...
Ramos, Yuddy; St-Onge, Benoît; Blanchet, Jean-Pierre; Smargiassi, Audrey
2016-06-01
Air pollution is a major environmental and health problem, especially in urban agglomerations. Estimating personal exposure to fine particulate matter (PM2.5) remains a great challenge because it requires numerous point measurements to explain the daily spatial variation in pollutant levels. Furthermore, meteorological variables have considerable effects on the dispersion and distribution of pollutants, which also depend on spatio-temporal emission patterns. In this study we developed a hybrid interpolation technique that combined the inverse distance-weighted (IDW) method with Kriging with external drift (KED), and applied it to daily PM2.5 levels observed at 10 monitoring stations. This provided us with downscaled high-resolution maps of PM2.5 for the Island of Montreal. For the KED interpolation, we used spatio-temporal daily meteorological estimates and spatial covariates such as land use and vegetation density. Different KED and IDW daily estimation models for the year 2010 were developed for each of the six synoptic weather classes. These classes were derived using principal component analysis and unsupervised hierarchical classification. The results of the interpolation models were assessed with a leave-one-station-out cross-validation. The performance of the hybrid model was better than that of KED or IDW alone for all six synoptic weather classes (daily R2 of 0.66-0.93 and root mean square error (RMSE) of 2.54-1.89 μg/m3).
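A minimal sketch of the IDW half of the hybrid scheme (the KED half would additionally regress on external covariates such as meteorology and land use); station coordinates and PM2.5 values below are invented:

```python
import numpy as np

def idw(xy_stations, values, xy_targets, power=2.0, eps=1e-9):
    """Inverse distance-weighted interpolation of station values onto target points."""
    d = np.linalg.norm(xy_targets[:, None, :] - xy_stations[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 5.0]])  # km (hypothetical)
pm25 = np.array([8.0, 12.0, 9.0, 15.0, 11.0])          # daily PM2.5, ug/m3 (hypothetical)
grid = np.array([[2.5, 2.5], [7.5, 7.5], [5.0, 0.0]])   # target grid points
print(np.round(idw(stations, pm25, grid), 1))
```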
代志华; 付晓东; 黄袁; 贾楠
2012-01-01
To manage service risk, it is necessary to characterize the stochastic behaviour of Quality of Service (QoS), and an effective way to describe this behaviour is to obtain an accurate probability distribution. This paper presents an approach, based on the maximum entropy principle, for estimating the probability distribution of Web service QoS from a small number of samples. Using the maximum entropy principle, the distribution estimation problem is reduced to an optimization problem whose constraints are determined by the observed QoS data, yielding an analytical form of the QoS probability density function; an algorithm is then designed to estimate the parameters of this density function. Finally, experiments on real Web service QoS data verify the effectiveness and soundness of the approach for different QoS distributions, as well as the efficiency and termination of the estimation algorithm.
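The following is a minimal sketch of the general maximum-entropy recipe the abstract describes, under simplifying assumptions: the density is restricted to the exponential family exp(lam1*x + lam2*x^2) on [0, 1], the constraints are the first two sample moments, and the convex dual is minimized numerically. The sample values and the quadrature grid are hypothetical.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import minimize

# Maximum-entropy density p(x) proportional to exp(lam1*x + lam2*x^2) on [0, 1],
# fitted so that its first two moments match those of a small sample of QoS
# values (e.g. response times normalized to [0, 1]).
grid = np.linspace(0.0, 1.0, 2001)

def log_partition(lam):
    return np.log(trapezoid(np.exp(lam[0] * grid + lam[1] * grid ** 2), grid))

def dual(lam, mu):
    # Convex dual of the constrained maximum-entropy problem
    return log_partition(lam) - lam[0] * mu[0] - lam[1] * mu[1]

qos = np.array([0.10, 0.20, 0.25, 0.30, 0.40, 0.50, 0.70, 0.85])   # hypothetical sample
mu = np.array([qos.mean(), (qos ** 2).mean()])

lam = minimize(dual, x0=np.zeros(2), args=(mu,), method="BFGS").x
p = np.exp(lam[0] * grid + lam[1] * grid ** 2 - log_partition(lam))
print("lambda =", lam, " check E[x] =", round(float(trapezoid(grid * p, grid)), 3))
```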
Evaluation of hierarchical temporal memory for a real world application
Melis, Wim J.C.; Chizuwa, Shuhei; Kameyama, Michitaka
2010-01-01
A large number of real world applications, such as user support systems, still cannot be handled easily by conventional algorithms, in contrast with the human brain. Such intelligence is often implemented using probability-based systems. This paper focuses on comparing the implementation of a cellular phone intention estimation example on a Bayesian Network and Hierarchical Temporal Memory. It is found that Hierarchical Temporal Memory is a system that requires little effort for desig...
Using FIESTA, an R-based tool for analysts, to look at temporal trends in forest estimates
Tracey S. Frescino; Paul L. Patterson; Elizabeth A. Freeman; Gretchen G. Moisen
2012-01-01
FIESTA (Forest Inventory Estimation for Analysis) is a user-friendly R package that supports the production of estimates for forest resources based on procedures from Bechtold and Patterson (2005). The package produces output consistent with current tools available for the Forest Inventory and Analysis National Program, such as FIDO (Forest Inventory Data Online) and...
Dripps, W.R.; Bradbury, K.R.
2007-01-01
Quantifying the spatial and temporal distribution of natural groundwater recharge is usually a prerequisite for effective groundwater modeling and management. As flow models become increasingly utilized for management decisions, there is an increased need for simple, practical methods to delineate recharge zones and quantify recharge rates. Existing models for estimating recharge distributions are data intensive, require extensive parameterization, and take a significant investment of time in order to establish. The Wisconsin Geological and Natural History Survey (WGNHS) has developed a simple daily soil-water balance (SWB) model that uses readily available soil, land cover, topographic, and climatic data in conjunction with a geographic information system (GIS) to estimate the temporal and spatial distribution of groundwater recharge at the watershed scale for temperate humid areas. To demonstrate the methodology and the applicability and performance of the model, two case studies are presented: one for the forested Trout Lake watershed of north central Wisconsin, USA and the other for the urban-agricultural Pheasant Branch Creek watershed of south central Wisconsin, USA. Overall, the SWB model performs well and presents modelers and planners with a practical tool for providing recharge estimates for modeling and water resource planning purposes in humid areas. © Springer-Verlag 2007.
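A bucket-type daily soil-water balance of the general kind described above can be sketched in a few lines; the runoff fraction, soil capacity, initial storage, and synthetic weather series below are all assumptions for illustration, not the WGNHS model's actual parameterization.

```python
import numpy as np

def daily_swb(precip, pet, soil_capacity=100.0, runoff_frac=0.1):
    """Bucket-type daily soil-water balance (a simplified sketch of the idea).

    precip, pet : daily precipitation and potential ET (mm)
    Returns daily recharge (mm): water in excess of soil storage capacity
    after runoff and actual ET are removed.
    """
    storage = 0.5 * soil_capacity          # initial soil moisture (assumption)
    recharge = np.zeros_like(precip)
    for t in range(len(precip)):
        runoff = runoff_frac * precip[t]   # crude runoff term (assumption)
        infil = precip[t] - runoff
        aet = min(pet[t], storage + infil) # actual ET limited by available water
        storage = storage + infil - aet
        if storage > soil_capacity:        # excess water drains below the root zone
            recharge[t] = storage - soil_capacity
            storage = soil_capacity
    return recharge

rng = np.random.default_rng(2)
precip = rng.gamma(0.6, 6.0, size=365)     # hypothetical daily precipitation (mm)
pet = np.clip(3.0 + 2.5 * np.sin(np.linspace(0, 2 * np.pi, 365)), 0.5, None)
print("annual recharge (mm):", round(float(daily_swb(precip, pet).sum()), 1))
```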
Probability Aggregates in Probability Answer Set Programming
Saad, Emad
2013-01-01
Probability answer set programming is a declarative programming paradigm that has been shown effective for representing and reasoning about a variety of probability reasoning tasks. However, the lack of probability aggregates, e.g. expected values, in the language of disjunctive hybrid probability logic programs (DHPP) disallows the natural and concise representation of many interesting problems. In this paper, we extend DHPP to allow arbitrary probability aggregates. We introduce two types of p...
Predicting the Probability of Lightning Occurrence with Generalized Additive Models
Fabsic, Peter; Mayr, Georg; Simon, Thorsten; Zeileis, Achim
2017-04-01
This study investigates the predictability of lightning in complex terrain. The main objective is to estimate the probability of lightning occurrence in the Alpine region during summertime afternoons (12-18 UTC) at a spatial resolution of 64 × 64 km². Lightning observations are obtained from the ALDIS lightning detection network. The probability of lightning occurrence is estimated using generalized additive models (GAM). GAMs provide a flexible modelling framework to estimate the relationship between covariates and the observations. The covariates, besides spatial and temporal effects, include numerous meteorological fields from the ECMWF ensemble system. The optimal model is chosen based on a forward selection procedure with out-of-sample mean squared error as a performance criterion. Our investigation shows that convective precipitation and mid-layer stability are the most influential meteorological predictors. Both exhibit intuitive, non-linear trends: higher values of convective precipitation indicate higher probability of lightning, and large values of the mid-layer stability measure imply low lightning potential. The performance of the model was evaluated against a climatology model containing both spatial and temporal effects. Taking the climatology model as a reference forecast, our model attains a Brier Skill Score of approximately 46%. The model's performance can be further enhanced by incorporating the information about lightning activity from the previous time step, which yields a Brier Skill Score of 48%. These scores show that the method is able to extract valuable information from the ensemble to produce reliable spatial forecasts of the lightning potential in the Alps.
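The Brier Skill Score quoted above compares the model's Brier score with that of a climatological reference; a minimal sketch of that calculation, using synthetic forecasts and observations rather than the ALDIS/ECMWF data, is shown below.

```python
import numpy as np

def brier_score(p, y):
    """Mean squared difference between forecast probabilities and 0/1 outcomes."""
    return np.mean((p - y) ** 2)

# Hypothetical verification data: observed lightning occurrence (0/1), model
# forecast probabilities, and a constant climatological reference probability.
rng = np.random.default_rng(3)
y = rng.binomial(1, 0.3, size=5000)
p_clim = np.full(y.size, 0.3)
# Synthetic "skillful" forecast built from the outcomes purely for illustration.
p_model = np.clip(0.3 + 0.4 * (y - 0.3) + rng.normal(0, 0.1, y.size), 0, 1)

bss = 1.0 - brier_score(p_model, y) / brier_score(p_clim, y)
print(f"Brier Skill Score vs climatology: {bss:.2f}")
```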
The probabilities of unique events.
Sangeet S Khemlani
Many theorists argue that the probabilities of unique events, even real possibilities such as President Obama's re-election, are meaningless. As a consequence, psychologists have seldom investigated them. We propose a new theory (implemented in a computer program) in which such estimates depend on an intuitive non-numerical system capable only of simple procedures, and a deliberative system that maps intuitions into numbers. The theory predicts that estimates of the probabilities of conjunctions should often tend to split the difference between the probabilities of the two conjuncts. We report two experiments showing that individuals commit such violations of the probability calculus, and corroborating other predictions of the theory, e.g., individuals err in the same way even when they make non-numerical verbal estimates, such as that an event is highly improbable.
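The predicted "split the difference" behaviour, and why it violates the probability calculus, can be illustrated with a few lines of code (this is only an illustration of the prediction, not the authors' computer program):

```python
# The theory predicts that intuitive estimates of p(A and B) often split the
# difference between p(A) and p(B); such an estimate can exceed min(p(A), p(B))
# and therefore violate the conjunction rule of the probability calculus.

def split_the_difference(p_a, p_b):
    return 0.5 * (p_a + p_b)

p_a, p_b = 0.9, 0.2
estimate = split_the_difference(p_a, p_b)
upper_bound = min(p_a, p_b)          # p(A and B) can never exceed this
print(f"estimate = {estimate:.2f}, calculus upper bound = {upper_bound:.2f}")
print("violates the conjunction rule:", estimate > upper_bound)
```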
Duan, Weili; Takara, Kaoru; He, Bin; Luo, Pingping; Nover, Daniel; Yamashiki, Yosuke
2013-09-01
Nutrients and suspended sediment in surface water play important roles in aquatic ecosystems and contribute strongly to water quality, with implications for drinking water resources and human and environmental health. Estimating loads of nutrients (nitrogen and phosphorus) and suspended sediment (SS) is complicated because of infrequent monitoring data, retransformation bias, data censoring, and non-normality. To obtain reliable unbiased estimates, the Maintenance of Variance-Extension type 3 (MOVE.3) and the regression model Load Estimator (LOADEST) were applied to develop regression equations and to estimate total nitrogen (TN), total phosphorus (TP) and SS loads at five sites on the Ishikari River, Japan, from 1985 to 2010. Coefficients of determination (R²) for the best-fit regression models for loads of TN, TP, and SS at the five sites ranged from 71.86% to 90.94%, suggesting the models for all three constituents successfully simulated the variability in constituent loads at all studied sites. Estimated monthly average loads at Yishikarikakou-bashi were larger than at the other sites, with TN, TP, and SS loads ranging from 8.52×10³ to 2.00×10⁵ kg/day (Apr. 1999), 3.96×10² to 5.23×10⁴ kg/day (Apr. 1999), and 9.21×10⁴ to 9.25×10⁷ kg/day (Sep. 2001), respectively. Because of variation in river discharge, the estimated seasonal loads fluctuated widely over the period 1985 to 2010, with the greatest loads occurring in spring and the smallest loads occurring in winter. Estimated loads of TN, TP, and especially SS showed decreasing trends during the study period. Accurate load estimation is a necessary goal of water quality monitoring efforts, and the methods described here provide essential information for effectively managing water resources. Copyright © 2013 Elsevier B.V. All rights reserved.
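A simplified sketch of the rating-curve idea behind LOADEST-style load estimation is shown below: regress log load on log discharge plus seasonal harmonics, then retransform with Duan's smearing estimator to limit retransformation bias. The synthetic discharge and load series, and the particular covariates, are assumptions rather than the study's configuration.

```python
import numpy as np

# Synthetic placeholder data: daily discharge and constituent load.
rng = np.random.default_rng(4)
n = 200
t = np.linspace(0, 4, n)                        # decimal years
q = np.exp(rng.normal(4, 0.8, n))               # discharge (m^3/s)
true_load = 0.5 * q ** 1.2 * np.exp(0.3 * np.sin(2 * np.pi * t))
load = true_load * np.exp(rng.normal(0, 0.2, n))

# ln(load) regressed on ln(Q) and annual harmonics.
X = np.column_stack([np.ones(n), np.log(q),
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
beta, *_ = np.linalg.lstsq(X, np.log(load), rcond=None)
resid = np.log(load) - X @ beta
smearing = np.mean(np.exp(resid))               # Duan's smearing factor

pred_load = np.exp(X @ beta) * smearing         # bias-corrected retransformation
r2 = 1 - np.sum(resid ** 2) / np.sum((np.log(load) - np.log(load).mean()) ** 2)
print(f"R^2 (log space) = {r2:.3f}, mean predicted load = {pred_load.mean():.1f}")
```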
Dumont, Egon; Johnson, Andrew C; Keller, Virginie D J; Williams, Richard J
2015-01-01
Nano silver and nano zinc-oxide monthly concentrations in surface waters across Europe were modeled at ~6 × 9 km spatial resolution. Nano-particle loadings from households to rivers were simulated considering household connectivity to sewerage, sewage treatment efficiency, the spatial distribution of sewage treatment plants, and their associated populations. These loadings were used to model temporally varying nano-particle concentrations in rivers, lakes and wetlands by considering dilution, downstream transport, water evaporation, water abstraction, and nano-particle sedimentation. Temporal variability in concentrations caused by weather variation was simulated using monthly weather data for a representative 31-year period. Modeled concentrations represent current levels of nano-particle production. Two scenarios were modeled. In the most likely scenario, half the river stretches had long-term average concentrations exceeding 0.002 ng L(-1) nano silver and 1.5 ng L(-1) nano zinc oxide. In 10% of the river stretches, these concentrations exceeded 0.18 ng L(-1) and 150 ng L(-1), respectively. Predicted concentrations were usually highest in July.
于超; 刘洋; 樊治平
2012-01-01
How to scientifically estimate the probabilities of different scenarios at the onset of an emergency is a premise and key issue for selecting an appropriate emergency response. Aiming at the scenario probability estimation problem for emergency events, this paper proposes a method that integrates subjective and objective information. First, a similarity measure is used to compare historical cases of the same type with the current emergency, and similar historical cases are screened according to their similarity degrees; the objective probability of each possible scenario is then obtained from the statistics of the disaster scenarios in these similar cases. Next, the subjective scenario probabilities are obtained by aggregating expert judgments using linear weighting. The scenario probabilities of the emergency are then determined by integrating the objective and subjective probability information. Finally, a numerical example illustrates the feasibility and effectiveness of the proposed method.
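A compact sketch of this subjective-objective integration is given below; the similarity threshold, expert weights, and integration weight alpha are all illustrative assumptions, not values from the paper.

```python
import numpy as np

# Sketch of the idea: screen historical cases by similarity to the current
# emergency, take scenario frequencies among the retained cases as objective
# probabilities, aggregate expert judgments by linear weighting as subjective
# probabilities, then integrate the two.
scenarios = ["minor", "moderate", "severe"]

# (similarity to current event, observed scenario) for historical cases
history = [(0.82, "moderate"), (0.75, "severe"), (0.40, "minor"),
           (0.91, "moderate"), (0.68, "moderate"), (0.55, "severe")]
threshold = 0.6
retained = [s for sim, s in history if sim >= threshold]
objective = np.array([retained.count(s) for s in scenarios], dtype=float)
objective /= objective.sum()

# Expert judgments (rows: experts) and expert weights for linear aggregation.
expert_probs = np.array([[0.2, 0.5, 0.3],
                         [0.1, 0.6, 0.3],
                         [0.3, 0.4, 0.3]])
expert_w = np.array([0.5, 0.3, 0.2])
subjective = expert_w @ expert_probs

alpha = 0.5                                   # weight on objective information
combined = alpha * objective + (1 - alpha) * subjective
combined /= combined.sum()
print(dict(zip(scenarios, combined.round(3))))
```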
Probably Almost Bayes Decisions
Anoulova, S.; Fischer, Paul; Poelt, S.
1996-01-01
In this paper, we investigate the problem of classifying objects which are given by feature vectors with Boolean entries. Our aim is to "(efficiently) learn probably almost optimal classifications" from examples. A classical approach in pattern recognition uses empirical estimations of the Bayesian discriminant functions for this purpose. We analyze this approach for different classes of distribution functions of Boolean features: kth order Bahadur-Lazarsfeld expansions and kth order Chow expansions. In both cases, we obtain upper bounds for the required sample size which are small polynomials in the relevant parameters and which match the lower bounds known for these classes. Moreover, the learning algorithms are efficient.
Rodhouse, T.J.; Irvine, K.M.; Vierling, K.T.; Vierling, L.A.
2011-01-01
Monitoring programs that evaluate restoration and inform adaptive management are important for addressing environmental degradation. These efforts may be well served by spatially explicit hierarchical approaches to modeling because of unavoidable spatial structure inherited from past land use patterns and other factors. We developed Bayesian hierarchical models to estimate trends from annual density counts observed in a spatially structured wetland forb (Camassia quamash [camas]) population following the cessation of grazing and mowing on the study area, and in a separate reference population of camas. The restoration site was bisected by roads and drainage ditches, resulting in distinct subpopulations ("zones") with different land use histories. We modeled this spatial structure by fitting zone-specific intercepts and slopes. We allowed spatial covariance parameters in the model to vary by zone, as in stratified kriging, accommodating anisotropy and improving computation and biological interpretation. Trend estimates provided evidence of a positive effect of passive restoration, and the strength of evidence was influenced by the amount of spatial structure in the model. Allowing trends to vary among zones and accounting for topographic heterogeneity increased precision of trend estimates. Accounting for spatial autocorrelation shifted parameter coefficients in ways that varied among zones depending on strength of statistical shrinkage, autocorrelation and topographic heterogeneity, a phenomenon not widely described. Spatially explicit estimates of trend from hierarchical models will generally be more useful to land managers than pooled regional estimates and provide more realistic assessments of uncertainty. The ability to grapple with historical contingency is an appealing benefit of this approach.
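As a rough, non-Bayesian illustration of what zone-specific intercepts and slopes mean, the sketch below fits an independent linear trend per zone; it deliberately omits the hierarchical priors and spatial covariance structure that the paper relies on, and the data are synthetic.

```python
import numpy as np

# Simplified illustration of zone-specific trends: fit a separate linear trend
# of log density against year within each land-use "zone". This drops the
# Bayesian hierarchy and the spatial autocorrelation modeled in the paper.
rng = np.random.default_rng(5)
years = np.arange(2005, 2012)

zones = {"north": 0.08, "south": 0.02, "reference": 0.00}   # toy true trends
for zone, slope in zones.items():
    log_density = 1.0 + slope * (years - years[0]) + rng.normal(0, 0.05, years.size)
    b1, b0 = np.polyfit(years - years[0], log_density, 1)   # slope, intercept
    print(f"{zone:10s}: estimated trend = {b1:+.3f} log-density units per year")
```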
Scaling Qualitative Probability
Burgin, Mark
2017-01-01
There are different approaches to qualitative probability, which include subjective probability. We developed a representation of qualitative probability based on relational systems, which allows modeling uncertainty by probability structures and is more coherent than existing approaches. This setting makes it possible to prove that any comparative probability is induced by some probability structure (Theorem 2.1), that classical probability is a probability structure (Theorem 2.2) and that i...
Werner, Rene; Ehrhardt, Jan; Schmidt-Richberg, Alexander; Handels, Heinz [Luebeck Univ. (Germany). Inst. of Medical Informatics; Albers, Dirk; Petersen, Cordula; Cremers, Florian [University Medical Center Hamburg-Eppendorf, Hamburg (Germany). Dept. of Radiotherapy and Radio-Oncology; Frenzel, Thorsten [University Medical Center Hamburg-Eppendorf, Hamburg (Germany). Health Care Center
2012-07-01
Purpose: Breathing-induced motion effects on dose distributions in radiotherapy can be analyzed using 4D CT image sequences and registration-based dose accumulation techniques. Often simplifying assumptions are made during accumulation. In this paper, we study the dosimetric impact of two aspects which may be especially critical for IMRT treatment: the weighting scheme for the dose contributions of IMRT segments at different breathing phases and the temporal resolution of 4D CT images applied for dose accumulation. Methods: Based on a continuous problem formulation, a patient- and plan-specific scheme for weighting segment dose contributions at different breathing phases is derived for use in step-and-shoot IMRT dose accumulation. Using 4D CT data sets and treatment plans for 5 lung tumor patients, dosimetric motion effects as estimated by the derived scheme are compared to effects resulting from a common equal weighting approach. Effects of reducing the temporal image resolution are evaluated for the same patients and both weighting schemes. Results: The equal weighting approach underestimates dosimetric motion effects when considering single treatment fractions. Especially interplay effects (relative misplacement of segments due to respiratory tumor motion) for IMRT segments with only a few monitor units are insufficiently represented (local point differences > 25% of the prescribed dose for larger tumor motion). The effects, however, tend to be averaged out over the entire treatment course. Regarding temporal image resolution, estimated motion effects in terms of measures of the CTV dose coverage are barely affected (in comparison to the full resolution) when using only half of the original resolution and equal weighting. In contrast, occurrence and impact of interplay effects are poorly captured for some cases (large tumor motion, undersized PTV margin) for a resolution of 10/14 phases and the more accurate patient- and plan-specific dose accumulation scheme.
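The accumulation step itself reduces to a weighted sum of per-phase dose grids; the sketch below contrasts equal phase weights with hypothetical patient- and plan-specific weights. The dose arrays and weights are placeholders, and the deformable registration that precedes accumulation is omitted.

```python
import numpy as np

# Each IMRT segment deposits dose while the anatomy cycles through breathing
# phases; the accumulated dose is a weighted sum of the (already warped-to-
# reference) per-phase dose grids. All numbers below are placeholders.
rng = np.random.default_rng(6)
n_phases, n_vox = 10, 1000
dose_per_phase = rng.uniform(0.0, 2.0, size=(n_phases, n_vox))   # Gy, warped to reference

# Equal weighting: every phase contributes the same fraction of the segment dose.
w_equal = np.full(n_phases, 1.0 / n_phases)

# Hypothetical patient/plan-specific weighting: fraction of the segment's
# beam-on time spent in each breathing phase.
w_specific = np.array([0.02, 0.05, 0.10, 0.18, 0.20, 0.18, 0.12, 0.08, 0.05, 0.02])
w_specific /= w_specific.sum()

d_equal = w_equal @ dose_per_phase
d_specific = w_specific @ dose_per_phase
print("max voxel difference (Gy):", round(float(np.abs(d_equal - d_specific).max()), 3))
```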
J. Yoon
2013-08-01
It is now possible to monitor the global and long-term trends of trace gases that are important to atmospheric chemistry, climate, and air quality with satellite data records that span more than a decade. However, many of the remote sensing techniques used by satellite instruments produce measurements that have variable sensitivity to the vertical profiles of atmospheric gases. In the case of constrained retrievals like optimal estimation, this leads to a varying amount of a priori information in the retrieval and is represented by an averaging kernel. In this study, we investigate to what extent such trends can be biased by temporal changes of the averaging kernels used in the retrieval algorithm. In particular, the surface carbon monoxide data retrieved from the Measurements Of Pollution In The Troposphere (MOPITT) instrument from 2001 to 2010 were analysed. As a practical example based on the MOPITT data, we show that if the true atmospheric mixing ratio is continuously 50% higher or lower than the a priori state, the temporal change of the averaging kernel at the surface level gives rise to an artificial trend in retrieved surface carbon monoxide, ranging from −10.71 to +13.21 ppbv yr⁻¹ (−5.68 to +8.84% yr⁻¹) depending on location. Therefore, in the case of surface (or near-surface) level CO derived from MOPITT, the trends in the averaging kernels, multiplied by the difference between the true and a priori states, must be quantified in order to estimate trend biases.
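The mechanism can be made concrete with the standard optimal-estimation relation x_ret ≈ x_a + A (x_true − x_a): if the surface averaging-kernel element A drifts over time while the true state stays at 1.5 × the a priori, the retrieved series shows a spurious trend. The numbers in the sketch below are illustrative, not MOPITT values.

```python
import numpy as np

# Worked example of the retrieval relation x_ret ~= x_a + A * (x_true - x_a).
# A constant true state combined with a drifting surface averaging-kernel
# element produces an artificial trend in the retrieved time series.
years = np.arange(2001, 2011)
x_a = 120.0                                   # a priori surface CO (ppbv), illustrative
x_true = 1.5 * x_a                            # constant true state (50% above a priori)
a_surface = np.linspace(0.3, 0.5, years.size) # drifting averaging-kernel element

x_ret = x_a + a_surface * (x_true - x_a)
trend = np.polyfit(years, x_ret, 1)[0]
print(f"artificial trend in retrieved CO: {trend:+.2f} ppbv per year")
```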