WorldWideScience

Sample records for interval distribution method

  1. Measurement of dead time by time interval distribution method

    Science.gov (United States)

    Arkani, Mohammad; Raisali, Gholamreza

    2015-02-01

    Non-random event losses due to the dead time effect in nuclear radiation detection systems distort the original Poisson process into a new type of distribution. As the characteristics of the distribution depend on the physical properties of the detection system, it is possible to estimate the dead time parameters from time interval analysis; this is the problem investigated in this work. A BF3 ionization chamber is taken as a case study to check the validity of the method in experiment. The results are compared with the data estimated by a power rising experiment performed in the Esfahan Heavy Water Zero Power Reactor (EHWZPR). Using Monte Carlo simulation, the problem is studied in detail and the useful range of counting rates for the detector is determined. The proposed method is accurate and applicable to all kinds of radiation detectors, with no practical difficulty and no need for any special nuclear facility. The method is not time consuming, and online examination during normal operation of the detection system is possible.
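
    As an illustration of the underlying idea (a minimal sketch, not the authors' code), the snippet below simulates a Poisson event train thinned by an assumed non-paralyzable dead time tau; the recorded intervals then follow a shifted exponential p(t) = r exp(-r(t - tau)) for t >= tau, so tau and the true rate r can be read off the interval distribution.

      import numpy as np

      rng = np.random.default_rng(42)

      def recorded_intervals(rate, tau, n=200_000):
          """Poisson arrivals thinned by a non-paralyzable dead time tau."""
          arrivals = np.cumsum(rng.exponential(1.0 / rate, n))
          kept = [arrivals[0]]
          for t in arrivals[1:]:
              if t - kept[-1] >= tau:      # events inside the dead time are lost
                  kept.append(t)
          return np.diff(kept)

      # The left edge of the interval distribution estimates the dead time; the
      # mean exceeds it by 1/r, giving the dead-time-corrected event rate.
      iv = recorded_intervals(rate=1e4, tau=2e-6)
      tau_hat = iv.min()
      rate_hat = 1.0 / (iv.mean() - tau_hat)
      print(f"dead time ~ {tau_hat:.2e} s, corrected rate ~ {rate_hat:.0f} 1/s")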

  2. Interval methods: An introduction

    DEFF Research Database (Denmark)

    Achenie, L.E.K.; Kreinovich, V.; Madsen, Kaj

    2006-01-01

    This chapter contains selected papers presented at the Minisymposium on Interval Methods of the PARA'04 Workshop "State-of-the-Art in Scientific Computing". The emphasis of the workshop was on high-performance computing (HPC). The ongoing development of ever more advanced computers provides the potential for solving increasingly difficult computational problems. However, given the complexity of modern computer architectures, the task of realizing this potential needs careful attention. A main concern of HPC is the development of software that optimizes the performance of a given computer. An important characteristic of the computer performance in scientific computing is the accuracy of the computation results. Often, we can estimate this accuracy by using traditional statistical techniques. However, in many practical situations, we do not know the probability distributions of different measurement, estimation, and/or roundoff errors; we only know estimates of the upper bounds on the corresponding measurement errors, i.e., we only know an interval of possible values of each such error. The papers from the following chapter contain the description of the corresponding "interval computation" techniques, and the applications of these techniques to various problems of scientific computing.

  3. INTERVAL ARITHMETIC AND STATIC INTERVAL FINITE ELEMENT METHOD

    Institute of Scientific and Technical Information of China (English)

    郭书祥; 吕震宙

    2001-01-01

    When the uncertainties of structures may be bounded in intervals, through some suitable discretization, an interval finite element method can be constructed by combining interval analysis with the traditional finite element method (FEM). Two parameters, median and deviation, were used to represent the uncertainties of interval variables. Based on the arithmetic rules of intervals, some properties and arithmetic rules of interval variables were demonstrated. Combining the procedure of interval analysis with FEM, a static linear interval finite element method was presented to solve non-random uncertain structures. Solving for the characteristic parameters of the n-degree-of-freedom uncertain displacement field of the static governing equation was transformed into solving linear equations of order 2n. It is shown by a numerical example that the proposed method is practical and effective.
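
    The median/deviation (midpoint/radius) representation mentioned above is easy to make concrete. The sketch below implements the standard interval arithmetic rules (an illustration, not code from the paper; the stiffness example is hypothetical):

      from dataclasses import dataclass

      @dataclass
      class Interval:
          mid: float   # median (midpoint)
          rad: float   # deviation (half-width), rad >= 0

          @property
          def lo(self):
              return self.mid - self.rad

          @property
          def hi(self):
              return self.mid + self.rad

          def __add__(self, o):
              return Interval(self.mid + o.mid, self.rad + o.rad)

          def __sub__(self, o):
              return Interval(self.mid - o.mid, self.rad + o.rad)

          def __mul__(self, o):
              p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
              return Interval((max(p) + min(p)) / 2, (max(p) - min(p)) / 2)

      E = Interval(200e9, 10e9)      # uncertain Young's modulus [Pa]
      A = Interval(1e-4, 5e-6)       # uncertain cross-section area [m^2]
      EA = E * A                     # interval axial stiffness
      print(EA.lo, EA.hi)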

  4. [Evaluation of the principles of distribution of electrocardiographic R-R intervals for elaboration of methods of automated diagnosis of cardiac rhythm disorders].

    Science.gov (United States)

    Tsukerman, B M; Finkel'shteĭn, I E

    1987-07-01

    A statistical analysis of prolonged ECG records has been carried out in patients with various heart rhythm and conductivity disorders. The distribution of absolute R-R duration values and the relationships between adjacent intervals have been examined. A two-step algorithm has been constructed that excludes anomalous and "suspicious" intervals from a sample of consecutively recorded R-R intervals, until only the intervals between contractions of genuinely sinus origin remain in the sample. The algorithm has been implemented as a program for the Electronica NC-80 microcomputer. It operates reliably even in cases of complex combined rhythm and conductivity disorders.
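
    The published two-step algorithm is not spelled out in the abstract; the sketch below is only a loose illustration of the idea of iteratively excluding anomalous intervals, with an assumed tolerance and a simple median rule:

      import numpy as np

      def sinus_rr(rr, tol=0.15):
          """Two-pass exclusion of anomalous R-R intervals (illustrative only;
          the threshold and rules are assumptions, not the published algorithm)."""
          rr = np.asarray(rr, dtype=float)
          med = np.median(rr)
          step1 = rr[np.abs(rr - med) <= tol * med]        # drop gross outliers
          med2 = np.median(step1)                          # re-estimate on cleaner data
          return step1[np.abs(step1 - med2) <= tol * med2]

      rr = [0.82, 0.80, 0.41, 0.83, 1.62, 0.81, 0.79]      # seconds, with ectopic beats
      print(sinus_rr(rr))                                  # keeps the ~0.8 s sinus intervals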

  5. Distributions of order patterns of interval maps

    CERN Document Server

    Abrams, Aaron; Landau, Henry; Landau, Zeph; Pommersheim, James

    2010-01-01

    A permutation $\sigma$ describing the relative orders of the first $n$ iterates of a point $x$ under a self-map $f$ of the interval $I=[0,1]$ is called an \emph{order pattern}. For fixed $f$ and $n$, measuring the points $x\in I$ (according to Lebesgue measure) that generate the order pattern $\sigma$ gives a probability distribution $\mu_n(f)$ on the set of length $n$ permutations. We study the distributions that arise this way for various classes of functions $f$. Our main results treat the class of measure preserving functions. We obtain an exact description of the set of realizable distributions in this case: for each $n$ this set is a union of open faces of the polytope of flows on a certain digraph, and a simple combinatorial criterion determines which faces are included. We also show that for general $f$, apart from an obvious compatibility condition, there is no restriction on the sequence $\{\mu_n(f)\}$ for $n=1,2,...$. In addition, we give a necessary condition for $f$ to have \emph{finite exclusion…
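
    Since Lebesgue measure on $[0,1]$ corresponds to uniform sampling, $\mu_n(f)$ can be estimated by Monte Carlo. A minimal sketch (the logistic map is an arbitrary example, not one taken from the paper):

      import numpy as np
      from collections import Counter

      def order_pattern(x, f, n):
          """Permutation giving the relative order of x, f(x), ..., f^(n-1)(x)."""
          orbit = [x]
          for _ in range(n - 1):
              orbit.append(f(orbit[-1]))
          return tuple(np.argsort(orbit))

      f = lambda t: 4.0 * t * (1.0 - t)                  # logistic map on [0, 1]
      xs = np.random.default_rng(0).random(200_000)      # uniform = Lebesgue sampling
      counts = Counter(order_pattern(x, f, 3) for x in xs)
      for perm in sorted(counts):
          print(perm, counts[perm] / len(xs))            # Monte Carlo estimate of mu_3(f)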

  6. Interval Estimations of the Two-Parameter Exponential Distribution

    Directory of Open Access Journals (Sweden)

    Lai Jiang

    2012-01-01

    In applied work, the two-parameter exponential distribution gives useful representations of many physical situations. The confidence interval for the scale parameter and the predictive interval for a future independent observation have been studied by many, including Petropoulos (2011) and Lawless (1977), respectively. However, interval estimates for the threshold parameter have not been widely examined in the statistical literature. The aim of this paper is to, first, obtain the exact significance function of the scale parameter by renormalizing the p∗-formula. Then the approximate Studentization method is applied to obtain the significance function of the threshold parameter. Finally, a predictive density function of the two-parameter exponential distribution is derived. A real-life data set is used to show the implementation of the method. Simulation studies are then carried out to illustrate the accuracy of the proposed methods.

  7. On the Confidence Interval for the parameter of Poisson Distribution

    CERN Document Server

    Bityukov, S I; Taperechkina, V A

    2000-01-01

    In the present paper, the possibility of constructing a continuous analogue of the Poisson distribution for finding the bounds of confidence intervals for the parameter of the Poisson distribution is discussed, and the results of numerical construction of confidence intervals are presented.
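
    The continuous analogue at work here is the gamma distribution; the standard exact (Garwood-type) interval, which may differ in detail from the construction in the paper, is a one-liner with gamma quantiles:

      from scipy import stats

      def poisson_ci(k, conf=0.95):
          """Exact CI for the Poisson mean after observing k counts."""
          a = 1.0 - conf
          lower = 0.0 if k == 0 else stats.gamma.ppf(a / 2, k)    # Gamma(k) quantile
          upper = stats.gamma.ppf(1 - a / 2, k + 1)               # Gamma(k+1) quantile
          return lower, upper

      print(poisson_ci(5))    # ~(1.62, 11.67) at 95% confidence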

  8. Interval sampling methods and measurement error: a computer simulation.

    Science.gov (United States)

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments.
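
    The three sampling methods are simple to simulate; a minimal sketch (not the authors' program; session length, interval width, and event pattern are made-up parameters) that reproduces the methods' well-known biases:

      import numpy as np

      rng = np.random.default_rng(1)
      dt, T, width = 0.1, 600.0, 10.0                  # resolution, session, interval [s]
      t = np.arange(0.0, T, dt)
      busy = np.zeros_like(t, dtype=bool)
      for onset in rng.uniform(0.0, T - 5.0, 20):      # 20 events of 5 s each (assumed)
          busy[(t >= onset) & (t < onset + 5.0)] = True

      chunks = busy.reshape(int(T / width), int(width / dt))
      print("true proportion:", busy.mean())
      print("partial-interval:", chunks.any(axis=1).mean())    # overestimates duration
      print("whole-interval:  ", chunks.all(axis=1).mean())    # underestimates duration
      print("momentary sample:", chunks[:, -1].mean())         # roughly unbiased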

  9. Coefficient Omega Bootstrap Confidence Intervals: Nonnormal Distributions

    Science.gov (United States)

    Padilla, Miguel A.; Divers, Jasmin

    2013-01-01

    The performance of the normal theory bootstrap (NTB), the percentile bootstrap (PB), and the bias-corrected and accelerated (BCa) bootstrap confidence intervals (CIs) for coefficient omega was assessed through a Monte Carlo simulation under conditions not previously investigated. Of particular interest were nonnormal Likert-type and binary items.…

  10. Systolic Time Intervals and New Measurement Methods.

    Science.gov (United States)

    Tavakolian, Kouhyar

    2016-06-01

    Systolic time intervals have been used to detect and quantify the directional changes of left ventricular function. New methods of recording these cardiac timings, which are less cumbersome, have been recently developed and this has created a renewed interest and novel applications for these cardiac timings. This manuscript reviews these new methods and addresses the potential for the application of these cardiac timings for the diagnosis and prognosis of different cardiac diseases.

  11. Option Pricing Method in a Market Involving Interval Number Factors

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    A method for pricing options in a market with interval number factors is proposed. The no-arbitrage principle in the interval-number-valued market and the rule to judge the reasonability of a price interval are given. Using the method, the price interval is given for the case where the riskless interest rate and the volatility under the B-S setting are interval numbers. The price interval from the binomial tree model when the key factors u, d, R are all interval numbers is also discussed.
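
    For the binomial-tree case, one naive way to bracket the price interval is to enumerate the corners of the factor intervals. This is a heuristic sketch with made-up numbers; it brackets the interval only when the price is monotone in each factor, and it is not the paper's construction:

      from itertools import product

      def binom_call(S, K, u, d, R):
          """One-period binomial price of a European call."""
          p = (R - d) / (u - d)                 # risk-neutral probability
          cu, cd = max(S * u - K, 0.0), max(S * d - K, 0.0)
          return (p * cu + (1 - p) * cd) / R

      S, K = 100.0, 100.0
      u_iv, d_iv, R_iv = (1.2, 1.3), (0.8, 0.9), (1.05, 1.10)   # interval factors
      prices = [binom_call(S, K, u, d, R)
                for u, d, R in product(u_iv, d_iv, R_iv)]
      print(min(prices), max(prices))           # candidate price interval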

  12. DYNAMIC OPTIMIZATION FOR UNCERTAIN STRUCTURES USING INTERVAL METHOD

    Institute of Scientific and Technical Information of China (English)

    Chen Su-huan; Wu Jie; Liu Chun

    2003-01-01

    An interval optimization method for the dynamic response of structures with interval parameters is presented. The matrices of structures with interval parameters are given. Combining the interval extension with the perturbation, the method for interval dynamic response analysis is derived. The interval optimization problem is transformed into a corresponding deterministic one. Because the mean values and the uncertainties of the interval parameters can be selected as design variables, more information about the optimization results can be obtained by the present method than by the deterministic one. The present method is implemented for a truss structure. The numerical results show that the method is effective.

  13. TIME INTERVAL APPROACH TO THE PULSED NEUTRON LOGGING METHOD

    Institute of Scientific and Technical Information of China (English)

    赵经武; 苏为宁

    1994-01-01

    The time interval of neighbouring neutrons emitted from a steady-state neutron source can be treated as that from a time-dependent neutron source. In the rock space, the neutron flux is given by the neutron diffusion equation and is composed of an infinite number of "modes", each of which is composed of two die-away curves. The delay action has been discussed and used to measure the time interval with only one detector in the experiment. Nuclear reactions with the time distribution due to different types of radiations observed in the neutron well-logging methods are presented with a view to obtaining the rock nuclear parameters from the time interval technique.

  14. Confidence Intervals for True Scores Using the Skew-Normal Distribution

    Science.gov (United States)

    Garcia-Perez, Miguel A.

    2010-01-01

    A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…

  15. Simulations of the Hadamard Variance: Probability Distributions and Confidence Intervals.

    Science.gov (United States)

    Ashby, Neil; Patla, Bijunath

    2016-04-01

    Power-law noise in clocks and oscillators can be simulated by Fourier transforming a modified spectrum of white phase noise. This approach has been applied successfully to simulation of the Allan variance and the modified Allan variance in both overlapping and nonoverlapping forms. When significant frequency drift is present in an oscillator, at large sampling times the Allan variance overestimates the intrinsic noise, while the Hadamard variance is insensitive to frequency drift. The simulation method is extended in this paper to predict the Hadamard variance for the common types of power-law noise. Symmetric real matrices are introduced whose traces (the sums of their eigenvalues) are equal to the Hadamard variances, in overlapping or nonoverlapping forms, as well as for the corresponding forms of the modified Hadamard variance. We show that the standard relations between spectral densities and Hadamard variance are obtained with this method. The matrix eigenvalues determine probability distributions for observing a variance at an arbitrary value of the sampling interval τ, and hence for estimating confidence in the measurements.
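
    A minimal sketch of the two ingredients named in the abstract, spectral shaping of white noise and the (overlapping) Hadamard variance; this illustrates the simulation approach only, not the authors' matrix eigenvalue analysis:

      import numpy as np

      rng = np.random.default_rng(7)

      def power_law_phase(n, beta, tau0=1.0):
          """Phase samples with PSD ~ f**beta, by shaping a white spectrum."""
          f = np.fft.rfftfreq(n, d=tau0)
          shape = np.zeros_like(f)
          shape[1:] = f[1:] ** (beta / 2.0)
          spec = (rng.normal(size=f.size) + 1j * rng.normal(size=f.size)) * shape
          return np.fft.irfft(spec, n)

      def hadamard_var(x, tau0, m):
          """Overlapping Hadamard variance at tau = m*tau0; the third difference
          of phase makes it insensitive to linear frequency drift."""
          d = x[3*m:] - 3.0*x[2*m:-m] + 3.0*x[m:-2*m] - x[:-3*m]
          return np.mean(d**2) / (6.0 * (m * tau0) ** 2)

      x = power_law_phase(2**16, beta=-2.0)      # white FM noise (S_x ~ f^-2)
      for m in (1, 2, 4, 8):
          print(m, hadamard_var(x, 1.0, m))      # expect roughly ~1/tau scaling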

  16. Dynamic gesture recognition method based on a probability matrix model of interval distribution

    Institute of Scientific and Technical Information of China (English)

    张建忠; 常丹华

    2013-01-01

    Existing accelerometer-based gesture recognition algorithms face a contradiction between real-time performance and recognition rate. Aiming at this problem, a probability matrix model of interval distribution and a dynamic gesture recognition method are proposed. The gesture signal from the three-dimensional accelerometer is preprocessed by automatic action-data detection, normalization, and cubic spline interpolation. Then, according to the characteristics of the signal distribution, the observation points on each axis are determined and the probability matrix of interval distribution at each observation point is obtained. Finally, an online, fast gesture discrimination algorithm is realized on the optimized matrices. The method is evaluated on a real data set from a finger-mounted wearable device. The results show good real-time performance, a high recognition rate, and strong practicality.

  17. Statistical Models for Solar Flare Interval Distribution in Individual Active Regions

    CERN Document Server

    Kubo, Yuki

    2008-01-01

    This article discusses statistical models for solar flare interval distribution in individual active regions. We analyzed solar flare data in 55 active regions that are listed in the GOES soft X-ray flare catalog. We discuss some problems with a conventional procedure to derive probability density functions from any data set and propose a new procedure, which uses the maximum likelihood method and Akaike Information Criterion (AIC) to objectively compare some competing probability density functions. We found that lognormal and inverse Gaussian models are more likely models than the exponential model for solar flare interval distribution in individual active regions. The results suggest that solar flares do not occur randomly in time; rather, solar flare intervals appear to be regulated by solar flare mechanisms. We briefly mention a probabilistic solar flare forecasting method as an application of a solar flare interval distribution analysis.
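
    The model comparison step (maximum likelihood fits ranked by AIC) looks like the following sketch; the waiting-time data here are simulated stand-ins, not the GOES catalog:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      waits = rng.lognormal(mean=2.0, sigma=0.8, size=200)   # stand-in intervals [h]

      ll_exp = stats.expon.logpdf(waits, scale=waits.mean()).sum()  # exponential MLE
      s, loc, scale = stats.lognorm.fit(waits, floc=0)              # lognormal MLE
      ll_ln = stats.lognorm.logpdf(waits, s, loc, scale).sum()

      aic = lambda ll, k: 2 * k - 2 * ll
      print("AIC exponential:", aic(ll_exp, 1))
      print("AIC lognormal:  ", aic(ll_ln, 2))   # smaller AIC = preferred model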

  18. Technical Note: Methods for interval constrained atmospheric inversion of methane

    Directory of Open Access Journals (Sweden)

    J. Tang

    2010-08-01

    Three interval constrained methods, including the interval constrained Kalman smoother, the interval constrained maximum likelihood ensemble smoother and the interval constrained ensemble Kalman smoother, are developed to conduct inversions of the atmospheric trace gas methane (CH4). The negative values of fluxes that arise in an unconstrained inversion are avoided in the constrained inversion. In a multi-year inversion experiment using pseudo observations derived from a forward transport simulation with known fluxes, the interval constrained fixed-lag Kalman smoother presents the best results, followed by the interval constrained fixed-lag ensemble Kalman smoother and the interval constrained maximum likelihood ensemble Kalman smoother. Consistent uncertainties are obtained for the posterior fluxes with these three methods. This study provides alternatives to the variable transform method for dealing with interval constraints in atmospheric inversions.

  1. Binomial Distribution Sample Confidence Intervals Estimation 6. Excess Risk

    Directory of Open Access Journals (Sweden)

    Sorana BOLBOACĂ

    2004-02-01

    We present the problem of confidence interval estimation for the excess risk (Y/n − X/m) fraction, a parameter which allows evaluation of the specificity of an association between predisposing or causal factors and disease in medical studies. The parameter is computed based on a 2×2 contingency table and qualitative variables. The aim of this paper is to introduce five new methods of computing confidence intervals for excess risk, called DAC, DAs, DAsC, DBinomial, and DBinomialC, and to compare their performance with the asymptotic method called here DWald. In order to assess the methods, the PHP programming language was used and a PHP program was created. The performance of each method for different sample sizes and different values of the binomial variables was assessed using a set of criteria. First, the upper and lower boundaries for given X, Y and a specified sample size were computed for the chosen methods. Second, the average and standard deviation of the experimental errors were assessed, together with the deviation relative to the imposed significance level α = 5%. The methods were assessed on random numbers for the binomial variables and for sample sizes in the 4 to 1000 domain. The experiments show that the DAC method obtains good performance in confidence interval estimation for excess risk.
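
    For contrast, the asymptotic DWald baseline against which the new methods are compared is the textbook Wald interval for a difference of two proportions (the counts in the example are hypothetical):

      import math

      def excess_risk_wald(x, m, y, n, z=1.96):
          """Wald CI for the excess risk Y/n - X/m from a 2x2 table."""
          p1, p2 = y / n, x / m
          d = p1 - p2
          se = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / m)
          return d - z * se, d + z * se

      print(excess_risk_wald(x=12, m=80, y=30, n=90))   # hypothetical counts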

  2. Binomial distribution sample confidence intervals estimation for positive and negative likelihood ratio medical key parameters.

    Science.gov (United States)

    Bolboacă, Sorana; Jäntschi, Lorentz

    2005-01-01

    Likelihood ratio medical key parameters calculated on categorical results from diagnostic tests are usually expressed accompanied by their confidence intervals, computed using the normal distribution approximation of the binomial distribution. The approximation creates known anomalies, especially for limit cases. In order to improve the quality of estimation, four new methods (called here RPAC, RPAC0, RPAC1, and RPAC2) were developed and compared with the classical method (called here RPWald), using an exact probability calculation algorithm. Computer implementations of the methods use the PHP language. We defined and implemented the functions of the four new methods and the five criteria of confidence interval assessment. The experiments were run for sample sizes varying in the 14 - 34 and 90 - 100 ranges; one method obtains the best overall performance of computing the confidence interval for positive and negative likelihood ratios.
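
    The classical RPWald approach works on the log scale of the likelihood ratio; a sketch of that baseline (the standard normal-approximation formula, with a hypothetical 2x2 table):

      import math

      def lr_positive_ci(tp, fp, fn, tn, z=1.96):
          """Wald-type CI for LR+ = sensitivity / (1 - specificity)."""
          sens = tp / (tp + fn)
          fpr = fp / (fp + tn)                  # 1 - specificity
          lr = sens / fpr
          # standard error of log(LR+)
          se = math.sqrt(1/tp - 1/(tp + fn) + 1/fp - 1/(fp + tn))
          return lr * math.exp(-z * se), lr * math.exp(z * se)

      print(lr_positive_ci(tp=40, fp=10, fn=5, tn=45))   # hypothetical counts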

  3. Optimal interval for major maintenance actions in electricity distribution networks

    Energy Technology Data Exchange (ETDEWEB)

    Louit, Darko; Pascual, Rodrigo [Centro de Mineria, Pontificia Universidad Catolica de Chile, Av. Vicuna MacKenna, 4860 Santiago (Chile); Banjevic, Dragan [Centre for Maintenance Optimization and Reliability Engineering, University of Toronto, 5 King' s College Rd., Toronto, Ontario (Canada)

    2009-09-15

    Many systems require the periodic undertaking of major (preventive) maintenance actions (MMAs) such as overhauls in mechanical equipment, reconditioning of train lines, resurfacing of roads, etc. In the long term, these actions contribute to achieving a lower rate of occurrence of failures, though in many cases they increase the intensity of the failure process shortly after performed, resulting in a non-monotonic trend for failure intensity. Also, in the special case of distributed assets such as communications and energy networks, pipelines, etc., it is likely that the maintenance action takes place sequentially over an extended period of time, implying that different sections of the network underwent the MMAs at different periods. This forces the development of a model based on a relative time scale (i.e. time since last major maintenance event) and the combination of data from different sections of a grid, under a normalization scheme. Additionally, extended maintenance times and sequential execution of the MMAs make it difficult to identify failures occurring before and after the preventive maintenance action. This results in the loss of important information for the characterization of the failure process. A simple model is introduced to determine the optimal MMA interval considering such restrictions. Furthermore, a case study illustrates the optimal tree trimming interval around an electricity distribution network. (author)
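
    A common textbook formulation of this trade-off (not the paper's model, which additionally handles normalization across network sections and failures lost around the maintenance period) balances the cost of an MMA against minimal repairs arriving with a power-law intensity; all numbers below are assumed:

      # Minimal repairs occur with intensity lam*beta*t**(beta-1) since the last
      # MMA (beta > 1 means deterioration); an MMA at interval T renews the section.
      c_mma, c_fail = 5000.0, 800.0       # assumed MMA and repair costs
      lam, beta = 0.4, 1.8                # assumed failure-process parameters

      def cost_rate(T):
          return (c_mma + c_fail * lam * T**beta) / T

      # setting d(cost_rate)/dT = 0 gives the classical optimal interval
      T_star = (c_mma / (c_fail * lam * (beta - 1))) ** (1.0 / beta)
      print(T_star, cost_rate(T_star))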

  4. A filtering method for the interval eigenvalue problem

    DEFF Research Database (Denmark)

    Hladik, Milan; Daney, David; Tsigaridas, Elias

    2011-01-01

    We consider the general problem of computing intervals that contain the real eigenvalues of interval matrices. Given an outer approximation (superset) of the real eigenvalue set of an interval matrix, we propose a filtering method that iteratively improves the approximation. Even though our method is based on a sufficient regularity condition, it is very efficient in practice and our experimental results suggest that it improves, in general, significantly the initial outer approximation. The proposed method works for general, as well as for symmetric, interval matrices.

  5. The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution

    Science.gov (United States)

    Shin, H.; Heo, J.; Kim, T.; Jung, Y.

    2007-12-01

    The generalized logistic (GL) distribution has been widely used for frequency analysis. However, there are few studies related to the confidence intervals that indicate the prediction accuracy of the GL distribution. In this paper, the estimation of the confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of the sample sizes, return periods, and parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals of quantiles. As the results show, the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase as the return period increases and decrease as the sample size increases. PWM performs better than the other methods in terms of RRMSE when the data are almost symmetric, while ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data. The results show that there are little differences in the estimated quantiles between ML and PWM, while MOM gives distinctly different values.

  6. Interval finite difference method for steady-state temperature field prediction with interval parameters

    Science.gov (United States)

    Wang, Chong; Qiu, Zhi-Ping

    2014-04-01

    A new numerical technique named the interval finite difference method is proposed for steady-state temperature field prediction with uncertainties in both physical parameters and boundary conditions. Interval variables are used to quantitatively describe the uncertain parameters with limited information. Based on different Taylor and Neumann series, two kinds of parameter perturbation methods are presented to approximately yield the ranges of the uncertain temperature field. By comparing the results with traditional Monte Carlo simulation, a numerical example is given to demonstrate the feasibility and effectiveness of the proposed method for solving steady-state heat conduction problems with uncertain-but-bounded parameters.

  7. An empirical method for establishing positional confidence intervals tailored for composite interval mapping of QTL.

    Directory of Open Access Journals (Sweden)

    Andrew Crossett

    Full Text Available BACKGROUND: Improved genetic resolution and availability of sequenced genomes have made positional cloning of moderate-effect QTL realistic in several systems, emphasizing the need for precise and accurate derivation of positional confidence intervals (CIs for QTL. Support interval (SI methods based on the shape of the QTL likelihood curve have proven adequate for standard interval mapping, but have not been shown to be appropriate for use with composite interval mapping (CIM, which is one of the most commonly used QTL mapping methods. RESULTS: Based on a non-parametric confidence interval (NPCI method designed for use with the Haley-Knott regression method for mapping QTL, a CIM-specific method (CIM-NPCI was developed to appropriately account for the selection of background markers during analysis of bootstrap-resampled data sets. Coverage probabilities and interval widths resulting from use of the NPCI, SI, and CIM-NPCI methods were compared in a series of simulations analyzed via CIM, wherein four genetic effects were simulated in chromosomal regions with distinct marker densities while heritability was fixed at 0.6 for a population of 200 isolines. CIM-NPCIs consistently capture the simulated QTL across these conditions while slightly narrower SIs and NPCIs fail at unacceptably high rates, especially in genomic regions where marker density is high, which is increasingly common for real studies. The effects of a known CIM bias toward locating QTL peaks at markers were also investigated for each marker density case. Evaluation of sub-simulations that varied according to the positions of simulated effects relative to the nearest markers showed that the CIM-NPCI method overcomes this bias, offering an explanation for the improved coverage probabilities when marker densities are high. CONCLUSIONS: Extensive simulation studies herein demonstrate that the QTL confidence interval methods typically used to positionally evaluate CIM results can be

  8. Reference intervals data mining: no longer a probability paper method.

    Science.gov (United States)

    Katayev, Alexander; Fleming, James K; Luo, Dajie; Fisher, Arren H; Sharp, Thomas M

    2015-01-01

    To describe the application of a data-mining statistical algorithm for the calculation of clinical laboratory test reference intervals. Reference intervals for eight different analytes and different age and sex groups (a total of 11 separate reference intervals) for tests that are unlikely to be ordered during routine screening of disease-free populations were calculated using the modified algorithm for data mining of test results stored in the laboratory database and compared with published peer-reviewed studies that used direct sampling. The selection of analytes was based on predefined criteria that include comparability of analytical methods and a statistically significant number of observations. Of the 11 calculated reference intervals, each having upper and lower limits, 21 of 22 reference interval limits were not statistically different from the reference studies. The presented statistical algorithm is shown to be an accurate and practical tool for reference interval calculations.

  9. Fast Distributed Gradient Methods

    CERN Document Server

    Jakovetic, Dusan; Moura, Jose M F

    2011-01-01

    The paper proposes new fast distributed optimization gradient methods and proves convergence to the exact solution at rate $O(\log k/k)$, much faster than existing distributed optimization (sub)gradient methods with convergence $O(1/\sqrt{k})$, while incurring practically no additional communication nor computation cost overhead per iteration. We achieve this for convex (with at least one strongly convex), coercive, three times differentiable and with Lipschitz continuous first derivative (private) cost functions. Our work recovers for distributed optimization similar convergence rate gains obtained by centralized Nesterov gradient and fast iterative shrinkage-thresholding algorithm (FISTA) methods over ordinary centralized gradient methods. We also present a constant step size distributed fast gradient algorithm for composite non-differentiable costs. A simulation illustrates the effectiveness of our distributed methods.
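
    To give the flavor of such methods: each agent mixes its iterate with its neighbors' and takes a Nesterov-style gradient step. The sketch below is a loose illustration with assumed constants on a ring network, not the paper's exact recursion or step-size rule:

      import numpy as np

      rng = np.random.default_rng(5)
      N = 8
      a = rng.uniform(1.0, 3.0, N)         # private costs f_i(x) = a_i/2 * (x - b_i)^2
      b = rng.uniform(-1.0, 1.0, N)
      x_opt = (a * b).sum() / a.sum()      # minimizer of the aggregate cost

      W = np.zeros((N, N))                 # doubly stochastic mixing on a ring
      for i in range(N):
          W[i, i] = W[i, (i - 1) % N] = W[i, (i + 1) % N] = 1.0 / 3.0

      x = x_prev = np.zeros(N)
      for k in range(2000):
          y = x + (k / (k + 3.0)) * (x - x_prev)                   # momentum term
          x_prev, x = x, W @ y - (0.1 / (k + 1.0)) * a * (y - b)   # mix + gradient step
      print(np.abs(x - x_opt).max())       # all agents approach the global minimizer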

  10. An Interval Maximum Entropy Method for Quadratic Programming Problem

    Institute of Scientific and Technical Information of China (English)

    RUI Wen-juan; CAO De-xin; SONG Xie-wu

    2005-01-01

    With the idea of maximum entropy function and penalty function methods, we transform the quadratic programming problem into an unconstrained differentiable optimization problem, discuss the interval extension of the maximum entropy function, provide the region deletion test rules and design an interval maximum entropy algorithm for quadratic programming problem. The convergence of the method is proved and numerical results are presented. Both theoretical and numerical results show that the method is reliable and efficient.
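
    The key device, a maximum entropy (aggregate) function that smoothly approximates the max of several functions, is easy to demonstrate on its own; a standalone numerical illustration (not the paper's full interval algorithm):

      import numpy as np

      def maxent(g, p):
          """Smooth approximation of max(g): (1/p)*log(sum(exp(p*g))).
          The error is at most log(len(g))/p, so it vanishes as p grows."""
          t = p * np.asarray(g, dtype=float)
          m = t.max()
          return (m + np.log(np.exp(t - m).sum())) / p    # stable log-sum-exp

      g = [0.3, -1.2, 0.9, 0.7]
      for p in (1, 10, 100, 1000):
          print(p, maxent(g, p))        # approaches max(g) = 0.9 from above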

  11. The Distribution of the Product Explains Normal Theory Mediation Confidence Interval Estimation.

    Science.gov (United States)

    Kisbu-Sakarya, Yasemin; MacKinnon, David P; Miočević, Milica

    2014-05-01

    The distribution of the product has several useful applications. One of these applications is its use to form confidence intervals for the indirect effect as the product of 2 regression coefficients. The purpose of this article is to investigate how the moments of the distribution of the product explain normal theory mediation confidence interval coverage and imbalance. Values of the critical ratio for each random variable are used to demonstrate how the moments of the distribution of the product change across values of the critical ratio observed in research studies. Results of the simulation study showed that as skewness in absolute value increases, coverage decreases. And as skewness in absolute value and kurtosis increases, imbalance increases. The difference between testing the significance of the indirect effect using the normal theory versus the asymmetric distribution of the product is further illustrated with a real data example. This article is the first study to show the direct link between the distribution of the product and indirect effect confidence intervals and clarifies the results of previous simulation studies by showing why normal theory confidence intervals for indirect effects are often less accurate than those obtained from the asymmetric distribution of the product or from resampling methods.
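
    The contrast the article draws can be reproduced in a few lines: the normal-theory (Sobel-type) interval is symmetric around ab, while quantiles of the simulated distribution of the product are asymmetric. The path estimates and standard errors below are hypothetical:

      import numpy as np

      rng = np.random.default_rng(11)
      a, b, se_a, se_b = 0.30, 0.25, 0.10, 0.10     # hypothetical paths a and b

      # normal-theory interval for the indirect effect a*b
      se_ab = np.sqrt(a**2 * se_b**2 + b**2 * se_a**2)
      print("normal theory:", (a*b - 1.96*se_ab, a*b + 1.96*se_ab))

      # interval from the asymmetric distribution of the product, by simulation
      prod = rng.normal(a, se_a, 1_000_000) * rng.normal(b, se_b, 1_000_000)
      print("dist. of product:", (np.quantile(prod, 0.025), np.quantile(prod, 0.975)))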

  12. Binomial Distribution Sample Confidence Intervals Estimation 1. Sampling and Medical Key Parameters Calculation

    Directory of Open Access Journals (Sweden)

    Tudor DRUGAN

    2003-08-01

    The aim of the paper was to present the usefulness of the binomial distribution in the study of contingency tables and the problems of approximation to normality of the binomial distribution (the limits, advantages, and disadvantages). The classification of the medical key parameters reported in the medical literature, expressing them using contingency table units based on their mathematical expressions, restricts the discussion of the confidence intervals from 34 parameters to 9 mathematical expressions. The problem of obtaining different information starting with the computed confidence interval for a specified method, information like confidence interval boundaries, percentages of the experimental errors, the standard deviation of the experimental errors and the deviation relative to the significance level, was solved through implementation of original algorithms in the PHP programming language. The cases of expressions which contain two binomial variables were treated separately. An original method of computing the confidence interval for the case of a two-variable expression was proposed and implemented. The graphical representation of the expression of two binomial variables, for which the variation domain of one of the variables depends on the other variable, was a real problem, because most of the software used interpolation in graphical representation and the surface maps were quadratic instead of triangular. Based on an original algorithm, a module was implemented in PHP in order to represent the triangular surface plots graphically. All the implementations described above were used in computing the confidence intervals and estimating their performance for binomial distribution sample sizes and variables.

  13. Interval Analysis of the Finite Element Method for Stochastic Structures

    Institute of Scientific and Technical Information of China (English)

    刘长虹; 刘筱玲; 陈虬

    2004-01-01

    A random parameter can be transformed into an interval number in structural analysis with the concept of the confidence interval. Hence, analyses of uncertain structural systems can be carried out in traditional FEM software. In some cases, the number of solutions for stochastic structures is nearly as large as that for traditional structural problems. In addition, a new method to evaluate the failure probability of structures is presented for the needs of modern engineering design.

  14. Confidence Intervals for the Coefficient of Variation in a Normal Distribution with a Known Population Mean

    Directory of Open Access Journals (Sweden)

    Wararit Panichkitkosolkul

    2013-01-01

    Full Text Available This paper presents three confidence intervals for the coefficient of variation in a normal distribution with a known population mean. One of the proposed confidence intervals is based on the normal approximation. The other proposed confidence intervals are the shortest-length confidence interval and the equal-tailed confidence interval. A Monte Carlo simulation study was conducted to compare the performance of the proposed confidence intervals with the existing confidence intervals. Simulation results have shown that all three proposed confidence intervals perform well in terms of coverage probability and expected length.

  15. A Comparative Study on Decision Making Methods with Interval Data

    Directory of Open Access Journals (Sweden)

    Aditya Chauhan

    2014-01-01

    Multiple Criteria Decision Making (MCDM) models are used to solve a number of decision making problems universally. Most of these methods require the use of integers as input data. However, there are problems which have indeterminate values or data intervals which need to be analysed. In order to solve problems with interval data, many methods have been reported. Through this study an attempt has been made to compare and analyse the popular decision making tools for interval data problems. Namely, I-TOPSIS (Technique for Order Preference by Similarity to Ideal Solution), DI-TOPSIS, cross entropy, and interval VIKOR (VlseKriterijumska Optimizacija I Kompromisno Resenje) have been compared, and a novel algorithm has been proposed. The new algorithm makes use of the basic TOPSIS technique to overcome the limitations of known methods. To compare the effectiveness of the various methods, an example problem has been used where the selection of the best material family for a capacitor application has to be made. It was observed that the proposed algorithm is able to overcome the known limitations of the previous techniques. Thus, it can be easily and efficiently applied to various decision making problems with interval data.
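
    The crisp TOPSIS procedure that the proposed algorithm builds on fits in a dozen lines; the decision matrix, weights, and criterion directions below are made up for illustration:

      import numpy as np

      def topsis(X, w, benefit):
          """Basic TOPSIS: rank alternatives by closeness to the ideal solution."""
          R = X / np.linalg.norm(X, axis=0)            # vector normalization
          V = R * w                                    # weighted normalized matrix
          ideal = np.where(benefit, V.max(0), V.min(0))
          anti = np.where(benefit, V.min(0), V.max(0))
          d_pos = np.linalg.norm(V - ideal, axis=1)
          d_neg = np.linalg.norm(V - anti, axis=1)
          return d_neg / (d_pos + d_neg)               # closeness: higher is better

      X = np.array([[7.0, 0.30, 5.0],                  # alternatives (rows) x criteria
                    [8.0, 0.45, 4.0],
                    [6.5, 0.25, 6.0]])
      w = np.array([0.5, 0.2, 0.3])                    # assumed criterion weights
      print(topsis(X, w, benefit=np.array([True, False, True])))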

  16. ADAPTIVE INTERVAL WAVELET PRECISE INTEGRATION METHOD FOR PARTIAL DIFFERENTIAL EQUATIONS

    Institute of Scientific and Technical Information of China (English)

    MEI Shu-li; LU Qi-shao; ZHANG Sen-wen; JIN Li

    2005-01-01

    The quasi-Shannon interval wavelet is constructed based on interpolation wavelet theory, and an adaptive precise integration method, based on the extrapolation method, is presented for nonlinear ordinary differential equations (ODEs). An adaptive interval wavelet precise integration method (AIWPIM) for nonlinear partial differential equations (PDEs) is then proposed. The numerical results show that the computational precision of AIWPIM is higher than that of the method constructed by combining the wavelet and the 4th-order Runge-Kutta method, while the computational amounts of these two methods are almost equal. For convenience, the Burgers equation is taken as an example in introducing this method, which is also valid for more general cases.

  17. Fuzzy and interval finite element method for heat conduction problem

    CERN Document Server

    Majumdar, Sarangam; Chakraverty, S

    2012-01-01

    The traditional finite element method is a well-established method to solve various problems of science and engineering. Different authors have used various methods to solve the governing differential equation of the heat conduction problem. In this study, heat conduction in a circular rod made up of two different materials, aluminum and copper, has been considered. In earlier studies, parameters in the differential equation have been taken as fixed (crisp) numbers, which actually may not be the case. Those parameters are found in general by some measurements or experiments. So the material properties are actually uncertain and may be considered to vary in an interval or as fuzzy, and in that case complex interval arithmetic or fuzzy arithmetic has to be considered in the analysis. As such, the problem is discretized into a finite number of elements which depend on interval/fuzzy parameters. Representation of interval/fuzzy numbers may give a clear picture of uncertainty. Hence interval/fuzzy arithmetic is applied in the fin…

  18. Credible Intervals for Precision and Recall Based on a K-Fold Cross-Validated Beta Distribution.

    Science.gov (United States)

    Wang, Yu; Li, Jihong

    2016-08-01

    interval length in all 27 cases of simulated and real data experiments. However, the confidence intervals based on the K-fold and corrected K-fold cross-validated t distributions are in the two extremes. Thus, when focusing on the reliability of the inference for precision and recall, the proposed methods are preferable, especially for the first credible interval.
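
    The building block behind such credible intervals is the Beta posterior for a binomial proportion; a minimal single-split sketch for precision (uniform prior assumed; the paper's construction additionally pools K-fold cross-validation results):

      from scipy import stats

      def precision_credible(tp, fp, conf=0.95, a0=1.0, b0=1.0):
          """Equal-tailed credible interval for precision = TP/(TP+FP)
          under a Beta(a0, b0) prior (uniform by default)."""
          a = 1.0 - conf
          post = stats.beta(a0 + tp, b0 + fp)
          return post.ppf(a / 2), post.ppf(1 - a / 2)

      print(precision_credible(tp=85, fp=15))    # hypothetical test counts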

  19. Process control and optimization with simple interval calculation method

    DEFF Research Database (Denmark)

    Pomerantsev, A.; Rodionova, O.; Høskuldsson, Agnar

    2006-01-01

    Methods of process control and optimization are presented and illustrated with a real world example. The optimization methods are based on the PLS block modeling as well as on the simple interval calculation methods of interval prediction and object status classification. It is proposed to employ the series of expanding PLS/SIC models in order to support the on-line process improvements. This method helps to predict the effect of planned actions on the product quality and thus enables passive quality control. We have also considered an optimization approach that proposes the correcting actions for the quality improvement in the course of production. The latter is an active quality optimization, which takes into account the actual history of the process. The advocated approach is allied to the conventional method of multivariate statistical process control (MSPC) as it also employs the historical process…

  20. Interrogation of distributional data for the End Ordovician crisis interval

    DEFF Research Database (Denmark)

    Mac Ørum Rasmussen, Christian; Harper, David Alexander Taylor

    2011-01-01

    … on the peri-Laurentian terranes, in the Laurentian epicratonic seas and on the margins of the Ægir Ocean. Refuges during the survival interval were probably located in the shallow-water zones of especially Baltica, but also Gondwana, the peri-Laurentian terranes and the Kazakh Terranes. Except for Baltica … probably as a consequence of the progressively narrowing Iapetus Ocean…

  1. Triple I method and interval valued fuzzy reasoning

    Institute of Scientific and Technical Information of China (English)

    王国俊

    2000-01-01

    The aims of this paper are: (i) to show that the CRI method should be improved and remoulded into the triple I method, (ii) to propose a new type of fuzzy reasoning with multiple rules in which the premise of each rule is an interval valued fuzzy subset, (iii) to establish the "fire one or leave (FOOL)" principle as pretreatment for solving the fuzzy reasoning problem mentioned in (ii), and (iv) to solve the problem mentioned in (ii).

  2. An uncertain multidisciplinary design optimization method using interval convex models

    Science.gov (United States)

    Li, Fangyi; Luo, Zhen; Sun, Guangyong; Zhang, Nong

    2013-06-01

    This article proposes an uncertain multi-objective multidisciplinary design optimization methodology, which employs the interval model to represent the uncertainties of uncertain-but-bounded parameters. The interval number programming method is applied to transform each uncertain objective function into two deterministic objective functions, and a satisfaction degree of intervals is used to convert both the uncertain inequality and equality constraints to deterministic inequality constraints. In doing so, an unconstrained deterministic optimization problem will be constructed in association with the penalty function method. The design will be finally formulated as a nested three-loop optimization, a class of highly challenging problems in the area of engineering design optimization. An advanced hierarchical optimization scheme is developed to solve the proposed optimization problem based on the multidisciplinary feasible strategy, which is a well-studied method able to reduce the dimensions of multidisciplinary design optimization problems by using the design variables as independent optimization variables. In the hierarchical optimization system, the non-dominated sorting genetic algorithm II, sequential quadratic programming method and Gauss-Seidel iterative approach are applied to the outer, middle and inner loops of the optimization problem, respectively. Typical numerical examples are used to demonstrate the effectiveness of the proposed methodology.

  3. Asymptotic confidence interval for the coefficient of variation of Poisson distribution: a simulation study

    Directory of Open Access Journals (Sweden)

    Wararit Panichkitkosolkul

    2010-01-01

    A new asymptotic confidence interval constructed by using a confidence interval for the Poisson mean is proposed for the coefficient of variation of the Poisson distribution. The following confidence intervals are considered: McKay's confidence interval, Vangel's confidence interval and the proposed confidence interval. Using Monte Carlo simulations, the coverage probabilities and expected lengths of these confidence intervals are compared. Simulation results show that all scenarios of the new asymptotic confidence interval have desired minimum coverage probabilities of 0.95 and 0.90. In addition, the newly proposed confidence interval is better than the existing ones in terms of coverage probability and expected length for all sample sizes and parameter values considered in this paper.

  4. [Nonparametric method of estimating survival functions containing right-censored and interval-censored data].

    Science.gov (United States)

    Xu, Yonghong; Gao, Xiaohuan; Wang, Zhengxi

    2014-04-01

    Missing data represent a general problem in many scientific fields, especially in medical survival analysis. In dealing with censored data, interpolation is one of the important methods. However, most interpolation methods replace the censored data with exact data, which distorts the real distribution of the censored data and reduces the probability of the real data falling into the interpolation data. In order to solve this problem, we in this paper propose a nonparametric method of estimating the survival function of right-censored and interval-censored data and compare its performance to the SC (self-consistent) algorithm. Compared to the average interpolation and the nearest neighbor interpolation methods, the proposed method replaces the right-censored data with interval-censored data, and greatly improves the probability of the real data falling into the imputation interval. It then uses empirical distribution theory to estimate the survival function of right-censored and interval-censored data. The results of numerical examples and a real breast cancer data set demonstrated that the proposed method had higher accuracy and better robustness for different proportions of censored data. This paper provides a good method to compare the performance of clinical treatments with estimation of the survival data of the patients. This provides some help to medical survival data analysis.

  5. A general approach for postmortem interval based on uniformly distributed and interconnected qualitative indicators.

    Science.gov (United States)

    Matuszewski, Szymon

    2017-05-01

    There are many qualitative indicators for postmortem interval (PMI) of human or animal cadavers. When such indicators are uniformly spaced over PMI, the resultant distribution may be very useful for the estimation of PMI. Existing methods of estimation rely on indicator persistence time that is, however, difficult to estimate because of its dependence on many interacting factors, of which forensic scientists are usually unaware in casework. In this article, an approach is developed for the estimation of PMI from qualitative markers in which indicator persistence time is not used. The method involves the estimation of an interval preceding appearance of a marker on cadaver called the pre-appearance interval (PAI). PMI is delineated by PAI for two consecutive markers: the one being recorded on the cadaver (defining lower PMI) and the other that is next along the PMI timeline but yet absent on the cadaver (defining upper PMI). The approach was calibrated for use with subsequent life stages of carrion insects and tested using results of pig cadaver experiments. Results demonstrate that the presence and absence of the subsequent developmental stages of carrion insects, coupled with the estimation of their PAI, gives a reliable and easily accessible knowledge of PMI in a forensic context.

  6. Design of time interval generator based on hybrid counting method

    Energy Technology Data Exchange (ETDEWEB)

    Yao, Yuan [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Wang, Zhaoqi [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Lu, Houbing [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Hefei Electronic Engineering Institute, Hefei 230037 (China); Chen, Lian [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Jin, Ge, E-mail: goldjin@ustc.edu.cn [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China)

    2016-10-01

    Time Interval Generators (TIGs) are frequently used for the characterization or timing operations of instruments in particle physics experiments. Though some "off-the-shelf" TIGs can be employed, the necessity of a custom test system or control system makes TIGs implemented in a programmable device desirable. Nowadays, the feasibility of using Field Programmable Gate Arrays (FPGAs) to implement particle physics instrumentation has been validated in the design of Time-to-Digital Converters (TDCs) for precise time measurement. The FPGA-TDC technique is based on the architecture of the Tapped Delay Line (TDL), whose delay cells are down to a few tens of picoseconds. In this case, FPGA-based TIGs with a high delay step are preferable, allowing the implementation of customized particle physics instrumentation and other utilities on the same FPGA device. A hybrid counting method for designing TIGs with both high resolution and wide range is presented in this paper. The combination of two different counting methods realizing an integratable TIG is described in detail. A specially designed multiplexer for tap selection is emphatically introduced. The special structure of the multiplexer is devised to minimize the different additional delays caused by the unpredictable routings from different taps to the output. A Kintex-7 FPGA is used for the hybrid counting-based implementation of a TIG, providing a resolution up to 11 ps and an interval range up to 8 s.

  7. Design of time interval generator based on hybrid counting method

    Science.gov (United States)

    Yao, Yuan; Wang, Zhaoqi; Lu, Houbing; Chen, Lian; Jin, Ge

    2016-10-01

    Time Interval Generators (TIGs) are frequently used for the characterization or timing operations of instruments in particle physics experiments. Though some "off-the-shelf" TIGs can be employed, the necessity of a custom test system or control system makes TIGs implemented in a programmable device desirable. Nowadays, the feasibility of using Field Programmable Gate Arrays (FPGAs) to implement particle physics instrumentation has been validated in the design of Time-to-Digital Converters (TDCs) for precise time measurement. The FPGA-TDC technique is based on the architecture of the Tapped Delay Line (TDL), whose delay cells are down to a few tens of picoseconds. In this case, FPGA-based TIGs with a high delay step are preferable, allowing the implementation of customized particle physics instrumentation and other utilities on the same FPGA device. A hybrid counting method for designing TIGs with both high resolution and wide range is presented in this paper. The combination of two different counting methods realizing an integratable TIG is described in detail. A specially designed multiplexer for tap selection is emphatically introduced. The special structure of the multiplexer is devised to minimize the different additional delays caused by the unpredictable routings from different taps to the output. A Kintex-7 FPGA is used for the hybrid counting-based implementation of a TIG, providing a resolution up to 11 ps and an interval range up to 8 s.

  8. Adaptive Kalman Filter Based on Adjustable Sampling Interval in Burst Detection for Water Distribution System

    Directory of Open Access Journals (Sweden)

    Doo Yong Choi

    2016-04-01

    Rapid detection of bursts and leaks in water distribution systems (WDSs) can reduce the social and economic costs incurred through direct loss of water into the ground, additional energy demand for water supply, and service interruptions. Many real-time burst detection models have been developed in accordance with the use of supervisory control and data acquisition (SCADA) systems and the establishment of district meter areas (DMAs). Nonetheless, no consideration has been given to how frequently a flow meter measures and transmits data for predicting breaks and leaks in pipes. This paper analyzes the effect of sampling interval when an adaptive Kalman filter is used for detecting bursts in a WDS. A new sampling algorithm is presented that adjusts the sampling interval depending on the normalized residuals of flow after filtering. The proposed algorithm is applied to a virtual sinusoidal flow curve and real DMA flow data obtained from Jeongeup city in South Korea. The simulation results prove that the self-adjusting algorithm for determining the sampling interval is efficient and maintains reasonable accuracy in burst detection. The proposed sampling method has a significant potential for water utilities to build and operate real-time DMA monitoring systems combined with smart customer metering systems.
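
    A toy version of the idea, a scalar Kalman filter on DMA flow whose normalized residual both flags bursts and drives the next sampling interval, can be sketched as follows. The random-walk model, noise levels, thresholds, and flow readings are all assumptions, not values from the paper:

      import numpy as np

      def kalman_step(x, P, z, q, r):
          """One step of a scalar random-walk Kalman filter for DMA flow."""
          x_pred, P_pred = x, P + q                 # predict
          S = P_pred + r                            # innovation variance
          nu = (z - x_pred) / np.sqrt(S)            # normalized residual
          K = P_pred / S
          return x_pred + K * (z - x_pred), (1 - K) * P_pred, nu

      def next_interval(nu, normal=15 * 60, fast=60, threshold=3.0):
          """Sample faster while the residual looks anomalous (burst suspected)."""
          return fast if abs(nu) > threshold else normal

      x, P = 50.0, 10.0                             # initial flow estimate (m3/h)
      for z in (51.0, 49.5, 50.7, 64.0):            # last reading mimics a burst
          x, P, nu = kalman_step(x, P, z, q=0.5, r=4.0)
          print(f"z={z:5.1f}  nu={nu:+.2f}  next sample in {next_interval(nu)} s")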

  9. A new method for wavelength interval selection that intelligently optimizes the locations, widths and combinations of the intervals.

    Science.gov (United States)

    Deng, Bai-Chuan; Yun, Yong-Huan; Ma, Pan; Lin, Chen-Chen; Ren, Da-Bing; Liang, Yi-Zeng

    2015-03-21

    In this study, a new algorithm for wavelength interval selection, known as interval variable iterative space shrinkage approach (iVISSA), is proposed based on the VISSA algorithm. It combines global and local searches to iteratively and intelligently optimize the locations, widths and combinations of the spectral intervals. In the global search procedure, it inherits the merit of soft shrinkage from VISSA to search the locations and combinations of informative wavelengths, whereas in the local search procedure, it utilizes the information of continuity in spectroscopic data to determine the widths of wavelength intervals. The global and local search procedures are carried out alternatively to realize wavelength interval selection. This method was tested using three near infrared (NIR) datasets. Some high-performing wavelength selection methods, such as synergy interval partial least squares (siPLS), moving window partial least squares (MW-PLS), competitive adaptive reweighted sampling (CARS), genetic algorithm PLS (GA-PLS) and interval random frog (iRF), were used for comparison. The results show that the proposed method is very promising with good results both on prediction capability and stability. The MATLAB codes for implementing iVISSA are freely available on the website: .

  11. Statistical Inference for the Parameter of Pareto Distribution Based on Progressively Type-I Interval Censored Sample

    Institute of Scientific and Technical Information of China (English)

    Abdalroof M.S.; Zhao Zhi-wen; Wang De-hui

    2014-01-01

    In this paper, the estimation of parameters based on a progressively type-I interval censored sample from a Pareto distribution is studied. Different methods of estimation are discussed, including the mid-point approximation estimator, the maximum likelihood estimator, and the moment estimator. The estimation procedures are discussed in detail and compared via Monte Carlo simulations in terms of their biases.

  12. Interval MULTIMOORA method with target values of attributes based on interval distance and preference degree: biomaterials selection

    Science.gov (United States)

    Hafezalkotob, Arian; Hafezalkotob, Ashkan

    2016-12-01

    A target-based MADM method covers beneficial and non-beneficial attributes as well as target values for some attributes. Such techniques are considered comprehensive forms of MADM approaches, and can also be used in traditional decision-making problems in which only beneficial and non-beneficial attributes exist. In many practical selection problems, some attributes have given target values, and the values of the decision matrix and the target-based attributes may be provided as intervals. Some target-based decision-making methods have recently been developed; however, a research gap exists in the area of MADM techniques with target-based attributes under uncertainty of information. We extend the MULTIMOORA method for solving practical material selection problems in which material properties and their target values are given as interval numbers. We employ various concepts of interval computations to reduce degeneration of uncertain data. In this regard, we use interval arithmetic and introduce an innovative formula for the distance between interval numbers to create an interval target-based normalization technique. Furthermore, we use a pairwise preference matrix based on the concept of degree of preference of interval numbers to calculate the maximum, minimum, and ranking of these numbers. Two decision-making problems regarding biomaterials selection for hip and knee prostheses are discussed. Preference degree-based ranking lists for the subordinate parts of the extended MULTIMOORA method are generated by calculating the relative degrees of preference for the arranged assessment values of the biomaterials. The resultant rankings are compared with the outcomes of other target-based models in the literature.

  13. Confidence intervals for functions of coefficients of variation with bounded parameter spaces in two gamma distributions

    Directory of Open Access Journals (Sweden)

    Patarawan Sangnawakij

    2017-02-01

    Full Text Available The problem of estimating parameters of a gamma distribution has been widely studied with respect to both theory and applications. In the special case when the parameter space is bounded, the construction of a confidence interval based on the classical Neyman procedure is unsatisfactory because the information regarding the restriction of the parameter is disregarded. To address this issue, confidence intervals for the coefficient of variation of a gamma distribution were proposed. Extending to two populations, confidence intervals for the difference and the ratio of coefficients of variation with restricted parameters were presented. Monte Carlo simulations were used to investigate the performance of the proposed estimators. The results showed that the proposed confidence intervals performed better than the compared estimators in terms of expected length, especially when the coefficients of variation were close to the boundary. Additionally, two examples using real data were analyzed to illustrate the findings of the paper.

  14. Thickness distribution of multi-stage incremental forming with different forming stages and angle intervals

    Institute of Scientific and Technical Information of China (English)

    李军超; 杨芬芬; 周志强

    2015-01-01

    Although multi-stage incremental sheet forming has often been adopted instead of single-stage forming to form parts with a steep wall angle or to achieve high forming performance, it remains largely dependent on empirical designs. To investigate multi-stage forming further, the effect of the number of forming stages (n) and the angle interval between two adjacent stages (Δα) on thickness distribution was studied. Firstly, a finite element method (FEM) model of multi-stage incremental forming was established and experimentally verified. Then, based on the proposed simulation model, different strategies were adopted to form a frustum of a cone with a wall angle of 30° in order to study the thickness distribution of multi-pass forming. It is shown that the minimum thickness increases considerably and the variance of sheet thickness decreases significantly as n grows. Furthermore, with increasing Δα, the minimum thickness first increases and then decreases, with the optimal thickness distribution achieved at a Δα of 10°. Additionally, a formula is deduced to estimate the sheet thickness after multi-stage forming and is shown to be effective; the simulation results agree well with the experimental results.

  15. Interval Estimation of Stress-Strength Reliability Based on Lower Record Values from Inverse Rayleigh Distribution

    Directory of Open Access Journals (Sweden)

    Bahman Tarvirdizade

    2014-01-01

    Full Text Available We consider the estimation of stress-strength reliability based on lower record values when X and Y are independent but not identically distributed inverse Rayleigh random variables. The maximum likelihood, Bayes, and empirical Bayes estimators of R are obtained and their properties are studied. Confidence intervals, exact and approximate, as well as Bayesian credible sets for R are obtained. A real example is presented in order to illustrate the inferences discussed in the previous sections. A simulation study is conducted to investigate and compare the performance of the intervals presented in this paper and some bootstrap intervals.

  16. Assessment of histological changes in antemortem gingival tissues fixed at various time intervals: A method of estimation of postmortem interval

    Science.gov (United States)

    Mahalakshmi, V.; Gururaj, N.; Sathya, R.; Sabarinath, T. R.; Sivapathasundharam, B.; Kalaiselvan, S.

    2016-01-01

    Introduction: Conventional methods to estimate the time of death are adequate, but a histological method is yet unavailable to assess the postmortem interval (PMI). The autolytic changes that occur in unfixed antemortem gingival tissue, which are reflected histologically at an early stage, are similar to the changes that occur in postmortem tissue. These histological changes can be applied to postmortem tissue as a method to assess PMI. Aims: The aim of the study is to assess the histological changes in gingival tissue left unfixed for various time intervals and to correlate the findings with duration. Materials and Methods: Sixty gingival tissues obtained from patients following therapeutic extractions, impactions, gingivectomy, and crown lengthening procedures were used. Each tissue obtained was divided into two pieces and labeled as “A”, the control group, and “B”, the study group. Tissues labeled “A” were fixed in 10% formalin immediately, and tissues labeled “B” were placed in closed containers and fixed after 15, 30, and 45 min and 1, 2, and 4 h time intervals. Of the sixty tissues in study group “B”, ten were used for each time interval under investigation. All the fixed tissues were processed, stained, assessed, and analyzed statistically using Pearson correlation and regression analysis. Results: Histological changes appear at 15 min in unfixed antemortem tissue. At the 2 h interval, all layers, with a few cells in the basal cell layer, are involved. At the 4 h interval, loss of stratification and complete homogenization of cells in the superficial layers, with prominent changes in the basal layer, is evident. There was a positive correlation (<1.0) between the time interval and the appearance of the histological changes. Conclusion: Histological changes such as complete homogenization of cells in superficial layers and loss of epithelial architecture at 4 h in unfixed antemortem tissue may be used as a criterion to estimate PMI, after further studies.

  17. Assessment of histological changes in antemortem gingival tissues fixed at various time intervals: A method of estimation of postmortem interval

    Directory of Open Access Journals (Sweden)

    V Mahalakshmi

    2016-01-01

    Full Text Available Introduction: Conventional methods to estimate the time of death are adequate, but a histological method is yet unavailable to assess the postmortem interval (PMI). The autolytic changes that occur in unfixed antemortem gingival tissue, which are reflected histologically at an early stage, are similar to the changes that occur in postmortem tissue. These histological changes can be applied to postmortem tissue as a method to assess PMI. Aims: The aim of the study is to assess the histological changes in gingival tissue left unfixed for various time intervals and to correlate the findings with duration. Materials and Methods: Sixty gingival tissues obtained from patients following therapeutic extractions, impactions, gingivectomy, and crown lengthening procedures were used. Each tissue obtained was divided into two pieces and labeled as “A”, the control group, and “B”, the study group. Tissues labeled “A” were fixed in 10% formalin immediately, and tissues labeled “B” were placed in closed containers and fixed after 15, 30, and 45 min and 1, 2, and 4 h time intervals. Of the sixty tissues in study group “B”, ten were used for each time interval under investigation. All the fixed tissues were processed, stained, assessed, and analyzed statistically using Pearson correlation and regression analysis. Results: Histological changes appear at 15 min in unfixed antemortem tissue. At the 2 h interval, all layers, with a few cells in the basal cell layer, are involved. At the 4 h interval, loss of stratification and complete homogenization of cells in the superficial layers, with prominent changes in the basal layer, is evident. There was a positive correlation (<1.0) between the time interval and the appearance of the histological changes. Conclusion: Histological changes such as complete homogenization of cells in superficial layers and loss of epithelial architecture at 4 h in unfixed antemortem tissue may be used as a criterion to estimate PMI, after further studies.

  18. The variability of tidewater-glacier calving: origin of event-size and interval distributions

    CERN Document Server

    Chapuis, Anne

    2012-01-01

    Calving activity at the front of tidewater glaciers is characterized by a large variability in iceberg sizes and inter-event intervals. We present calving-event data obtained from continuous observations of the fronts of two tidewater glaciers on Svalbard, and show that the distributions of event sizes and inter-event intervals can be reproduced by a simple calving model focusing on the mutual interplay between calving and the destabilization of the glacier front. The event-size distributions of both the field and the model data extend over several orders of magnitude and resemble power laws. The distributions of inter-event intervals are broad, but have a less pronounced tail. In the model, the width of the size distribution increases with the calving susceptibility of the glacier front, a parameter measuring the effect of calving on the stress in the local neighborhood of the calving region. Inter-event interval distributions, in contrast, are insensitive to the calving susceptibility. Above a critical susc...

  19. Resampling methods in Microsoft Excel® for estimating reference intervals.

    Science.gov (United States)

    Theodorsson, Elvar

    2015-01-01

    Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes native functions that lend themselves well to this purpose, including recommended interpolation procedures for estimating the 2.5th and 97.5th percentiles.
The purpose of this paper is to introduce the reader to resampling estimation techniques in general, and to the use of Microsoft Excel® 2010 for estimating reference intervals in particular.
 Parametric methods are preferable to resampling methods when the distribution of observations in the reference sample is Gaussian or can be transformed to that distribution, even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and the number of reference individuals and corresponding samples is on the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples.
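
    A percentile-bootstrap sketch of the resampling idea in Python (not the paper's Excel recipe); averaging the percentile pair over resamples is one common variant, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_reference_interval(values, n_boot=1000):
    """Percentile-bootstrap estimate of the 2.5th/97.5th percentiles.

    A generic sketch: resample with replacement, take the percentile pair
    of each resample, and average the pairs over all resamples.
    """
    values = np.asarray(values, dtype=float)
    lows, highs = [], []
    for _ in range(n_boot):
        resample = rng.choice(values, size=values.size, replace=True)
        lo, hi = np.percentile(resample, [2.5, 97.5])
        lows.append(lo)
        highs.append(hi)
    return np.mean(lows), np.mean(highs)

# Example with ~40 reference samples, the situation discussed above.
ref = rng.lognormal(mean=1.0, sigma=0.3, size=40)
print(bootstrap_reference_interval(ref))
```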

  20. A novel method of Newton iteration-based interval analysis for multidisciplinary systems

    Science.gov (United States)

    Wang, Lei; Xiong, Chuang; Wang, RuiXing; Wang, XiaoJun; Wu, Di

    2017-09-01

    A Newton iteration-based interval uncertainty analysis method (NI-IUAM) is proposed to analyze the propagating effect of interval uncertainty in multidisciplinary systems. NI-IUAM decomposes a multidisciplinary system into single disciplines and utilizes a Newton iteration equation to obtain the upper and lower bounds of coupled state variables at each iterative step. NI-IUAM only needs the bounds of uncertain parameters and does not require specific distribution formats; in this way, it may greatly reduce the need for raw data. In addition, NI-IUAM can accelerate the convergence process as a result of the super-linear convergence of Newton iteration. The applicability of the proposed method is discussed, in particular the requirement that solutions obtained in each discipline be compatible in multidisciplinary systems. The validity and efficiency of NI-IUAM are demonstrated by both numerical and engineering examples.

  1. Measurements of the charged particle multiplicity distribution in restricted rapidity intervals

    CERN Document Server

    Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; Pepé-Altarelli, M; Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; Schune, M H; Simion, S; Veillet, J J; 
Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; Triggiani, G; Vannini, C; Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, Z; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G

    1995-01-01

    Charged particle multiplicity distributions have been measured with the ALEPH detector in restricted rapidity intervals |Y| ≤ 0.5, 1.0, 1.5, 2.0 along the thrust axis and also without restriction on rapidity. The distribution for the full range can be parametrized by a log-normal distribution. For smaller windows one finds a more complicated structure, which is understood to arise from perturbative effects. The negative-binomial distribution fails to describe the data both with and without the restriction on rapidity. The JETSET model is found to describe all aspects of the data, while the width predicted by HERWIG is in significant disagreement.

  2. Methods for analyzing the uncertainty of a reconstructed result in a traffic accident with interval and probabilistic traces.

    Science.gov (United States)

    Zou, Tiefang; Peng, Xulong; Wu, Wenguang; Cai, Ming

    2017-01-01

    In order to make the reconstructed result more reliable, a method named the improved probabilistic-interval method was proposed to analyze the uncertainty of a reconstructed result in a traffic accident with probabilistic and interval traces. In the method, probabilistic traces are first replaced by probabilistic sub-intervals; second, these probabilistic sub-intervals and the interval traces are combined to form many new uncertainty analysis problems with only interval traces; third, the upper and lower bounds of the reconstructed result and their probabilities are calculated for each new uncertainty analysis problem, with an algorithm proposed to shorten the time taken by this step; finally, distribution functions of the upper and lower bounds of the reconstructed result are obtained by statistical analysis. In two numerical cases, results obtained from the proposed method were almost the same as those from the Monte Carlo method, but the time taken was far less and the results were more stable. Through applying the proposed method to a real vehicle-pedestrian accident, not only can the upper and lower bounds of the impact velocity (v) be obtained, but also the probability that each bound falls in an arbitrary interval, and furthermore the probability that the interval of v is less than an arbitrary interval. It is concluded that the proposed improved probabilistic-interval method is practical.

  3. Statistical Inference for the Parameter of Rayleigh Distribution Based on Progressively Type-I Interval Censored Sample

    Institute of Scientific and Technical Information of China (English)

    Abdalroof M S; Zhao Zhi-wen; Wang De-hui

    2015-01-01

    In this paper, the estimation of parameters based on a progressively type-I interval censored sample from a Rayleigh distribution is studied. Different methods of estimation are discussed, including the mid-point approximation estimator, the maximum likelihood estimator, the moment estimator, the Bayes estimator, the sampling adjustment moment estimator, the sampling adjustment maximum likelihood estimator, and an estimator based on percentiles. The estimation procedures are discussed in detail and compared via Monte Carlo simulations in terms of their biases.

  4. Statistical distribution of the clearness index with radiation data integrated over five minute intervals

    Energy Technology Data Exchange (ETDEWEB)

    Jurado, M.; Caridad, J.M. [Universidad de Cordoba (Spain)]; Ruiz, V. [Universidad de Sevilla (Spain)]

    1995-12-31

    In this paper, the influence of the measurement interval of solar radiation data on the cumulative probability distribution of the clearness index is studied. The distribution observed in southern Spain is bimodal using 5 min data; this property fades away as the data are aggregated over larger time intervals, and it also depends on the air mass. This confirms the existence of two types of radiation, associated with clear or cloudy skies. Also, for 5 min radiation data, a new statistical model is proposed, based on a mixture of two normal distributions, which provides a good fit to the data measured in Seville. 9 refs., 3 figs., 1 tab.
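
    The two-component normal mixture can be sketched as follows; the component parameters and the synthetic data are illustrative assumptions, not the fitted values from the Seville measurements.

```python
# Sketch: fitting a two-component normal mixture to clearness-index data,
# mirroring the bimodal model described above. Data here are synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic 5-min clearness indices: a "cloudy" mode and a "clear" mode.
kt = np.concatenate([rng.normal(0.25, 0.08, 2000),
                     rng.normal(0.70, 0.06, 3000)]).clip(0, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(kt.reshape(-1, 1))
for w, mu, var in zip(gm.weights_, gm.means_.ravel(), gm.covariances_.ravel()):
    print(f"weight={w:.2f}  mean={mu:.2f}  std={var ** 0.5:.2f}")
```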

  5. NMR and interval PLS as reliable methods for determination of cholesterol in rodent lipoprotein fractions

    DEFF Research Database (Denmark)

    Kristensen, Mette; Savorani, Francesco; Ravn-Haren, Gitte

    2010-01-01

    Risk of cardiovascular disease is related to cholesterol distribution in different lipoprotein fractions. Lipoproteins in rodent model studies can only reliably be measured by time- and plasma-consuming fractionation. An alternative method to measure cholesterol distribution in the lipoprotein... fractions in rat plasma is presented in this paper. Plasma from two rat studies (n = 68) was used in determining the lipoprotein profile by an established ultracentrifugation method, and proton nuclear magnetic resonance (NMR) spectra of replicate samples were obtained. From the ultracentrifugation reference... data and the NMR spectra, an interval partial least-squares (iPLS) regression model to predict the amount of cholesterol in the different lipoprotein fractions was developed. The relative errors of the prediction models were between 12 and 33%, with correlation coefficients (r) between 0.96 and 0

  6. Resampling approach for determination of the method for reference interval calculation in clinical laboratory practice.

    Science.gov (United States)

    Pavlov, Igor Y; Wilson, Andrew R; Delgado, Julio C

    2010-08-01

    Reference intervals (RI) play a key role in the clinical interpretation of laboratory test results. Numerous articles are devoted to analyzing and discussing various methods of RI determination. The two most widely used approaches are the parametric method, which assumes data normality, and a nonparametric, rank-based procedure. The decision about which method to use is usually made arbitrarily. The goal of this study was to demonstrate that using a resampling approach for the comparison of RI determination techniques could help researchers select the right procedure. Three methods of RI calculation (parametric, transformed parametric, and quantile-based bootstrapping) were applied to multiple random samples drawn from 81 values of complement factor B observations and from a computer-simulated normally distributed population. It was shown that differences in RI between legitimate methods could be up to 20% or even more. The transformed parametric method was found to be the best for calculating the RI of non-normally distributed factor B estimations, producing an unbiased RI and the lowest confidence limits and interquartile ranges. For a simulated Gaussian population, parametric calculations, as expected, were the best; quantile-based bootstrapping produced biased results at low sample sizes, and the transformed parametric method generated heavily biased RI. The resampling approach can help compare different RI calculation methods. An algorithm showing a resampling procedure for choosing the appropriate method for RI calculations is included.

  7. Species sensitivity distribution for pentachlorophenol to aquatic organisms based on interval ecotoxicological data.

    Science.gov (United States)

    Zhao, Jinsong; Zhang, Run

    2017-11-01

    Species sensitivity distribution (SSD) models are often used to extrapolate chemicals' effects from ecotoxicological data on individual species to ecosystems, and are widely applied to derive water quality criteria or to assess ecological risk. Because of the influence of various factors, the ecotoxicological data of a specific chemical for an individual species usually fall within a range. The feasibility of directly using interval ecotoxicological data to build SSD models has not been clearly established. In the present study, by means of Bayesian statistics, the half-maximal effective concentrations (EC50) of pentachlorophenol (PCP) for 161 aquatic organisms, organized into 7 groups (single determined value, geometric mean estimation, median estimation, interval data, and combinations of single determined data with the other groups), were used to develop SSD models and to estimate the minimum sample sizes. The results showed that interval data can be directly applied to build SSD models and, when combined with single-point data, give the narrowest credible interval, indicating a stable and robust SSD model. Meanwhile, the results also implied that at least 6-14 ecotoxicological data points are required to build a stable SSD model. This suggests that the use of interval data in building SSD models can effectively enhance the availability of ecotoxicological data, reduce the uncertainty brought by sample size or point estimation, and provide a reliable way to widen the application of SSD models.

  8. Interval finite element method and its application on anti-slide stability analysis

    Institute of Scientific and Technical Information of China (English)

    SHAO Guo-jian; SU Jing-bo

    2007-01-01

    The problem that interval correlation leads to interval extension is discussed through the relationship between interval-valued functions and real-valued functions, and methods for reducing interval extension are given. Based on these ideas, the formulas of a sub-interval perturbed finite element method based on the elements are given. The number of sub-intervals is discussed and an approximate computation formula is given. At the same time, the computational precision is discussed and some measures for improving computational efficiency are given. Finally, based on the sub-interval perturbed finite element method and the anti-slide stability analysis method, a formula for computing the bounds of the stability factor is given. It provides a basis for reasonably estimating and evaluating the anti-slide stability of structures.

  9. Binomial Distribution Sample Confidence Intervals Estimation 7. Absolute Risk Reduction and ARR-like Expressions

    Directory of Open Access Journals (Sweden)

    Andrei ACHIMAŞ CADARIU

    2004-08-01

    Full Text Available Assessment of a controlled clinical trial requires interpreting key parameters such as the control event rate, experimental event rate, relative risk, absolute risk reduction, relative risk reduction, and number needed to treat when the effects of the treatment are dichotomous variables. Defined as the difference in the event rate between treatment and control groups, the absolute risk reduction is the parameter that allows computing the number needed to treat. The absolute risk reduction is computed when the experimental treatment reduces the risk of an undesirable outcome/event. In the medical literature, when the absolute risk reduction is reported with its confidence intervals, the method used is the asymptotic one, even though it is well known that it may be inadequate. The aim of this paper is to introduce and assess nine methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions. Computer implementations of the methods use the PHP language. Method comparison uses the experimental errors, the standard deviations, and the deviation relative to the imposed significance level for specified sample sizes. Six methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions were assessed using random binomial variables and random sample sizes. The experiments show that the ADAC and ADAC1 methods obtain the best overall performance in computing confidence intervals for the absolute risk reduction.
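
    For reference, a sketch of the asymptotic (Wald) confidence interval mentioned above, which the paper takes as the baseline; the trial counts are hypothetical, and the nine assessed methods are not reproduced here.

```python
# The asymptotic (Wald) confidence interval for the absolute risk reduction:
# ARR = CER - EER, with SE from the two binomial proportions.
from math import sqrt

def arr_wald_ci(events_ctrl, n_ctrl, events_trt, n_trt, z=1.96):
    cer = events_ctrl / n_ctrl          # control event rate
    eer = events_trt / n_trt            # experimental event rate
    arr = cer - eer                     # absolute risk reduction
    se = sqrt(cer * (1 - cer) / n_ctrl + eer * (1 - eer) / n_trt)
    return arr, (arr - z * se, arr + z * se)

# Hypothetical trial: 30/100 events in control, 18/100 under treatment.
arr, (lo, hi) = arr_wald_ci(events_ctrl=30, n_ctrl=100, events_trt=18, n_trt=100)
print(f"ARR={arr:.3f}, 95% CI=({lo:.3f}, {hi:.3f})")
```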

  10. The Distribution of the Interval between Events of a Cox Process with Shot Noise Intensity

    Directory of Open Access Journals (Sweden)

    Angelos Dassios

    2008-01-01

    Full Text Available Applying piecewise deterministic Markov process theory, the probability generating function of a Cox process incorporating a shot noise process as the claim intensity is obtained. We also derive the Laplace transform of the distribution of the shot noise process at claim jump times, using the stationarity assumption of the shot noise process. Based on this Laplace transform and the probability generating function of a Cox process with shot noise intensity, we obtain the distribution of the interval between events of a Cox process with shot noise intensity for insurance claims and its moments, that is, mean and variance.

  11. Applying Possibility Degree Method for Ranking Interval Numbers to Partnership Selection

    Institute of Scientific and Technical Information of China (English)

    LI Tong; ZHANG Qiang; ZHENG Tao

    2008-01-01

    A new interval number ranking approach is applied to assess the priorities of alternative partners, where the attribute values are given as interval numbers while the weight of each criterion remains an exact numerical value. After aggregation with the weighted arithmetic averaging operator, the result is still an interval number. To obtain the priorities of alternative partners, the possibility degree method for ranking interval numbers is employed, which can derive priorities from inconsistent attribute values, thus eliminating the need to adjust those values. Moreover, this method is very simple and needs little calculation. An illustrative example is given to demonstrate the method.
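
    One widely cited form of the possibility degree for ranking interval numbers can be sketched as follows; the paper's exact formula may differ in detail, and the interval-valued scores are hypothetical.

```python
# Sketch of possibility-degree ranking for interval numbers a=[aL,aU], b=[bL,bU],
# using one common form of the possibility degree P(a >= b).
def possibility_degree(a, b):
    aL, aU = a
    bL, bU = b
    width = (aU - aL) + (bU - bL)
    if width == 0:                       # both intervals degenerate to points
        return 0.5 if aL == bL else float(aL > bL)
    return min(max((aU - bL) / width, 0.0), 1.0)

# Rank three interval-valued partner scores by pairwise possibility degrees.
scores = {"P1": (0.55, 0.70), "P2": (0.60, 0.80), "P3": (0.40, 0.65)}
for i, a in scores.items():
    for j, b in scores.items():
        if i < j:
            print(f"P({i} >= {j}) = {possibility_degree(a, b):.2f}")
```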

  12. A power-law distribution of phase-locking intervals does not imply critical interaction

    CERN Document Server

    Botcharova, Maria; Berthouze, Luc

    2012-01-01

    Neural synchronisation plays a critical role in information processing, storage and transmission. Characterising the pattern of synchronisation is therefore of great interest. It has recently been suggested that the brain displays broadband criticality based on two measures of synchronisation, phase-locking intervals and global lability of synchronisation, showing power-law statistics at the critical threshold in a classical model of synchronisation. In this paper, we provide evidence that, within the limits of the model selection approach used to ascertain the presence of power-law statistics, the pooling of pairwise phase-locking intervals from a non-critically interacting system can produce a distribution that is similarly assessed as being power law. In contrast, the global lability of synchronisation measure is shown to better discriminate critical from non-critical interaction.

  13. Interval Methods for Model Qualification: Methodology and Advanced Application

    OpenAIRE

    Alexandre dit Sandretto, Julien; Trombettoni, Gilles; Daney, David

    2012-01-01

    It is often too complex to use, and sometimes impossible to obtain, an actual model in the simulation or control field. To handle a system in practice, a simplification of the real model is then necessary. This simplification relies on hypotheses made about the system or the modeling approach. In this paper, we deal with all models that can be expressed by real-valued variables involved in analytical relations and depending on parameters. We propose a method that qualifies the simplificatio...

  14. An Extended TOPSIS Method for Multiple Attribute Decision Making based on Interval Neutrosophic Uncertain Linguistic Variables

    Directory of Open Access Journals (Sweden)

    Said Broumi

    2015-03-01

    Full Text Available The interval neutrosophic uncertain linguistic variables can easily express indeterminate and inconsistent information in the real world, and TOPSIS is a very effective decision-making method with increasingly extensive applications. In this paper, we extend the TOPSIS method to deal with interval neutrosophic uncertain linguistic information, and propose an extended TOPSIS method to solve multiple attribute decision-making problems in which the attribute values take the form of interval neutrosophic uncertain linguistic variables and the attribute weights are unknown. Firstly, the operational rules and properties of the interval neutrosophic variables are introduced. Then the distance between two interval neutrosophic uncertain linguistic variables is proposed, the attribute weights are calculated by the maximizing-deviation method, and the closeness coefficients to the ideal solution are computed for each alternative. Finally, an illustrative example is given to illustrate the decision-making steps and the effectiveness of the proposed method.

  15. Estimation of drought and flood recurrence interval from historical discharge data: a case study utilising the power law distribution

    Science.gov (United States)

    Eadie, Chris; Favis-Mortlock, David

    2010-05-01

    …a power-law distribution is fitted, but a much longer recurrence interval (on the order of 1000 years) when the USA's standard LP3 method is used. In addition, Pandey et al. (1998) found that fitting a power-law distribution, compared with fitting a Generalized Extreme Value distribution, can lead to a large decrease in the predicted return period for a given flood event. Both these findings have obvious implications for river management design. Power-law distributions have been fitted to fluvial discharge data by many authors (most notably by Malamud et al., 1996 and Pandey et al., 1998), who then use these fitted distributions to estimate flow probabilities. These authors found that the power law performed as well as or better than many of the distributions currently used around the world, despite utilising fewer parameters. The power law has not, however, been officially adopted by any country for fitting to fluvial discharge data. This paper demonstrates a statistically robust method, based on Maximum Likelihood Estimation, for fitting a power-law distribution to mean daily streamflows. The fitted distribution is then used to calculate return periods, which are compared to the return periods obtained by other, more commonly used, distributions. The implications for river management, extremes of flow in particular, are then explored.
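
    A sketch of the standard closed-form maximum-likelihood estimator for a continuous power-law tail, assuming a known lower cutoff xmin; the data below are synthetic, not streamflow records.

```python
# Sketch of the maximum-likelihood fit for a continuous power-law tail,
# p(x) ~ x^(-alpha) for x >= xmin, using the standard closed-form estimator
# alpha = 1 + n / sum(log(x_i / xmin)); xmin is assumed known here.
import numpy as np

def fit_power_law_alpha(x, xmin):
    x = np.asarray(x, dtype=float)
    tail = x[x >= xmin]
    alpha = 1.0 + tail.size / np.log(tail / xmin).sum()
    return alpha, tail.size

# Synthetic "daily discharge" tail drawn from a power law with alpha = 2.5.
rng = np.random.default_rng(42)
u = rng.uniform(size=5000)
x = 10.0 * (1 - u) ** (-1 / (2.5 - 1))    # inverse-CDF sampling, xmin = 10
print(fit_power_law_alpha(x, xmin=10.0))  # alpha estimate close to 2.5
```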

  16. Assessing the complexity of short-term heartbeat interval series by distribution entropy.

    Science.gov (United States)

    Li, Peng; Liu, Chengyu; Li, Ke; Zheng, Dingchang; Liu, Changchun; Hou, Yinglong

    2015-01-01

    Complexity of heartbeat interval series is typically measured by entropy. Recent studies have found that sample entropy (SampEn) and fuzzy entropy (FuzzyEn) essentially quantify randomness, which may not be identical to complexity. Additionally, these entropy measures depend heavily on predetermined parameters and are confined by data length. Aiming at improving the robustness of complexity assessment for short-term RR interval series, this study developed a novel measure: distribution entropy (DistEn). The DistEn takes full advantage of the inherent information underlying the vector-to-vector distances in the state space by probability density estimation. The performance of DistEn was examined on theoretical data and experimental short-term RR interval series. Results showed that DistEn correctly ranked the complexity of simulated chaotic series and Gaussian noise series. DistEn had relatively low sensitivity to the predetermined parameters and remained stable even when quantifying the complexity of extremely short series. Further analysis showed that DistEn indicated the loss of complexity in both healthy aging and heart failure patients (both p < 0.01), whereas neither SampEn nor FuzzyEn achieved comparable results (all p ≥ 0.05). This study suggests that DistEn could be a promising measure for prompt clinical examination of cardiovascular function.
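
    The DistEn construction described above can be sketched directly: embed the series, take all inter-vector Chebyshev distances, histogram them, and compute the normalized Shannon entropy of the histogram. The embedding dimension m and bin count M below are conventional choices, assumed here rather than taken from the paper.

```python
# Sketch of distribution entropy (DistEn) for a short series.
import numpy as np

def dist_en(x, m=2, M=64):
    x = np.asarray(x, dtype=float)
    n = x.size - m + 1
    vectors = np.lib.stride_tricks.sliding_window_view(x, m)  # state-space vectors
    # Chebyshev (max-norm) distances between all distinct vector pairs.
    d = np.abs(vectors[:, None, :] - vectors[None, :, :]).max(axis=2)
    d = d[np.triu_indices(n, k=1)]
    p, _ = np.histogram(d, bins=M)      # empirical density of the distances
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum() / np.log2(M)  # normalized Shannon entropy

rng = np.random.default_rng(0)
print(dist_en(rng.normal(size=300)))    # white noise: relatively high DistEn
```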

  17. Assessing Point and Interval Estimation for the Mediating Effect: Distribution of the Product, Nonparametric Bootstrap, and Markov Chain Monte Carlo Methods

    Institute of Scientific and Technical Information of China (English)

    方杰; 张敏强

    2012-01-01

    Because the sampling distribution of the mediating effect ab is often non-normal, researchers have recently proposed three classes of methods that impose no restrictions on the sampling distribution of ab and are suitable for small and medium samples: the distribution-of-the-product method, the nonparametric bootstrap, and Markov chain Monte Carlo (MCMC) methods. A simulation study compared the performance of these three classes of methods in mediation analysis. Results showed that: 1) the MCMC method with informative priors gave the most accurate point estimates of ab; 2) the MCMC method with informative priors had the highest statistical power, at the cost of underestimating the Type I error rate, while the bias-corrected nonparametric percentile bootstrap ranked second in power, at the cost of overestimating the Type I error rate; and 3) the MCMC method with informative priors gave the most accurate interval estimates of the mediating effect. The results suggest using the MCMC method with informative priors when prior information is available, and the bias-corrected nonparametric percentile bootstrap when it is not. This study extends MacKinnon and colleagues' (MacKinnon, Lockwood & Williams, 2004; Yuan & MacKinnon, 2009) work by conducting a simulation using R software. Three factors were considered in the simulation design: (a) sample size (N = 25, 50, 100, 200, 1000); (b) parameter combinations (a = b = 0; a = 0.39, b = 0; a = 0, b = 0.59; a = b = 0.14; a = b = 0.39; a = b = 0.59); and (c) method for assessing mediation (distribution-of-the-product method, nonparametric percentile bootstrap, and bias-corrected nonparametric percentile bootstrap).

  18. An efficient method of wavelength interval selection based on random frog for multivariate spectral calibration

    Science.gov (United States)

    Yun, Yong-Huan; Li, Hong-Dong; Wood, Leslie R. E.; Fan, Wei; Wang, Jia-Jun; Cao, Dong-Sheng; Xu, Qing-Song; Liang, Yi-Zeng

    2013-07-01

    Wavelength selection is a critical step for producing better prediction performance when applied to spectral data. Considering that vibrational and rotational spectra have continuous spectral bands, we propose a novel method of wavelength interval selection based on random frog, called interval random frog (iRF). To obtain all possible continuous intervals, the spectra are first divided into intervals by a moving window of fixed width over the whole spectrum. These overlapping intervals are ranked by applying random frog coupled with PLS, and the optimal ones are chosen. This method has been applied to two near-infrared spectral datasets, displaying higher efficiency in wavelength interval selection than other methods. The source code of iRF can be freely downloaded for academic research at the website: http://code.google.com/p/multivariate-calibration/downloads/list.

  19. Non-Gaussian distributions of melodic intervals in music: The Lévy-stable approximation

    Science.gov (United States)

    Niklasson, Gunnar A.; Niklasson, Maria H.

    2015-11-01

    The analysis of structural patterns in music is of interest in order to increase our fundamental understanding of music, as well as for devising algorithms for computer-generated music, so-called algorithmic composition. Musical melodies can be analyzed in terms of a “music walk” between the pitches of successive tones in a notescript, in analogy with the “random walk” model commonly used in physics. We find that the distribution of melodic intervals between tones can be approximated with a Lévy-stable distribution. Since music also exhibits self-affine scaling, we propose that the “music walk” should be modelled as a Lévy motion. We find that the Lévy motion model captures basic structural patterns in classical as well as in folk music.

  20. Interval-Censored Time-to-Event Data Methods and Applications

    CERN Document Server

    Chen, Ding-Geng

    2012-01-01

    Interval-Censored Time-to-Event Data: Methods and Applications collects the most recent techniques, models, and computational tools for interval-censored time-to-event data. Top biostatisticians from academia, biopharmaceutical industries, and government agencies discuss how these advances are impacting clinical trials and biomedical research. Divided into three parts, the book begins with an overview of interval-censored data modeling, including nonparametric estimation, survival functions, regression analysis, multivariate data analysis, competing risks analysis, and other models for interval-censored data.

  1. Molecular characterization of a genomic interval with highly uneven recombination distribution on maize chromosome 10 L.

    Science.gov (United States)

    Wang, Gang; Xu, Jianping; Tang, Yuanping; Zhou, Liangliang; Wang, Fei; Xu, Zhengkai; Song, Rentao

    2011-09-01

    Homologous recombination in meiosis provides the evolutionary driving force in eukaryotic organisms by generating genetic variability. Meiotic recombination does not always occur evenly across the chromosome, and therefore genetic and physical distances are not consistently in proportion. We discovered a 278 kb interval on the long arm of chromosome 10 (10 L) through analysis of 13,933 descendants of a backcross population. The recombination events were distributed unevenly in the interval, and the ratio of genetic to physical distance in the interval fluctuated about 47-fold. With the assistance of molecular markers, the interval was divided into several subintervals for further characterization. In agreement with previous observations, high gene-density regions such as subintervals A and B were also recombination hot subintervals, and the repetitive-sequence-rich region, subinterval C, was found to be recombinationally inert at the detection level of the study. However, we found an unusual subinterval D, in which a 72-kb region contained 6 genes. The gene density of subinterval D was 5.8 times the genome-wide average, yet the ratio of genetic to physical distance in subinterval D was 0.58 cM/Mb, only about 3/4 of the genome average. We carried out an analysis of sequence polymorphisms and methylation status in subinterval D, and the potential causes of recombination suppression are discussed. This study provides another case of detailed genetic analysis of an unusual recombination region in the maize genome.

  2. A Comparison of Two Reading Fluency Methods: Repeated Readings to a Fluency Criterion and Interval Sprinting

    Science.gov (United States)

    Kostewicz, Douglas E.; Kubina, Richard M., Jr.

    2010-01-01

    Teachers have used the method of repeated readings to build oral reading fluency in students with and without special needs. A new fluency-building intervention called interval sprinting uses shorter timing intervals (i.e., sprints) across a passage. This study used an alternating treatment design to compare repeated readings and interval sprinting.

  3. Applications of interval arithmetic in solving polynomial equations by Wu's elimination method

    Institute of Scientific and Technical Information of China (English)

    CHEN; Falai; YANG; Wu

    2005-01-01

    Wu's elimination method is an important method for solving multivariate polynomial equations. In this paper, we apply interval arithmetic to Wu's method and convert the problem of solving polynomial equations into that of solving interval polynomial equations. Parallel results, such as the zero-decomposition theorem, are obtained for interval polynomial equations. The advantages of the new approach are two-fold: first, the problem of numerical instability arising from floating-point arithmetic is largely overcome; second, the low efficiency of the algorithm caused by the large intermediate coefficients introduced by exact computation is dramatically improved. Some examples are provided to illustrate the effectiveness of the proposed algorithm.
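
    A minimal sketch of the interval arithmetic such an approach builds on; a real implementation would also use outward (directed) rounding, which is omitted here.

```python
# Minimal sketch of interval arithmetic: each operation returns an interval
# guaranteed to contain all pointwise results of the operation.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Evaluate p(x) = x*x + x over x in [-1, 2]; note the overestimation
# ("interval extension") that such algorithms work to reduce.
x = Interval(-1, 2)
print(x * x + x)   # [-3, 6], wider than the true range [-0.25, 6]
```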

  4. Comparison of the methods for determination of calibration and verification intervals of measuring devices

    Directory of Open Access Journals (Sweden)

    Toteva Pavlina

    2017-01-01

    Full Text Available The paper presents different methods for determining and optimising the verification intervals of technical devices for monitoring and measurement, based on the requirements of some widely used international standards, e.g. ISO 9001, ISO/IEC 17020, ISO/IEC 17025, etc., maintained by various organizations using measuring devices in practice. A comparative analysis of the reviewed methods is conducted in terms of opportunities for assessing the adequacy of the interval(s) for calibration of measuring devices and for their optimisation as accepted by an organization: an extension or reduction depending on the obtained results. The advantages and disadvantages of the reviewed methods are discussed, and recommendations for their applicability are provided.

  5. An efficient hybrid reliability analysis method with random and interval variables

    Science.gov (United States)

    Xie, Shaojun; Pan, Baisong; Du, Xiaoping

    2016-09-01

    Random and interval variables often coexist, and interval variables make reliability analysis much more computationally intensive. This work develops a new hybrid reliability analysis method in which the probability analysis (PA) loop and the interval analysis (IA) loop are decomposed into two separate loops. An efficient PA algorithm is employed, and a new efficient IA method is developed. The new IA method consists of two stages. The first stage handles monotonic limit-state functions. If the limit-state function is not monotonic, the second stage is triggered, in which the limit-state function is sequentially approximated with a second-order form and the gradient projection method is applied to solve for the extreme responses of the limit-state function with respect to the interval variables. The efficiency and accuracy of the proposed method are demonstrated by three examples.

  6. Two Interval Estimates on the Parameter of Negative Binomial Distribution

    Institute of Scientific and Technical Information of China (English)

    姜培华; 范国良

    2012-01-01

    The methods of interval estimation for the parameter of the negative binomial distribution are studied, and two such methods are given. First, an exact interval estimate for the parameter is put forward; second, a large-sample approximate interval estimate is derived. Finally, the application of these interval estimation methods is illustrated through numerical examples.

  7. Simple methods of determining confidence intervals for functions of estimates in published results.

    Directory of Open Access Journals (Sweden)

    Garrett Fitzmaurice

    Full Text Available Often, the reader of a published paper is interested in a comparison of parameters that has not been presented. It is not possible to make inferences beyond point estimation since the standard error for the contrast of the estimated parameters depends upon the (unreported correlation. This study explores approaches to obtain valid confidence intervals when the correlation [Formula: see text] is unknown. We illustrate three proposed approaches using data from the National Health Interview Survey. The three approaches include the Bonferroni method and the standard confidence interval assuming [Formula: see text] (most conservative or [Formula: see text] (when the correlation is known to be non-negative. The Bonferroni approach is found to be the most conservative. For the difference in two estimated parameter, the standard confidence interval assuming [Formula: see text] yields a 95% confidence interval that is approximately 12.5% narrower than the Bonferroni confidence interval; when the correlation is known to be positive, the standard 95% confidence interval assuming [Formula: see text] is approximately 38% narrower than the Bonferroni. In summary, this article demonstrates simple methods to determine confidence intervals for unreported comparisons. We suggest use of the standard confidence interval assuming [Formula: see text] if no information is available or [Formula: see text] if the correlation is known to be non-negative.

  8. Application of Interval Multi-attribute Decision-Making Method to Aeroengine Performance Ranking

    Institute of Scientific and Technical Information of China (English)

    Zhang Haijun; Zuo Hongfu; Liang Jian

    2006-01-01

    In view of the uncertainty of the monitored performance parameters of aeroengines, the fluctuation range of the monitored information during a period is taken as interval numbers, and the interval multi-attribute decision-making method is employed to predict the performance of aeroengines. The synthetic weights of interval numbers are obtained by calculating the deviation degree and the possibility degree. As an example of application, 5 performance parameters monitored on 10 CF6 aeroengines of China Eastern Airlines Co., Ltd are adopted as decision attributes to verify the algorithm. The obtained synthetic ranking result shows the effectiveness and rationality of the proposed method in reflecting the performance status of aeroengines.

  9. An Approximate Interval Estimate on the Parameter of Negative Binomial Distribution

    Institute of Scientific and Technical Information of China (English)

    姜培华

    2012-01-01

    With the aid of the limiting relationship between the negative binomial distribution and the chi-square distribution, an approximate interval estimate is derived for the case where the parameter p is small, and the application of this interval estimation method is illustrated through a numerical example.

  10. A hybrid Markov chain-von Mises density model for the drug-dosing interval and drug holiday distributions.

    Science.gov (United States)

    Fellows, Kelly; Rodriguez-Cruz, Vivian; Covelli, Jenna; Droopad, Alyssa; Alexander, Sheril; Ramanathan, Murali

    2015-03-01

    Lack of adherence is a frequent cause of hospitalizations, but its effects on dosing patterns have not been extensively investigated. The purpose of this work was to critically evaluate a novel pharmacometric model for deriving the relationships of adherence to dosing patterns and the dosing-interval distribution. The hybrid stochastic model combines a Markov chain process with the von Mises distribution. The model was challenged with electronic medication monitoring data from 207 hypertension patients and against 5-year persistence data. The model estimates distributions of dosing runs, drug holidays, and dosing intervals. Drug holidays, which can vary between individuals with the same adherence, were characterized by the patient cooperativity index parameter. The drug holiday and dosing run distributions deviate markedly from normality. The dosing-interval distribution exhibits complex patterns of multimodality and can be long-tailed. Dosing patterns are an important but under-recognized covariate for explaining within-individual variance in drug concentrations.
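
    The hybrid construction can be sketched as follows: a two-state Markov chain governs whether each scheduled dose is taken, and a von Mises density (a circular normal on the 24 h clock) jitters the time of each taken dose. All transition probabilities and the concentration parameter are illustrative assumptions, not fitted values from the paper.

```python
# Sketch of a Markov chain + von Mises dosing model.
import numpy as np
from scipy.stats import vonmises

rng = np.random.default_rng(0)
P = {"take": {"take": 0.9, "miss": 0.1},   # persistence in adherence
     "miss": {"take": 0.6, "miss": 0.4}}   # drug holidays tend to extend

state, times_h = "take", []
for day in range(365):
    if state == "take":
        # Dose nominally at hour 8; kappa controls timing regularity.
        jitter = vonmises.rvs(kappa=8.0, random_state=rng) * 24 / (2 * np.pi)
        times_h.append(day * 24 + 8 + jitter)
    state = rng.choice(["take", "miss"], p=list(P[state].values()))

intervals = np.diff(times_h)   # multimodal: peaks near 24 h, 48 h, ...
print(np.round(intervals[:10], 1))
```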

  11. Comparative study of two linearization methods for time intervals generation of SVPWM technique

    Directory of Open Access Journals (Sweden)

    Khaled N. Faris

    2016-12-01

    In this paper, a comparative study of two linearization methods for generating the time intervals of the SVPWM technique is carried out. The proposed linearization methods achieve a minimum computational time compared with the trigonometric sine function, which is the basis for the time-interval calculations of the SVPWM technique. The first linearization method is based on a first-order equation, and the second is a Takagi-Sugeno fuzzy modeling system. The comparative study includes the accuracy of the two models; a simulation model is also carried out for current THD estimation using the two proposed methods, compared with the current THD generated by SVPWM based on the trigonometric sine function.

  12. Interval model updating using perturbation method and Radial Basis Function neural networks

    Science.gov (United States)

    Deng, Zhongmin; Guo, Zhaopu; Zhang, Xinjie

    2017-02-01

    In recent years, stochastic model updating techniques have been applied to the quantification of uncertainties inherently existing in real-world engineering structures. However, in engineering practice, probability density functions of structural parameters are often unavailable due to insufficient information about a structural system. In this circumstance, interval analysis shows a significant advantage in handling uncertain problems, since only the upper and lower bounds of inputs and outputs are defined. To this end, a new method for interval identification of structural parameters is proposed using the first-order perturbation method and Radial Basis Function (RBF) neural networks. In the perturbation method, each random variable is denoted as a perturbation around the mean value of the interval of each parameter, and these terms can be used in a two-step deterministic updating sense. Interval model updating equations are then developed on the basis of the perturbation technique. The two-step method is used for updating the mean values of the structural parameters and subsequently estimating the interval radii. Experimental and numerical case studies are given to illustrate and verify the proposed method for the interval identification of structural parameters.

  13. Efficient Methods for Stable Distributions

    Science.gov (United States)

    2007-11-02

    …are used, corresponding to the common values used in digital signal processing. Five new functions for discrete/quantized stable distributions were … written. • sgendiscrete generates discrete stable random variates. It works by generating continuous stable random variables using the Chambers-Mallows … with stable distributions. It allows engineers and scientists to analyze data and work with stable distributions within the common MATLAB environment.
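    The Chambers-Mallows(-Stuck) transformation mentioned in this record is a standard way to generate stable variates; the following hedged sketch implements it for the alpha != 1 case in the usual parameterization.

```python
# Chambers-Mallows-Stuck generator for standard stable variates, alpha != 1.
import numpy as np

def stable_cms(alpha, beta, size, rng=np.random.default_rng()):
    """Standard S(alpha, beta) variates, alpha in (0, 2], alpha != 1, |beta| <= 1."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform angle
    W = rng.exponential(1.0, size)                 # unit exponential
    t = beta * np.tan(np.pi * alpha / 2)
    B = np.arctan(t) / alpha
    S = (1 + t ** 2) ** (1 / (2 * alpha))
    return (S * np.sin(alpha * (V + B)) / np.cos(V) ** (1 / alpha)
            * (np.cos(V - alpha * (V + B)) / W) ** ((1 - alpha) / alpha))

x = stable_cms(1.5, 0.0, 100_000)
print(f"median {np.median(x):.3f} (symmetric case, should be near 0)")
```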

  14. A modified hybrid uncertain analysis method for dynamic response field of the LSOAAC with random and interval parameters

    Science.gov (United States)

    Zi, Bin; Zhou, Bin

    2016-07-01

    For the prediction of the dynamic response field of the luffing system of an automobile crane (LSOAAC) with random and interval parameters, a hybrid uncertain model is introduced. In the hybrid uncertain model, the parameters with known probability distributions are modeled as random variables, whereas the parameters with only lower and upper bounds are modeled as interval variables instead of given precise values. Based on the hybrid uncertain model, the hybrid uncertain dynamic response equilibrium equation, in which different random and interval parameters are simultaneously included in the input and output terms, is constructed. Then a modified hybrid uncertain analysis method (MHUAM) is proposed. In the MHUAM, the dynamic response expression of the LSOAAC is developed based on the random interval perturbation method, the first-order Taylor series expansion, and the first-order Neumann series. Moreover, the extrema of the bounds of the dynamic response are determined by the random interval moment method and a monotonic analysis technique. Compared with the hybrid Monte Carlo method (HMCM) and the interval perturbation method (IPM), numerical results show the feasibility and efficiency of the MHUAM for solving hybrid LSOAAC problems. The effects of different uncertain models and parameters on the LSOAAC response field are also investigated in depth, and numerical results indicate that the impact made by the randomness in the thrust of the luffing cylinder F is larger than that made by the gravity of the weight in suspension Q. In addition, the impact made by the uncertainty in the displacement between the lower end of the lifting arm and the luffing cylinder a is larger than that made by the length of the lifting arm L.

  15. Numerical solution of optimal control problems using multiple-interval integral Gegenbauer pseudospectral methods

    Science.gov (United States)

    Tang, Xiaojun

    2016-04-01

    The main purpose of this work is to provide multiple-interval integral Gegenbauer pseudospectral methods for solving optimal control problems. The recently developed single-interval integral Gauss and flipped-Radau pseudospectral methods can be viewed as special cases of the proposed methods. We present an exact and efficient approach to compute the mesh pseudospectral integration matrices for the Gegenbauer-Gauss and flipped Gegenbauer-Gauss-Radau points. Numerical results on benchmark optimal control problems confirm the ability of the proposed methods to obtain highly accurate solutions.

  16. INTERVAL FINITE VOLUME METHOD FOR UNCERTAINTY SIMULATION OF TWO-DIMENSIONAL RIVER WATER QUALITY

    Institute of Scientific and Technical Information of China (English)

    HE Li; ZENG Guang-ming; HUANG Guo-he; LU Hong-wei

    2004-01-01

    Under interval uncertainties, an Interval Finite Volume Method (IFVM) was developed by incorporating the discretization form of the finite volume method with interval algebra theory, to simulate two-dimensional river water quality when effective data on flow velocity and discharge are lacking. The IFVM was applied to a segment of the Xiangjiang River because the Hunan Inland Waterway Multipurpose Project could not begin until its environmental impact assessment was completed. The simulation results suggest rather apparent pollution zones of BOD5 downstream of the Dongqiaogang discharger and of COD downstream of the Xiaoxiangjie discharger, but the pollution sources have no impact on the safety of the three water plants located in this river segment. Although the developed IFVM still requires refinement, it is a powerful tool under interval uncertainties for water environmental impact assessment, risk analysis, and water quality planning, in addition to the water quality simulation studied in this paper.
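    The interval algebra underlying the IFVM reduces to a few arithmetic rules; a minimal sketch (with illustrative operand values, not the river model) is given below.

```python
# Minimal interval arithmetic: the basic rules applied when finite-volume
# coefficients become intervals rather than point values.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, o):  return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):  return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        c = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(c), max(c))

# e.g. an uncertain flow velocity times an uncertain concentration gradient
flux = Interval(0.8, 1.2) * Interval(2.0, 2.5)
print(flux)   # Interval(lo=1.6, hi=3.0)
```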

  17. A Method to Compute Multiplicity Corrected Confidence Intervals for Odds Ratios and Other Relative Effect Estimates

    OpenAIRE

    Jimmy Thomas Efird; Susan Searles Nielsen

    2008-01-01

    Epidemiological studies commonly test multiple null hypotheses. In some situations it may be appropriate to account for multiplicity using statistical methodology rather than simply interpreting results with greater caution as the number of comparisons increases. Given the one-to-one relationship that exists between confidence intervals and hypothesis tests, we derive a method based upon the Hochberg step-up procedure to obtain multiplicity corrected confidence intervals (CI) for odds ratios ...

  18. Distribution Entropy (DistEn): A complexity measure to detect arrhythmia from short length RR interval time series.

    Science.gov (United States)

    Karmakar, Chandan; Udhayakumar, Radhagayathri K; Palaniswami, Marimuthu

    2015-01-01

    Heart rate complexity analysis is a powerful non-invasive means to diagnose several cardiac ailments. Non-linear tools of complexity measurement are indispensable for capturing the complete non-linear behavior of physiological signals. The most popular non-linear tools for measuring signal complexity are entropy measures such as Approximate entropy (ApEn) and Sample entropy (SampEn). However, these methods can become unreliable and inaccurate, particularly for short data lengths. Recently, a novel complexity measure called Distribution Entropy (DistEn) was introduced, which showed reliable performance in capturing the complexity of both short-term synthetic and short-term physiologic data. This study aims to i) examine the competence of DistEn in discriminating Arrhythmia from Normal sinus rhythm (NSR) subjects, using RR interval time series data; ii) explore the level of consistency of DistEn with data length N; and iii) compare the performance of DistEn with ApEn and SampEn. Sixty-six RR interval time series belonging to two groups of cardiac conditions, 'Arrhythmia' and 'NSR', were used for the analysis. The data length N was varied from 50 to 1000 beats with embedding dimension m = 2 for all entropy measurements. The maximum ROC areas obtained using ApEn, SampEn, and DistEn were 0.83, 0.86, and 0.94 for data lengths of 1000, 1000, and 500 beats, respectively. The results show that DistEn undoubtedly exhibits a consistently high performance as a classification feature in comparison with ApEn and SampEn. DistEn therefore shows promise as a biomarker for detecting Arrhythmia from short-length RR interval data.
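    For concreteness, a hedged reimplementation of DistEn as commonly described in the literature follows: embed the series with dimension m, take all pairwise Chebyshev distances, and compute the normalized Shannon entropy of their histogram (the bin count here is an assumption).

```python
# Distribution Entropy (DistEn) sketch: normalized entropy of the empirical
# distribution of pairwise Chebyshev distances between embedded vectors.
import numpy as np

def dist_en(x, m=2, bins=512):
    x = np.asarray(x, dtype=float)
    n = len(x) - m + 1
    emb = np.lib.stride_tricks.sliding_window_view(x, m)   # (n, m) vectors
    # Chebyshev (max-norm) distance between every pair of embedded vectors
    d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
    d = d[np.triu_indices(n, k=1)]                         # i < j pairs only
    p, _ = np.histogram(d, bins=bins)
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum() / np.log2(bins)

rr = np.random.default_rng(2).normal(0.8, 0.05, 500)       # synthetic RR series
print(f"DistEn = {dist_en(rr):.3f}")
```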

  19. Effect of the revisit interval and temporal upscaling methods on the accuracy of remotely sensed evapotranspiration estimates

    Science.gov (United States)

    Alfieri, Joseph G.; Anderson, Martha C.; Kustas, William P.; Cammalleri, Carmelo

    2017-01-01

    Accurate spatially distributed estimates of actual evapotranspiration (ET) derived from remotely sensed data are critical to a broad range of practical and operational applications. However, due to lengthy return intervals and cloud cover, data acquisition is not continuous over time, particularly for satellite sensors operating at medium (~100 m) or finer resolutions. To fill the data gaps between clear-sky acquisitions, interpolation methods that exploit the relationship between ET and other environmental properties that can be monitored continuously are often used. This study sought to evaluate the accuracy of this approach, commonly referred to as temporal upscaling, as a function of satellite revisit interval. Using data collected at 20 Ameriflux sites distributed throughout the contiguous United States and representing four distinct land cover types (cropland, grassland, forest, and open canopy) as a proxy for perfect retrievals on satellite overpass dates, this study assesses daily ET estimates derived using five different reference quantities (incident solar radiation, net radiation, available energy, reference ET, and equilibrium latent heat flux) and three different interpolation methods (linear, cubic spline, and Hermite spline). Not only did the analyses find that the temporal autocorrelation, i.e., persistence, of all the reference quantities was short, they also found that the land cover types with the greatest ET exhibited the least persistence. This carries over to the error associated with both the various scaled quantities and the flux estimates. In terms of both the root mean square error (RMSE) and mean absolute error (MAE), the errors increased rapidly with increasing return interval, following a logarithmic relationship. Again, the land cover types with the greatest ET showed the largest errors. Moreover, using a threshold of 20% relative error, this study indicates that a return interval of no more than 5 days is required.
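    The temporal-upscaling idea can be sketched in a few lines: assume the ratio of ET to a continuously measured reference quantity varies slowly, interpolate that ratio between overpasses, and reconstruct daily ET. The data below are synthetic and the revisit interval is a free parameter, so this illustrates only the mechanics, not the study's results.

```python
# Temporal upscaling sketch: interpolate the ET fraction between clear-sky
# overpasses and scale by a continuously measured reference (solar radiation).
import numpy as np

days = np.arange(60)
rs = 20 + 5 * np.sin(2 * np.pi * days / 30)        # daily solar radiation (proxy)
et_true = (0.12 + 0.02 * np.sin(2 * np.pi * days / 60)) * rs   # synthetic "truth"

revisit = 5                                        # satellite return interval [days]
obs_days = days[::revisit]
frac_obs = et_true[obs_days] / rs[obs_days]        # ET fraction at overpasses

frac_daily = np.interp(days, obs_days, frac_obs)   # linear interpolation of ratio
et_est = frac_daily * rs
rmse = np.sqrt(np.mean((et_est - et_true) ** 2))
print(f"revisit {revisit} d -> RMSE {rmse:.3f}")
```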

  20. Global Robust Stability of Switched Interval Neural Networks with Discrete and Distributed Time-Varying Delays of Neural Type

    Directory of Open Access Journals (Sweden)

    Huaiqin Wu

    2012-01-01

    By combining the theories of switched systems and interval neural networks, a mathematical model of switched interval neural networks with discrete and distributed time-varying delays of neural type is presented. A set of interval parameter uncertainty neural networks with discrete and distributed time-varying delays of neural type are used as the individual subsystems, and an arbitrary switching rule is assumed to coordinate the switching between these networks. By applying the augmented Lyapunov-Krasovskii functional approach and linear matrix inequality (LMI) techniques, a delay-dependent criterion is derived in terms of LMIs to ensure that such switched interval neural networks are globally robustly asymptotically stable. The unknown gain matrix is determined by solving these delay-dependent LMIs. Finally, an illustrative example is given to demonstrate the validity of the theoretical results.

  1. Hydrogen Production Technologies Evaluation Based on Interval-Valued Intuitionistic Fuzzy Multiattribute Decision Making Method

    Directory of Open Access Journals (Sweden)

    Dejian Yu

    2014-01-01

    We establish a decision making model for evaluating hydrogen production technologies in China, based on interval-valued intuitionistic fuzzy set theory. First, we propose a series of interaction interval-valued intuitionistic fuzzy aggregation operators and compare them with some widely used and cited aggregation operators. In particular, we focus on the key issue of the relationships between the proposed operators and existing operators, for a clear understanding of the motivation for proposing these interaction operators. This research then studies a group decision making method for determining the best hydrogen production technologies using the interval-valued intuitionistic fuzzy approach. The results of this paper are more scientific for two reasons. First, the interval-valued intuitionistic fuzzy approach applied in this paper is more suitable than other approaches for expressing the decision maker's preference information. Second, the results are obtained by the interaction between the membership degree interval and the nonmembership degree interval. Additionally, we apply this approach to evaluate the hydrogen production technologies in China and compare it with other methods.

  2. Numerical modeling of skin tissue heating using the interval finite difference method.

    Science.gov (United States)

    Mochnacki, B; Belkhayat, Alicja Piasecka

    2013-09-01

    Numerical analysis of heat transfer processes in a nonhomogeneous biological tissue domain is presented. In particular, the skin tissue domain subjected to an external heat source is considered. The problem is treated as axially symmetric (this follows from the mathematical form of the function describing the external heat source). Thermophysical parameters of the sub-domains (volumetric specific heat, thermal conductivity, perfusion coefficient, etc.) are given as interval numbers. The problem is solved using the interval finite difference method based on the rules of directed interval arithmetic; that is, at the stage of FDM algorithm construction, the mathematical manipulations are carried out on interval numbers. In the final part of the paper the results of numerical computations are shown, and in particular the problem of the admissible thermal dose is analyzed.

  3. A Monte Carlo Study of Eight Confidence Interval Methods for Coefficient Alpha

    Science.gov (United States)

    Romano, Jeanine L.; Kromrey, Jeffrey D.; Hibbard, Susan T.

    2010-01-01

    The purpose of this research is to examine eight of the different methods for computing confidence intervals around alpha that have been proposed to determine which of these, if any, is the most accurate and precise. Monte Carlo methods were used to simulate samples under known and controlled population conditions. In general, the differences in…

  4. A new method and instrument for accurately measuring interval between ultrashort pulses

    Institute of Scientific and Technical Information of China (English)

    Zhonggang Ji; Yuxin Leng; Yunpei Deng; Bin Tang; Haihe Lu; Ruxin Li; Zhizhan Xu

    2005-01-01

    Using the concept of second-order autocorrelation, a novel method and instrument for accurately measuring the interval between two linearly polarized ultrashort pulses in real time are presented. Experiments demonstrated that the measuring method and instrument are simple and accurate (measurement error < 5 fs). During measurement there are no moving elements, which eliminates dynamic measurement error.

  5. Solving the Interval Riccati differential equation by Wavelet operational matrix method

    Directory of Open Access Journals (Sweden)

    N. Ahangari Ghadimi

    2016-03-01

    The Riccati differential equation is important in many fields of engineering and the applied sciences, so many methods have recently been proposed to solve it. The Haar wavelet operational matrix is one of the effective tools for solving this equation, being very simple and easy to apply compared with other approaches. In this paper, we solve the nonlinear Riccati differential equation with an interval initial condition. First, we simplify the problem by using block pulse functions to expand the Haar wavelet. There are three cases for each interval, but the equation can be solved for positive interval Haar coefficients. The results reveal that the proposed method is very effective and simple.

  6. The Interval Slope Method for Long-Term Forecasting of Stock Price Trends

    Directory of Open Access Journals (Sweden)

    Chun-xue Nie

    2016-01-01

    A stock price is a typical but complex type of time series data. Effective prediction of long-term time series data can be used to schedule an investment strategy and obtain higher profit. Due to economic, environmental, and other factors, it is very difficult to obtain a precise long-term stock price prediction. The exponentially segmented pattern (ESP) is introduced here and used to predict the fluctuation of different stock data over five future prediction intervals. A new feature of stock pricing over a subinterval, named the interval slope, can characterize fluctuations in stock price over specific periods. The cumulative distribution function (CDF) of the MSE was compared with those of MMSE-BC and SVR. We conclude that the interval slope developed here can capture more complex dynamics of stock price trends. The mean stock price can then be predicted over specific time intervals relatively accurately, with multiple mean values over time intervals used to express the time series in the long term. In this way, the prediction of long-term stock prices can be more precise and the accumulation of errors can be prevented.

  7. Calculation Methods for Wallenius’ Noncentral Hypergeometric Distribution

    DEFF Research Database (Denmark)

    Fog, Agner

    2008-01-01

    … is the conditional distribution of independent binomial variates given their sum. No reliable calculation method for Wallenius' noncentral hypergeometric distribution has hitherto been described in the literature. Several new methods for calculating probabilities from Wallenius' noncentral hypergeometric distribution are derived. Range of applicability, numerical problems, and efficiency are discussed for each method. Approximations to the mean and variance are also discussed. This distribution has important applications in models of biased sampling and in models of evolutionary systems.
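    One route to these probabilities, shown below as a hedged sketch for small problems, is the standard integral representation of Wallenius' distribution evaluated by numerical quadrature; the paper's methods address the regimes where this becomes numerically hard.

```python
# Wallenius' noncentral hypergeometric pmf via its integral representation.
from math import comb
from scipy.integrate import quad

def wallenius_pmf(x, n, m1, m2, omega):
    """P(X = x) when drawing n balls from m1 weight-omega and m2 weight-1 balls."""
    if not (max(0, n - m2) <= x <= min(n, m1)):
        return 0.0
    D = omega * (m1 - x) + (m2 - (n - x))
    integrand = lambda t: (1 - t ** (omega / D)) ** x * (1 - t ** (1 / D)) ** (n - x)
    val, _ = quad(integrand, 0.0, 1.0)
    return comb(m1, x) * comb(m2, n - x) * val

probs = [wallenius_pmf(x, 10, 12, 18, 2.0) for x in range(11)]
print(sum(probs))   # should be ~1
```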

  8. The minimum interval for confident spike sorting: A sequential decision method.

    Science.gov (United States)

    Hebert, Paul; Burdick, Joel

    2010-01-01

    This paper develops a method to determine the minimum duration interval which ensures that the process of "sorting" the extracellular action potentials recorded during that interval achieves a desired confidence level of accuracy. During the recording process, a sequential decision theory approach continually evaluates a variant of the likelihood ratio test using the model evidence of the sorting/clustering hypotheses. The test is compared against a threshold which encodes a desired confidence level on the accuracy of the subsequent clustering procedure. When the threshold is exceeded, the clustering model with the highest model evidence is accepted. We first develop a testing procedure for a single recording interval and then extend the method to multi-interval recording by using both Bayesian priors from previous recording intervals and a recently developed cluster-tracking procedure. Lastly, a more advanced tracker is implemented and initial results are presented. This latter procedure is useful for real-time applications such as brain-machine interfaces and autonomous recording electrodes. We test our theory on recordings from macaque parietal cortex, showing that the method does reach the desired confidence level.

  9. The Interval-Valued Triangular Fuzzy Soft Set and Its Method of Dynamic Decision Making

    Directory of Open Access Journals (Sweden)

    Xiaoguo Chen

    2014-01-01

    A concept of the interval-valued triangular fuzzy soft set is presented, and operations such as "AND," "OR," intersection, union, and complement are defined. Some of its properties are then discussed and several conclusions are drawn. A dynamic decision making model is built based on the definition of the interval-valued triangular fuzzy soft set, in which the period weight is determined by the exponential decay method. The arithmetic weighted average operator of the interval-valued triangular fuzzy soft set is derived from the aggregation principle, thereby aggregating interval-valued triangular fuzzy soft sets of different time series into a collective interval-valued triangular fuzzy soft set. The formulas for the selection and decision values of different objects are given, so that the optimal decision is reached according to the decision values. Finally, the steps of this method are summarized, and an example is given to explain the application of the method.

  10. A comparison of confidence interval methods for the concordance correlation coefficient and intraclass correlation coefficient with small number of raters.

    Science.gov (United States)

    Feng, Dai; Svetnik, Vladimir; Coimbra, Alexandre; Baumgartner, Richard

    2014-01-01

    The intraclass correlation coefficient (ICC) with fixed raters or, equivalently, the concordance correlation coefficient (CCC) for continuous outcomes is a widely accepted aggregate index of agreement in settings with a small number of raters. Quantifying the precision of the CCC by constructing its confidence interval (CI) is important in early drug development applications, in particular in the qualification of biomarker platforms. In recent years, several new methods have been proposed for the construction of CIs for the CCC, but a comprehensive comparison has not been attempted. The methods consist of the delta method and jackknifing, each with and without Fisher's Z-transformation, and Bayesian methods with vague priors. In this study, we carried out a simulation study, with data simulated from multivariate normal as well as heavier-tailed distributions (t-distribution with 5 degrees of freedom), to compare the state-of-the-art methods for assigning a CI to the CCC. When the data are normally distributed, jackknifing with Fisher's Z-transformation (JZ) tended to provide superior coverage, and the difference between it and the closest competitor, the Bayesian method with the Jeffreys prior, was in general minimal. For the nonnormal data, the jackknife methods, especially the JZ method, provided the coverage probabilities closest to nominal, in contrast to the others, which yielded overly liberal coverage. Approaches based upon the delta method and the Bayesian method with conjugate prior generally provided slightly narrower intervals and larger lower bounds than the others, though this was offset by their poor coverage. Finally, we illustrate the utility of the CIs for the CCC in an example of a wake after sleep onset (WASO) biomarker, which is frequently used in clinical sleep studies of drugs for the treatment of insomnia.
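    The jackknife-with-Fisher's-Z (JZ) construction that performed best here is straightforward to sketch for two raters; the sample below uses synthetic normal data and a generic pseudo-value jackknife, not the authors' exact code.

```python
# Lin's CCC with a jackknife CI computed on the Fisher Z scale (JZ idea).
import numpy as np
from scipy import stats

def ccc(x, y):
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

def ccc_ci_jackknife_z(x, y, level=0.95):
    n = len(x)
    z_full = np.arctanh(ccc(x, y))
    z_loo = np.array([np.arctanh(ccc(np.delete(x, i), np.delete(y, i)))
                      for i in range(n)])
    pseudo = n * z_full - (n - 1) * z_loo          # jackknife pseudo-values
    se = pseudo.std(ddof=1) / np.sqrt(n)
    t = stats.t.ppf(0.5 + level / 2, n - 1)
    return np.tanh(pseudo.mean() - t * se), np.tanh(pseudo.mean() + t * se)

rng = np.random.default_rng(3)
x = rng.normal(size=60)
y = x + rng.normal(scale=0.5, size=60)
print(ccc(x, y), ccc_ci_jackknife_z(x, y))
```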

  11. PSO type-reduction method for geometric interval type-2 fuzzy logic systems

    Institute of Scientific and Technical Information of China (English)

    ZHAO Xian-zhang; GAO Yi-bo; ZENG Jun-fang; YANG Yi-ping

    2008-01-01

    In a special case of type-2 fuzzy logic systems (FLS), i.e. geometric interval type-2 fuzzy logic systems (GIT-2FLS), the crisp output is obtained by computing the geometric center of the footprint of uncertainty (FOU) without type-reduction, but this defuzzifying method conflicts with the core concepts of type-2 fuzzy sets in some cases. In this paper, a PSO type-reduction method for GIT-2FLS based on the particle swarm optimization (PSO) algorithm is presented. With the PSO type-reduction, the inference principle of geometric interval FLS operating on the continuous domain is consistent with that of traditional interval type-2 FLS operating on the discrete domain. Comparative experiments prove that the PSO type-reduction exhibits good performance and is a satisfactory complement to the theory of GIT-2FLS.

  12. Prediction bands and intervals for the scapulo-humeral coordination based on the Bootstrap and two Gaussian methods.

    Science.gov (United States)

    Cutti, A G; Parel, I; Raggi, M; Petracci, E; Pellegrini, A; Accardo, A P; Sacchetti, R; Porcellini, G

    2014-03-21

    Quantitative motion analysis protocols have been developed to assess the coordination between the scapula and humerus. However, the application of these protocols to test whether a subject's scapula resting position or pattern of coordination is "normal" is precluded by the unavailability of reference prediction intervals and bands, respectively. The aim of this study was to present such references for the "ISEO" protocol, by using the non-parametric Bootstrap approach and two parametric Gaussian methods (based on Student's T and Normal distributions). One hundred and eleven asymptomatic subjects were divided into three groups based on their age (18-30, 31-50, and 51-70). For each group, "monolateral" prediction bands and intervals were computed for the scapulo-humeral patterns and the scapula resting orientation, respectively. A fourth group included the 36 subjects (42 ± 13 years old) for whom the scapulo-humeral coordination was measured bilaterally, and "differential" prediction bands and intervals were computed, which describe right-to-left side differences. Bootstrap and Gaussian methods were compared using cross-validation analyses, by evaluating the coverage probability against a 90% target. Results showed a mean coverage for Bootstrap from 86% to 90%, compared with 67-70% for parametric bands and 87-88% for parametric intervals. Bootstrap prediction bands showed a distinctive change in amplitude and mean pattern related to age, with an increase toward scapula retraction, lateral rotation, and posterior tilt. In conclusion, Bootstrap ensures optimal coverage and should be preferred over parametric methods. Moreover, the stratification of "monolateral" prediction bands and intervals by age appears relevant for the correct classification of patients.
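    A minimal sketch of a Bootstrap prediction interval for a scalar such as scapula resting orientation is given below; the angles are synthetic, and the simple average-of-percentile-bounds variant is one of several possible constructions, not necessarily the one used for ISEO.

```python
# Bootstrap 90% prediction interval for a scalar reference quantity.
import numpy as np

rng = np.random.default_rng(4)
angles = rng.normal(30.0, 4.0, size=111)     # synthetic resting angles (deg)

B = 5000
lo_bounds, hi_bounds = [], []
for _ in range(B):
    resample = rng.choice(angles, size=angles.size, replace=True)
    lo_bounds.append(np.percentile(resample, 5))
    hi_bounds.append(np.percentile(resample, 95))

interval = (np.mean(lo_bounds), np.mean(hi_bounds))
print(f"90% prediction interval ~ [{interval[0]:.1f}, {interval[1]:.1f}] deg")
```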

  13. Treatment of uncertainty through the interval smart/swing weighting method: a case study

    Directory of Open Access Journals (Sweden)

    Luiz Flávio Autran Monteiro Gomes

    2011-12-01

    An increasingly competitive market means that many decisions must be taken quickly and with precision in complex, high-risk scenarios. This combination of factors makes it necessary to use decision aiding methods which provide a means of dealing with uncertainty in the judgement of alternatives. This work presents the use of the MAUT method combined with the INTERVAL SMART/SWING WEIGHTING method. Although multicriteria decision aiding was not conceived specifically for tackling uncertainty, the combined use of MAUT and the INTERVAL SMART/SWING WEIGHTING method allows decision problems under uncertainty to be approached. The main concepts involved in these two methods are described, and their joint application to a case study concerning the selection of a printing service supplier is presented. The case study makes use of the WINPRE software as a support tool for the calculation of dominance. It is concluded that the proposed approach can be applied to decision making problems under uncertainty.

  14. A Direct Latent Variable Modeling Based Method for Point and Interval Estimation of Coefficient Alpha

    Science.gov (United States)

    Raykov, Tenko; Marcoulides, George A.

    2015-01-01

    A direct approach to point and interval estimation of Cronbach's coefficient alpha for multiple component measuring instruments is outlined. The procedure is based on a latent variable modeling application with widely circulated software. As a by-product, using sample data the method permits ascertaining whether the population discrepancy…

  15. Computing interval-valued reliability measures: application of optimal control methods

    DEFF Research Database (Denmark)

    Kozin, Igor; Krymsky, Victor

    2017-01-01

    The paper describes an approach to deriving interval-valued reliability measures given partial statistical information on the occurrence of failures. We apply methods of optimal control theory, in particular Pontryagin's maximum principle, to solve the non-linear optimisation problem and derive...

  17. Quantitative immunofluorescence microscopy of subcellular GLUT4 distribution in human skeletal muscle: effects of endurance and sprint interval training.

    Science.gov (United States)

    Bradley, Helen; Shaw, Christopher S; Worthington, Philip L; Shepherd, Sam O; Cocks, Matthew; Wagenmakers, Anton J M

    2014-07-01

    Increases in insulin-mediated glucose uptake following endurance training (ET) and sprint interval training (SIT) have in part been attributed to concomitant increases in glucose transporter 4 (GLUT4) protein content in skeletal muscle. This study used an immunofluorescence microscopy method to investigate changes in subcellular GLUT4 distribution and content following ET and SIT. Percutaneous muscle biopsy samples were taken from the m. vastus lateralis of 16 sedentary males in the overnight fasted state before and after 6 weeks of ET and SIT. An antibody was fully validated and used to show large (>1 μm) and smaller (<1 μm) GLUT4-containing clusters. The large clusters likely represent trans-Golgi network stores, and the smaller clusters endosomal stores and GLUT4 storage vesicles (GSVs). The density of GLUT4 clusters was higher at the fibre periphery, especially in perinuclear regions. A less dense punctate distribution was seen in the rest of the muscle fibre. Total GLUT4 fluorescence intensity increased in type I and type II fibres following both ET and SIT. Large GLUT4 clusters increased in number and size in both type I and type II fibres, while the smaller clusters increased in size. The greatest increases in GLUT4 fluorescence intensity occurred within the 1 μm layer immediately adjacent to the plasma membrane (PM). The increase in peripheral localisation and protein content of GLUT4 following ET and SIT is likely to contribute to the improvements in glucose homeostasis observed after both training modes.

  18. Geometric computations with interval and new robust methods applications in computer graphics, GIS and computational geometry

    CERN Document Server

    Ratschek, H

    2003-01-01

    This undergraduate and postgraduate text familiarises readers with interval arithmetic and related tools for obtaining reliable and validated results and logically correct decisions in a variety of geometric computations, together with the means for alleviating the effects of errors. It also considers computations on geometric point-sets, which are neither robust nor reliable when processed with standard methods. The authors provide two effective tools for obtaining correct results: (a) interval arithmetic, and (b) ESSA, a new powerful algorithm which improves many geometric computations and makes th

  19. QT interval in healthy dogs: which method of correcting the QT interval in dogs is appropriate for use in small animal clinics?

    Directory of Open Access Journals (Sweden)

    Maira S. Oliveira

    2014-05-01

    The electrocardiographic (ECG) QT interval is influenced by fluctuations in heart rate (HR), which may lead to misinterpretation of its length. Considering that alterations in QT interval length reflect abnormalities of ventricular repolarisation which predispose to the occurrence of arrhythmias, this variable must be properly evaluated. The aim of this work is to determine which method of correcting the QT interval is the most appropriate for dogs with regard to different ranges of normal HR (different breeds). Healthy adult dogs (n=130; German Shepherd, Boxer, Pit Bull Terrier, and Poodle) were submitted to ECG examination, and QT intervals were determined in triplicate from the bipolar limb lead II and corrected for the effects of HR through the application of three published formulae involving quadratic, cubic, or linear regression. The mean corrected QT values (QTc) obtained using the various formulae were significantly different (p<0.05), while those derived according to the equation QTcV = QT + 0.087(1 − RR) were the most consistent (linear regression). QTcV values were strongly correlated (r=0.83) with the QT interval and showed a coefficient of variation of 8.37% and a 95% confidence interval of 0.22-0.23 s. Owing to its simplicity and reliability, QTcV was considered the most appropriate correction of the QT interval in dogs.

  20. Population Validity and Cross-Validity: Applications of Distribution Theory for Testing Hypotheses, Setting Confidence Intervals, and Determining Sample Size

    Science.gov (United States)

    Algina, James; Keselman, H. J.

    2008-01-01

    Applications of distribution theory for the squared multiple correlation coefficient and the squared cross-validation coefficient are reviewed, and computer programs for these applications are made available. The applications include confidence intervals, hypothesis testing, and sample size selection.

  1. Super-High Resolution Time Interval Measurement Method Based on Time-Space Relationships

    Institute of Scientific and Technical Information of China (English)

    DU Bao-Qiang; ZHOU Wei

    2009-01-01

    Based on the principle of quantized delay-time, a super-high-resolution time interval measurement method exploiting time-space relationships is proposed. Using the stability of the delay time of time and frequency signals travelling in a specific medium, the measured time interval can be quantized. Combined with the phase coincidence detection technique, the measurement of time is converted into the measurement of spatial length, so the resolution and stability of the measurement system are easily improved. Experimental results show that the measurement resolution of the measured time interval depends on the length difference of the double delay-time unit. When the length difference is set at the millimetre or sub-millimetre level, super-high measurement resolution from hundreds of picoseconds down to tens of picoseconds can be obtained.

  2. Continuous Exercise but Not High Intensity Interval Training Improves Fat Distribution in Overweight Adults

    Directory of Open Access Journals (Sweden)

    Shelley E. Keating

    2014-01-01

    Objective. The purpose of this study was to assess the effect of high intensity interval training (HIIT) versus continuous aerobic exercise training (CONT) or placebo (PLA) on body composition by randomized controlled design. Methods. Work capacity and body composition (dual-energy X-ray absorptiometry) were measured before and after 12 weeks of intervention in 38 previously inactive overweight adults. Results. There was a significant group × time interaction for change in work capacity (P<0.001), which increased significantly in CONT (23.8±3.0%) and HIIT (22.3±3.5%) but not PLA (3.1±5.0%). There was a near-significant main effect for percentage trunk fat, with trunk fat reducing in CONT by 3.1±1.6% and in PLA by 1.1±0.4%, but not in HIIT (increase of 0.7±1.0%) (P=0.07). There was a significant reduction in android fat percentage in CONT (2.7±1.3%) and PLA (1.4±0.8%) but not HIIT (increase of 0.8±0.7%) (P=0.04). Conclusion. These data suggest that HIIT may be advocated as a time-efficient strategy for eliciting comparable fitness benefits to traditional continuous exercise in inactive, overweight adults. However, in this population HIIT does not confer the same benefit to body fat levels as continuous exercise training.

  3. A Method to Compute Multiplicity Corrected Confidence Intervals for Odds Ratios and Other Relative Effect Estimates

    Directory of Open Access Journals (Sweden)

    Jimmy Thomas Efird

    2008-12-01

    Epidemiological studies commonly test multiple null hypotheses. In some situations it may be appropriate to account for multiplicity using statistical methodology rather than simply interpreting results with greater caution as the number of comparisons increases. Given the one-to-one relationship that exists between confidence intervals and hypothesis tests, we derive a method based upon the Hochberg step-up procedure to obtain multiplicity corrected confidence intervals (CIs) for odds ratios (ORs) and, by analogy, for other relative effect estimates. In contrast to previously published methods that explicitly assume knowledge of P values, this method only requires that relative effect estimates and corresponding CIs be known for each comparison to obtain multiplicity corrected CIs.
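    The following sketch illustrates the general idea rather than the paper's exact derivation: pair each comparison, ranked by p-value, with the Hochberg step-up significance level alpha/(m - i + 1) and invert it into a correspondingly wider Wald CI for the log odds ratio.

```python
# Illustrative multiplicity-adjusted Wald CIs for odds ratios, using the
# Hochberg step-up levels; an assumption-laden sketch, not the paper's formula.
import numpy as np
from scipy import stats

def hochberg_adjusted_cis(log_or, se, alpha=0.05):
    log_or, se = np.asarray(log_or), np.asarray(se)
    m = len(log_or)
    p = 2 * (1 - stats.norm.cdf(np.abs(log_or / se)))    # two-sided Wald p-values
    order = np.argsort(p)                                # rank 1 = smallest p
    cis = [None] * m
    for rank, idx in enumerate(order, start=1):
        a_i = alpha / (m - rank + 1)                     # Hochberg step-up level
        z = stats.norm.ppf(1 - a_i / 2)
        cis[idx] = (np.exp(log_or[idx] - z * se[idx]),
                    np.exp(log_or[idx] + z * se[idx]))
    return cis

log_or = np.log([1.8, 1.2, 2.5])
se = np.array([0.2, 0.25, 0.3])
for lo, hi in hochberg_adjusted_cis(log_or, se):
    print(f"OR CI: ({lo:.2f}, {hi:.2f})")
```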

  4. A preprocessing tool for removing artifact from cardiac RR interval recordings using three-dimensional spatial distribution mapping.

    Science.gov (United States)

    Stapelberg, Nicolas J C; Neumann, David L; Shum, David H K; McConnell, Harry; Hamilton-Craig, Ian

    2016-04-01

    Artifact is common in cardiac RR interval data recorded for heart rate variability (HRV) analysis. A novel algorithm for artifact detection and interpolation in RR interval data is described. It is based on spatial distribution mapping of RR interval magnitude and relationships to adjacent values in three dimensions. The characteristics of normal physiological RR intervals and artifact intervals were established using 24-h recordings from 20 technician-assessed human cardiac recordings. The algorithm was incorporated into a preprocessing tool and validated using 30 artificial RR (ARR) interval data files, to which known quantities of artifact (0.5%, 1%, 2%, 3%, 5%, 7%, 10%) were added. The impact of preprocessing ARR files with 1% added artifact was also assessed using 10 time domain and frequency domain HRV metrics. The preprocessing tool was further used to preprocess 69 24-h human cardiac recordings. The tool was able to remove artifact from technician-assessed human cardiac recordings (sensitivity 0.84, SD = 0.09; specificity 1.00, SD = 0.01) and artificial data files. The removal of artifact had a low impact on time domain and frequency domain HRV metrics (0% to 2.5% change in values). This novel preprocessing tool can be used with human 24-h cardiac recordings to remove artifact while minimally affecting physiological data, and therefore has a low impact on HRV measures of that data.

  5. ALTERABLE INTERVAL OPTICAL-ELECTRONIC AUTOCOLLIMATION METHOD FOR STRAIGHTNESS MEASUREMENT OF PRECISION GUIDE

    Institute of Scientific and Technical Information of China (English)

    Xue Zi; Tan Jiubin; Zhao Weiqian; Zhang Heng

    2005-01-01

    The optical-electronic autocollimation method is commonly used to measure the straightness of precision guides in engineering applications. However, the traditional fixed-interval optical-electronic autocollimation method is not suitable for measuring the straightness of an air-bearing guide with a long air-bearing bush or a precision straight guide with a long slide-carriage, because the air-bearing bush or slide-carriage effectively acts as a bridgeboard longer than the bridgeboard carrying the reflector, amounting to about 1/4-1/2 of the total length of the measured guide. If straightness is measured according to the traditional method, only a few points are sampled, so the guide straightness cannot be evaluated fully or accurately. To solve this problem, an alterable measuring interval method is proposed for straightness measurement, based on an analysis of the mutual relations and effects among the tilting angle of the reflector, the length of the bridgeboard, the measuring interval, and the straightness of the guide. A straightness calculation model is also developed using the method, and the errors stemming from the proposed method are introduced in brief. A precision air-bearing guide with a long air-bearing bush was measured and evaluated using the proposed method, and the actual measurement and evaluation results prove that the method is correct in theory and practical in operation. The proposed method gives an effective and flexible solution to the straightness measurement of precision guides with long slide-carriages or air-bearing bushes, and is an extension of the traditional optical-electronic autocollimation method for straightness measurement.

  6. Detection of bursts in neuronal spike trains by the mean inter-spike interval method

    Institute of Scientific and Technical Information of China (English)

    Lin Chen; Yong Deng; Weihua Luo; Zhen Wang; Shaoqun Zeng

    2009-01-01

    Bursts are electrical spikes fired at high frequency, and they are of central importance in synaptic plasticity and information processing in the central nervous system. However, bursts are difficult to identify because bursting activities or patterns vary with physiological conditions or external stimuli. In this paper, a simple method to automatically detect bursts in spike trains is described. This method auto-adaptively sets a parameter (the mean inter-spike interval) according to intrinsic properties of the detected burst spike trains, without any arbitrary choices or operator judgment. When the mean value of several successive inter-spike intervals is not larger than the parameter, a burst is identified. By this method, bursts can be automatically extracted from different bursting patterns of cultured neurons on multi-electrode arrays, as accurately as by visual inspection. Furthermore, significant changes in burst variables caused by electrical stimuli have been found in the spontaneous activity of neuronal networks. These results suggest that the mean inter-spike interval method is robust for detecting changes in burst patterns and characteristics induced by environmental alterations.
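    The detector is simple enough to sketch directly; in the version below the threshold is the mean ISI of the whole train, and the window of successive ISIs is assumed to be three.

```python
# Mean inter-spike interval burst detector: a burst starts where the mean of a
# window of successive ISIs does not exceed the train's overall mean ISI.
import numpy as np

def detect_bursts(spike_times, window=3):
    isi = np.diff(spike_times)
    thresh = isi.mean()                      # auto-adaptive parameter from the train
    bursts, i = [], 0
    while i + window <= len(isi):
        if isi[i:i + window].mean() <= thresh:
            j = i + window                   # extend burst while ISIs stay short
            while j < len(isi) and isi[j] <= thresh:
                j += 1
            bursts.append((spike_times[i], spike_times[j]))
            i = j + 1
        else:
            i += 1
    return bursts

# Two dense bursts separated by sparse spikes
spikes = np.sort(np.r_[np.arange(0, 1, 0.01), np.arange(5, 6, 0.02), 2.0, 3.5])
print(detect_bursts(spikes))
```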

  7. Thurstone's Method of Equal-Appearing Intervals in Measuring Attitudes: An Old Method That Is Not Forgotten.

    Science.gov (United States)

    Roberts, J. Kyle

    Many school districts face the problem of evaluating new programs to train students in ethics and moral decision making. Using conventional personality tests in program evaluation may be helpful, but probably will not provide measures for the attitudes that are targeted by the intervention. The method of equal-appearing intervals developed by L.…

  8. A COMPUTATIONAL METHOD FOR INTERVAL MIXED VARIABLE ENERGY MATRICES IN PRECISE INTEGRATION

    Institute of Scientific and Technical Information of China (English)

    高索文; 吴志刚; 王本利; 马兴瑞

    2001-01-01

    To solve the Riccati equation of the LQ control problem, the computation of interval mixed variable energy matrices is the first step. Taylor expansion can be used to compute these matrices. Based on the analogy between structural mechanics and optimal control and on the mechanical meaning of the matrices, a computational method using the state transition matrix of the differential equation is presented. Numerical examples are provided to show the effectiveness of the present approach.

  9. Identification and continuity of the distributions of burst-length and interspike intervals in the stochastic Morris-Lecar neuron.

    Science.gov (United States)

    Rowat, Peter F; Greenwood, Priscilla E

    2011-12-01

    Using the Morris-Lecar model neuron with a type II parameter set and K(+)-channel noise, we investigate the interspike interval distribution as increasing levels of applied current drive the model through a subcritical Hopf bifurcation. Our goal is to provide a quantitative description of the distributions associated with spiking as a function of applied current. The model generates bursty spiking behavior with sequences of random numbers of spikes (bursts) separated by interburst intervals of random length. This kind of spiking behavior is found in many places in the nervous system, most notably, perhaps, in stuttering inhibitory interneurons in cortex. Here we show several practical and inviting aspects of this model, combining analysis of the stochastic dynamics of the model with estimation based on simulations. We show that the parameter of the exponential tail of the interspike interval distribution is in fact continuous over the entire range of plausible applied current, regardless of the bifurcations in the phase portrait of the model. Further, we show that the spike sequence length, apparently studied for the first time here, has a geometric distribution whose associated parameter is continuous as a function of applied current over the entire input range. Hence, this model is applicable over a much wider range of applied current than has been thought.

  10. The Interval-Valued Intuitionistic Fuzzy MULTIMOORA Method for Group Decision Making in Engineering

    Directory of Open Access Journals (Sweden)

    Edmundas Kazimieras Zavadskas

    2015-01-01

    Multiple criteria decision making methods have received various extensions for uncertain environments in recent years. The aim of the current research is to extend the application of the MULTIMOORA method (Multiobjective Optimization by Ratio Analysis plus Full Multiplicative Form) to group decision making under uncertainty. Taking into account the advantages of interval-valued intuitionistic fuzzy sets (IVIFSs) in handling the problem of uncertainty, the development of the interval-valued intuitionistic fuzzy MULTIMOORA (IVIF-MULTIMOORA) method for group decision making is considered in the paper. Two numerical examples of real-world civil engineering problems are presented, and ranking of the alternatives based on the suggested method is described. The results are then compared with the rankings yielded by some other methods of decision making with IVIF information. The comparison shows the conformity of the proposed IVIF-MULTIMOORA method with other approaches. The proposed algorithm is favorable because of the ability of IVIFSs to represent uncertainty and of the MULTIMOORA method to consider three different viewpoints in analyzing engineering decision alternatives.

  11. An Interval-Valued Intuitionistic Fuzzy TOPSIS Method Based on an Improved Score Function

    Directory of Open Access Journals (Sweden)

    Zhi-yong Bai

    2013-01-01

    This paper proposes an improved score function for the effective ranking of interval-valued intuitionistic fuzzy sets (IVIFSs), and an interval-valued intuitionistic fuzzy TOPSIS method based on this score function for solving multicriteria decision-making problems in which all the preference information provided by decision-makers is expressed as interval-valued intuitionistic fuzzy decision matrices, where each element is characterized by an IVIFS value and the information about criterion weights is known. We apply the proposed score function to calculate the separation measures of each alternative from the positive and negative ideal solutions to determine the relative closeness coefficients. According to the values of the closeness coefficients, the alternatives can be ranked and the most desirable one(s) selected in the decision-making process. Finally, two illustrative examples of multicriteria fuzzy decision-making problems are used to demonstrate the applicability and effectiveness of the proposed decision-making method.

  12. Encounter distribution of two random walkers on a finite one-dimensional interval

    Energy Technology Data Exchange (ETDEWEB)

    Tejedor, Vincent; Schad, Michaela; Metzler, Ralf [Physics Department, Technical University of Munich, James Franck Strasse, 85747 Garching (Germany); Benichou, Olivier; Voituriez, Raphael, E-mail: metz@ph.tum.de [Laboratoire de Physique Theorique de la Matiere Condensee (UMR 7600), Universite Pierre et Marie Curie, 4 Place Jussieu, 75255 Paris Cedex (France)

    2011-09-30

    We analyse the first-passage properties of two random walkers confined to a finite one-dimensional domain. For the case of absorbing boundaries at the endpoints of the interval, we derive the probability that the two particles meet before either one of them becomes absorbed at one of the boundaries. For the case of reflecting boundaries, we obtain the mean first encounter time of the two particles. Our approach leads to closed-form expressions that are more tractable than a previously derived solution in terms of the Weierstrass elliptic function.

  13. Gamma-glutamyltransferase activity in plasma: statistical distributions, individual variations, and reference intervals.

    Science.gov (United States)

    Schiele, F; Guilmin, A M; Detienne, H; Siest, G

    1977-06-01

    Measurement of gamma-glutamyltransferase activity in plasma provides a useful index of liver function. Taking as our study population persons attending the Center for Preventive Medicine, we described and measured the significance and importance of physiological and environmental variations, and established a classification of the variation factors. The three most important factors affecting this activity were drug intake, alcohol consumption, and excess weight, followed by sex and age. We suggest a preliminary set of reference intervals for healthy subjects to be used in interpreting this laboratory test.

  14. Reduction Method for Active Distribution Networks

    DEFF Research Database (Denmark)

    Raboni, Pietro; Chen, Zhe

    2013-01-01

    On-line security assessment is traditionally performed by Transmission System Operators at the transmission level, ignoring the effective response of distributed generators and small loads. On the other hand, the required computation time and amount of real-time data for including distribution networks would also be too large. In this paper, an adaptive aggregation method for subsystems with power-electronic-interfaced generators and voltage-dependent loads is proposed. With this tool it may be comparatively easier to include distribution networks in security assessment. The method is validated...

  15. Generalized Analysis of a Distribution Separation Method

    Directory of Open Access Journals (Sweden)

    Peng Zhang

    2016-04-01

    Separating two probability distributions from a mixture model made up of a combination of the two is essential to a wide range of applications. For example, in information retrieval (IR), there often exists a mixture distribution consisting of a relevance distribution that we need to estimate and an irrelevance distribution that we hope to remove. Recently, a distribution separation method (DSM) was proposed to approximate the relevance distribution by separating a seed irrelevance distribution from the mixture distribution. It was successfully applied to an IR task, namely pseudo-relevance feedback (PRF), where the query expansion model is often a mixture term distribution. Although initially developed in the context of IR, DSM is indeed a general mathematical formulation for probability distribution separation. Thus, it is important to further generalize its basic analysis and to explore its connections to other related methods. In this article, we first extend DSM's theoretical analysis, which was originally based on the Pearson correlation coefficient, to entropy-related measures, including the KL-divergence (Kullback-Leibler divergence), the symmetrized KL-divergence, and the JS-divergence (Jensen-Shannon divergence). Second, we investigate the distribution separation idea in a well-known method, namely the mixture model feedback (MMF) approach. We prove that MMF also complies with the linear combination assumption, and that DSM's linear separation algorithm can therefore largely simplify the EM algorithm in MMF. These theoretical analyses, as well as further empirical evaluation results, demonstrate the advantages of our DSM approach.
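    The linear combination assumption at the heart of DSM can be made concrete in a few lines; in this sketch the mixing coefficient is simply assumed known, whereas the paper derives how to estimate it.

```python
# DSM's core idea: if the observed mixture is M = c*R + (1-c)*S with a known
# seed irrelevance distribution S, invert the linear combination to recover R.
import numpy as np

def separate(mixture, seed, c):
    r = (mixture - (1 - c) * seed) / c       # invert the linear combination
    r = np.clip(r, 0.0, None)                # keep it a valid distribution
    return r / r.sum()

S = np.array([0.40, 0.30, 0.20, 0.10])       # seed irrelevance distribution
R_true = np.array([0.05, 0.15, 0.30, 0.50])  # hidden relevance distribution
M = 0.6 * R_true + 0.4 * S                   # observed mixture
print(separate(M, S, c=0.6))                 # ~= R_true
```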

  16. Image Analysis on Corneal Opacity:A Novel Method to Estimate Postmortem Interval in Rabbits

    Institute of Scientific and Technical Information of China (English)

    周兰; 刘艳; 刘良; 卓荦; 梁曼; 杨帆; 任亮; 朱少华

    2010-01-01

    Corneal opacity is one of the most commonly used parameters for estimating the postmortem interval (PMI). This paper proposes a new method to study the relationship between changes in corneal opacity and PMI by processing and analyzing cornea images. Corneal regions were extracted from images of rabbits' eyes and described by color-based and texture-based features, which could represent the changes of the cornea at different PMIs. A KNN classifier was used to reveal the association between image features and PMI. The result of...

  17. Risky Group Decision-Making Method for Distribution Grid Planning

    Science.gov (United States)

    Li, Cunbin; Yuan, Jiahang; Qi, Zhiqiang

    2015-12-01

    With the rapid growth of electricity use and of renewable energy, distribution grid planning is receiving more and more research attention. To address the drawbacks of existing research, this paper proposes a new risky group decision-making method for distribution grid planning. First, a mixed index system with qualitative and quantitative indices is built. Taking into account the fuzziness of linguistic evaluations, the cloud model is chosen to realize the qualitative-to-quantitative transformation, and interval-number decision matrices are constructed according to the "3En" principle. An m-dimensional interval-number decision vector is regarded as a super-cuboid in the m-dimensional attribute space, and a two-level orthogonal experiment is used to arrange points uniformly and dispersedly within it. The number of points is determined by the run count of the two-level orthogonal array, and these points compose a distribution point set that represents a decision-making alternative. To eliminate the influence of correlation among indices, the Mahalanobis distance is used to calculate the distance from each solution to the others, which means that the dynamic solutions are viewed as the reference. Second, because the decision-maker's attitude can affect the results, this paper defines a prospect value function based on the SNR from the Mahalanobis-Taguchi system and obtains the comprehensive prospect value of each alternative as well as the resulting ranking. Finally, the validity and reliability of this method are illustrated by examples, which show that the method is more valuable than and superior to the alternatives.

  18. Methods for Distributed Optimal Energy Management

    DEFF Research Database (Denmark)

    Brehm, Robert

    The presented research deals with the fundamental underlying methods and concepts of how the growing number of distributed generation units based on renewable energy resources and distributed storage devices can be most efficiently integrated into the existing utility grid. In contrast to conventional centralised optimal energy flow management systems, herein the focus is set on how optimal energy management can be achieved in a decentralised distributed architecture such as a multi-agent system. Distributed optimisation methods are introduced, targeting optimisation of energy flow in virtual micro-grids by prevention of meteorologic power flows into high-voltage grids. A method based on mathematical optimisation and a consensus algorithm is introduced and evaluated to coordinate charge/discharge scheduling for batteries across a number of buildings in order to improve self...

  19. Improved Accuracy of Nonlinear Parameter Estimation with LAV and Interval Arithmetic Methods

    Directory of Open Access Journals (Sweden)

    Humberto Muñoz

    2009-06-01

    The reliable solution of nonlinear parameter estimation problems is an important computational problem in many areas of science and engineering, including such applications as real-time optimization. Its goal is to estimate accurate model parameters that provide the best fit to measured data, despite small-scale noise in the data or occasional large-scale measurement errors (outliers). In general, the estimation techniques are based on some kind of least squares or maximum likelihood criterion, and these require the solution of a nonlinear and non-convex optimization problem. Classical solution methods for these problems are local methods and may not be reliable for finding the global optimum, with no guarantee that the best model parameters have been found. Interval arithmetic can be used to compute completely and reliably the global optimum for the nonlinear parameter estimation problem. Finally, experimental results compare the least squares, l2, and the least absolute value, l1, estimates using interval arithmetic in a chemical engineering application.

  20. Assessment and validation of a simple automated method for the detection of gait events and intervals.

    Science.gov (United States)

    Ghoussayni, Salim; Stevens, Christopher; Durham, Sally; Ewins, David

    2004-12-01

    A simple and rapid automatic method for detection of gait events at the foot could speed up and possibly increase the repeatability of gait analysis and evaluations of treatments for pathological gaits. The aim of this study was to compare and validate a kinematic-based algorithm used in the detection of four gait events, heel contact, heel rise, toe contact and toe off. Force platform data is often used to obtain start and end of contact phases, but not usually heel rise and toe contact events. For this purpose synchronised kinematic, kinetic and video data were captured from 12 healthy adult subjects walking both barefoot and shod at slow and normal self-selected speeds. The data were used to determine the gait events using three methods: force, visual inspection and algorithm methods. Ninety percent of all timings given by the algorithm were within one frame (16.7 ms) when compared to visual inspection. There were no statistically significant differences between the visual and algorithm timings. For both heel and toe contact the differences between the three methods were within 1.5 frames, whereas for heel rise and toe off the differences between the force on one side and the visual and algorithm on the other were higher and more varied (up to 175 ms). In addition, the algorithm method provided the duration of three intervals, heel contact to toe contact, toe contact to heel rise and heel rise to toe off, which are not readily available from force platform data. The ability to automatically and reliably detect the timings of these four gait events and three intervals using kinematic data alone is an asset to clinical gait analysis.
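
    The kinematic algorithm itself is not detailed in this abstract; the sketch below shows one plausible velocity-threshold approach to detecting heel contact from heel-marker data, with a hypothetical threshold value:

        import numpy as np

        def detect_heel_contact(z, fs, vel_thresh=0.05):
            """Flag heel-contact frames where the heel marker's vertical
            velocity drops below vel_thresh (m/s) after the swing phase.
            z: vertical heel-marker position (m) per frame; fs: frame rate (Hz)."""
            v = np.gradient(z) * fs                 # finite-difference velocity
            slow = np.abs(v) < vel_thresh           # marker nearly stationary
            edges = np.flatnonzero(slow[1:] & ~slow[:-1]) + 1
            return edges / fs                       # candidate event times (s)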

  1. A comparison of several methods for the confidence intervals of negative binomial proportions

    Science.gov (United States)

    Thong, Alfred Lim Sheng; Shan, Fam Pei

    2015-12-01

    This study focuses on comparing the performance of several approaches to constructing confidence intervals for negative binomial proportions (a single negative binomial proportion and the difference between two negative binomial proportions), and identifies the strengths and weaknesses of each approach. Performance is assessed by comparing coverage probabilities and average lengths of the confidence intervals. For a single negative binomial proportion, the Wald confidence interval (WCI-I), Agresti confidence interval (ACI-I), Wilson's Score confidence interval (WSCI-I) and Jeffrey confidence interval (JCI-I) are compared; WSCI-I is the better approach in terms of average confidence interval length and average coverage probability. For the difference between two negative binomial proportions, the Wald confidence interval (WCI-II), Agresti confidence interval (ACI-II), Newcombe's Score confidence interval (NSCI-II), Jeffrey confidence interval (JCI-II) and Yule confidence interval (YCI-II) are compared. A better approach is discussed and recommended for each situation, since different approaches perform better in coverage probability under different conditions.
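
    As a reference point for the intervals compared above, the sketch below implements the standard Wilson score interval for a binomial proportion (the basis of WSCI-I; the paper's negative binomial variants differ) together with a Monte Carlo estimate of its coverage probability:

        import math, random

        def wilson_ci(x, n, z=1.96):
            """Wilson score interval for a binomial proportion x/n."""
            p = x / n
            denom = 1 + z*z/n
            centre = (p + z*z/(2*n)) / denom
            half = z * math.sqrt(p*(1-p)/n + z*z/(4*n*n)) / denom
            return centre - half, centre + half

        # Monte Carlo estimate of coverage probability at p = 0.3, n = 40
        p_true, n, reps, hits = 0.3, 40, 20000, 0
        for _ in range(reps):
            x = sum(random.random() < p_true for _ in range(n))
            lo, hi = wilson_ci(x, n)
            hits += lo <= p_true <= hi
        print(f"estimated coverage: {hits/reps:.3f}")   # typically near 0.95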

  2. A new method for determination of postmortem interval: citrate content of bone.

    Science.gov (United States)

    Schwarcz, Henry P; Agur, Kristina; Jantz, Lee Meadows

    2010-11-01

    Few accurate methods exist currently to determine the time since death (postmortem interval, PMI) of skeletonized human remains found at crime scenes. Citrate is present as a constituent of living human and animal cortical bone at very uniform initial concentration (2.0 ± 0.1 wt %). In skeletal remains found in open landscape settings (whether buried or not), the concentration of citrate remains constant for a period of about 4 weeks, after which it decreases linearly as a function of log(time). The upper limit of the dating range is about 100 years. The precision of determination decreases slightly with age. The rate of decrease appears to be independent of temperature or rainfall but drops to zero for storage temperature <0°C.
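
    A sketch of how such a calibration could be inverted to estimate PMI from a measured citrate content; the initial value 2.0 wt % and the roughly 4-week plateau come from the abstract, while the decay slope K is a hypothetical placeholder, not the paper's calibrated value:

        import math

        C0 = 2.0           # initial citrate content of cortical bone (wt %), per the abstract
        K = 0.5            # hypothetical slope per log10(time); the paper's calibrated
                           # value is not given in this abstract

        def pmi_years(c_measured):
            """Invert C(t) = C0 - K*log10(t) for the postmortem interval t (sketch)."""
            if c_measured >= C0:
                return "within the ~4-week plateau"
            return 10 ** ((C0 - c_measured) / K)

        print(pmi_years(1.4))   # ~15.8 years with the placeholder slope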

  3. The Analysis of Curved Beam Using B-Spline Wavelet on Interval Finite Element Method

    Directory of Open Access Journals (Sweden)

    Zhibo Yang

    2014-01-01

    Full Text Available A B-spline wavelet on interval (BSWI finite element is developed for curved beams, and the static and free vibration behaviors of curved beam (arch are investigated in this paper. Instead of the traditional polynomial interpolation, scaling functions at a certain scale have been adopted to form the shape functions and construct wavelet-based elements. Different from the process of the direct wavelet addition in the other wavelet numerical methods, the element displacement field represented by the coefficients of wavelets expansions is transformed from wavelet space to physical space by aid of the corresponding transformation matrix. Furthermore, compared with the commonly used Daubechies wavelet, BSWI has explicit expressions and excellent approximation properties, which guarantee satisfactory results. Numerical examples are performed to demonstrate the accuracy and efficiency with respect to previously published formulations for curved beams.

  4. Interval analysis method and convex models for impulsive response of structures with uncertain-but-bounded external loads

    Institute of Scientific and Technical Information of China (English)

    Zhiping Qiu; Xiaojun Wang

    2006-01-01

    Two non-probabilistic, set-theoretical methods for determining the maximum and minimum impulsive responses of structures to uncertain-but-bounded impulses are presented. They are based, respectively, on the theories of interval mathematics and convex models. The uncertain-but-bounded impulses are assumed to form a convex set, hyper-rectangle or ellipsoid. The two non-probabilistic methods require less prior information about the uncertain nature of the impulses than a probabilistic model. Comparisons between the interval analysis method and the convex model, which are developed as an anti-optimization problem of finding the least favorable and most favorable impulsive responses, are made through mathematical analyses and numerical calculations. The results of this study indicate that when the interval vector is determined from an ellipsoid containing the uncertain impulses, the width of the impulsive responses predicted by the interval analysis method is larger than that of the convex model; when the ellipsoid is determined from an interval vector containing the uncertain impulses, the width of the interval impulsive responses obtained by the interval analysis method is smaller than that of the convex model.

  5. Performance Appraisal Method of Logistic Distribution for Fresh Agricultural Products

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Through initial selection, screening and simplification, a performance appraisal system for the logistic distribution of fresh agricultural products is established. In establishing the appraisal indicators, representative indicators of the logistic distribution of fresh agricultural products are obtained by administering expert surveys and applying the ABC screening system. Distribution costs, transportation and service level form the first-level indicators; packing fees, distribution processing fees, full-load ratio, haulage capacity, customer satisfaction and the adaptability of delivery personnel form the second-level indicators. The weight of each indicator is determined and the indicators are quantified: qualitative indicators are scored on a ten-point scale and converted into values in [0,1], while quantitative indicators are mapped to [0,1] according to their actual value ranges using grades of membership from fuzzy mathematics. After analysing the advantages, disadvantages and applicable conditions of commonly used performance evaluation methods, a comprehensive evaluation of the logistic distribution of agricultural products is obtained using fuzzy comprehensive appraisal. The results show that, to reduce distribution costs, the packaging and distribution processing technology for fresh agricultural products should be improved, and that when introducing advanced technology, highly automated logistics equipment should be adopted.

  6. A New Finite Interval Lifetime Distribution Model for Fitting Bathtub-Shaped Failure Rate Curve

    Directory of Open Access Journals (Sweden)

    Xiaohong Wang

    2015-01-01

    Full Text Available This paper proposes a new four-parameter fitting model to describe the bathtub curve, which is widely used in research on component life analysis; it explains the model parameters and provides a parameter estimation method as well as application examples using some well-known lifetime data. Comparative analysis between the new model and some existing bathtub-curve fitting models shows that the new model is convenient and its parameters have clear interpretations; moreover, it is universally applicable, being suitable not only for bathtub-shaped failure rate curves but also for constant, increasing, and decreasing failure rate curves.

  7. Conditional probability distribution (CPD) method in temperature based death time estimation: Error propagation analysis.

    Science.gov (United States)

    Hubig, Michael; Muggenthaler, Holger; Mall, Gita

    2014-05-01

    Bayesian estimation applied to temperature-based death time estimation was recently introduced as the conditional probability distribution or CPD method by Biermann and Potente. The CPD method is useful if there is external information that sets the boundaries of the true death time interval (victim last seen alive and found dead). CPD allows computation of probabilities for small time intervals of interest (e.g. no-alibi intervals of suspects) within the large true death time interval. In light of the importance of the CPD for conviction or acquittal of suspects, the present study identifies a potential error source: deviations in death time estimates will cause errors in the CPD-computed probabilities. We derive formulae to quantify the CPD error as a function of the input error. Moreover, we observed the paradox that when the small no-alibi time interval is located at the boundary of the true death time interval, adjacent to the erroneous death time estimate, the CPD-computed probabilities for that interval increase with increasing input deviation; otherwise they decrease. We therefore advise against using CPD if there is any indication of an error or a contra-empirical deviation in the death time estimates, especially if the estimates fall outside the true death time interval, even if their 95% confidence intervals still overlap it.
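
    One common formalisation of the CPD computation (the paper's exact model may differ): treat the death-time estimate as normal with mean mu and standard deviation sigma, truncate it to the true death-time interval [L, U], and integrate over the small no-alibi interval [a, b]:

        from math import erf, sqrt

        def Phi(x):   # standard normal CDF
            return 0.5 * (1 + erf(x / sqrt(2)))

        def cpd(a, b, L, U, mu, sigma):
            """P(death time in [a, b] | death time in [L, U]), with the
            death-time estimate modelled as N(mu, sigma) truncated to [L, U]."""
            num = Phi((b - mu) / sigma) - Phi((a - mu) / sigma)
            den = Phi((U - mu) / sigma) - Phi((L - mu) / sigma)
            return num / den

        # No-alibi interval 20-22 h inside a true interval 16-28 h,
        # death-time estimate 21 h with sigma = 2.5 h:
        print(round(cpd(20, 22, 16, 28, mu=21, sigma=2.5), 3))   # ~0.319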

  8. Methods for developing and validating survivability distributions

    Energy Technology Data Exchange (ETDEWEB)

    Williams, R.L.

    1993-10-01

    A previous report explored and discussed statistical methods and procedures that may be applied to validate the survivability of a complex system of systems that cannot be tested as an entity. It described a methodology where Monte Carlo simulation was used to develop the system survivability distribution from the component distributions using a system model that registers the logical interactions of the components to perform system functions. This paper discusses methods that can be used to develop the required survivability distributions based upon three sources of knowledge. These are (1) available test results; (2) little or no available test data, but a good understanding of the physical laws and phenomena which can be applied by computer simulation; and (3) neither test data nor adequate knowledge of the physics are known, in which case, one must rely upon, and quantify, the judgement of experts. This paper describes the relationship between the confidence bounds that can be placed on survivability and the number of tests conducted. It discusses the procedure for developing system level survivability distributions from the distributions for lower levels of integration. It demonstrates application of these techniques by defining a communications network for a Hypothetical System Architecture. A logic model for the performance of this communications network is developed, as well as the survivability distributions for the nodes and links based on two alternate data sets, reflecting the effects of increased testing of all elements. It then shows how this additional testing could be optimized by concentrating only on those elements contained in the low-order fault sets which the methodology identifies.
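
    A compact sketch of the Monte Carlo idea, with a hypothetical series/parallel logic model and hypothetical Beta distributions standing in for component survivability knowledge derived from test data:

        import random

        def system_survivability_samples(n=10000):
            """Propagate component survivability distributions (hypothetical
            Beta posteriors from test data) through a series/parallel logic
            model: two nodes in series with a redundant pair of links."""
            out = []
            for _ in range(n):
                pA = random.betavariate(19, 2)   # node A, e.g. 18/20 test successes
                pB = random.betavariate(19, 2)   # node B
                p1 = random.betavariate(9, 2)    # link 1, e.g. 8/10 test successes
                p2 = random.betavariate(9, 2)    # link 2
                out.append(pA * pB * (1 - (1 - p1) * (1 - p2)))
            return sorted(out)

        s = system_survivability_samples()
        print(f"median {s[len(s)//2]:.3f}, 5th percentile {s[len(s)//20]:.3f}")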

  9. Supplier evaluation in manufacturing environment using compromise ranking method with grey interval numbers

    Directory of Open Access Journals (Sweden)

    Prasenjit Chatterjee

    2012-04-01

    Full Text Available Evaluation of proper supplier for manufacturing organizations is one of the most challenging problems in real time manufacturing environment due to a wide variety of customer demands. It has become more and more complicated to meet the challenges of international competitiveness and as the decision makers need to assess a wide range of alternative suppliers based on a set of conflicting criteria. Thus, the main objective of supplier selection is to select highly potential supplier through which all the set goals regarding the purchasing and manufacturing activity can be achieved. Because of these reasons, supplier selection has got considerable attention by the academicians and researchers. This paper presents a combined multi-criteria decision making methodology for supplier evaluation for given industrial applications. The proposed methodology is based on a compromise ranking method combined with Grey Interval Numbers considering different cardinal and ordinal criteria and their relative importance. A ‘supplier selection index’ is also proposed to help evaluation and ranking the alternative suppliers. Two examples are illustrated to demonstrate the potentiality and applicability of the proposed method.

  10. Using Interval Methods for the Numerical Solution of ODE’S (Ordinary Differential Equations).

    Science.gov (United States)

    1983-11-01

    Only reference-list fragments of this record survive, citing K. Reichmann's work on the convergence of interval (power) series with applications to interval initial value problems (Freiburger Intervall-Berichte 80/4, Institut für angewandte Mathematik, Universität Freiburg i.Br., 1980) and on interval power series.

  11. Distributed Reconstruction via Alternating Direction Method

    Directory of Open Access Journals (Sweden)

    Linyuan Wang

    2013-01-01

    Full Text Available With the development of compressive sensing theory, image reconstruction from few-view projections has received considerable research attention in the field of computed tomography (CT). Total-variation (TV)-based CT image reconstruction has been shown experimentally to produce accurate reconstructions from sparse-view data. In this study, a distributed reconstruction algorithm based on TV minimization has been developed. The algorithm is very simple, as it uses the alternating direction method. The proposed method can accelerate the alternating direction total variation minimization (ADTVM) algorithm without losing accuracy.

  12. A parallel optimization method for product configuration and supplier selection based on interval

    Science.gov (United States)

    Zheng, Jian; Zhang, Meng; Li, Guoxi

    2017-06-01

    In the process of design and manufacturing, product configuration is an important way of product development, and supplier selection is an essential component of supply chain management. To reduce procurement risk and maximize enterprise profits, this study proposes to combine product configuration with supplier selection and to express the multiple uncertainties as interval numbers. An integrated optimization model of interval product configuration and supplier selection was established, and NSGA-II was employed to locate the Pareto-optimal solutions of the interval multiobjective optimization model.

  13. A new view to uncertainty in Electre III method by introducing interval numbers

    Directory of Open Access Journals (Sweden)

    Mohammad Kazem Sayyadi

    2012-07-01

    Full Text Available The Electre III is a widely accepted multi-attribute decision-making model which takes uncertainty and vagueness into account. Uncertainty in Electre III is introduced through indifference, preference and veto thresholds, but determining their accurate values can sometimes be very hard. In this paper we represent the values of the performance matrix as interval numbers and define the links between interval numbers and the concordance matrix. Without changing the concept of concordance, the proposed approach makes Electre III usable in decision-making problems with interval numbers.
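
    One standard ingredient for such an extension (shown here as an assumption; the paper's construction may differ) is a possibility degree comparing two interval numbers, which can then feed an interval-valued concordance index:

        def poss_degree(a, b):
            """Possibility degree P(A >= B) for intervals A = (a_lo, a_hi),
            B = (b_lo, b_hi) -- a standard interval comparison rule."""
            la, lb = a[1] - a[0], b[1] - b[0]
            if la + lb == 0:
                return 1.0 if a[0] >= b[0] else 0.0
            return min(1.0, max(0.0, (a[1] - b[0]) / (la + lb)))

        print(poss_degree((0.4, 0.8), (0.5, 0.7)))   # 0.5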

  14. Avionics Configuration Assessment for Flightdeck Interval Management: A Comparison of Avionics and Notification Methods

    Science.gov (United States)

    Latorella, Kara A.

    2015-01-01

    Flightdeck Interval Management is one of the NextGen operational concepts that FAA is sponsoring to realize requisite National Airspace System (NAS) efficiencies. Interval Management will reduce variability in temporal deviations at a position, and thereby reduce buffers typically applied by controllers - resulting in higher arrival rates, and more efficient operations. Ground software generates a strategic schedule of aircraft pairs. Air Traffic Control (ATC) provides an IM clearance with the IM spacing objective (i.e., the TTF, and at which point to achieve the appropriate spacing from this aircraft) to the IM aircraft. Pilots must dial FIM speeds into the speed window on the Mode Control Panel in a timely manner, and attend to deviations between actual speed and the instantaneous FIM profile speed. Here, the crew is assumed to be operating the aircraft with autothrottles on, with autopilot engaged, and the autoflight system in Vertical Navigation (VNAV) and Lateral Navigation (LNAV); and is responsible for safely flying the aircraft while maintaining situation awareness of their ability to follow FIM speed commands and to achieve the FIM spacing goal. The objective of this study is to examine whether three Notification Methods and four Avionics Conditions affect pilots' performance, ratings on constructs associated with performance (workload, situation awareness), or opinions on acceptability. Three Notification Methods (alternate visual and aural alerts that notified pilots to the onset of a speed target, conformance deviation from the required speed profile, and reminded them if they failed to enter the speed within 10 seconds) were examined. These Notification Methods were: VVV (visuals for all three events), VAV (visuals for all three events, plus an aural for speed conformance deviations), and AAA (visual indications and the same aural to indicate all three of these events). Avionics Conditions were defined by the instrumentation (and location) used to

  15. Power-law inter-spike interval distributions infer a conditional maximization of entropy in cortical neurons.

    Directory of Open Access Journals (Sweden)

    Yasuhiro Tsubo

    Full Text Available The brain is considered to use a relatively small amount of energy for its efficient information processing. Under a severe restriction on energy consumption, the maximization of mutual information (MMI), which is adequate for designing artificial processing machines, may not suit the brain. The MMI attempts to send information as accurately as possible, and this usually requires a sufficient energy supply for establishing clearly discretized communication bands. Here, we derive an alternative hypothesis for the neural code from neuronal activities recorded juxtacellularly in the sensorimotor cortex of behaving rats. Our hypothesis states that in vivo cortical neurons maximize the entropy of neuronal firing under two constraints, one limiting the energy consumption (as assumed previously) and one restricting the uncertainty in output spike sequences at a given firing rate. Thus, the conditional maximization of firing-rate entropy (CMFE) solves a tradeoff between the energy cost and noise in the neuronal response. In short, the CMFE sends a rich variety of information through broader communication bands (i.e., widely distributed firing rates) at the cost of accuracy. We demonstrate that the CMFE is reflected in the long-tailed, typically power-law, distributions of inter-spike intervals obtained for the majority of recorded neurons. In other words, the power-law tails are more consistent with the CMFE than with the MMI. Thus, we propose a mathematical principle by which cortical neurons may represent information about synaptic input in their output spike trains.

  16. An Integrated Method for Interval Multi-Objective Planning of a Water Resource System in the Eastern Part of Handan

    Directory of Open Access Journals (Sweden)

    Meiqin Suo

    2017-07-01

    Full Text Available In this study, an integrated solving method is proposed for interval multi-objective planning. The proposed method is based on fuzzy linear programming and an interactive two-step method. It can not only provide objectively optimal values for multiple objectives at the same time, but also effectively offer a globally optimal interval solution, together with the degree of satisfaction associated with the different objective functions. The method is then applied to a case study of planning the joint scheduling of multiple water resources under uncertainty in the eastern part of Handan, China. The solutions obtained are useful for decision makers in easing the contradiction between the supply of multiple water resources and the demands of different water users, and can provide optimal comprehensive benefits for the economy, society, and the environment.

  17. Poisson goes, random walker comes: Explaining the power-law distribution of the durations of stable-polarity intervals

    Science.gov (United States)

    Fabian, Karl; Shcherbakov, Valera

    2010-05-01

    In contrast to the predominant paradigm, recent studies indicate that the lengths of polarity intervals do not follow Poisson statistics, not even if non-stationary Poisson processes are considered. It is here shown that first-passage time (FPT) statistics for a one-dimensional random walk provides a good fit to the polarity time scale (PTS) in the range of stable polarity durations between 10 ka and 3000 ka. This fit is achieved by adjusting only a single diffusion time T , which comes to lie between 70 ka and 100 ka depending on the PTS chosen. A physical interpretation, why the FPT distribution of a random-walk process applies to the geodynamo, could relate to a balance between decay of stochastic turbulence and generation of the magnetic field. A simplified picture assumes the field generation to occur from a collection of 10-100 statistically independent dynamo processes, where each is described, e.g., by a Rikitake equation in the chaotic regime. An interesting feature of the random walk model is that it naturally introduces an internal variable, the position of the walk, which could be linked to field intensity. This connection would suggest that the variance of field intensity increases with the duration of the polarity interval. It does not predict a strong correlation between the strength of the paleofield and the duration of a chron. A further strength of the random walk model is that superchrons are not outliers, but natural rare events within the system. The apparent non-stationary nature of the geodynamo can be interpreted in the random walk model by a continuous shift in the governing parameters, and does not require major restructuring of the internal geodynamo process as in case of the Poisson picture.
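
    In LaTeX form, a standard single-parameter first-passage-time density for one-dimensional Brownian motion is (as an assumption; the paper's exact parameterisation is not given in this abstract):

        f(t) = \sqrt{\frac{T}{2\pi t^{3}}}\,\exp\!\left(-\frac{T}{2t}\right),
        \qquad f(t) \propto t^{-3/2} \quad \text{for } t \gg T .

    A single diffusion time T (70-100 ka per the abstract) then fixes both the suppression of very short chrons and the heavy t^{-3/2} tail under which superchrons appear as natural rare events rather than outliers.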

  18. Multi-criteria decision-making method based on a cross-entropy with interval neutrosophic sets

    Science.gov (United States)

    Tian, Zhang-peng; Zhang, Hong-yu; Wang, Jing; Wang, Jian-qiang; Chen, Xiao-hong

    2016-11-01

    In this paper, two optimisation models are established to determine the criterion weights in multi-criteria decision-making situations where knowledge regarding the weight information is incomplete and the criterion values are interval neutrosophic numbers. The proposed approach combines interval neutrosophic sets and TOPSIS, and the closeness coefficients are expressed as interval numbers. Furthermore, the relative likelihood-based comparison relations are constructed to determine the ranking of alternatives. A fuzzy cross-entropy approach is proposed to calculate the discrimination measure between alternatives and the absolute ideal solutions, after a transformation operator has been developed to convert interval neutrosophic numbers into simplified neutrosophic numbers. Finally, an illustrative example is provided, and a comparative analysis is conducted between the approach developed in this paper and other existing methods, to verify the feasibility and effectiveness of the proposed approach.

  19. Extension of a chaos control method to unstable trajectories on infinite- or finite-time intervals: Experimental verification

    Energy Technology Data Exchange (ETDEWEB)

    Yagasaki, Kazuyuki [Department of Mechanical and Systems Engineering, Gifu University, Gifu 501-1193 (Japan)], E-mail: yagasaki@gifu-u.ac.jp

    2007-08-20

    In experiments for single and coupled pendula, we demonstrate the effectiveness of a new control method based on dynamical systems theory for stabilizing unstable aperiodic trajectories defined on infinite- or finite-time intervals. The basic idea of the method is similar to that of the OGY method, which is a well-known, chaos control method. Extended concepts of the stable and unstable manifolds of hyperbolic trajectories are used here.

  20. ENVELOPING THEORY BASED METHOD FOR THE DETERMINATION OF PATH INTERVAL AND TOOL PATH OPTIMIZATION FOR SURFACE MACHINING

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    An enveloping-theory-based method for determining the path interval in three-axis NC machining of free-form surfaces is presented, together with a practical algorithm and measures for improving its computational efficiency. The algorithm applies not only to ball-end cutters but also to flat-end, torus and drum cutters, and the proposed method can be extended to arbitrary milling cutters. Thus, the problem of rigorously calculating the path interval in three-axis NC machining of free-form surfaces with non-ball-end cutters is resolved effectively. On this basis, the factors that affect the path interval are analysed, and methods for optimizing the tool path are explored.
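
    For the special case of a ball-end cutter on a locally flat surface, the classical geometric relation between allowable scallop height and path interval (the starting point that the enveloping-theory method generalizes to arbitrary cutters) can be sketched as:

        import math

        def path_interval_ball_end(R, h):
            """Path interval g for a ball-end cutter of radius R on a locally
            flat surface, given allowable scallop height h: g = 2*sqrt(h*(2R - h))."""
            return 2 * math.sqrt(h * (2 * R - h))

        print(f"{path_interval_ball_end(R=5.0, h=0.01):.3f} mm")   # ~0.632 mm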

  1. Better Confidence Intervals for Importance Sampling

    OpenAIRE

    HALIS SAK; WOLFGANG HÖRMANN; JOSEF LEYDOLD

    2010-01-01

    It is well known that for highly skewed distributions the standard method of using the t statistic for the confidence interval of the mean does not give robust results. This is an important problem for importance sampling (IS) as its final distribution is often skewed due to a heavy tailed weight distribution. In this paper, we first explain Hall's transformation and its variants to correct the confidence interval of the mean and then evaluate the performance of these methods for two numerica...
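
    A quick simulation illustrating the problem the paper addresses: the naive t interval undercovers the mean of a skewed (here lognormal) distribution. Hall's transformation itself is not implemented in this sketch:

        import math, random, statistics

        def t_interval(xs, z=1.96):   # normal quantile ~ t quantile for n = 30
            m = statistics.fmean(xs)
            h = z * statistics.stdev(xs) / math.sqrt(len(xs))
            return m - h, m + h

        # Coverage of the naive interval for the mean of a lognormal(0, 1)
        mu_true = math.exp(0.5)       # true mean = e^{1/2}
        hits = 0
        for _ in range(5000):
            lo, hi = t_interval([random.lognormvariate(0, 1) for _ in range(30)])
            hits += lo <= mu_true <= hi
        print(hits / 5000)            # noticeably below the nominal 0.95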

  2. Method for calculating the variance and prediction intervals for biomass estimates obtained from allometric equations

    CSIR Research Space (South Africa)

    Kirton, A

    2010-08-01

    Full Text Available with estimated values. In the case of allometric equations, information about the original fitting of the allometric relationship is needed in order to put a prediction interval around an estimated value. However, often all the information required to calculate...

  3. A Method on the Item Investment Risk Interval Decision-making of Processing Ranking Style

    Institute of Scientific and Technical Information of China (English)

    CHEN Li-wen

    2002-01-01

    In this paper, starting from the shortcomings of risk-type and indefinite-type decisions, the concept of probability-scheduling item investment decision-making is introduced, and a linear programming model and its solution are worked out. The feasibility of a probability-scheduling item investment plan is studied by applying the properties of interval arithmetic.

  4. Method to measure autonomic control of cardiac function using time interval parameters from impedance cardiography

    NARCIS (Netherlands)

    Meijer, J.H.; Boesveldt, S.; Elbertse, E.; Berendse, H.W.

    2008-01-01

    The time difference between the electrocardiogram and impedance cardiogram can be considered as a measure for the time delay between the electrical and mechanical activities of the heart. This time interval, characterized by the pre-ejection period (PEP), is related to the sympathetic autonomous ner

  5. Establishment of biological reference intervals by indirect method

    Institute of Scientific and Technical Information of China (English)

    沈隽霏; 宋斌斌(综述); 潘柏申(审校)

    2015-01-01

    It is important for clinical medical laboratories to provide reliable biological reference intervals, which guide proper clinical diagnosis and treatment. The establishment of biological reference intervals by the indirect method, proposed in recent years, is a simple and inexpensive approach; it is valuable for the periodic review of reference intervals already in use within laboratories and for establishing reference intervals for analytes to which the direct method cannot be applied. Based on the pertinent literature, this article reviews the basic steps of establishing biological reference intervals by the indirect method, comprising data acquisition, data transformation, exclusion of outliers and derivation of the reference interval.

  6. Method of distribution of mobile operator server resources

    Directory of Open Access Journals (Sweden)

    M. A. Skulysh

    2015-03-01

    Full Text Available The functioning of online charging is important for the efficient operation of a mobile operator. The main disadvantage of modern call-service systems is that they do not consider important technical and charging parameters of services; as a result, server resources are not utilized rationally. The article proposes a method for distributing system resources that ensures efficient processing of applications and allows the quality of services to be monitored, taking into account the number of resources required to serve one application and statistics on the number of applications of each service type received in a given time interval. The method allocates resources in proportion to the requirements of each service, configures the distribution according to the input stream, and ensures maximization of the economic efficiency of service delivery. The section "Modern problems of service calls" analyses the main disadvantages of modern billing systems; the work of the billing server is described in the section "Processing of calls in Online Charging System"; and the resource allocation problem is solved in the third section, "The problem of distribution of server resources".

  7. An investigation of the effects of a speech-restructuring treatment for stuttering on the distribution of intervals of phonation.

    Science.gov (United States)

    Brown, Lisa; Wilson, Linda; Packman, Ann; Halaki, Mark; Onslow, Mark; Menzies, Ross

    2016-12-01

    The purpose of this study was to investigate whether stuttering reductions following the instatement phase of a speech-restructuring treatment for adults were accompanied by reductions in the frequency of short intervals of phonation (PIs). The study was prompted by the possibility that a reduction in the frequency of short PIs is the mechanism underlying such reductions in stuttering. The distribution of PIs was determined for seven adults who stutter, before and immediately after the intensive phase of a speech-restructuring treatment program. Audiovisual recordings of conversational speech were made on both assessment occasions, with PIs recorded with an accelerometer. All seven participants had much lower levels of stuttering after treatment, but these were associated with reductions in the frequency of short PIs for only four of them. Of the other three participants, two showed no change in the frequency of short PIs, while for the third the frequency of short PIs actually increased. Stuttering reduction with speech-restructuring treatment can thus co-occur with a reduction in the frequency of short PIs, but the latter does not appear necessary for the reduction in stuttering to occur; speech-restructuring treatment must have other, or additional, treatment agents for stuttering to reduce.

  8. A method for using a time interval counter to measure frequency stability

    Science.gov (United States)

    Greenhall, C. A.

    1987-01-01

    It is shown how a commercial time interval counter can be used to measure the relative stability of two signals that are offset in frequency and mixed down to a beat note of about 1 Hz. To avoid the dead-time problem, the counter is set up to read the time interval between each beat note upcrossing and the next pulse of a 10 Hz reference pulse train. The actual upcrossing times are recovered by a simple algorithm whose outputs can be used for computing residuals and Allan variance. A noise floor test yielded a Δf/f Allan deviation of 1.3 × 10⁻⁹/τ relative to the beat frequency.
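
    A basic sketch of the Allan deviation computation that the recovered upcrossing times feed into, assuming the fractional-frequency series has already been formed:

        import numpy as np

        def allan_deviation(y, m=1):
            """Non-overlapping Allan deviation of fractional-frequency data y
            at averaging factor m (tau = m * tau0)."""
            k = len(y) // m
            yb = y[:k*m].reshape(k, m).mean(axis=1)    # tau-averaged frequencies
            return np.sqrt(0.5 * np.mean(np.diff(yb) ** 2))

        # White-FM test data: ADEV should fall as 1/sqrt(tau)
        y = 1e-9 * np.random.default_rng(1).standard_normal(10000)
        for m in (1, 4, 16):
            print(m, allan_deviation(y, m))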

  9. Distributed user profiling via spectral methods

    Directory of Open Access Journals (Sweden)

    Dan-Cristian Tomozei

    2014-09-01

    Full Text Available User profiling is a useful primitive for constructing personalised services, such as content recommendation. In the present paper we investigate the feasibility of user profiling in a distributed setting, with no central authority and only local information exchanges between users. We compute a profile vector for each user (i.e., a low-dimensional vector that characterises her taste) via spectral transformation of observed user-produced ratings for items. Our two main contributions follow. (i) We consider a low-rank probabilistic model of user taste. More specifically, we consider that users and items are partitioned in a constant number of classes, such that users and items within the same class are statistically identical. We prove that without prior knowledge of the compositions of the classes, based solely on few random observed ratings (namely O(N log N) such ratings for N users), we can predict user preference with high probability for unrated items by running a local vote among users with similar profile vectors. In addition, we provide empirical evaluations characterising the way in which spectral profiling performance depends on the dimension of the profile space. Such evaluations are performed on a data set of real user ratings provided by Netflix. (ii) We develop distributed algorithms which provably achieve an embedding of users into a low-dimensional space, based on spectral transformation. These involve simple message passing among users, and provably converge to the desired embedding. Our method essentially relies on a novel combination of gossiping and the algorithm proposed by Oja and Karhunen.

  10. [Confidence interval or p-value--similarities and differences between two important methods of statistical inference of quantitative studies].

    Science.gov (United States)

    Harari, Gil

    2014-01-01

    Statistical significance, also known as the p-value, and the confidence interval (CI) are common statistical measures and are essential for the statistical analysis of studies in medicine and the life sciences. These measures provide complementary information about statistical probability and about conclusions regarding the clinical significance of study findings. This article describes the two methodologies, compares them, assesses their suitability for the different needs of study result analysis, and explains situations in which each method should be used.

  11. Research of the Bayesian Interval Estimate on the Parameter of Negative Binomial Distribution

    Institute of Scientific and Technical Information of China (English)

    姜培华; 纪习习; 吴玲

    2014-01-01

    With a Beta prior distribution, the Bayesian estimation method for the unknown parameter θ of the negative binomial distribution is studied. By means of the relations between the Beta distribution and the F distribution, the general posterior interval estimate of the parameter θ is given, and the shortest posterior interval estimate is obtained as a conditional extremum. By comparing density curve shapes for different parameter values and through numerical examples, it is concluded that in the small-sample case the shortest confidence interval estimation method is worth adopting.

  12. A NEW METHOD FOR CONSTRUCTING CONFIDENCE INTERVAL FOR CPM BASED ON FUZZY DATA

    Directory of Open Access Journals (Sweden)

    Bahram Sadeghpour Gildeh

    2011-06-01

    Full Text Available A measurement control system ensures that measuring equipment and measurement processes are fit for their intended use, which is important in achieving product quality objectives. In most real-life applications the observations are fuzzy. In some cases the specification limits (SLs) are not precise numbers and are expressed in fuzzy terms, so that the classical capability indices cannot be applied. In this paper we obtain a 100(1 − α)% fuzzy confidence interval for the Cpm fuzzy process capability index, where instead of precise quality limits we have two membership functions for the specification limits.

  13. Advanced Computational Methods for Optimization of Non Periodic Inspection Intervals for Aging Infrastructure

    Science.gov (United States)

    2017-01-05

    Only report fragments survive for this record: Table 4-1 (statistical results for different numbers of samples; non-periodic, true value), Table 5-1 (inspection schedule of the non-periodic scheme by conditional probability, true value) and Fig. 5-1 (reliability for the non-periodic inspection by conditional probability), together with the distribution statement "Approved for public release; distribution unlimited."

  14. The Involvement of Centralized and Distributed Processes in Sub-second Time Interval Adaptation: An ERP Investigation of Apparent Motion.

    Science.gov (United States)

    Kaya, Utku; Yildirim, Fazilet Zeynep; Kafaligonul, Hulusi

    2017-09-09

    Accumulating evidence suggests that the timing of brief stationary sounds affects visual motion perception. Recent studies have shown that auditory time intervals can alter apparent motion perception not only through concurrent stimulation but also through brief adaptation. The adaptation aftereffects for auditory time intervals were found to be similar to those for visual time intervals, suggesting the involvement of a central timing mechanism. To understand the nature of the cortical processes underlying such aftereffects, we adapted observers to different time intervals using either brief sounds or visual flashes and examined the evoked activity to subsequently presented visual apparent motion. Both auditory and visual time interval adaptation led to significant changes in the ERPs elicited by the apparent motion. However, the changes induced by each modality were in opposite directions, occurred mainly in different time windows, and clustered over distinct scalp sites. The effects of auditory time interval adaptation were centered over parietal and parieto-central electrodes, while the visual adaptation effects were mostly over occipital and parieto-occipital regions. Moreover, the changes were much more salient when sounds were used during the adaptation phase. Taken together, our findings within the context of visual motion point to auditory dominance in the temporal domain and highlight the distinct nature of the sensory processes involved in auditory and visual time interval adaptation.

  15. Creating a performance appraisal template for pharmacy technicians using the method of equal-appearing intervals.

    Science.gov (United States)

    Desselle, Shane P; Vaughan, Melissa; Faria, Thomas

    2002-01-01

    To design a highly quantitative template for the evaluation of community pharmacy technicians' job performance that enables managers to provide sufficient feedback and fairly allocate organizational rewards, two rounds of interviews with two convenience samples of community pharmacists and pharmacy technicians were conducted. The interview in phase 1 was qualitative, and responses were used to design the second interview protocol. During the phase 2 interviews, a new group of respondents ranked technicians' job responsibilities, identified through the initial interviewees' responses, using scales the researchers had designed with an interval-level scaling technique called equal-appearing intervals. Settings were chain and independent pharmacies; participants were 20 pharmacists and 20 technicians in phase 1, and 20 pharmacists and 9 technicians in phase 2. The main outcome measures were ratings of the importance of technician practice functions and corresponding responsibilities. Weights were calculated for each practice function, and a weighted list of practice functions was developed that may serve as a performance evaluation template. Customer service-related activities were judged by pharmacists and technicians alike to be the most important technician functions. Many pharmacies either lack formal performance appraisal systems or fail to implement them properly; technicians may desire more consistent feedback from pharmacists and value information that may lead to organizational rewards. Using a weighted, behaviorally anchored performance appraisal system may help pharmacists and pharmacy managers meet these demands.

  16. A generic method for the evaluation of interval type-2 fuzzy linguistic summaries.

    Science.gov (United States)

    Boran, Fatih Emre; Akay, Diyar

    2014-09-01

    Linguistic summarization has turned out to be an important knowledge discovery technique, providing the most relevant natural-language sentences in a human-consistent manner. While many studies on linguistic summarization have handled ordinary fuzzy sets [type-1 fuzzy sets (T1FS)] for modeling words, only a few have dealt with interval type-2 fuzzy sets (IT2FS), even though IT2FS are better capable of handling the uncertainties associated with words. Furthermore, existing studies work with the scalar-cardinality-based degree of truth, which can lead to inconsistency in the evaluation of interval type-2 fuzzy (IT2F) linguistic summaries. In this paper, to overcome this shortcoming, we propose a novel probabilistic degree of truth for evaluating IT2F linguistic summaries in the form of type-I and type-II quantified sentences. We also extend the properties that should be fulfilled by any degree of truth for linguistic summarization with T1FS to the IT2F environment. We not only prove that our probabilistic degree of truth satisfies the given properties, but also illustrate by examples that it provides more consistent results than the existing degree of truth in the literature. Furthermore, we carry out an application on linguistic summarization of time series data of the Europe Brent spot price, along with a comparison of the results achieved with our approach and with the existing degree of truth in the literature.

  17. A Grey Interval Relational Degree-Based Dynamic Multiattribute Decision Making Method and Its Application in Investment Decision Making

    Directory of Open Access Journals (Sweden)

    Yuhong Wang

    2014-01-01

    Full Text Available The purpose of this paper is to propose a three-dimensional grey interval relational degree model for dynamic multi-attribute decision making in which the observed values are interval grey numbers. Elements of the system are selected as points in an m-dimensional linear space, and the observation data of each element at different times and for different objects serve as the coordinates of those points. An optimization model is employed to obtain each scheme's affiliation degree to the positive and negative ideal schemes, and a three-dimensional grey interval relational degree model based on time, index and scheme is constructed. The results show that the three-dimensional grey relational degree simplifies the traditional dynamic multi-attribute decision-making method and can better resolve dynamic multi-attribute decision-making problems with interval numbers. The example illustrates that the method presented in the paper can be used to deal with problems of uncertainty such as dynamic multi-attribute decision making.

  18. A new method for assessing judgmental distributions

    NARCIS (Netherlands)

    Moors, J.J.A.; Schuld, M.H.; Mathijssen, A.C.A.

    1995-01-01

    For a number of statistical applications subjective estimates of some distributional parameters - or even complete densities are needed. The literature agrees that it is wise behaviour to ask only for some quantiles of the distribution; from these, the desired quantities are extracted. Quite a lot o


  20. Comparison of the Reference Intervals Used for the Evaluation of Maternal Thyroid Function During Pregnancy Using Sequential and Nonsequential Methods

    Institute of Scientific and Technical Information of China (English)

    Jian-Xia Fan; Shuai Yang; Wei Qian; Feng-Tao Shi; He-Feng Huang

    2016-01-01

    Background: Maternal thyroid dysfunction is common during pregnancy, and physiological changes during pregnancy can lead to the overdiagnosis of hyperthyroidism and misdiagnosis of hypothyroidism with nongestation-specific reference intervals. Our aim was to compare sequential with nonsequential methods for the evaluation of thyroid function in pregnant women. Methods: We tested pregnant women who underwent their trimester prenatal screening at our hospital from February 2011 to September 2012 for serum thyroid stimulating hormone (TSH) and free thyroxine (FT4) using the Abbott and Roche kits. There were 447 and 200 patients enrolled in the nonsequential and sequential groups, respectively. The central 95% range between the 2.5th and the 97.5th percentiles was used as the reference interval for the thyroid function parameters. Results: The nonsequential group exhibited a significantly larger degree of dispersion in the TSH reference interval during the 2nd and 3rd trimesters as measured using both the Abbott and Roche kits (all P < 0.05). The TSH reference intervals were significantly larger in the nonsequential group than in the sequential group during the 3rd trimester as measured with both the Abbott (4.95 vs. 3.77 mU/L, P < 0.001) and Roche kits (6.62 vs. 5.01 mU/L, P = 0.004). The nonsequential group had a significantly larger FT4 reference interval as measured with the Abbott kit during all trimesters (12.64 vs. 5.82 pmol/L; 7.96 vs. 4.77 pmol/L; 8.10 vs. 4.77 pmol/L, respectively; all P < 0.05), whereas a significantly larger FT4 reference interval was only observed during the 2nd trimester with the Roche kit (7.76 vs. 5.52 pmol/L, P = 0.002). Conclusions: It is more reasonable to establish reference intervals for the evaluation of maternal thyroid function using the sequential method during each trimester of pregnancy. Moreover, the exclusion of pregnancy-related complications should be considered among the inclusion criteria for thyroid function tests.

  1. New method to estimate the sample size for calculation of a proportion assuming binomial distribution.

    Science.gov (United States)

    Vallejo, Adriana; Muniesa, Ana; Ferreira, Chelo; de Blas, Ignacio

    2013-10-01

    Nowadays the formula used to calculate the sample size for estimating a proportion (such as a prevalence) is based on the normal distribution; however, it should be based on the binomial distribution, whose confidence interval can be calculated using the Wilson score method. When the two formulae (normal and binomial distributions) are compared, the difference in the width of the confidence intervals is relevant at the tails and the center of the curves. To calculate the needed sample size, we simulated an iterative sampling procedure, which shows an underestimation of the sample size for prevalence values close to 0 or 1, and an overestimation for values close to 0.5. In view of these results, we propose an algorithm based on the Wilson score method that provides sample size values similar to those obtained empirically by simulation.
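
    A sketch of the Wilson-based idea (the paper's algorithm may differ in detail): choose the smallest n whose Wilson score interval half-width at the anticipated prevalence does not exceed the desired precision d, and compare it with the classical normal-approximation size:

        import math

        def wilson_halfwidth(p, n, z=1.96):
            denom = 1 + z*z/n
            return z * math.sqrt(p*(1-p)/n + z*z/(4*n*n)) / denom

        def sample_size_wilson(p, d, z=1.96):
            """Smallest n whose Wilson score interval half-width at the
            anticipated prevalence p does not exceed the precision d."""
            n = 1
            while wilson_halfwidth(p, n, z) > d:
                n += 1
            return n

        # Classical normal-approximation size vs. Wilson-based size, p = 0.05, d = 0.03:
        n_normal = math.ceil(1.96**2 * 0.05 * 0.95 / 0.03**2)
        print(n_normal, sample_size_wilson(0.05, 0.03))   # the normal formula is smaller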

  2. One-way ANOVA based on interval information

    Science.gov (United States)

    Hesamian, Gholamreza

    2016-08-01

    This paper deals with extending one-way analysis of variance (ANOVA) to the case where the observed data are represented by closed intervals rather than real numbers. In this approach, a notion of interval random variable is first introduced. In particular, a normal distribution with interval parameters is introduced to investigate hypotheses about the equality of interval means or to test the assumption of homogeneity of interval variances. Moreover, the least significant difference (LSD) method for multiple comparisons of interval means is developed for the case where the null hypothesis of equal means is rejected. Then, at a given interval significance level, an index is applied to compare the interval test statistic with the corresponding interval critical value, as a criterion for accepting or rejecting the interval null hypothesis of interest. Finally, the decision-making procedure yields degrees of acceptance or rejection of the interval hypotheses. An applied example shows the performance of the method.

  3. Improved robust stabilization method for linear systems with interval time-varying input delays by using Wirtinger inequality.

    Science.gov (United States)

    Liu, Yuzhi; Li, Muguo

    2015-05-01

    This paper investigates the robust stabilization problem for uncertain linear systems with interval time-varying delays. By constructing novel Lyapunov-Krasovskii functionals and developing delay-partitioning approaches, some delay-dependent stability criteria are derived based on an improved Wirtinger's inequality and the reciprocally convex method. The proposed methods improve the stability conditions without adding much computational complexity. A state feedback controller design approach is also presented based on the proposed criteria. Numerical examples are finally given to illustrate the effectiveness of the proposed method.

  4. The BASIC program to analyse the polymodal frequency distribution into normal distributions with Marquardt's method

    National Research Council Canada - National Science Library

    Akamine, T

    1984-01-01

    The method for resolving a polymodal frequency distribution into normal distributions, by plotting the frequencies at the median values of the classes, enables the resolution to be performed with a regression-curve method...

  5. Structural hybrid reliability index and its convergent solving method based on random–fuzzy–interval reliability model

    Directory of Open Access Journals (Sweden)

    Hai An

    2016-08-01

    Full Text Available Aiming at the problem that a variety of uncertain variables coexist in engineering structural reliability analysis, a new hybrid reliability index to evaluate structural hybrid reliability, based on the random-fuzzy-interval model, is proposed in this article, together with a convergent solving method. First, the truncated probability reliability model, the fuzzy random reliability model, and the non-probabilistic interval reliability model are introduced. Then, the new hybrid reliability index is defined based on the random-fuzzy-interval model. Furthermore, the calculation flowchart of the hybrid reliability index is presented, and the index is solved using a modified limit-step-length iterative algorithm that ensures convergence. The validity of the convergent algorithm for the hybrid reliability model is verified through calculation examples from the literature. In the end, a numerical example demonstrates that the hybrid reliability index is applicable to the wear reliability assessment of mechanisms in which truncated random variables, fuzzy random variables, and interval variables coexist. The demonstration also shows the good convergence of the proposed iterative algorithm.

  6. An ideal interval method of multi-objective decision-making for comprehensive assessment of water resources renewability

    Institute of Scientific and Technical Information of China (English)

    YANG Xiaohua; YANG Zhifeng; SHEN Zhenyao; LI Jianqiang

    2004-01-01

    In order to estimate water resources renewability scientifically, an Ideal Interval Method of Multiple Objective Decision-Making (IIMMODM) is presented. This method is developed by improving an ideal-point method of multiple objective decision-making: the ideal interval is obtained from assessment standards instead of ideal points, and the weights are decided using the basic point and a gray-code accelerating genetic algorithm. The method synthesizes expert suggestions, avoids re-scoring the objectives, and can handle the complicated problem of compatible or incompatible multi-objective assessment. The principle of the IIMMODM is presented in this paper and used to assess water resources renewability for nine administrative divisions in the Yellow River basin. The result shows that the water resources renewability in the Yellow River basin is very low. Compared with the gray associate analysis method, the fuzzy synthesis method and the genetic projection pursuit method, the IIMMODM is easier to use; compared with the ideal-point method of multiple objective decision-making, the IIMMODM has good robustness, which makes it applicable to comprehensive assessments of water resources.

  7. Accurate interval estimation of geometric distribution parameter based on survival beta distribution

    Institute of Scientific and Technical Information of China (English)

    徐玉茹; 徐付霞

    2016-01-01

    It is proved that the sufficient statistic of the geometric distribution parameter follows a negative binomial distribution. By converting the negative binomial distribution into the survival beta distribution, an exact confidence interval for the parameter is constructed, and the best combination among different confidence-level splits is selected to obtain the exact shortest confidence interval. The approximate interval estimate for the geometric distribution under large samples is also discussed. Through numerical simulation, the change in the accuracy of the interval estimation is intuitively demonstrated, illustrating the superiority of the exact shortest confidence interval.
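
    The underlying construction can be sketched in Python. Since the total failure count over n geometric observations is negative binomial, and the negative binomial CDF is a regularized incomplete beta function, exact bounds are beta quantiles. The sketch below uses the equal-tailed Clopper-Pearson-style split; the paper's search for the shortest interval is not reproduced, and the example numbers are illustrative.

        from scipy.stats import beta

        def geometric_ci(n_successes, total_failures, alpha=0.05):
            """Equal-tailed exact CI for the geometric parameter p."""
            r, k = n_successes, total_failures
            lower = beta.ppf(alpha / 2, r, k + 1)
            upper = 1.0 if k == 0 else beta.ppf(1 - alpha / 2, r, k)
            return lower, upper

        print(geometric_ci(20, 55))   # e.g. 20 observations, 55 failures in total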

  8. New variable selection method using interval segmentation purity with application to blockwise kernel transform support vector machine classification of high-dimensional microarray data.

    Science.gov (United States)

    Tang, Li-Juan; Du, Wen; Fu, Hai-Yan; Jiang, Jian-Hui; Wu, Hai-Long; Shen, Guo-Li; Yu, Ru-Qin

    2009-08-01

    One problem with discriminant analysis of microarray data is the representation of each sample by a large number of genes that are possibly irrelevant, insignificant, or redundant. Methods of variable selection are, therefore, of great significance in microarray data analysis. A new method for key gene selection has been proposed on the basis of interval segmentation purity, defined as the purity of samples belonging to a certain class in intervals segmented by a mode search algorithm. This method identifies the key variables most discriminative for each class, which offers the possibility of unraveling the biological implications of the selected genes. A salient advantage of the new strategy over existing methods is its capability of selecting genes that, though possibly exhibiting a multimodal distribution, are the most discriminative for the classes of interest, considering that the expression levels of some genes may reflect systematic differences in within-class samples derived from different pathogenic mechanisms. On the basis of the key genes selected for individual classes, a support vector machine with blockwise kernel transform is developed for the classification of different classes. The combination of the proposed gene mining approach with the support vector machine is demonstrated in cancer classification using two public data sets. The results reveal that significant genes have been identified for each class, and the classification model shows satisfactory performance in training and prediction for both data sets.

  9. Sampling Methods for Wallenius' and Fisher's Noncentral Hypergeometric Distributions

    DEFF Research Database (Denmark)

    Fog, Agner

    2008-01-01

    Several methods for generating variates with univariate and multivariate Wallenius' and Fisher's noncentral hypergeometric distributions are developed. Methods for the univariate distributions include: simulation of urn experiments, inversion by binary search, inversion by chop-down search from the mode, ratio-of-uniforms rejection method, and rejection by sampling in the tau domain. Methods for the multivariate distributions include: simulation of urn experiments, conditional method, Gibbs sampling, and Metropolis-Hastings sampling. These methods are useful for Monte Carlo simulation of models of biased sampling and models of evolution and for calculating moments and quantiles of the distributions.
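
    Of the listed generators, the urn-experiment simulation is the simplest to sketch. A minimal Python version for the univariate Wallenius distribution follows; parameters are illustrative, and the paper's faster methods are not reproduced.

        import numpy as np

        def wallenius_urn(m1, m2, n, omega, rng):
            """Univariate Wallenius sample: n weighted draws without replacement.

            m1, m2: initial counts of the two colors; omega: weight of color 1.
            """
            x1 = 0
            for _ in range(n):
                # Next ball is color 1 with odds omega*m1 : m2 (Wallenius' scheme).
                if rng.random() < omega * m1 / (omega * m1 + m2):
                    m1 -= 1
                    x1 += 1
                else:
                    m2 -= 1
            return x1

        rng = np.random.default_rng(0)
        draws = [wallenius_urn(20, 30, 15, 2.0, rng) for _ in range(10000)]
        print(np.mean(draws))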

  10. Descriptive statistics and analysis of interval symbolic data with general distribution

    Institute of Scientific and Technical Information of China (English)

    郭均鹏; 李汶华; 高峰

    2011-01-01

    Interval symbolic data obtained by packaging large-scale individual data are the subject of this paper. Since individuals are often not uniformly distributed within the intervals, descriptive statistics and analysis methods for generally distributed interval data, within which each individual may be arbitrarily distributed, are studied. The basic theory of symbolic data analysis is first introduced, and the generally distributed interval variable is defined. The empirical distribution function and the empirical joint distribution function of generally distributed interval variables are then investigated, and on this basis the descriptive statistics of generally distributed interval variables are derived. Finally, a numerical example is given, and an application study of the Chinese stock market is carried out using factor analysis of generally distributed interval symbolic data. The results show that the descriptive statistics given in previous studies under the uniform-distribution assumption are special cases of the formulas derived here. Moreover, the method is based on empirical distribution theory, so the explicit distribution function of individuals within an interval need not be known, and the individual information within the intervals is fully used in the calculation.

  11. New version of Optimal Homotopy Asymptotic Method for the solution of nonlinear boundary value problems in finite and infinite intervals

    Directory of Open Access Journals (Sweden)

    Liaqat Ali

    2016-09-01

    Full Text Available In this research work a new version of the Optimal Homotopy Asymptotic Method is applied to solve nonlinear boundary value problems (BVPs) in finite and infinite intervals. It comprises an initial guess, auxiliary functions (containing unknown convergence-controlling parameters) and a homotopy. The method is applied to solve nonlinear Riccati equations and a nonlinear BVP of order two for thin film flow of a third grade fluid on a moving belt. It is also used to solve a nonlinear BVP of order three derived by Mostafa et al. for hydro-magnetic boundary layer and micro-polar fluid flow over a stretching surface embedded in a non-Darcian porous medium with radiation. The obtained results are compared with the existing results of Runge-Kutta (RK-4) and the Optimal Homotopy Asymptotic Method (OHAM-1). The outcomes achieved by this method are in excellent agreement with the exact solution, which shows that the method is easy to apply and effective.

  12. Simulation Research of Interval Analysis Method for Measuring 220Rn

    Institute of Scientific and Technical Information of China (English)

    颜拥军; 付德顺; 曹真伟; 易凌帆

    2014-01-01

    The multiple time interval analysis method was used to measure and track radon activity; the basic principle and the corresponding calculation formulas are presented. Since random nuclear signals follow a Poisson distribution in time, decay pulse sequences of 220Rn and its daughters were generated in Matlab and analyzed with the time interval method. The results show that the time interval analysis method can effectively measure the average concentration of low-activity 220Rn in the presence of natural background and other interfering backgrounds (such as 222Rn in a mixed radon field).
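
    The simulation step can be sketched in a few lines of Python (the paper used Matlab). The rates and duration below are illustrative, and the paper's separation of correlated 220Rn decays from background is not reproduced; the sketch only shows generating a merged Poisson pulse train and recovering the total rate from the interval histogram.

        import numpy as np

        rng = np.random.default_rng(1)
        rate_signal, rate_bkg, T = 5.0, 50.0, 1000.0   # counts/s and seconds, illustrative

        def poisson_arrivals(rate, T, rng):
            """Arrival times of a homogeneous Poisson process on [0, T)."""
            gaps = rng.exponential(1.0 / rate, int(rate * T * 1.5))
            times = np.cumsum(gaps)
            return times[times < T]

        events = np.sort(np.concatenate([poisson_arrivals(rate_signal, T, rng),
                                         poisson_arrivals(rate_bkg, T, rng)]))
        intervals = np.diff(events)

        # For a merged Poisson stream the intervals are exponential in the total
        # rate, so the slope of the log interval histogram recovers it.
        hist, edges = np.histogram(intervals, bins=50)
        centers = 0.5 * (edges[:-1] + edges[1:])
        mask = hist > 0
        slope = np.polyfit(centers[mask], np.log(hist[mask]), 1)[0]
        print("estimated total rate:", -slope)   # ~ rate_signal + rate_bkg = 55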

  13. Estimation of reference intervals by bootstrap methods

    Institute of Scientific and Technical Information of China (English)

    李小佩; 王建新; 施秀英; 王惠民

    2016-01-01

    ...intervals became wider with fewer samples. A minimum threshold of 60 samples can be considered a good limit for obtaining reliable reference intervals. Conclusion: The bootstrap methods can be used to calculate reference intervals and provide a new approach for clinical laboratories, overcoming the problems of limited sample size and unknown probability distribution of the variable.
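
    A minimal sketch of a percentile-bootstrap reference interval in Python, assuming the usual 2.5th-97.5th percentile definition; the resampling details and data in the paper may differ.

        import numpy as np

        def bootstrap_reference_interval(x, n_boot=2000, rng=None):
            """Bootstrap point estimates and 90% CIs for both reference limits."""
            rng = rng or np.random.default_rng(0)
            lows, highs = [], []
            for _ in range(n_boot):
                resample = rng.choice(x, size=x.size, replace=True)
                lows.append(np.percentile(resample, 2.5))
                highs.append(np.percentile(resample, 97.5))
            return (np.median(lows), np.percentile(lows, [5, 95]),
                    np.median(highs), np.percentile(highs, [5, 95]))

        x = np.random.default_rng(42).lognormal(0.5, 0.3, 120)   # skewed analyte
        print(bootstrap_reference_interval(x))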

  14. Confidence Intervals for the Coefficient of Variation for the Normal Distribution

    Institute of Scientific and Technical Information of China (English)

    孙祝岭

    2009-01-01

    The estimation of the coefficient of variation for the normal distribution is discussed. A new pivotal quantity is proposed for constructing the classical confidence intervals for the coefficient of variation, and exact confidence intervals with a simple expression are obtained for the coefficient of variation of the normal distribution.
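
    For comparison, one classical exact construction can be sketched in Python: the statistic sqrt(n)*xbar/s is a noncentral-t pivot with noncentrality sqrt(n)*mu/sigma, so inverting its CDF in the noncentrality parameter bounds mu/sigma and hence the CV. This assumes a positive mean and is not necessarily the paper's pivotal quantity.

        import numpy as np
        from scipy.stats import nct
        from scipy.optimize import brentq

        def cv_ci(x, alpha=0.05):
            """Exact CI for sigma/mu of a normal sample via the noncentral-t pivot."""
            n = x.size
            t_obs = np.sqrt(n) * x.mean() / x.std(ddof=1)
            df = n - 1
            # invert the noncentral-t CDF in its noncentrality parameter
            d_hi = brentq(lambda d: nct.cdf(t_obs, df, d) - alpha / 2, 1e-9, 1e4)
            d_lo = brentq(lambda d: nct.cdf(t_obs, df, d) - (1 - alpha / 2), 1e-9, 1e4)
            return np.sqrt(n) / d_hi, np.sqrt(n) / d_lo

        x = np.random.default_rng(3).normal(10.0, 2.0, 40)
        print(cv_ci(x))     # interval for sigma/mu, true value 0.2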

  15. New proposed method for prediction of reservoir sedimentation distribution

    Institute of Scientific and Technical Information of China (English)

    H. Hosseinjanzadeh; K. Hosseini; K. Kaveh; S.F. Mousavi

    2015-01-01

    Prediction of sediment distribution in reservoirs is an important issue for dam designers to determine the reservoir active storage capacity. Methods proposed to calculate sediment distribution are varied, and mainly empirical. Among all the methods currently available, only the area-reduction and area-increment methods are considered principal methods for prediction of sediment distribution. In this paper, data from 16 reservoirs in the United States are used to propose a new empirical method for predicting sediment distribution in reservoirs. In the proposed method, reservoir sediment distribution is related to sediment volume and original reservoir characteristics. To validate the accuracy of the new method, the obtained results are compared with survey data for two reservoirs. The results of this investigation show that the proposed method has acceptable accuracy.

  16. Review of Congestion Management Methods for Distribution Networks with High Penetration of Distributed Energy Resources

    DEFF Research Database (Denmark)

    Huang, Shaojun; Wu, Qiuwei; Liu, Zhaoxi;

    2014-01-01

    This paper reviews the existing congestion management methods for distribution networks with high penetration of DERs documented in the recent research literature. The congestion management methods reviewed can be grouped into two categories: market methods and direct control methods. The market methods consist of dynamic tariff, distribution capacity market, shadow price and flexible service market. The direct control methods comprise network reconfiguration, reactive power control and active power control. Based on the review of the existing methods, the authors suggest a priority list of the existing methods.

  17. Method for spatially distributing a population

    Science.gov (United States)

    Bright, Edward A [Knoxville, TN; Bhaduri, Budhendra L [Knoxville, TN; Coleman, Phillip R [Knoxville, TN; Dobson, Jerome E [Lawrence, KS

    2007-07-24

    A process for spatially distributing a population count within a geographically defined area can include the steps of logically correlating land usages apparent from a geographically defined area to geospatial features in the geographically defined area and allocating portions of the population count to regions of the geographically defined area having the land usages, according to the logical correlation. The process can also include weighting the logical correlation for determining the allocation of portions of the population count and storing the allocated portions within a searchable data store. The logically correlating step can include the step of logically correlating time-based land usages to geospatial features of the geographically defined area. The process can also include obtaining a population count for the geographically defined area, organizing the geographically defined area into a plurality of sectors, and verifying the allocated portions according to direct observation.
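
    The allocation step can be illustrated with a toy Python sketch. The land-use weights and region data below are hypothetical, and the patent's time-based usage and verification steps are omitted.

        # Distribute an area's population to its regions in proportion to
        # weights attached to each region's land use (illustrative figures).
        population = 10000
        regions = [                      # (region id, land use, area in km^2)
            ("r1", "residential", 4.0),
            ("r2", "commercial",  2.0),
            ("r3", "forest",      14.0),
        ]
        landuse_weight = {"residential": 10.0, "commercial": 2.0, "forest": 0.1}

        scores = {rid: landuse_weight[lu] * area for rid, lu, area in regions}
        total = sum(scores.values())
        allocation = {rid: population * s / total for rid, s in scores.items()}
        print(allocation)   # most people land in the residential region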

  18. THE COMPILATION OF SHANNON ENTROPY MEASUREMENT EQUATION FOR NONLINEAR DYNAMIC SYSTEMS BY USING THE INTERVAL ANALYSIS METHODS

    Directory of Open Access Journals (Sweden)

    Yu. P. Machekhin

    2015-01-01

    Full Text Available The article considers the measurement of dynamic variables of open nonlinear dynamical systems. Most real physical and biological systems in the surrounding world are nonlinear dynamic systems, in which spatial, temporal and spatio-temporal structures are formed because of dissipation, and collective effects associated with the processes of self-organization and evolution are possible. The objective of this research is the compilation of Shannon entropy measurement equations for nonlinear dynamical systems using the methods of interval mathematics. It is shown that measurements and the analysis of measurement results for variables with complex dynamics, as a rule, cannot be described by the classical metrological approaches contained in metrological documents such as the GUM. The reason for this situation is the mismatch between the classical mathematical and physical approaches on the one hand and the processes that occur in real dynamic systems on the other. For measuring the variables of nonlinear dynamical systems, a special measurement model and a model for analyzing measurement results are created, based on open systems theory, dynamical chaos theory and information theory. The fractal, entropic and temporal scales are proposed as tools for evaluating a system's state. As a result of the research, Shannon entropy measurement equations based on interval representations of measurement results are created, both for an individual dynamic variable and for a nonlinear dynamic system as a whole. It is shown that measurement equations based on interval mathematics methods contain exact solutions and allow full uncertainty to be taken into account. The new results complement the measurement model and the measurement-result analysis model for nonlinear dynamic systems.

  19. METHODS OF MEASURING THE EFFECTS OF LIGHTNING BY SIMULATING ITS STRIKES WITH THE INTERVAL ASSESSMENT OF THE RESULTS OF MEASUREMENTS

    Directory of Open Access Journals (Sweden)

    P. V. Kriksin

    2017-01-01

    Full Text Available The article presents newly developed methods for a more accurate interval estimate of the experimental values of voltages that occur on grounding devices of substations and in control cables when lightning strikes lightning rods; the improved estimate increased the accuracy of the results of the study of lightning disturbances by 28 %. The more accurate interval estimates were achieved by developing a measurement model that takes into account, along with the measured values, different measurement errors, and includes special processing of the measurement results. As a result, the interval containing the true value of the sought voltage is determined with a probability of 95 %. The methods can be applied to the IK-1 and IKP-1 measurement complexes, consisting of an aperiodic pulse generator, a high-frequency pulse generator and selective voltmeters. To evaluate the effectiveness of the developed methods, series of experimental voltage assessments of the grounding devices of ten operating high-voltage substations were carried out in accordance with both the developed methods and traditional techniques. The results confirmed the possibility of locating the true values of voltage over a wide range, which ought to be considered in the technical diagnostics of lightning protection of substations, in the analysis of measurement results and in the development of measures to reduce the effects of lightning. A comparative analysis also demonstrated that the true value of the sought voltage may exceed the measured value by 28 % on average, which ought to be considered in further analysis of the lightning protection parameters at the facility and in the development of corrective actions. The developed methods have been

  20. Distributed User Profiling via Spectral Methods

    CERN Document Server

    Tomozei, Dan-Cristian

    2011-01-01

    User profiling is a useful primitive for constructing personalised services, such as content recommendation. In the present paper we investigate the feasibility of user profiling in a distributed setting, with no central authority and only local information exchanges between users. We compute a profile vector for each user (i.e., a low-dimensional vector that characterises her taste) via spectral transformation of observed user-produced ratings for items. Our two main contributions follow: i) We consider a low-rank probabilistic model of user taste. More specifically, we consider that users and items are partitioned in a constant number of classes, such that users and items within the same class are statistically identical. We prove that without prior knowledge of the compositions of the classes, based solely on few random observed ratings (namely $O(N\log N)$ such ratings for $N$ users), we can predict user preference with high probability for unrated items by running a local vote among users with similar pr...
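
    A centralised sketch of the spectral step in Python (the paper's actual contribution, the decentralised computation without a central authority, is not reproduced here): profile vectors come from a truncated SVD of the sparse user-item rating matrix. The data and rank are illustrative.

        import numpy as np

        rng = np.random.default_rng(7)
        ratings = rng.integers(0, 2, size=(100, 40)).astype(float)  # users x items
        ratings[rng.random(ratings.shape) < 0.8] = 0.0              # mostly unobserved

        k = 3                                    # assumed number of latent classes
        U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
        user_profiles = U[:, :k] * s[:k]         # low-dimensional taste vectors
        item_profiles = Vt[:k, :].T * s[:k]
        print(user_profiles.shape, item_profiles.shape)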

  1. New Interval-Valued Intuitionistic Fuzzy Behavioral MADM Method and Its Application in the Selection of Photovoltaic Cells

    Directory of Open Access Journals (Sweden)

    Xiaolu Zhang

    2016-10-01

    Full Text Available As one of the emerging renewable resources, the use of photovoltaic cells has become a promise for offering clean and plentiful energy. The selection of the best photovoltaic cell for a promoter plays a significant role in maximizing income, minimizing costs and conferring high maturity and reliability, which is a typical multiple attribute decision making (MADM) problem. Although many prominent MADM techniques have been developed, most of them select the optimal alternative under the hypothesis that the decision maker or expert is completely rational and the decision data are represented by crisp values. However, in the selection of photovoltaic cells the decision maker is usually boundedly rational and the ratings of alternatives are usually imprecise and vague. To address these kinds of complex and common issues, in this paper we develop a new interval-valued intuitionistic fuzzy behavioral MADM method. We employ interval-valued intuitionistic fuzzy numbers (IVIFNs) to express the imprecise ratings of alternatives, and we construct LINMAP-based nonlinear programming models to identify the reference points under IVIFN contexts, which avoids the subjective randomness of selecting the reference points. Finally we develop a prospect theory-based ranking method to identify the optimal alternative, which takes full account of the decision maker's behavioral characteristics such as reference dependence, diminishing sensitivity and loss aversion in the decision making process.

  2. Integrated Power Flow and Short Circuit Calculation Method for Distribution Network with Inverter Based Distributed Generation

    Directory of Open Access Journals (Sweden)

    Shan Yang

    2016-01-01

    Full Text Available Power flow calculation and short circuit calculation are the basis of theoretical research for distribution networks with inverter based distributed generation. The similarity of the equivalent models for inverter based distributed generation during normal and fault conditions of the distribution network, and the differences between power flow and short circuit calculation, are analyzed in this paper. An integrated power flow and short circuit calculation method for distribution networks with inverter based distributed generation is then proposed. The proposed method lets the inverter based distributed generation be equivalent to an Iθ bus, which makes it suitable for calculating the power flow of distribution networks with current-limited inverter based distributed generation. The low voltage ride-through capability of inverter based distributed generation can be considered as well. Finally, tests of power flow and short circuit current calculation are performed on a 33-bus distribution network. The calculated results from the proposed method are contrasted with those from the traditional method and the simulation method, and the comparison verifies the effectiveness of the integrated method suggested in this paper.

  3. Efficient Numerical Methods for Stable Distributions

    Science.gov (United States)

    2007-11-02

    ...cutoffs c1 = −128 and c2 = +127 are used, corresponding to the common values used in digital signal processing. Five new functions for discrete ... variables are provided, using the Chambers-Mallows-Stuck method, rounding them to the nearest integer, and then cutting off if the value is too high or too low ... within the common Matlab environment they use. We comment briefly on the commercialization of this in the last section.
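
    The Chambers-Mallows-Stuck transformation named in the record can be sketched in Python (alpha != 1 branch only), with the discretization by rounding and the c1/c2 cutoffs described above; parameter values are illustrative.

        import numpy as np

        def cms_stable(alpha, beta, size, rng=None):
            """Standard alpha-stable variates, alpha in (0, 2], alpha != 1."""
            rng = rng or np.random.default_rng()
            V = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform angle
            W = rng.exponential(1.0, size)                 # unit exponential
            t = beta * np.tan(np.pi * alpha / 2)
            B = np.arctan(t) / alpha
            S = (1 + t * t) ** (1 / (2 * alpha))
            return (S * np.sin(alpha * (V + B)) / np.cos(V) ** (1 / alpha)
                    * (np.cos(V - alpha * (V + B)) / W) ** ((1 - alpha) / alpha))

        x = cms_stable(1.5, 0.0, 100000, np.random.default_rng(0))
        x_discrete = np.clip(np.round(x), -128, 127)   # rounding and cutoffs c1, c2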

  4. Luminous Flame Temperature Distribution Measurement Using the Emission Method

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Flame temperature distribution is one of the most important characteristic parameters in combustion research. The emission method is a good way to measure the luminous flame temperature field. The maximum entropy method is introduced to the temperature distribution measurement of a luminous flame using the emission method. A simplified mathematical model was derived by combining the thermal radiation theory, reconstruction algorithm and maximum entropy method. Suitable parameters were selected in the computing process. Good experimental results were obtained with pulverized coal flames.

  5. Optimal Point-to-Point Motion Planning of Flexible Parallel Manipulator with Multi-Interval Radau Pseudospectral Method

    Directory of Open Access Journals (Sweden)

    Kong Minxiu

    2016-01-01

    Full Text Available Optimal point-to-point motion planning of a flexible parallel manipulator is investigated in this paper, with the 3RRR parallel manipulator taken as the object. First, an optimal point-to-point motion planning problem is constructed with consideration of the rigid-flexible coupling dynamic model and actuator dynamics. Then, the multi-interval Legendre-Gauss-Radau (LGR) pseudospectral method is introduced to transform the optimal control problem into a Nonlinear Programming (NLP) problem. At last, simulation and experiment are carried out on the flexible parallel manipulator. Compared with straight-line motion planned with quintic polynomials, the proposed method constrains the flexible displacement amplitude and suppresses residual vibration.

  6. Distributed Interior-point Method for Loosely Coupled Problems

    DEFF Research Database (Denmark)

    Pakazad, Sina Khoshfetrat; Hansson, Anders; Andersen, Martin Skovgaard

    2014-01-01

    The algorithm is based on Newton's method and utilizes proximal splitting to distribute the computations for calculating the Newton step at each iteration. A combination of this algorithm and the interior-point method is then used to introduce a distributed algorithm for solving constrained loosely coupled problems. We also provide...

  7. 29 CFR 4041A.42 - Method of distribution.

    Science.gov (United States)

    2010-07-01

    29 CFR 4041A.42 (2010-07-01), Labor Regulations Relating to Labor (Continued): Pension Benefit Guaranty Corporation, Plan Terminations, Termination of Multiemployer Plans, Closeout of Sufficient Plans. § 4041A.42 Method of distribution. The...

  8. Cathode power distribution system and method of using the same for power distribution

    Science.gov (United States)

    Williamson, Mark A; Wiedmeyer, Stanley G; Koehl, Eugene R; Bailey, James L; Willit, James L; Barnes, Laurel A; Blaskovitz, Robert J

    2014-11-11

    Embodiments include a cathode power distribution system and/or method of using the same for power distribution. The cathode power distribution system includes a plurality of cathode assemblies. Each cathode assembly of the plurality of cathode assemblies includes a plurality of cathode rods. The system also includes a plurality of bus bars configured to distribute current to each of the plurality of cathode assemblies. The plurality of bus bars include a first bus bar configured to distribute the current to first ends of the plurality of cathode assemblies and a second bus bar configured to distribute the current to second ends of the plurality of cathode assemblies.

  9. A Comparison of Distribution Free and Non-Distribution Free Factor Analysis Methods

    Science.gov (United States)

    Ritter, Nicola L.

    2012-01-01

    Many researchers recognize that factor analysis can be conducted on both correlation matrices and variance-covariance matrices. Although most researchers extract factors from non-distribution free or parametric methods, researchers can also extract factors from distribution free or non-parametric methods. The nature of the data dictates the method…

  10. A Decentralized Variable Ordering Method for Distributed Constraint Optimization

    Science.gov (United States)

    2005-05-01

    Either approach can be used depending on how densely the nodes are connected inside blocks. If inside-block connectivity is sparse, the latter method ... (A Decentralized Variable Ordering Method for Distributed Constraint Optimization, Anton Chechetka and Katia Sycara, CMU-RI-TR-05-18, Robotics Institute, May 2005.)

  11. Sprint interval training (SIT) is an effective method to maintain cardiorespiratory fitness (CRF) and glucose homeostasis in Scottish adolescents.

    Science.gov (United States)

    Martin, R; Buchan, D S; Baker, J S; Young, J; Sculthorpe, N; Grace, F M

    2015-11-01

    The present study examined the physiological impact of a school-based sprint interval training (SIT) intervention, in replacement of standard physical education (SPE) classes, on cardio-respiratory fitness (CRF) and glucose homeostasis during the semester following summer vacation. Participants (n=49) were randomly allocated to either the intervention group (SIT; n=26, aged 16.9 ± 0.3 yrs) or a control group who underwent standard physical education (SPE; n=23, aged 16.8 ± 0.6 yrs). CRF (VO2max) and glucose homeostasis were obtained prior to and following 7 weeks of SIT exercise. A significant group x time interaction was observed for CRF. SIT exercise is an effective method of maintaining (but not improving) CRF and fasting insulin homeostasis amongst school-going adolescents, and demonstrates potential as a time-efficient physiological adjunct to standard PE classes in order to maintain CRF during the school term.

  12. The MLE of GE distribution with interval censored data

    Institute of Scientific and Technical Information of China (English)

    程从华; 陈进源

    2012-01-01

    The maximum likelihood estimation (MLE) problem for the two parameters of the GE distribution with interval data is studied. In the presence of interval-censored data, analytical forms of the MLEs of the parameters of the GE distribution do not exist, and even approximate solutions are hard to obtain. Since interval data are a form of incomplete data, the EM algorithm is used to compute approximate MLEs of the parameters. A real-life data analysis is presented to illustrate the method.
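
    A sketch of the likelihood being maximized, assuming the standard two-parameter generalized exponential cdf F(x) = (1 - exp(-lam*x))**a. The paper uses an EM algorithm; the direct optimizer below is a simpler stand-in targeting the same MLE, and the interval data are made up for illustration.

        import numpy as np
        from scipy.optimize import minimize

        def F(x, a, lam):
            """Generalized exponential cdf (assumed parameterization)."""
            return (1.0 - np.exp(-lam * x)) ** a

        def neg_loglik(theta, lo, hi):
            a, lam = np.exp(theta)                 # keep parameters positive
            p = F(hi, a, lam) - F(lo, a, lam)      # probability of each interval
            return -np.sum(np.log(np.clip(p, 1e-300, None)))

        # illustrative interval data: each lifetime only known to lie in (lo, hi]
        lo = np.array([0.0, 0.5, 1.0, 1.0, 2.0, 0.5])
        hi = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 2.0])
        res = minimize(neg_loglik, x0=np.log([1.0, 1.0]), args=(lo, hi),
                       method='Nelder-Mead')
        print("alpha, lambda =", np.exp(res.x))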

  13. Research on magnetic testing method of stress distribution

    Institute of Scientific and Technical Information of China (English)

    李路明; 黄松岭; 汪来富; 杨海青; 施克仁

    2002-01-01

    For implementing nondestructive evaluation of the stress distribution inside ferromagnetic material, a magnetic testing method was developed which does not need an artificial magnetizing field. The method tests the normal component of the magnetic flux leakage above the object under test with a constant lift-off from 1 to 10 mm. The distribution of the stress inside the specimen can be obtained from that of the normal component of the magnetic flux leakage. A stress concentration specimen, a 10 mm thick mild steel plate with a welding seam on it, was tested using this method. The stress distribution from the magnetic testing was identical with that from the small-hole stress testing method, indicating that the stress distribution of ferromagnetic material can be determined by the magnetic testing method.

  14. Image processing methods to obtain symmetrical distribution from projection image.

    Science.gov (United States)

    Asano, H; Takenaka, N; Fujii, T; Nakamatsu, E; Tagami, Y; Takeshima, K

    2004-10-01

    Flow visualization and measurement of the cross-sectional liquid distribution are very effective for clarifying the effects of obstacles in a conduit on the heat transfer and flow characteristics of gas-liquid two-phase flow. In this study, two methods to obtain the cross-sectional distribution of void fraction are applied to vertical upward air-water two-phase flow. These methods need a projection image from only one direction. Radial distributions of void fraction in a circular tube and in a circular-tube annulus with a spacer were calculated by Abel transform based on the assumption of axial symmetry. On the other hand, cross-sectional distributions of void fraction in a circular tube with a wire coil, whose conduit configuration rotates about the tube central axis periodically, were measured by a CT method based on the assumption that the relative distribution of the liquid phase against the wire is kept along the flow direction.

  15. Comparison of estimation methods for fitting Weibull distribution to ...

    African Journals Online (AJOL)

    Tersor

    Journal of Research in Forestry, Wildlife and Environment, Volume 7, No. 2, September 2015. ... method was more accurate in fitting the Weibull distribution to the natural stand ... appropriate for mixed age group.
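
    The kind of comparison described can be illustrated in Python: maximum likelihood versus a simple moment-matching fit on simulated diameter data. The record does not show which estimators the paper compares, so these two, and the data, are assumptions.

        import numpy as np
        from math import gamma
        from scipy import stats
        from scipy.optimize import brentq

        x = stats.weibull_min.rvs(2.2, scale=18.0, size=500,
                                  random_state=np.random.default_rng(5))

        # maximum likelihood (location fixed at 0)
        c_mle, _, scale_mle = stats.weibull_min.fit(x, floc=0)

        # method of moments: match the coefficient of variation, which for a
        # Weibull depends only on the shape parameter c
        cv = x.std() / x.mean()
        f = lambda c: np.sqrt(gamma(1 + 2 / c) / gamma(1 + 1 / c) ** 2 - 1) - cv
        c_mom = brentq(f, 0.2, 50.0)
        scale_mom = x.mean() / gamma(1 + 1 / c_mom)
        print((c_mle, scale_mle), (c_mom, scale_mom))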

  16. Dynamic Subsidy Method for Congestion Management in Distribution Networks

    DEFF Research Database (Denmark)

    Huang, Shaojun; Wu, Qiuwei

    2016-01-01

    Dynamic subsidy (DS) is a locational price paid by the distribution system operator (DSO) to its customers in order to shift energy consumption to designated hours and nodes. It is promising for demand side management and congestion management. This paper proposes a new DS method for congestion management in distribution networks, including the market mechanism, the mathematical formulation through a two-level optimization, and the method solving the optimization by tightening the constraints and linearization. Case studies were conducted with a one node system and the Bus 4 distribution network of the Roy Billinton Test System (RBTS) with high penetration of electric vehicles (EVs) and heat pumps (HPs). The case studies demonstrate the efficacy of the DS method for congestion management in distribution networks. Studies in this paper show that the DS method offers the customers a fair opportunity...

  18. Distribution and Migration of Heavy Metals in Undisturbed Forest Soils: A High Resolution Sampling Method

    Institute of Scientific and Technical Information of China (English)

    RUAN Xin-Ling; ZHANG Gan-Lin; NI Liu-Jian; HE Yue

    2008-01-01

    The vertical distribution and migration of Cu, Zn, Pb, and Cd in two forest soil profiles near an industrial emission source were investigated using a high-resolution sampling method together with the reference element Ti. One-meter soil profiles were sectioned horizontally at 2 cm intervals in the first 40 cm, 5 cm intervals in the next 40 cm, and 10 cm intervals in the last 20 cm. The migration distance and rate of heavy metals in the soil profiles were calculated from their relative concentrations in the profiles, as calibrated by the reference element Ti. Enrichment of heavy metals appeared in the uppermost layer of the forest soil, and soil heavy metal concentrations decreased down the profile until reaching their background values. The calculated average migration rates of Cd, Cu, Pb, and Zn were 0.70, 0.33, 0.37, and 0.76 cm year-1, respectively, comparable to results from other methods. A simulation model was proposed, which describes well the distribution of Cu, Zn, Pb, and Cd in natural forest soils.

  19. Cooperative Certificate Revocation List Distribution Methods in VANETs

    Science.gov (United States)

    Nowatkowski, Michael; McManus, Chris; Wolfgang, Jennie; Owen, Henry

    This paper discusses two new methods for distributing certificate revocation lists (CRL) in a vehicular ad hoc network environment using cooperative methods. The main purpose for using cooperative methods is to attempt to reduce the number of collisions in the dedicated short range communication (DSRC) broadcast environment. The reduced number of collisions will increase the effective throughput of the medium. The first method uses a polling scheme to determine which nodes possess the most number of CRL file pieces. The second method takes advantage of the multiple service channels available in DSRC. Both methods use a form of coding that reduces the impact of the piece problem. Both methods are compared to the Code Torrent method of file distribution.

  20. A Comparative Study of Distribution System Parameter Estimation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup

    2016-07-17

    In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems. Therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.

  1. A comparison of confidence interval methods for the intraclass correlation coefficient in community-based cluster randomization trials with a binary outcome.

    Science.gov (United States)

    Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan

    2016-04-01

    Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. However, confidence intervals constructed using the new approach combined with Smith

  2. Conditional prediction intervals of wind power generation

    OpenAIRE

    Pinson, Pierre; Kariniotakis, Georges

    2010-01-01

    A generic method for the providing of prediction intervals of wind power generation is described. Prediction intervals complement the more common wind power point forecasts, by giving a range of potential outcomes for a given probability, their so-called nominal coverage rate. Ideally they inform of the situation-specific uncertainty of point forecasts. In order to avoid a restrictive assumption on the shape of forecast error distributions, focus is given to an empirical and nonparametric app...

  3. Statistical properties of interval mapping methods on quantitative trait loci location: impact on QTL/eQTL analyses

    Directory of Open Access Journals (Sweden)

    Wang Xiaoqiang

    2012-04-01

    Full Text Available Abstract Background Quantitative trait loci (QTL detection on a huge amount of phenotypes, like eQTL detection on transcriptomic data, can be dramatically impaired by the statistical properties of interval mapping methods. One of these major outcomes is the high number of QTL detected at marker locations. The present study aims at identifying and specifying the sources of this bias, in particular in the case of analysis of data issued from outbred populations. Analytical developments were carried out in a backcross situation in order to specify the bias and to propose an algorithm to control it. The outbred population context was studied through simulated data sets in a wide range of situations. The likelihood ratio test was firstly analyzed under the "one QTL" hypothesis in a backcross population. Designs of sib families were then simulated and analyzed using the QTL Map software. On the basis of the theoretical results in backcross, parameters such as the population size, the density of the genetic map, the QTL effect and the true location of the QTL, were taken into account under the "no QTL" and the "one QTL" hypotheses. A combination of two non parametric tests - the Kolmogorov-Smirnov test and the Mann-Whitney-Wilcoxon test - was used in order to identify the parameters that affected the bias and to specify how much they influenced the estimation of QTL location. Results A theoretical expression of the bias of the estimated QTL location was obtained for a backcross type population. We demonstrated a common source of bias under the "no QTL" and the "one QTL" hypotheses and qualified the possible influence of several parameters. Simulation studies confirmed that the bias exists in outbred populations under both the hypotheses of "no QTL" and "one QTL" on a linkage group. The QTL location was systematically closer to marker locations than expected, particularly in the case of low QTL effect, small population size or low density of markers, i

  4. Distributed MIMO-ISAR Sub-image Fusion Method

    Directory of Open Access Journals (Sweden)

    Gu Wenkun

    2017-02-01

    Full Text Available The fast fluctuation of a maneuvering target's radar cross-section often affects the imaging performance stability of traditional monostatic Inverse Synthetic Aperture Radar (ISAR). To address this problem, in this study we propose an imaging method based on the fusion of sub-images from frequency-diversity distributed Multiple-Input Multiple-Output Inverse Synthetic Aperture Radar (MIMO-ISAR). First, we establish the analytic expression of a two-dimensional ISAR sub-image acquired by different channels of distributed MIMO-ISAR. Then, we derive the distance and azimuth distortion factors of the images acquired by the different channels. By compensating for the distortion of the ISAR images, we ultimately realize distributed MIMO-ISAR fusion imaging. Simulations verify the validity of this imaging method.

  5. A Grouping Method of Distribution Substations Using Cluster Analysis

    Science.gov (United States)

    Ohtaka, Toshiya; Iwamoto, Shinichi

    Recently, grouping distribution substations together has been considered for evaluating the reinforcement planning of distribution systems. However, the grouping is carried out using the knowledge and experience of an expert in charge of the distribution systems, and human subjectivity currently makes the grouping ambiguous. Therefore, a method that imitates the expert's grouping has been desired, in order to carry out a systematic grouping with numerical corroboration. In this paper, we propose a grouping method for distribution substations using cluster analysis based on the interconnected power between the distribution substations. Moreover, we consider geographical constraints such as rivers, roads, business office boundaries and branch boundaries, and also examine a method for adjusting the interconnected power. Simulations are carried out to verify the validity of the proposed method using an example system. The simulation results show that imitating the expert's grouping becomes possible by considering the geographical constraints and adjusting the interconnected power, and that the calculation time and iterations can be greatly reduced by introducing the local and tabu search methods.
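
    The core clustering step can be sketched in Python by turning interconnected power into a distance and applying hierarchical clustering. The figures are illustrative, and the paper's geographical constraints and power adjustments are not modeled.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import squareform

        power = np.array([[0.0, 8.0, 1.0, 0.5],   # interconnected power (MW)
                          [8.0, 0.0, 2.0, 0.3],
                          [1.0, 2.0, 0.0, 6.0],
                          [0.5, 0.3, 6.0, 0.0]])
        dist = 1.0 / (power + 1e-3)               # strong ties -> small distance
        np.fill_diagonal(dist, 0.0)
        Z = linkage(squareform(dist), method='average')
        print(fcluster(Z, t=2, criterion='maxclust'))   # e.g. groups {0,1} and {2,3}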

  6. Information-theoretic methods for estimating of complicated probability distributions

    CERN Document Server

    Zong, Zhi

    2006-01-01

    Mixing various disciplines frequently produces something profound and far-reaching; cybernetics is an often-quoted example. The mix of information theory, statistics and computing technology has proved very useful, leading to the recent development of information-theory based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is the fundamental task of quite a few fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur

  7. Mathematical methods linear algebra normed spaces distributions integration

    CERN Document Server

    Korevaar, Jacob

    1968-01-01

    Mathematical Methods, Volume I: Linear Algebra, Normed Spaces, Distributions, Integration focuses on advanced mathematical tools used in applications and the basic concepts of algebra, normed spaces, integration, and distributions.The publication first offers information on algebraic theory of vector spaces and introduction to functional analysis. Discussions focus on linear transformations and functionals, rectangular matrices, systems of linear equations, eigenvalue problems, use of eigenvectors and generalized eigenvectors in the representation of linear operators, metric and normed vector

  8. Conceptual framework for the thermal distribution method of test

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, J.W.

    1994-11-01

    A Standard Method of Test for residential thermal distribution efficiency is being developed under the auspices of the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE). Thermal distribution systems are the ductwork, piping, or other means used to transport heat or cooling effect from the equipment that produces this thermal energy to the building spaces that need it. Because thermal distribution systems are embedded in and interact with the larger building system as a whole, a new set of parameters has been developed to describe these systems. This paper was written to fill a perceived need for a concise introduction to this terminology.

  9. The frequency-independent control method for distributed generation systems

    DEFF Research Database (Denmark)

    Naderi, Siamak; Pouresmaeil, Edris; Gao, Wenzhong David

    2012-01-01

    In this paper a novel frequency-independent control method suitable for distributed generation (DG) is presented. This strategy is derived based on the abc/αβ and abc/dq transformations of the ac system variables. The active and reactive currents injected by the DG are controlled in the synchronously rotating orthogonal dq reference frame. The transformed variables are used in control of the voltage source inverter that connects the DG to the distribution network. Due to importance of distributed resources in modern power systems, development of new, practical, cost...
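
    The two transformations the strategy is built on can be sketched in Python (amplitude-invariant Clarke transform plus rotation into the synchronous frame; the paper's sign and scaling conventions may differ).

        import numpy as np

        def abc_to_alphabeta(a, b, c):
            """Amplitude-invariant Clarke transform."""
            alpha = (2.0 * a - b - c) / 3.0
            beta = (b - c) / np.sqrt(3.0)
            return alpha, beta

        def alphabeta_to_dq(alpha, beta, theta):
            """Rotate into a frame synchronous with grid angle theta (Park)."""
            d = alpha * np.cos(theta) + beta * np.sin(theta)
            q = -alpha * np.sin(theta) + beta * np.cos(theta)
            return d, q

        t = np.linspace(0.0, 0.04, 1000)
        theta = 2 * np.pi * 50 * t                       # 50 Hz grid angle
        ia = np.cos(theta)
        ib = np.cos(theta - 2 * np.pi / 3)
        ic = np.cos(theta + 2 * np.pi / 3)
        d, q = alphabeta_to_dq(*abc_to_alphabeta(ia, ib, ic), theta)
        print(d.mean(), q.mean())   # a balanced current maps to constants (1, 0)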

  10. Polynomial probability distribution estimation using the method of moments.

    Science.gov (United States)

    Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull, as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.
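
    The core moment-matching step can be sketched as a linear solve in Python: choose coefficients of p(x) = sum_k c_k x**k on a support [a, b] so that its first N+1 raw moments equal the given moments. This reproduces only the basic idea, not the paper's full algorithmic setup; the support and data are illustrative, and the matrix becomes ill-conditioned for large N.

        import numpy as np

        def polynomial_pdf(moments, a, b):
            """moments = [1, mu1, ..., muN]; returns coefficients c_0..c_N."""
            N = len(moments) - 1
            A = np.empty((N + 1, N + 1))
            for m in range(N + 1):
                for k in range(N + 1):
                    # integral of x**(m+k) over [a, b]
                    A[m, k] = (b**(m + k + 1) - a**(m + k + 1)) / (m + k + 1)
            return np.linalg.solve(A, np.asarray(moments, dtype=float))

        x = np.random.default_rng(2).normal(0.0, 1.0, 5000)
        mus = [1.0] + [np.mean(x**m) for m in range(1, 5)]   # raw sample moments
        c = polynomial_pdf(mus, -4.0, 4.0)                   # degree-4 approximation
        print(c)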

  11. Method of sequential mesh on Koopman-Darmois distributions

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    For costly and/or destructive tests, the sequential method with a proper maximum sample size is needed. Based on Koopman-Darmois distributions, this paper proposes the method of sequential mesh, which has an acceptable maximum sample size. In comparison with the popular truncated sequential probability ratio test, our method has the advantage of a smaller maximum sample size and is especially applicable for costly and/or destructive tests.

  12. Confidence intervals with a priori parameter bounds

    CERN Document Server

    Lokhov, A V

    2014-01-01

    We review the methods of constructing confidence intervals that account for a priori information about one-sided constraints on the parameter being estimated. We show that the so-called method of sensitivity limit yields a correct solution of the problem. Derived are the solutions for the cases of a continuous distribution with non-negative estimated parameter and a discrete distribution, specifically a Poisson process with background. For both cases, the best upper limit is constructed that accounts for the a priori information. A table is provided with the confidence intervals for the parameter of Poisson distribution that correctly accounts for the information on the known value of the background along with the software for calculating the confidence intervals for any confidence levels and magnitudes of the background (the software is freely available for download via Internet).
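
    The classical construction that such methods refine can be sketched in Python: a 90% CL upper limit on a Poisson signal s with known background b solves P(X <= n_obs; s + b) = 0.10. This plain version does not include the boundary corrections that are the paper's subject, and the example numbers are illustrative.

        from scipy.stats import poisson
        from scipy.optimize import brentq

        def upper_limit(n_obs, background, cl=0.90):
            """Classical upper limit on the Poisson signal mean."""
            f = lambda s: poisson.cdf(n_obs, s + background) - (1.0 - cl)
            return brentq(f, 0.0, 100.0)

        print(upper_limit(n_obs=3, background=1.5))   # ~ 5.2 signal events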

  13. A cumulative entropy method for distribution recognition of model error

    Science.gov (United States)

    Liang, Yingjie; Chen, Wen

    2015-02-01

    This paper develops a cumulative entropy method (CEM) to recognize the most suitable distribution for model error. In the CEM, the Lévy stable distribution is employed to capture the statistical properties of model error. The strategies are tested on 250 experiments of axially loaded CFT steel stub columns in conjunction with four building codes: Japan (AIJ, 1997), China (DL/T, 1999), Eurocode 4 (EU4, 2004), and the United States (AISC, 2005). The cumulative entropy method is validated as more computationally efficient than the Shannon entropy method. Compared with the Kolmogorov-Smirnov test and the root mean square deviation, the CEM provides an alternative and powerful model selection criterion for recognizing the most suitable distribution for model error.

  14. An Assignment Method for IPUs in Distributed Systems

    Institute of Scientific and Technical Information of China (English)

    LI Liang; YANG Guowei

    1999-01-01

    In a distributed system, one of the most important things is to establish an assignment method for distributing tasks. It is assumed that a distributed system does not have a central administrator; all independent processing units in this system want to cooperate for the best results, but they cannot know the conditions of one another. So in order to take on tasks in suitable proportions, they have to adjust the tasks they undertake only by self-learning. In this paper, the performance of this system is analyzed by Markov chains, and a robust method of self-learning for independent processing units in this kind of system is presented. This method can lead the tasks of the system to be distributed well among all the independent processing units, and can also be used to solve the general assignment problem.

  15. Methods to correct and compute confidence and prediction intervals of models neglecting sub-parameterization heterogeneity - From the ideal toward practice

    Science.gov (United States)

    Christensen, Steen

    2017-02-01

    This paper derives and tests methods to correct regression-based confidence and prediction intervals for groundwater models that neglect sub-parameterization heterogeneity within the hydraulic property fields of the groundwater system. Several levels of knowledge and uncertainty about the system are considered. It is shown by a two-dimensional groundwater flow example that when reliable probabilistic models are available for the property fields, the corrected confidence and prediction intervals are nearly accurate; when the probabilistic models must be suggested from subjective judgment, the corrected confidence intervals are likely to be much more accurate than their uncorrected counterparts; when no probabilistic information is available then conservative bound values can be used to correct the intervals but they are likely to be very wide. The paper also shows how confidence and prediction intervals can be computed and corrected when the weights applied to the data are estimated as part of the regression. It is demonstrated that in this case it cannot be guaranteed that applying the conservative bound values will lead to conservative confidence and prediction intervals. Finally, it is demonstrated by the two-dimensional flow example that the accuracy of the corrected confidence and prediction intervals deteriorates for very large covariance of the log-transmissivity field, and particularly when the weight matrix differs from the inverse total error covariance matrix. It is argued that such deterioration is less likely to happen for three-dimensional groundwater flow systems.

  16. Contrasting effects of a mixed-methods high-intensity interval training intervention in girl football players.

    Science.gov (United States)

    Wright, Matthew D; Hurst, Christopher; Taylor, Jonathan M

    2016-10-01

    Little is known about the responses of girl athletes to training interventions throughout maturation. This study evaluated group and individual responses to an 8-week, mixed-methods, high-intensity interval training (HIIT) programme in girl football players. Thirty-seven players (age 13.4 ± 1.5 years) were tested for 20-m speed, repeated-sprint ability, change-of-direction speed and level 1 yo-yo intermittent recovery (YYIR). Players were subcategorised into before-, at- and after-PHV (peak height velocity) based on maturity offset. Very likely moderate (25%; ±90% confidence limits = 9.2) improvements occurred in YYIR, but data were unclear in players before-PHV with moderate individual differences in response. Decrements in repeated-sprint ability were most likely very large (6.5%; ±3.2) before-PHV, and likely moderate (1.7%; ±1.0) at-PHV. Data were unclear after-PHV. A very likely moderate (2.7%; ±1.0) decrement occurred in change-of-direction speed at-PHV while there was a very likely increase (-2.4%; ±1.3) in after-PHV players. Possibly small (-1.1%; ±1.4) improvements in 20-m speed occurred before-PHV but the effect was otherwise unclear with moderate to large individual differences. These data reflect specific responses to training interventions in girls of different biological maturity, while highlighting individual responses to HIIT interventions. This can assist practitioners in providing effective training prescription.

  17. Bootstrap Signal-to-Noise Confidence Intervals: An Objective Method for Subject Exclusion and Quality Control in ERP Studies

    Science.gov (United States)

    Parks, Nathan A.; Gannon, Matthew A.; Long, Stephanie M.; Young, Madeleine E.

    2016-01-01

    Analysis of event-related potential (ERP) data includes several steps to ensure that ERPs meet an appropriate level of signal quality. One such step, subject exclusion, rejects subject data if ERP waveforms fail to meet an appropriate level of signal quality. Subject exclusion is an important quality control step in the ERP analysis pipeline as it ensures that statistical inference is based only upon those subjects exhibiting clear evoked brain responses. This critical quality control step is most often performed simply through visual inspection of subject-level ERPs by investigators. Such an approach is qualitative, subjective, and susceptible to investigator bias, as there are no standards as to what constitutes an ERP of sufficient signal quality. Here, we describe a standardized and objective method for quantifying waveform quality in individual subjects and establishing criteria for subject exclusion. The approach uses bootstrap resampling of ERP waveforms (from a pool of all available trials) to compute a signal-to-noise ratio confidence interval (SNR-CI) for individual subject waveforms. The lower bound of this SNR-CI (SNRLB) yields an effective and objective measure of signal quality as it ensures that ERP waveforms statistically exceed a desired signal-to-noise criterion. SNRLB provides a quantifiable metric of individual subject ERP quality and eliminates the need for subjective evaluation of waveform quality by the investigator. We detail the SNR-CI methodology, establish the efficacy of employing this approach with Monte Carlo simulations, and demonstrate its utility in practice when applied to ERP datasets. PMID:26903849
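
    A minimal sketch of the resampling idea in Python. The SNR definition below (RMS of the mean waveform over the residual noise standard deviation) and the inclusion threshold are assumptions, since the paper defines its own measure; the data are synthetic.

        import numpy as np

        def snr_lower_bound(trials, n_boot=1000, q=2.5, rng=None):
            """Lower bound of the bootstrap SNR-CI; trials: (n_trials, n_samples)."""
            rng = rng or np.random.default_rng(0)
            n = trials.shape[0]
            snrs = []
            for _ in range(n_boot):
                idx = rng.integers(0, n, n)            # resample trials with replacement
                erp = trials[idx].mean(axis=0)         # bootstrap ERP average
                noise = trials[idx] - erp              # residual single-trial noise
                snrs.append(np.sqrt((erp**2).mean()) / noise.std())
            return np.percentile(snrs, q)

        rng = np.random.default_rng(1)
        signal = np.sin(np.linspace(0, np.pi, 200))    # toy evoked response
        trials = signal + rng.normal(0.0, 2.0, (60, 200))
        keep = snr_lower_bound(trials) > 0.3           # hypothetical criterion
        print(keep)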

  18. A Symplectic Method to Generate Multivariate Normal Distributions

    CERN Document Server

    Baumgarten, Christian

    2012-01-01

    The AMAS group at the Paul Scherrer Institute developed an object-oriented library for high-performance simulation of high-intensity ion beam transport with space charge. Such particle-in-cell (PIC) simulations require a method to generate multivariate particle distributions as starting conditions. In a preceding publication it was shown that the generators of symplectic transformations in two dimensions are a subset of the real Dirac matrices (RDMs) and that a few symplectic transformations are required to transform a quadratic Hamiltonian into diagonal form. Here we argue that the use of RDMs is well suited for the generation of multivariate normal distributions with arbitrary covariances. A direct and simple argument supporting this claim is that this is the "natural" way such distributions are formed. The transport of charged particle beams may serve as an example: an uncorrelated gaussian distribution of particles starting at some initial position of the accelerator is subject to linear deformat...
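
    The general principle the abstract appeals to, forming correlated Gaussians by a linear transformation of uncorrelated ones, can be sketched with a Cholesky factor. This is the standard textbook construction, not the RDM-based construction the authors develop:

    ```python
    import numpy as np

    def multivariate_normal(cov, n, seed=None):
        """Draw n samples with covariance `cov` by linearly transforming
        uncorrelated standard-normal variates: x -> L x with L @ L.T == cov."""
        rng = np.random.default_rng(seed)
        L = np.linalg.cholesky(np.asarray(cov))
        z = rng.standard_normal((n, L.shape[0]))
        return z @ L.T

    cov = [[2.0, 0.6], [0.6, 1.0]]
    samples = multivariate_normal(cov, 100_000)
    print(np.cov(samples.T))  # should be close to cov
    ```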

  19. Global fuel consumption optimization of an open-time terminal rendezvous and docking with large-eccentricity elliptic-orbit by the method of interval analysis

    Science.gov (United States)

    Ma, Hongliang; Xu, Shijie

    2016-11-01

    By defining two open-time impulse points, the optimization of a two-impulse, open-time terminal rendezvous and docking with a target spacecraft on a large-eccentricity elliptical orbit is proposed in this paper. The purpose of the optimization is to minimize the velocity increment for a terminal elliptic-reference-orbit rendezvous and docking. Current methods for solving this type of optimization problem include, for example, genetic algorithms and gradient-based optimization. Unlike these methods, interval methods can guarantee that the globally best solution is found for a given parameterization of the input. The non-linear Tschauner-Hempel (TH) equations of the state transitions for a terminal elliptic target orbit are transformed from the time domain to the target orbital true anomaly domain. Their homogeneous solutions and an approximate state transition matrix for control over a short true anomaly interval can be used to avoid interval integration. The interval branch-and-bound optimization algorithm is introduced for solving the presented rendezvous and docking optimization problem and for optimizing the two open-time impulse points and thruster pulse amplitudes; it systematically eliminates parts of the control and open-time input spaces that do not satisfy the path and final-time state constraints. Several numerical examples are undertaken to validate the interval optimization algorithm. The results indicate that sufficiently narrow spaces containing the global optimization solution for the open-time two-impulse terminal rendezvous and docking with a target spacecraft on a large-eccentricity elliptical orbit can be obtained by the interval algorithm (IA). Combined with a gradient-based method, the global optimization solution for the discontinuous nonconvex optimization problem can then be found in the remaining search space. Interval analysis is shown to be a useful tool and preponderant in the discontinuous nonconvex optimization problem of the terminal rendezvous and
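
    The branch-and-bound idea can be illustrated on a toy problem. The sketch below, assuming a hand-coded natural interval extension of a one-dimensional objective, discards boxes whose interval lower bound exceeds the best point value found so far; the paper's actual problem is multivariate, with path and final-state constraints, and uses TH-equation state transitions:

    ```python
    # A toy interval branch-and-bound minimizer for f(x) = x**4 - 2*x**2.

    def f(x):
        return x**4 - 2*x**2

    def f_interval(a, b):
        """Natural interval extension of f on the box [a, b]."""
        lo2, hi2 = min(a*a, b*b), max(a*a, b*b)
        if a <= 0.0 <= b:
            lo2 = 0.0                      # x**2 attains 0 inside the box
        return lo2**2 - 2*hi2, hi2**2 - 2*lo2

    def interval_bnb(lo, hi, tol=1e-4):
        best_ub = f(0.5 * (lo + hi))       # any point value bounds the minimum
        boxes, best_box = [(lo, hi)], None
        while boxes:
            a, b = boxes.pop()
            flo, _ = f_interval(a, b)
            if flo > best_ub:              # box cannot contain the global minimum
                continue
            m = 0.5 * (a + b)
            best_ub = min(best_ub, f(m))   # tighten the incumbent upper bound
            if b - a > tol:
                boxes += [(a, m), (m, b)]  # bisect and keep both halves
            elif best_box is None or flo < best_box[2]:
                best_box = (a, b, flo)
        return best_box, best_ub

    print(interval_bnb(-2.0, 2.0))         # a box near x = +/-1, minimum near -1
    ```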

  20. Random numbers from the tails of probability distributions using the transformation method

    CERN Document Server

    Fulger, Daniel; Germano, Guido

    2009-01-01

    The speed of many one-line transformation methods for the production of, for example, Levy alpha-stable random numbers, which generalize Gaussian ones, and Mittag-Leffler random numbers, which generalize exponential ones, is very high and satisfactory for most purposes. However, for the class of decreasing probability densities fast rejection implementations like the Ziggurat by Marsaglia and Tsang promise a significant speed-up if it is possible to complement them with a method that samples the tails of the infinite support. This requires the fast generation of random numbers greater or smaller than a certain value. We present a method to achieve this, and also to generate random numbers within any arbitrary interval. We demonstrate the method showing the properties of the transform maps of the above mentioned distributions as examples of stable and geometric stable random numbers used for the stochastic solution of the space-time fractional diffusion equation.
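
    The transformation (inverse-CDF) route to sampling from a restricted range can be sketched as follows, using the unit-rate exponential distribution as a stand-in for the heavier-tailed laws the paper targets:

    ```python
    import numpy as np

    def interval_inverse_sample(inv_cdf, cdf, a, b, n, seed=None):
        """Transformation method restricted to [a, b]: map uniforms into
        [F(a), F(b)] and apply the inverse CDF."""
        rng = np.random.default_rng(seed)
        u = rng.uniform(cdf(a), cdf(b), n)
        return inv_cdf(u)

    # Tail beyond a = 5 of the unit-rate exponential distribution:
    cdf = lambda x: 1.0 - np.exp(-x)
    inv_cdf = lambda u: -np.log1p(-u)
    tail = interval_inverse_sample(inv_cdf, cdf, 5.0, np.inf, 10_000)
    print(tail.min())                      # all samples are >= 5
    ```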

  1. BAYESIAN DEMONSTRATION TEST METHOD WITH MIXED BETA DISTRIBUTION

    Institute of Scientific and Technical Information of China (English)

    MING Zhimao; TAO Junyong; CHEN Xun; ZHANG Yunan

    2008-01-01

    A Bayesian demonstration test plan for a complex mechatronics system is studied based on the mixed beta distribution. Information accumulated during product design and improvement is appropriately incorporated by introducing an inheritance factor, which is itself treated as a random variable; the Bayesian decision for the qualification test plan is then obtained, and the correctness of the presented Bayesian model is verified. The results show that the test quantity required by classical methods is overly conservative under small binomial samples. Although traditional Bayesian analysis can use test information from related or similar products, it ignores the differences between such products; the proposed method solves this problem. Furthermore, motivated by the requirements of many practical projects, the method is compared with the classical method and with Bayesian analysis using a single beta distribution for the reliability acceptance test plan.
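
    A minimal sketch of the beta-binomial demonstration-test logic, assuming a single beta prior and a zero-failure plan; the paper's mixed-beta prior with a random inheritance factor is more elaborate:

    ```python
    from scipy.stats import beta

    def bayes_zero_failure_plan(a, b, R0, conf):
        """Smallest zero-failure sample size n such that the posterior
        probability P(R >= R0 | n successes) reaches `conf`, with a
        Beta(a, b) prior on reliability R and binomial sampling."""
        n = 0
        while beta.sf(R0, a + n, b) < conf:   # posterior after n successes is Beta(a + n, b)
            n += 1
        return n

    # An informative prior (e.g. inherited from a predecessor product)
    # requires far fewer demonstration tests than a vague one:
    print(bayes_zero_failure_plan(9, 1, 0.90, 0.90))   # informative Beta(9, 1) prior
    print(bayes_zero_failure_plan(1, 1, 0.90, 0.90))   # vague Beta(1, 1) prior, larger n
    ```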

  2. Geographic distribution of the reference range of the electrocardiogram QT interval of Chinese people based on artificial neural networks

    Institute of Scientific and Technical Information of China (English)

    张雯; 岑敏仪; 葛淼; 何进伟; 路春爱; 张莹; 杨绍芳; 姜吉琳; 许金辉; 刘新蕾

    2015-01-01

    Objective: To provide a scientific basis for unifying the reference value standard of the electrocardiogram QT interval of the Chinese population. Methods: QT interval reference values were collected for 15,164 healthy adults from 84 city- and county-level hospitals, research institutes and universities across China. Correlation analysis, principal component analysis, artificial neural networks and GIS spatial analysis were used to study the relationship between the measured QT interval values and 11 geographical factors: longitude, latitude, altitude, annual mean air temperature, annual mean relative humidity, annual precipitation, annual air temperature range, topsoil silt fraction, topsoil cation exchange capacity (clay), topsoil cation exchange capacity (gleyic soil), and topsoil alkalinity. Results: A five-layer back-propagation neural network was trained to model the reference range of the QT interval of healthy Chinese people, and a geographic distribution map of the QT interval reference range was interpolated with the Kriging method. Conclusions: Given the geographical factors of a particular area, this model can estimate the adult QT interval reference value for that area, and the normal adult QT interval reference value for any place in China can be read directly from the spatial distribution map.

  3. Differential Transformation Method for Temperature Distribution in a Radiating Fin

    DEFF Research Database (Denmark)

    Rahimi, M.; Hosseini, M. J.; Barari, Amin

    2011-01-01

    Radiating extended surfaces are widely used to enhance heat transfer between a primary surface and the environment. In this paper, the differential transformation method (DTM) is proposed for solving the nonlinear differential equation of temperature distribution in a heat-radiating fin. The concept of differential transformation is briefly introduced, and it is then employed to derive solutions of two nonlinear equations. The results obtained by DTM are compared with those derived from the analytical solution to verify the accuracy of the proposed method.
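
    The flavor of the differential transformation method can be shown on the simplest possible ODE; in a fin equation, products of unknowns become convolutions of the transformed coefficients, but the recurrence structure is the same. A sketch:

    ```python
    import numpy as np

    # Differential transform of y' = y, y(0) = 1: the transform turns the ODE
    # into the recurrence (k + 1) * Y[k + 1] = Y[k], so Y[k] = 1/k! and the
    # inverse transform sum(Y[k] * x**k) reconstructs exp(x).
    N = 15
    Y = np.zeros(N)
    Y[0] = 1.0                             # transform of the initial condition
    for k in range(N - 1):
        Y[k + 1] = Y[k] / (k + 1)          # recurrence from the transformed ODE
    x = 0.5
    print(sum(Y[k] * x**k for k in range(N)), np.exp(x))
    ```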

  4. Research on Rolling Load Distribution Method based on Data Mining

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yan-hua; LIU Xiang-hua; WANG Guo-dong

    2005-01-01

    A new method of establishing a rolling load distribution model was developed using online intelligent information-processing technology for plate rolling. The model combines a knowledge model and a mathematical model, taking knowledge discovery in databases (KDD) and data mining (DM) as the starting point. Online maintenance and optimization of the load model are realized. The effectiveness of this new method was verified by offline simulation and online application.

  5. interval functions

    Directory of Open Access Journals (Sweden)

    J. A. Chatfield

    1978-01-01

    Suppose $N$ is a Banach space with norm $|\cdot|$ and $R$ is the set of real numbers. All integrals used are of the subdivision-refinement type. The main theorem [Theorem 3] gives a representation of $TH$, where $H$ is a function from $R\times R$ to $N$ such that $H(p+,p+)$, $H(p,p+)$, $H(p-,p-)$, and $H(p-,p)$ each exist for each $p$, and $T$ is a bounded linear operator on the space of all such functions $H$. In particular we show that
    $$TH=(I)\int_a^b f_H\,d\alpha+\sum_{i=1}^{\infty}\big[H(x_{i-1},x_{i-1}+)-H(x_{i-1}+,x_{i-1}+)\big]\beta(x_{i-1})+\sum_{i=1}^{\infty}\big[H(x_i-,x_i)-H(x_i-,x_i-)\big]\Theta(x_{i-1},x_i),$$
    where each of $\alpha$, $\beta$, and $\Theta$ depends only on $T$, $\alpha$ is of bounded variation, $\beta$ and $\Theta$ are $0$ except at a countable number of points, $f_H$ is a function from $R$ to $N$ depending on $H$, and $\{x_i\}_{i=1}^{\infty}$ denotes the points $p$ in $[a,b]$ for which $[H(p,p+)-H(p+,p+)]\neq 0$ or $[H(p-,p)-H(p-,p-)]\neq 0$. We also define an interior interval function integral and give a relationship between it and the standard interval function integral.

  6. Multi-level methods and approximating distribution functions

    Science.gov (United States)

    Wilson, D.; Baker, R. E.

    2016-07-01

    Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie's direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie's direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146-179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.
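
    For context, a sketch of Gillespie's direct method for a single decay reaction; repeating it gives exact sample paths from which a distribution of species counts can be estimated, which is precisely the kind of computation the multi-level approach accelerates:

    ```python
    import numpy as np

    def gillespie_decay(k, x0, t_end, seed=None):
        """Gillespie's direct method for the decay reaction X -> 0 with
        propensity k*X: sample the time to the next event from an
        exponential, then fire the reaction."""
        rng = np.random.default_rng(seed)
        t, x = 0.0, x0
        while x > 0:
            a = k * x                      # total propensity
            t += rng.exponential(1.0 / a)  # waiting time to the next event
            if t > t_end:
                break
            x -= 1
        return x

    # Estimate the distribution of X(t_end) from many exact sample paths:
    samples = [gillespie_decay(0.1, 50, 10.0, seed=s) for s in range(10_000)]
    print(np.bincount(samples) / len(samples))
    ```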

  7. System and Method for Monitoring Distributed Asset Data

    Science.gov (United States)

    Gorinevsky, Dimitry (Inventor)

    2015-01-01

    A computer-based monitoring system and monitoring method, implemented in computer software, for detecting, estimating, and reporting the condition states, their changes, and anomalies for many assets. The assets are of the same type, are operated over a period of time, and are outfitted with data collection systems. The proposed monitoring method accounts for the variability of working conditions for each asset by using a regression model that characterizes asset performance. The assets are of the same type but not identical. The proposed monitoring method accounts for asset-to-asset variability; it also accounts for drifts and trends in the asset condition and data. The proposed monitoring system can perform distributed processing of massive amounts of historical data without discarding any useful information in cases where moving all the asset data into one central computing system might be infeasible. The overall processing includes distributed preprocessing of data records from each asset to produce compressed data.

  8. Effect of Encoding Method on the Distribution of Cardiac Arrhythmias

    CERN Document Server

    Mora, Luis A

    2011-01-01

    This paper evaluates the effect of the ECG signal encoding method, based on nonlinear characteristics such as information entropy and Lempel-Ziv complexity, on the distribution of cardiac arrhythmias. An electrocardiographic gating procedure is first proposed to compensate for errors inherent in the segment filtering process. To evaluate the distributions and determine which of the different encoding methods produces the greatest separation between the classes of arrhythmia studied (AFIB, AFL, SVTA, VT, normal), a function based on the dispersion of the elements about the centroid of their class is used; the result is that the best encoding for the entire system is the threshold-value method with a ternary code and E = 1/12.
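
    A sketch of the two ingredients named in the abstract: a ternary threshold encoding of an ECG-like series and a Lempel-Ziv phrase count. Both the encoding rule (thresholding successive differences) and the simplified LZ76-style parse below are illustrative assumptions, not the paper's exact procedures:

    ```python
    def lempel_ziv_complexity(s):
        """Count the distinct phrases in a simple LZ76-style parse of a
        symbol string; a common complexity measure for encoded signals."""
        phrases, i = set(), 0
        while i < len(s):
            j = i + 1
            while s[i:j] in phrases and j <= len(s):
                j += 1                     # grow the phrase until it is new
            phrases.add(s[i:j])
            i = j
        return len(phrases)

    def encode(x, e=1/12):
        """Ternary encoding: threshold successive differences at +/- e."""
        return ''.join('2' if d > e else '0' if d < -e else '1'
                       for d in (b - a for a, b in zip(x, x[1:])))

    print(lempel_ziv_complexity(encode([0.0, 0.2, 0.1, 0.4, 0.4, 0.1])))
    ```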

  9. Synchronization Methods for Three Phase Distributed Power Generation Systems

    DEFF Research Database (Denmark)

    Timbus, Adrian Vasile; Teodorescu, Remus; Blaabjerg, Frede

    2005-01-01

    Nowadays, there is a general trend to increase electricity production using Distributed Power Generation Systems (DPGS) based on renewable energy resources such as wind, sun or hydrogen. If these systems are not properly controlled, their connection to the utility network can generate problems on the grid side. Therefore, considerations about power generation, safe running and grid synchronization must be made before connecting these systems to the utility network. This paper mainly deals with the grid synchronization issues of distributed systems. An overview of the synchronization methods...

  10. A hybrid method for assessment of soil pollutants spatial distribution

    Science.gov (United States)

    Tarasov, D. A.; Medvedev, A. N.; Sergeev, A. P.; Shichkin, A. V.; Buevich, A. G.

    2017-07-01

    The authors propose a hybrid method to predict the distribution of topsoil pollutants (Cu and Cr). The method combines artificial neural networks and kriging. Corresponding computer models were built and tested on real data from subarctic regions of Russia. The network structure was selected by minimizing the root-mean-square error between real and predicted concentrations. The constructed models show that the prognostic accuracy of the artificial neural network is higher than that of the geostatistical (kriging) and deterministic methods. The conclusion is that hybridization of models (artificial neural network and kriging) improves the total predictive accuracy.

  11. Conditional prediction intervals of wind power generation

    DEFF Research Database (Denmark)

    Pinson, Pierre; Kariniotakis, Georges

    2010-01-01

    A generic method for providing prediction intervals of wind power generation is described. Prediction intervals complement the more common wind power point forecasts by giving a range of potential outcomes for a given probability, their so-called nominal coverage rate. Ideally they inform...... on the characteristics of prediction errors for providing conditional interval forecasts. By simultaneously generating prediction intervals with various nominal coverage rates, one obtains full predictive distributions of wind generation. Adapted resampling is applied here to the case of an onshore Danish wind farm, for which three point forecasting methods are considered as input. The probabilistic forecasts generated are evaluated based on their reliability and sharpness, and compared to forecasts based on quantile regression and the climatology benchmark. The operational application of adapted resampling
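
    A minimal, unconditional sketch of dressing a point forecast with quantiles of historical errors; the adapted resampling method described above goes further by conditioning the resampled errors on the forecast conditions:

    ```python
    import numpy as np

    def prediction_interval(point_forecast, past_errors, coverage=0.90):
        """Empirical prediction interval: add quantiles of historical
        forecast errors (error = observed - forecast) to a point forecast."""
        lo, hi = np.quantile(past_errors, [(1 - coverage) / 2,
                                           (1 + coverage) / 2])
        return point_forecast + lo, point_forecast + hi

    errors = np.random.default_rng(0).normal(0.0, 0.08, 500)  # stand-in error history
    print(prediction_interval(0.62, errors, coverage=0.90))
    ```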

  12. Electrical power distribution control methods, electrical energy demand monitoring methods, and power management devices

    Science.gov (United States)

    Chassin, David P.; Donnelly, Matthew K.; Dagle, Jeffery E.

    2006-12-12

    Electrical power distribution control methods, electrical energy demand monitoring methods, and power management devices are described. In one aspect, an electrical power distribution control method includes providing electrical energy from an electrical power distribution system, applying the electrical energy to a load, providing a plurality of different values for a threshold at a plurality of moments in time and corresponding to an electrical characteristic of the electrical energy, and adjusting an amount of the electrical energy applied to the load responsive to an electrical characteristic of the electrical energy triggering one of the values of the threshold at the respective moment in time.

  13. Communication Systems and Study Method for Active Distribution Power systems

    DEFF Research Database (Denmark)

    Wei, Mu; Chen, Zhe

    Due to the involvement and evolution of communication technologies in contemporary power systems, the applications of modern communication technologies in distribution power systems are becoming increasingly important. In this paper, the International Organization for Standardization (ISO.... The suitability of the communication technology to the distribution power system with active renewable-energy-based generation units is discussed. Subsequently, typical possible communication systems are studied by simulation. In this paper, a novel method of integrating communication system impact into power system simulation is presented to address the lack of off-the-shelf research tools for power system communication. The communication system is configured and studied with OPNET, and the performance of an active distribution power system integrated with the communication system is simulated

  14. Recurrence interval analysis of trading volumes.

    Science.gov (United States)

    Ren, Fei; Zhou, Wei-Xing

    2010-06-01

    We study the statistical properties of the recurrence intervals τ between successive trading volumes exceeding a certain threshold q. The recurrence interval analysis is carried out for the 20 liquid Chinese stocks covering a period from January 2000 to May 2009, and two Chinese indices from January 2003 to April 2009. Similar to the recurrence interval distribution of the price returns, the tail of the recurrence interval distribution of the trading volumes follows a power-law scaling, and the results are verified by the goodness-of-fit tests using the Kolmogorov-Smirnov (KS) statistic, the weighted KS statistic and the Cramér-von Mises criterion. The measurements of the conditional probability distribution and the detrended fluctuation function show that both short-term and long-term memory effects exist in the recurrence intervals between trading volumes. We further study the relationship between trading volumes and price returns based on the recurrence interval analysis method. It is found that large trading volumes are more likely to occur following large price returns, and the comovement between trading volumes and price returns is more pronounced for large trading volumes.
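
    Extracting the recurrence intervals themselves is straightforward; the sketch below computes the waiting times between threshold exceedances of a volume series, using synthetic data as a stand-in:

    ```python
    import numpy as np

    def recurrence_intervals(volumes, q):
        """Waiting times (in samples) between successive trading volumes
        exceeding the threshold q."""
        idx = np.flatnonzero(np.asarray(volumes) > q)
        return np.diff(idx)

    rng = np.random.default_rng(1)
    v = rng.lognormal(0.0, 1.0, 100_000)          # stand-in volume series
    tau = recurrence_intervals(v, np.quantile(v, 0.95))
    print(tau.mean())                              # ~20 for a 5% exceedance rate
    ```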

  15. Interval arithmetic operations for uncertainty analysis with correlated interval variables

    Institute of Scientific and Technical Information of China (English)

    Chao Jiang; Chun-Ming Fu; Bing-Yu Ni; Xu Han

    2016-01-01

    A new interval arithmetic method is proposed to solve interval functions with correlated intervals, through which the overestimation problem existing in interval analysis can be significantly alleviated. The correlation between interval parameters is defined by the multidimensional parallelepiped model, which is convenient for describing correlated and independent interval variables in a unified framework. The original interval variables with correlation are transformed into a standard space without correlation, and the relationship between the original variables and the standard interval variables is obtained. The expressions of the four basic interval arithmetic operations, namely addition, subtraction, multiplication, and division, are given in the standard space. Finally, several numerical examples and a two-step bar are used to demonstrate the effectiveness of the proposed method.
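
    For reference, the four classical interval operations for independent intervals; the paper's method additionally maps correlated intervals to an uncorrelated standard space before applying rules of this kind:

    ```python
    # Classical (uncorrelated) interval arithmetic on (lower, upper) pairs.
    def i_add(a, b): return (a[0] + b[0], a[1] + b[1])
    def i_sub(a, b): return (a[0] - b[1], a[1] - b[0])
    def i_mul(a, b):
        p = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
        return (min(p), max(p))
    def i_div(a, b):
        assert not (b[0] <= 0.0 <= b[1]), "divisor interval must exclude 0"
        return i_mul(a, (1.0 / b[1], 1.0 / b[0]))

    x, y = (1.0, 2.0), (3.0, 4.0)
    print(i_add(x, y), i_sub(x, y), i_mul(x, y), i_div(x, y))
    ```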

  16. Novel Method of Unambiguous Moving Target Detection in Pulse-Doppler Radar with Random Pulse Repetition Interval

    Directory of Open Access Journals (Sweden)

    Liu Zhen

    2012-03-01

    Blind zones and ambiguities in range and velocity measurement are two important issues in traditional pulse-Doppler radar. By generating random deviations with respect to a mean Pulse Repetition Interval (PRI), this paper proposes a novel algorithm for Moving Target Detection (MTD) based on Compressed Sensing (CS) theory, in which the random deviations of the PRI are converted into the Restricted Isometry Property (RIP) of the observing matrix. The ambiguities of range and velocity are eliminated by designing the signal parameters. The simulation results demonstrate that this scheme has high detection performance and that there are no ambiguities or blind zones. It can also shorten the coherent processing interval compared to the traditional staggered-PRI mode, because only one pulse train is needed instead of several trains.

  17. AN IMAGE RETRIEVAL METHOD BASED ON SPATIAL DISTRIBUTION OF COLOR

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Color histograms are now widely used in image retrieval. Color histogram-based image retrieval methods are simple and efficient but do not consider the spatial distribution of color. To overcome this shortcoming of conventional color histogram-based image retrieval methods, an image retrieval method based on the Radon Transform (RT) is proposed. In order to reduce the computational complexity, wavelet decomposition is used to compress the image data. First, images are decomposed by the Mallat algorithm. The low-frequency components are then projected by RT to generate the spatial color feature. Finally, the moment feature matrices, which are saved along with the original images, are obtained. Experimental results show that RT-based retrieval is more accurate and efficient than the traditional color histogram-based method when there are salient objects in the images. Furthermore, RT-based retrieval runs significantly faster than traditional color histogram methods.

  18. Interval Estimation for the Pareto Distribution Based on Progressively Type II Censored Data

    Institute of Scientific and Technical Information of China (English)

    李凤

    2011-01-01

    Based on progressively Type II censored data, interval estimation of the shape and scale parameters of the Pareto distribution is discussed. Interval estimates of the two parameters and a joint confidence region for them are obtained. Finally, taking the shortest length of the estimated interval as the criterion, the statistical performances of the methods are compared through a Monte Carlo simulation study to identify the optimal interval estimation method.

  19. Accuracy of popular automatic QT Interval algorithms assessed by a 'Gold Standard' and comparison with a Novel method: computer simulation study

    Directory of Open Access Journals (Sweden)

    Hunt Anthony

    2005-09-01

    Background: Accurate measurement of the QT interval is very important from a clinical and pharmaceutical drug safety screening perspective. Expert manual measurement is both imprecise and imperfectly reproducible, yet it is used as the reference standard to assess the accuracy of current automatic computer algorithms, which thus produce reproducible but incorrect measurements of the QT interval. There is a scientific imperative to evaluate the most commonly used algorithms with an accurate and objective 'gold standard' and to investigate novel automatic algorithms if the commonly used algorithms are found to be deficient. Methods: This study uses a validated computer simulation of 8 different noise-contaminated ECG waveforms (with known QT intervals of 461 and 495 ms), generated from a cell array using Luo-Rudy membrane kinetics and the Crank-Nicholson method, as a reference standard to assess the accuracy of commonly used QT measurement algorithms. Each ECG, contaminated with 39 mixtures of noise at 3 levels of intensity, was first filtered and then subjected to three threshold methods (T1, T2, T3), two T-wave slope methods (S1, S2) and a Novel method. The reproducibility and accuracy of each algorithm were compared for each ECG. Results: The coefficients of variation for methods T1, T2, T3, S1, S2 and Novel were 0.36, 0.23, 1.9, 0.93, 0.92 and 0.62 respectively. For ECGs with a real QT interval of 461 ms, methods T1, T2, T3, S1, S2 and Novel calculated the mean QT intervals (standard deviations) to be 379.4 (1.29), 368.5 (0.8), 401.3 (8.4), 358.9 (4.8), 381.5 (4.6) and 464 (4.9) ms respectively. For ECGs with a real QT interval of 495 ms, the corresponding mean QT intervals (standard deviations) were 396.9 (1.7), 387.2 (0.97), 424.9 (8.7), 386.7 (2.2), 396.8 (2.8) and 493 (0.97) ms respectively. These results showed significant differences between means at the >95% confidence level. Shifting ECG baselines caused large errors of QT interval with T1 and T2

  20. Numerical methods for computing the temperature distribution in satellite systems

    OpenAIRE

    Gómez-Valadés Maturano, Francisco José

    2012-01-01

    The present thesis was carried out at the ASTRIUM company to find new methods for obtaining temperature distributions. Current software packages such as ESATAN or ESARAD provide excellent thermal analysis solutions as well as radiative simulations in orbit scenarios, but at a high price, as they are very time consuming. Since licenses for these products are usually too limited for the use of many engineers, it is important to provide new tools for these calculations. In consequence, a dif...

  2. Applying the Priority Distribution Method for Employee Motivation

    Directory of Open Access Journals (Sweden)

    Jonas Žaptorius

    2013-09-01

    In an age of increasing healthcare expenditure, the efficiency of healthcare services is a burning issue. This paper deals with the creation of a performance-related remuneration system which would meet requirements for efficiency and sustainable quality. In real-world scenarios, it is difficult to create an objective and transparent employee performance evaluation model dealing with both qualitative and quantitative criteria. To achieve these goals, the use of decision support methods is suggested and analysed. A systematic approach to the practical application of the Priority Distribution Method in healthcare provider organisations is created and described.

  3. Rock sampling. [method for controlling particle size distribution

    Science.gov (United States)

    Blum, P. (Inventor)

    1971-01-01

    A method for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The method involves cutting grooves in the rock surface to provide a grouping of parallel ridges and subsequently machining the ridges to provide a powder specimen. The machining step may comprise milling, drilling, lathe cutting or the like; but a planing step is advantageous. Control of the particle size distribution is effected primarily by changing the height and width of these ridges. This control exceeds that obtainable by conventional grinding.

  4. Product interval automata

    Indian Academy of Sciences (India)

    Deepak D’Souza; P S Thiagarajan

    2002-04-01

    We identify a subclass of timed automata called product interval automata and develop its theory. These automata consist of a network of timed agents with the key restriction being that there is just one clock for each agent and the way the clocks are read and reset is determined by the distribution of shared actions across the agents. We show that the resulting automata admit a clean theory in both logical and language theoretic terms. We also show that product interval automata are expressive enough to model the timed behaviour of asynchronous digital circuits.

  5. Method of measuring charge distribution of nanosized aerosols.

    Science.gov (United States)

    Kim, S H; Woo, K S; Liu, B Y H; Zachariah, M R

    2005-02-01

    In this paper, we present the development of a method to accurately measure the positive and negative charge distribution of nanosized aerosols using a tandem differential mobility analyzer (TDMA) system. From the series of TDMA measurements, the charge fraction of nanosized aerosol particles was obtained as a function of the equivalent mobility particle diameter, ranging from 50 to 200 nm. The capability of this new approach was demonstrated by sampling from a laminar diffusion flame, which provides a source of highly charged particles due to the naturally occurring flame ionization process. The results from the TDMA measurements provide the charge distribution of nanosized aerosols, which we found to be in reasonable agreement with Boltzmann equilibrium charge distribution theory and with a theory based upon the charge population balance equation (PBE) combined with Fuchs theory (N.A. Fuchs, Geofis. Pura Appl. 56 (1963) 185). The theoretically estimated charge distribution of aerosol particles based on the PBE provides insight into the charging processes of nanosized aerosols surrounded by bipolar ions and electrons, and agrees well with the TDMA results.

  6. A method for statistically comparing spatial distribution maps

    Directory of Open Access Journals (Sweden)

    Reynolds Mary G

    2009-01-01

    Background: Ecological niche modeling is a method for estimating species distributions based on certain ecological parameters. Thus far, empirical determination of significant differences between independently generated distribution maps for a single species (maps which are created through equivalent processes, but with different ecological input parameters) has been challenging. Results: We describe a method for comparing model outcomes which allows a statistical evaluation of whether the strength of prediction and breadth of predicted areas is measurably different between projected distributions. To create ecological niche models for statistical comparison, we utilized GARP (Genetic Algorithm for Rule-Set Production) software to generate ecological niche models of human monkeypox in Africa. We created several models, keeping the case location input records constant for each model but varying the ecological input data. In order to assess the relative importance of each ecological parameter included in the development of the individual predicted distributions, we performed pixel-to-pixel comparisons between model outcomes and calculated the mean difference in pixel scores. We used a two-sample Student's t-test (assuming as null hypothesis that both maps were identical to each other regardless of which input parameters were used) to examine whether the mean difference in corresponding pixel scores from one map to another was greater than would be expected by chance alone. We also utilized weighted kappa statistics, frequency distributions, and percent difference to look at the disparities in pixel scores. Multiple independent statistical tests indicated precipitation as the single most important independent ecological parameter in the niche model for human monkeypox disease. Conclusion: In addition to improving our understanding of the natural factors influencing the distribution of human monkeypox disease, such pixel-to-pixel comparison
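
    A sketch of the pixel-to-pixel comparison described above, assuming two equally-sized rasters and reading the test as paired over corresponding pixels:

    ```python
    import numpy as np
    from scipy import stats

    def compare_maps(map_a, map_b):
        """Pixel-to-pixel comparison of two model-output rasters: mean
        difference in pixel scores plus a t-test on corresponding pixels
        (H0: the maps are identical)."""
        a, b = np.ravel(map_a), np.ravel(map_b)
        t, p = stats.ttest_rel(a, b)
        return (a - b).mean(), t, p

    rng = np.random.default_rng(0)
    base = rng.random((100, 100))
    print(compare_maps(base, base + rng.normal(0.0, 0.05, base.shape)))
    ```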

  7. A method to deconvolve mass ratio distribution from binary stars

    CERN Document Server

    Cure, Michel; Christen, Alejandra; Cassetti, Julia; Boffin, Henri M J

    2014-01-01

    To better understand the evolution of stars in binary systems and to constrain the formation of binary stars, it is important to know the binary mass-ratio distribution. However, in most cases, i.e. for single-lined spectroscopic binaries, the mass ratio cannot be measured directly but only derived as the convolution of a function that depends on the mass ratio and on the unknown inclination angle of the orbit on the plane of the sky. We extend our previous method of deconvolving this inverse problem (Cure et al. 2014), i.e., we obtain as an integral the cumulative distribution function (CDF) of the mass ratio distribution. After a suitable transformation of variables it turns out that this problem is the same as the one for rotational velocities $v \sin i$, allowing a closed analytic formulation for the CDF. We then apply our method to two real datasets: a sample of Am star binary systems, and a sample of massive spectroscopic binaries in the Cyg OB2 Association. We are able to reproduce the previous re...

  8. Method of Analytic Evolution of Flat Distribution Amplitudes in QCD

    CERN Document Server

    Tandogan, Asli

    2011-01-01

    A new analytical method of performing ERBL evolution is described. The main goal is to develop an approach that works for distribution amplitudes that do not vanish at the end points, for which the standard method of expansion in Gegenbauer polynomials is inefficient. Two cases of the initial DA are considered: a purely flat DA, given by the same constant for all x, and an antisymmetric DA given by opposite constants for x < 1/2 and x > 1/2. For a purely flat DA, the evolution is governed by an overall (x(1-x))^t dependence on the evolution parameter t times a factor that was calculated as an expansion in t. For an antisymmetric flat DA, an extra overall factor |1-2x|^{2t} appears due to a jump at x = 1/2. Good convergence was observed in the t < 1/2 region. For larger t, one can use the standard method of the Gegenbauer expansion.

  9. Standard test method for distribution coefficients of inorganic species by the batch method

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This test method covers the determination of distribution coefficients of chemical species to quantify uptake onto solid materials by a batch sorption technique. It is a laboratory method primarily intended to assess sorption of dissolved ionic species subject to migration through pores and interstices of site specific geomedia. It may also be applied to other materials such as manufactured adsorption media and construction materials. Application of the results to long-term field behavior is not addressed in this method. Distribution coefficients for radionuclides in selected geomedia are commonly determined for the purpose of assessing potential migratory behavior of contaminants in the subsurface of contaminated sites and waste disposal facilities. This test method is also applicable to studies for parametric studies of the variables and mechanisms which contribute to the measured distribution coefficient. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement a...

  10. A New Linearization Method of Unbalanced Electrical Distribution Networks

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Guodong [ORNL]; Xu, Yan [ORNL]; Ceylan, Oguzhan [ORNL]; Tomsovic, Kevin [University of Tennessee, Knoxville (UTK)]

    2014-01-01

    With increasing penetration of distributed generation in distribution networks (DN), the secure and optimal operation of DNs has become an important concern. As DN control and operation strategies are mostly based on linearized sensitivity coefficients between controlled variables (e.g., node voltages, line currents, power loss) and control variables (e.g., power injections, transformer tap positions), efficient and precise calculation of these sensitivity coefficients, i.e. linearization of the DN, is of fundamental importance. In this paper, the derivation of the node voltages and power loss as functions of the nodal power injections and transformers' tap-changer positions is presented and then solved by a Gauss-Seidel method. Compared to other approaches presented in the literature, the proposed method takes into account different load characteristics (e.g., constant PQ, constant impedance, constant current and any combination of the above) of a generic multi-phase unbalanced DN and improves the accuracy of linearization. Numerical simulations on both the IEEE 13- and 34-node test feeders show the efficiency and accuracy of the proposed method.
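
    A generic Gauss-Seidel iteration for a linear system, the kind of solver invoked above; the paper's formulation over multi-phase network variables is considerably more involved:

    ```python
    import numpy as np

    def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
        """Gauss-Seidel iteration for A x = b: sweep through the equations,
        always using the most recently updated components of x."""
        A, b = np.asarray(A, float), np.asarray(b, float)
        x = np.zeros_like(b) if x0 is None else np.array(x0, float)
        for _ in range(max_iter):
            x_old = x.copy()
            for i in range(len(b)):
                s = A[i] @ x - A[i, i] * x[i]   # off-diagonal contribution
                x[i] = (b[i] - s) / A[i, i]
            if np.linalg.norm(x - x_old, np.inf) < tol:
                break
        return x

    A = [[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]]  # diagonally dominant
    print(gauss_seidel(A, [1.0, 2.0, 3.0]))
    ```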

  11. A Network Reconfiguration Method Considering Data Uncertainties in Smart Distribution Networks

    Directory of Open Access Journals (Sweden)

    Ke-yan Liu

    2017-05-01

    This work presents a method for distribution network reconfiguration with simultaneous consideration of distributed generation (DG) allocation. The uncertainties of load fluctuation before the network reconfiguration are also considered. Three optimization objectives are investigated: minimal line loss cost, minimum expected energy not supplied, and minimum switch operation cost. The multi-objective optimization problem is transformed into a single-objective optimization problem by utilizing weighting factors. The proposed network reconfiguration method comprises two stages. The first stage creates a feasible topology network using binary particle swarm optimization (BPSO). Then the DG allocation problem is solved utilizing sensitivity analysis and a Harmony Search algorithm (HSA). Meanwhile, interval analysis is applied to deal with the uncertainties of load and device parameters. Test cases are studied using the standard IEEE 33-bus and PG&E 69-bus systems. Different scenarios and comparisons are analyzed in the experiments. The results show the applicability of the proposed method, and its performance is also investigated. The computational results indicate that the proposed network reconfiguration algorithm is feasible.

  12. Visual Method for Spectral Energy Distribution Calculation of Blazars

    Indian Academy of Sciences (India)

    Y. Huang; J. H. Fan

    2014-09-01

    In this work, we propose to use 'The Geometer's Sketchpad' to fit the spectral energy distribution of blazars based on three effective spectral indices, α_RO, α_OX, and α_RX, and the flux density in the radio band. It lets us examine the fitting in detail, with both the peak frequency and the peak luminosity given immediately. We applied our method to sources whose peak frequency and peak luminosity are given and found that our results are consistent with those in the work of Sambruna et al. (1996).

  13. Analysis of the Spatial Distribution of Galaxies by Multiscale Methods

    Directory of Open Access Journals (Sweden)

    E. Saar

    2005-09-01

    Galaxies are arranged in interconnected walls and filaments forming a cosmic web encompassing huge, nearly empty regions between the structures. Many statistical methods have been proposed in the past in order to describe the galaxy distribution and discriminate between the different cosmological models. We present in this paper multiscale geometric transforms sensitive to clusters, sheets, and walls: the 3D isotropic undecimated wavelet transform, the 3D ridgelet transform, and the 3D beamlet transform. We show that statistical properties of transform coefficients measure, in a coherent and statistically reliable way, the degree of clustering, filamentarity, sheetedness, and voidedness of a data set.

  14. A Verification System for Distributed Objects with Asynchronous Method Calls

    Science.gov (United States)

    Ahrendt, Wolfgang; Dylla, Maximilian

    We present a verification system for Creol, an object-oriented modeling language for concurrent distributed applications. The system is an instance of KeY, a framework for object-oriented software verification, which has so far been applied foremost to sequential Java. Building on KeY characteristic concepts, like dynamic logic, sequent calculus, explicit substitutions, and the taclet rule language, the system presented in this paper addresses functional correctness of Creol models featuring local cooperative thread parallelism and global communication via asynchronous method calls. The calculus heavily operates on communication histories which describe the interfaces of Creol units. Two example scenarios demonstrate the usage of the system.

  15. An interval UTA method based on the satisfaction degree of the decision maker

    Institute of Scientific and Technical Information of China (English)

    熊文涛; 冯育强

    2016-01-01

    An interval UTA method is proposed for inferring utility functions from a partial preorder of alternatives evaluated on multiple criteria; it extends the well-known UTA (utility additive) method to handle interval evaluation data. First, following the original UTA method, the interval attribute values of all reference alternatives are transformed into ranges of utility, i.e., utility intervals. Next, the overall utility of each reference alternative is calculated using interval arithmetic. A linear programming model is then constructed from the satisfaction degree of the decision maker, using the mid-points and half-widths of the interval numbers, and the minimum total error is computed. In the post-optimization step, a quadratic programming model is established whose objective is to minimize the variance of the node utilities along all criteria; the resulting node utilities are used to compute the overall utility intervals and the ranking of the alternatives under evaluation. A numerical example shows that the proposed interval UTA method ranks alternatives effectively and is consistent with the decision maker's past preference information.

  16. Interval Estimation of Seismic Hazard Parameters

    Science.gov (United States)

    Orlecka-Sikora, Beata; Lasocki, Stanislaw

    2016-11-01

    The paper considers Poisson temporal occurrence of earthquakes and presents a way to integrate uncertainties of the estimates of mean activity rate and magnitude cumulative distribution function into the interval estimation of the most widely used seismic hazard functions, such as the exceedance probability and the mean return period. The proposed algorithm can be used either when the Gutenberg-Richter model of magnitude distribution is accepted or when nonparametric estimation is in use. When the Gutenberg-Richter model of magnitude distribution is used, the interval estimation of its parameters is based on the asymptotic normality of the maximum likelihood estimator. When nonparametric kernel estimation of the magnitude distribution is used, we propose the iterated bias-corrected and accelerated method for interval estimation based on the smoothed bootstrap and second-order bootstrap samples. The changes resulting from the integrated approach to the interval estimation of the seismic hazard functions, with respect to the approach that neglects the uncertainty of the mean activity rate estimates, have been studied using Monte Carlo simulations and two real-dataset examples. The results indicate that the uncertainty of the mean activity rate affects the interval estimates of the hazard functions significantly only when the product of the activity rate and the time period for which the hazard is estimated is no more than 5.0. When this product becomes greater than 5.0, the impact of the uncertainty of the cumulative distribution function of magnitude dominates the impact of the uncertainty of the mean activity rate in the aggregated uncertainty of the hazard functions; accordingly, the interval estimates with and without inclusion of the uncertainty of the mean activity rate converge. The presented algorithm is generic and can also be applied to capture the propagation of uncertainty of estimates which are parameters of a multiparameter function onto this function.
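
    The two hazard functions named above have simple point-estimate forms under the Poisson model; the paper's contribution is the interval estimation around them. A sketch, assuming an unbounded Gutenberg-Richter magnitude CDF above a completeness magnitude:

    ```python
    import numpy as np

    def exceedance_probability(rate, F_M, m, t):
        """Poisson hazard: probability of at least one event of magnitude
        >= m within t years, given mean activity rate `rate` (events/year)
        and magnitude CDF F_M."""
        lam = rate * (1.0 - F_M(m))
        return 1.0 - np.exp(-lam * t)

    def mean_return_period(rate, F_M, m):
        return 1.0 / (rate * (1.0 - F_M(m)))

    # Unbounded Gutenberg-Richter magnitude CDF above completeness level m0:
    b, m0 = 1.0, 2.0
    F_M = lambda m: 1.0 - 10.0 ** (-b * (m - m0))
    print(exceedance_probability(10.0, F_M, 4.0, 1.0))  # ~0.095
    print(mean_return_period(10.0, F_M, 4.0))           # 10 years
    ```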

  17. Method of Time-Interval Measurement in Pulsed Laser Ranging

    Institute of Scientific and Technical Information of China (English)

    陈瑞强; 江月松

    2013-01-01

    The performance of pulsed laser ranging is directly influenced by the method used to measure the time interval. Borrowing the idea of interpolation-based time-interval measurement, the characteristics of a triangle-wave reference signal are exploited to realize time-interval measurement with a long measurement range and high precision. The principle of using a triangle-wave reference signal for time-interval measurement is described, and the factors that affect the measurement precision are analyzed quantitatively. It is pointed out that the frequency of the triangle-wave reference signal and the noise attached to it are the two main factors affecting the measurement precision, and their effects are numerically simulated. A confirmatory experiment was designed to show how the noise attached to the triangle-wave reference signal affects the measurement precision. Both the numerical simulations and the experimental results show that this method of time-interval measurement for pulsed laser ranging can achieve high precision with a low-frequency triangle-wave reference signal, and that increasing the frequency of the reference signal and reducing its noise can both effectively improve the measurement precision. The results also confirm the feasibility of this time-interval measurement approach for pulsed laser ranging.
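
    The generic interpolation idea (coarse clock counts plus fractional-period interpolation) can be sketched as below. This is only the general scheme that the abstract's method belongs to; the paper reads the fractional part from the instantaneous amplitude of the triangle-wave reference, which is not modeled here:

    ```python
    import numpy as np

    T_clk = 10e-9                          # reference period: 10 ns (assumed)

    def measure(t_start, t_stop):
        """Coarse count of whole reference periods between the events, plus
        interpolated fractional periods at the start and stop instants."""
        n = np.floor(t_stop / T_clk) - np.floor(t_start / T_clk)
        frac_start = (t_start % T_clk) / T_clk   # fine fractions, from interpolation
        frac_stop = (t_stop % T_clk) / T_clk
        return (n + frac_stop - frac_start) * T_clk

    print(measure(1.23e-6, 4.5678e-6))     # ~3.3378e-06 s
    ```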

  18. Models and Methods for Urban Power Distribution Network Planning

    Institute of Scientific and Technical Information of China (English)

    余贻鑫; 王成山; 葛少云; 肖俊; 严雪飞; 黄纯华

    2004-01-01

    The models, methods and application experiences of a practical GIS (geographic information system)-based computer decision-making support system for urban power distribution network planning with seven subsystems, termed CNP, are described. In each subsystem there is at least one practical mathematical method or set of methods. Some new models and mathematical methods have been introduced. The development of CNP has followed the idea of cognitive systems engineering, which holds that human and computer intelligence should be combined to solve complex engineering problems cooperatively. Practical applications have shown not only that the optimal plan can be reached automatically with many complicated factors considered, but also that the burden of computation, analysis and graphic drawing can be reduced considerably.

  19. Improved method to extract nucleon helicity distributions using event weighting

    Science.gov (United States)

    Pretz, J.

    2017-02-01

    An improved analysis method to extract quark helicity distributions in leading order (LO) QCD from semi-inclusive double spin asymmetries in deep inelastic scattering is presented. The method relies on the fact that fragmentation functions, describing the fragmentation of a quark into a hadron, have a strong dependence on the energy fraction z of the observed hadron. Hadrons with large z contain more information about the struck quark. This can be used in a weighting procedure to improve the figure of merit (= inverse of variance). In numerical examples it is shown that one could gain 15–39% depending on the quark flavor and cut on z. Mathematically the problem can be described as finding an optimal solution in terms of the figure of merit for parameters Θ determined from a system of linear equations B(x) Θ =Y(x), where the measured input vector Y(x) is given as event distributions depending on a random variable x, the coefficients of the matrix B(x) depend as well on x, whereas the parameter vector Θ to be determined does not.

  20. Improved Method to extract Nucleon Helicity Distributions using Event Weighting

    CERN Document Server

    Pretz, Jörg

    2016-01-01

    An improved analysis method to extract quark helicity distributions from semi-inclusive double spin asymmetries in deep inelastic scattering is presented. The method relies on the fact that fragmentation functions, describing the fragmentation of a quark into a hadron, have a strong dependence on the energy fraction $z$ of the observed hadron. Hadrons with large $z$ contain more information about the struck quark. This can be used in a weighting procedure to improve the figure of merit (= inverse of the statistical uncertainty). In numerical examples it is shown that one could gain 15-39% depending on the quark flavor and cut on $z$. Mathematically the problem can be described as finding an optimal solution in terms of the figure of merit for parameters ${\bf X}$ determined from a system of linear equations ${\bf B}(z){\bf X} = {\bf Y}(z)$, where the measured input vector ${\bf Y}(z)$ is given as event distributions depending on a random variable $z$, the coefficients of the matrix ${\bf B}(z)$ depend as well ...

  1. Study on the Medical Image Distributed Dynamic Processing Method

    Institute of Scientific and Technical Information of China (English)

    张全海; 施鹏飞

    2003-01-01

    To meet the challenge of implementing rapidly advancing, time-consuming medical image processing algorithms, it is necessary to develop a medical image processing technology that processes 2D or 3D medical images dynamically on the web. In earlier systems, only static image processing could be provided owing to the limitations of web technology. The development of Java and CORBA (common object request broker architecture) overcomes the shortcomings of static web applications and makes dynamic processing of medical images on the web available. To develop an open solution for distributed computing, we integrate Java and the web with CORBA and present a web-based medical image dynamic processing method which adopts Java as the language for programming web applications and components, and utilizes the CORBA architecture to cope with the heterogeneity of a complex distributed system. The method also provides a platform-independent, transparent processing architecture for implementing advanced image routines, and enables users to access large datasets and resources according to the requirements of medical applications. The experiment in this paper shows that the medical image dynamic processing method implemented on the web using Java and CORBA is feasible.

  2. The Distribution of Characteristics of a Shock Model with Time Intervals Obeying the Binomial Distribution

    Institute of Scientific and Technical Information of China (English)

    马明; 陆琬; 吉佩玉

    2015-01-01

    A class of random shock models is studied. For the case in which the time intervals between shock arrivals obey the binomial distribution, three indicators are investigated: the arrival time of shocks, the total number of shocks up to any given time, and the probability that a shock arrives at a given moment. The probability distributions of the shock arrival time, of the number of shocks, and of whether a shock occurs at any given moment are obtained.

  3. Analysis of the event structure by the rapidity interval method in K⁻p interactions at 32 GeV/c and pp interactions at 69 GeV/c

    Energy Technology Data Exchange (ETDEWEB)

    Babintsev, V.V.; Bumazhnov, V.A.; Moiseev, A.M.; Ukhanov, M.N. (Gosudarstvennyj Nomitet po Ispol' zovaniyu Atomnoj Ehnergii SSSR, Serpukhov. Inst. Fiziki Vysokikh Ehnergij); Nruglov, N.A.; Proskuryakov, A.S.; Smirnova, L.N. (Moskovskij Gosudarstvennyj Univ. (USSR). Nauchno-Issledovatel' skij Inst. Yadernoj Fiziki)

    1981-09-01

    The experimental material was obtained by measuring photographs from the liquid-hydrogen bubble chamber ''Mirabel'' irradiated at the accelerator. Approximately 43000 completely measured events with charged-particle multiplicity n >= 6 in K⁻p interactions and approximately 5000 similar events in pp interactions are used for the analysis. A method is suggested for analysing distributions of the size of rapidity gaps occupied by a fixed number m of charged particles. The structure of the distributions of the size r_m^n of the rapidity intervals involving m charged particles in events with n charged particles is analysed for K⁻p interactions at 32 GeV/c and pp interactions at 69 GeV/c. It is found that all distributions correspond to a smooth curve with a single maximum. The shape of the experimental distributions for K⁻p interactions is compared to the distributions for events generated with the multireggeon model.

  4. Dirichlet and Related Distributions Theory, Methods and Applications

    CERN Document Server

    Ng, Kai Wang; Tang, Man-Lai

    2011-01-01

    The Dirichlet distribution appears in many areas of application, which include modelling of compositional data, Bayesian analysis, statistical genetics, and nonparametric inference. This book provides a comprehensive review of the Dirichlet distribution and two extended versions, the Grouped Dirichlet Distribution (GDD) and the Nested Dirichlet Distribution (NDD), arising from likelihood and Bayesian analysis of incomplete categorical data and survey data with non-response. The theoretical properties and applications are also reviewed in detail for other related distributions, such as the inve

  5. Simple method of generating and distributing frequency-entangled qudits

    Science.gov (United States)

    Jin, Rui-Bo; Shimizu, Ryosuke; Fujiwara, Mikio; Takeoka, Masahiro; Wakabayashi, Ryota; Yamashita, Taro; Miki, Shigehito; Terai, Hirotaka; Gerrits, Thomas; Sasaki, Masahide

    2016-11-01

    High-dimensional, frequency-entangled photonic quantum bits (qudits for d dimensions) are promising resources for quantum information processing in an optical fiber network and can also be used to improve channel capacity and security for quantum communication. However, up to now, it is still challenging to prepare high-dimensional frequency-entangled qudits in experiments due to technical limitations. Here we propose and experimentally implement a novel method for simple generation of frequency-entangled qudits with $d > 10$ without the use of any spectral filters or cavities. The generated state is distributed over 15 km in total length. This scheme combines the technique of spectral engineering of biphotons generated by spontaneous parametric down-conversion with the technique of spectrally resolved Hong-Ou-Mandel interference. Our frequency-entangled qudits will enable quantum cryptographic experiments with enhanced performance. This distribution of distinct entangled frequency modes may also be useful for improved metrology and quantum remote synchronization, as well as for fundamental tests of stronger violations of local realism.

  6. Stability criteria for T-S fuzzy systems with interval time-varying delays and nonlinear perturbations based on geometric progression delay partitioning method.

    Science.gov (United States)

    Chen, Hao; Zhong, Shouming; Li, Min; Liu, Xingwen; Adu-Gyamfi, Fehrs

    2016-07-01

    In this paper, a novel delay partitioning method is proposed by introducing the theory of geometric progression for the stability analysis of T-S fuzzy systems with interval time-varying delays and nonlinear perturbations. Based on the common ratio α, the delay interval is unequally separated into multiple subintervals. A newly modified Lyapunov-Krasovskii functional (LKF) is established which includes triple-integral terms and augmented factors with respect to the length of each related proportional subinterval. In addition, a recently developed free-matrix-based integral inequality is employed to avoid excessive enlargement when dealing with the derivative of the LKF. This development can dramatically enhance the efficiency of obtaining the maximum upper bound of the time delay. Finally, much less conservative stability criteria are presented. Numerical examples are conducted to demonstrate the significant improvements of this proposed approach.
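
    A toy sketch of the geometric-progression partitioning step, assuming the delay interval [h1, h2] is split into N subintervals whose lengths grow by the common ratio α (the LKF construction itself is not reproduced here):

        def geometric_partition(h1, h2, alpha, N):
            """Split [h1, h2] into N subintervals whose lengths form a
            geometric progression with common ratio alpha."""
            total = sum(alpha ** k for k in range(N))
            first = (h2 - h1) / total
            points, p = [h1], h1
            for k in range(N):
                p += first * alpha ** k
                points.append(p)
            return points

        print(geometric_partition(0.0, 1.0, alpha=2.0, N=4))
        # [0.0, 0.0667, 0.2, 0.4667, 1.0] -- unequal subintervals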

  7. A Distributed Cooperative Power Allocation Method for Campus Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Hao, He; Sun, Yannan; Carroll, Thomas E.; Somani, Abhishek

    2015-09-01

    We propose a coordination algorithm for cooperative power allocation among a collection of commercial buildings within a campus. We introduce thermal and power models of a typical commercial building heating, ventilation, and air conditioning (HVAC) system, and utilize model predictive control to characterize their power flexibility. The power allocation problem is formulated as a cooperative game using the Nash Bargaining Solution (NBS) concept, in which buildings collectively maximize the product of their utilities subject to their local flexibility constraints and a total power limit set by the campus coordinator. To solve the optimal allocation problem, a distributed protocol is designed using dual decomposition of the Nash bargaining problem. Numerical simulations are performed to demonstrate the efficacy of our proposed allocation method.
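
    As an illustration of the dual-decomposition protocol, a hedged Python sketch of a Nash-bargaining-style allocation with a simple logarithmic utility; the utilities, bounds and step size are invented for the example and are not the paper's building models:

        import numpy as np

        # maximize sum_i log(p_i - f_i) s.t. sum_i p_i <= P, f_i <= p_i <= hi_i
        f = np.array([10.0, 20.0, 15.0])    # minimum power needs (kW)
        hi = np.array([40.0, 60.0, 50.0])   # maximum power (kW)
        P = 90.0                            # campus-wide power limit (kW)

        lam, step = 1.0, 0.002              # dual price and subgradient step
        for _ in range(5000):
            # each building solves its local problem given the price lam:
            # argmax_p log(p - f_i) - lam * p  ->  p = f_i + 1/lam, clipped
            p = np.clip(f + 1.0 / lam, f + 1e-6, hi)
            lam = max(1e-4, lam + step * (p.sum() - P))  # coordinator update
        print(p, p.sum())   # converges to approximately [25, 35, 30], sum 90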

  8. GEOMETRIC METHOD OF SEQUENTIAL ESTIMATION RELATED TO MULTINOMIAL DISTRIBUTION MODELS

    Institute of Scientific and Technical Information of China (English)

    WEI Bocheng; LI Shouye

    1995-01-01

    In the 1980s, differential geometric methods were successfully used to study curved exponential families and normal nonlinear regression models. This paper presents a new geometric structure to study multinomial distribution models which contain a set of nonlinear parameters. Based on this geometric structure, the authors study several asymptotic properties for sequential estimation. The bias, the variance and the information loss of the sequential estimates are given from a geometric viewpoint, and a limit theorem connecting the observed and expected Fisher information is obtained in terms of curvature measures. The results show that the sequential estimation procedure has some better properties which are generally impossible for nonsequential estimation procedures.

  9. A new method of assessing cardiac autonomic function and its comparison with spectral analysis and coefficient of variation of R-R interval.

    Science.gov (United States)

    Toichi, M; Sugiura, T; Murai, T; Sengoku, A

    1997-01-12

    A new non-linear method of assessing cardiac autonomic function was examined in a pharmacological experiment in ten healthy volunteers. The R-R interval data obtained under a control condition and under autonomic blockade by atropine and by propranolol were analyzed by each of the new Lorenz-plot method, spectral analysis and the coefficient of variation. With our method we derived two measures, the cardiac vagal index and the cardiac sympathetic index, which indicate vagal and sympathetic function separately. These two indices were found to be more reliable than those obtained by the other two methods. We anticipate that non-invasive assessment of short-term cardiac autonomic function will come to be performed more reliably and conveniently by this method.
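
    A hedged sketch of how such Lorenz-plot indices are often computed; the convention L = 4·SD2, T = 4·SD1 with CVI = log10(L·T) and CSI = L/T is one common implementation and may not match the authors' exact measurement procedure:

        import numpy as np

        def cvi_csi(rr):
            """Cardiac vagal index and cardiac sympathetic index from a
            Lorenz plot of consecutive R-R intervals (rr in ms)."""
            rr = np.asarray(rr, dtype=float)
            x, y = rr[:-1], rr[1:]
            sd1 = np.std((y - x) / np.sqrt(2))  # spread across identity line
            sd2 = np.std((y + x) / np.sqrt(2))  # spread along identity line
            L, T = 4 * sd2, 4 * sd1
            return np.log10(L * T), L / T       # CVI, CSI

        rr = 800 + 50 * np.random.default_rng(1).standard_normal(300)
        print(cvi_csi(rr))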

  10. Hypothesis Testing, "p" Values, Confidence Intervals, Measures of Effect Size, and Bayesian Methods in Light of Modern Robust Techniques

    Science.gov (United States)

    Wilcox, Rand R.; Serang, Sarfaraz

    2017-01-01

    The article provides perspectives on p values, null hypothesis testing, and alternative techniques in light of modern robust statistical methods. Null hypothesis testing and "p" values can provide useful information provided they are interpreted in a sound manner, which includes taking into account insights and advances that have…

  11. A SECOND-ORDER INTERVAL OPTIMIZATION METHOD ON UNCERTAINTY STRUCTURE BASED ON EPSILON METHOD

    Institute of Scientific and Technical Information of China (English)

    麻凯; 李鹏; 刘巧伶

    2013-01-01

    This paper presents an interval modal optimization method for uncertain structures. First, the constrained modal optimization problem is transformed into an unconstrained one by the Lagrange multiplier method; then an interval second-order Taylor expansion is built to approximately describe the modal interval of a structure with interval parameters. In this interval expression, the second-order constant term, the Hessian matrix, is hard to compute by common methods, so the DFP method (Davidon-Fletcher-Powell method) is used to approximate it iteratively, while the structural parameters and their uncertainty intervals satisfying the constraints and the optimization objective are computed. The Epsilon algorithm is adopted in the structural reanalysis, which saves computing time while preserving accuracy. The optimization method is applied to an example of a plate-shell structure with stiffeners, which shows that it is an effective interval optimization method.
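
    A sketch of the DFP inverse-Hessian update the abstract refers to, applied to a toy quadratic with exact line searches (this is the standard DFP formula; the structural application is not reproduced):

        import numpy as np

        def dfp_update(H, s, y):
            """One DFP update of the inverse-Hessian approximation H,
            with step s = x_new - x_old and gradient change y."""
            s, y = s.reshape(-1, 1), y.reshape(-1, 1)
            Hy = H @ y
            return (H + (s @ s.T) / float(s.T @ y)
                      - (Hy @ Hy.T) / float(y.T @ Hy))

        # f(x) = 0.5 x^T A x: after n = 2 steps H equals A^{-1}
        A = np.array([[3.0, 1.0], [1.0, 2.0]])
        H, x = np.eye(2), np.array([1.0, 1.0])
        for _ in range(2):
            g = A @ x
            d = -H @ g
            t = -(g @ d) / (d @ (A @ d))   # exact line search (quadratic)
            s = t * d
            y = A @ (x + s) - g
            H = dfp_update(H, s, y)
            x = x + s
        print(H @ A)   # ~ identity matrix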

  12. Development of methods for DSM and distribution automation planning

    Energy Technology Data Exchange (ETDEWEB)

    Kaerkkaeinen, S.; Kekkonen, V. [VTT Energy, Espoo (Finland)]; Rissanen, P. [Tietosavo Oy (Finland)]

    1998-08-01

    Demand-Side Management (DSM) is usually a utility (or sometimes governmental) activity designed to influence the energy demand of customers (both level and load variation). It includes basic options like strategic conservation or load growth, peak clipping, load shifting and fuel switching. Typical ways to realize DSM are direct load control, innovative tariffs, different types of campaigns etc. Restructuring of utilities in Finland and increased competition in the electricity market have had a dramatic influence on DSM. Traditional ways are impossible due to the conflicting interests of the generation, network and supply businesses and increased competition between different actors in the market. Costs and benefits of DSM are divided among different companies, and different types of utilities are interested only in those activities which are beneficial to them. On the other hand, due to the increased competition, suppliers are diversifying to different types of products, and an increasing number of customer services partly based on DSM are available. The aim of this project was to develop and assess methods for DSM and distribution automation planning from the utility point of view. The methods were also applied to case studies at utilities.

  13. Interval value information system concept lattice reduction method

    Institute of Scientific and Technical Information of China (English)

    刘冬

    2013-01-01

    An interval value information system is a special information system in which attribute values take the form of interval values. In this paper, the definitions of the 0-1 formal context and the corresponding concept lattice are described. Then the decision theorem for consistent sets and the equivalent proposition for reduction sets are given. The discernibility attribute matrix is introduced, and a theoretical method for attribute reduction of interval value information systems based on concept lattices is studied.

  14. An interval-possibilistic basic-flexible programming method for air quality management of municipal energy system through introducing electric vehicles.

    Science.gov (United States)

    Yu, L; Li, Y P; Huang, G H; Shan, B G

    2017-09-01

    Contradictions of sustainable transportation development and environmental issues have been aggravated significantly and have become one of the major concerns for energy systems planning and management. A heavy emphasis is placed on stimulation of electric vehicles (EVs) to handle these problems associated with various complexities and uncertainties in a municipal energy system (MES). In this study, an interval-possibilistic basic-flexible programming (IPBFP) method is proposed for planning the MES of Qingdao, where uncertainties expressed as interval-flexible variables and interval-possibilistic parameters can be effectively reflected. Support vector regression (SVR) is used for predicting the electricity demand of the city under various scenarios. Solutions of EV stimulation levels and satisfaction levels in association with flexible constraints and predetermined necessity degrees are analyzed, which can help identify optimized energy-supply patterns that support improvement of air quality and hedge against violation of soft constraints. Results disclose that largely developing EVs can help facilitate the city's energy system in an environmentally effective way. However, compared to the rapid growth of transportation, the EVs' contribution to improving the city's air quality is limited. It is desired that, to achieve an environmentally sustainable MES, more concerns should be focused on the integration of increasing renewable energy resources, stimulating EVs as well as improving energy transmission, transport and storage. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Study on Sampling Interval in Spectral Domain Prony's Method

    Institute of Scientific and Technical Information of China (English)

    高立新; 龚主前; 李元新

    2014-01-01

    The improved spectral domain Prony's method for calculating S-parameters of microstrip circuits is investigated. The mode-port and wave-port excitation techniques in the finite-difference time-domain (FDTD) method are analyzed and compared, and a sampling interval selection criterion is proposed. Several practical engineering examples are provided to demonstrate the performance of the improved spectral domain Prony's method. Numerical results show that the phase constant and S-parameters can still be accurately calculated under very small sampling interval conditions with the improved spectral domain Prony's method.
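
    A minimal sketch of the classic Prony step underlying such methods: fit complex exponentials to spatially sampled data and recover propagation constants. The imaginary part of ln(z) wraps at ±π, which is one reason the choice of sampling interval matters; the two-mode toy data are an assumption, and this is not the authors' improved algorithm:

        import numpy as np

        def prony(x, p, dz):
            """Fit x[k] ~ sum_i a_i * z_i**k with p exponentials; return
            gamma_i = ln(z_i)/dz for spatial sampling interval dz."""
            N = len(x)
            # linear prediction x[k] = -c1 x[k-1] - ... - cp x[k-p]
            A = np.column_stack([x[p - 1 - j: N - 1 - j] for j in range(p)])
            c, *_ = np.linalg.lstsq(A, -x[p:], rcond=None)
            z = np.roots(np.concatenate(([1.0], c)))
            V = np.vander(z, N, increasing=True).T      # columns z_i**k
            a, *_ = np.linalg.lstsq(V, x, rcond=None)
            return np.log(z) / dz, a

        dz, k = 0.1, np.arange(40)
        x = (1.0 * np.exp((-0.2 + 8j) * dz * k)
             + 0.5 * np.exp((-0.1 + 3j) * dz * k))
        gammas, amps = prony(x, p=2, dz=dz)
        print(np.sort_complex(gammas))   # ~ -0.2+8j and -0.1+3j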

  16. Mathematical methods in physics distributions, Hilbert space operators, variational methods, and applications in quantum physics

    CERN Document Server

    Blanchard, Philippe

    2015-01-01

    The second edition of this textbook presents the basic mathematical knowledge and skills that are needed for courses on modern theoretical physics, such as those on quantum mechanics, classical and quantum field theory, and related areas.  The authors stress that learning mathematical physics is not a passive process and include numerous detailed proofs, examples, and over 200 exercises, as well as hints linking mathematical concepts and results to the relevant physical concepts and theories.  All of the material from the first edition has been updated, and five new chapters have been added on such topics as distributions, Hilbert space operators, and variational methods.   The text is divided into three main parts. Part I is a brief introduction to distribution theory, in which elements from the theories of ultradistributions and hyperfunctions are considered in addition to some deeper results for Schwartz distributions, thus providing a comprehensive introduction to the theory of generalized functions. P...

  17. Assessing common classification methods for the identification of abnormal repolarization using indicators of T-wave morphology and QT interval

    DEFF Research Database (Denmark)

    Shakibfar, Saeed; Graff, Claus; Ehlers, Lars Holger;

    2012-01-01

    Various parameters based on QTc and T-wave morphology have been shown to be useful discriminators for drug induced I(Kr)-blocking. Using different classification methods this study compares the potential of these two features for identifying abnormal repolarization on the ECG. A group of healthy volunteers and LQT2 carriers were used to train classification algorithms using measures of T-wave morphology and QTc. The ability to correctly classify a third group of test subjects before and after receiving d,l-sotalol was evaluated using classification rules derived from training. As a single...

  18. Comparison of two methods of estimating reader variability in QT interval measurements in thorough QT/QTc studies.

    Science.gov (United States)

    Salvi, Vaibhav; Karnad, Dilip R; Kerkar, Vaibhav; Panicker, Gopi Krishna; Natekar, Mili; Kothari, Snehal

    2014-03-01

    Two methods of estimating reader variability (RV) in QT measurements between 12 readers were compared. Using data from 500 electrocardiograms (ECGs) analyzed twice by 12 readers, we bootstrapped 1000 datasets each for both methods. In grouped analysis design (GAD), the same 40 ECGs were read twice by all readers. In pairwise analysis design (PAD), 40 ECGs analyzed by each reader in a clinical trial were reanalyzed by the same reader (intra-RV) and also by another reader (inter-RV); thus, variability between each pair of readers was estimated using different ECGs. Inter-RV (mean [95% CI]) between pairs of readers by GAD and PAD was 3.9 ms (2.1-5.5 ms) and 4.1 ms (2.6-5.4 ms), respectively, using ANOVA, 0 ms (-0.0 to 0.4 ms), and 0 ms (-0.7 to 0.6 ms), respectively, by actual difference between readers and 7.7 ms (6.2-9.8 ms) and 7.7 ms (6.6-9.1 ms), respectively, by absolute difference between readers. Intra-RV too was comparable. RV estimates by the grouped- and pairwise analysis designs are comparable. © 2014 Wiley Periodicals, Inc.

  19. The Interval Decision Making Method Based on Intuitionistic Fuzzy Sets

    Institute of Scientific and Technical Information of China (English)

    董明娟; 李俊宏

    2014-01-01

    For fuzzy multiple attribute decision making problems in which the attribute values take the form of intuitionistic fuzzy sets and the attribute weights are known, an interval decision making method based on intuitionistic fuzzy sets is put forward, built on the intuitionistic fuzzy arithmetic weighted averaging operator. The interval decision making function introduces an attitude index k, which reflects changes in the decision maker's attitude; as k varies from 0 to 1, the changes of decision information over the whole interval are taken into account. Compared with the score function and the distance-based TOPSIS closeness degree, its advantage is that the former point-valued judgment is extended to judgment over the whole interval, which avoids the loss of decision information and makes decisions more accurate and reasonable. Finally, a practical example shows the correctness, effectiveness and rationality of the proposed method, which has a certain reference value.

  20. Application of non-parametric bootstrap methods to estimate confidence intervals for QTL location in a beef cattle QTL experimental population.

    Science.gov (United States)

    Jongjoo, Kim; Davis, Scott K; Taylor, Jeremy F

    2002-06-01

    Empirical confidence intervals (CIs) for the estimated quantitative trait locus (QTL) location from selective and non-selective non-parametric bootstrap resampling methods were compared for a genome scan involving an Angus x Brahman reciprocal fullsib backcross population. Genetic maps, based on 357 microsatellite markers, were constructed for 29 chromosomes using CRI-MAP V2.4. Twelve growth, carcass composition and beef quality traits (n = 527-602) were analysed to detect QTLs utilizing (composite) interval mapping approaches. CIs were investigated for 28 likelihood ratio test statistic (LRT) profiles for the one-QTL-per-chromosome model. The CIs from the non-selective bootstrap method were largest (87.7 cM average, or 79.2% coverage of test chromosomes). The Selective II procedure produced the smallest CI size (42.3 cM average). However, CI sizes from the Selective II procedure were more variable than those produced by the two-LOD drop method. CI ranges from the Selective II procedure were also asymmetrical (relative to the most likely QTL position) due to the bias caused by the tendency for the estimated QTL position to be at a marker position in the bootstrap samples and due to monotonicity and asymmetry of the LRT curve in the original sample.

  1. Confidence Intervals for Standardized Effect Sizes: Theory, Application, and Implementation

    Directory of Open Access Journals (Sweden)

    Ken Kelley

    2007-02-01

    Full Text Available The behavioral, educational, and social sciences are undergoing a paradigmatic shift in methodology, from disciplines that focus on the dichotomous outcome of null hypothesis significance tests to disciplines that report and interpret effect sizes and their corresponding confidence intervals. Due to the arbitrariness of many measurement instruments used in the behavioral, educational, and social sciences, some of the most widely reported effect sizes are standardized. Although forming confidence intervals for standardized effect sizes can be very beneficial, such confidence interval procedures are generally difficult to implement because they depend on noncentral t, F, and chi-square distributions. At present, no mainstream statistical package provides exact confidence intervals for standardized effects without the use of specialized programming scripts. Methods for the Behavioral, Educational, and Social Sciences (MBESS) is an R package that has routines for calculating confidence intervals for noncentral t, F, and chi-square distributions, which are then used in the calculation of exact confidence intervals for standardized effect sizes by using the confidence interval transformation and inversion principles. The present article discusses the way in which confidence intervals are formed for standardized effect sizes and illustrates how such confidence intervals can be easily formed using MBESS in R.
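
    The same inversion principle can be sketched in Python with SciPy's noncentral t distribution; this is not the MBESS code, and the bracketing bounds of ±50 on the noncentrality parameter are an implementation assumption:

        import numpy as np
        from scipy.optimize import brentq
        from scipy.stats import nct

        def ci_cohens_d(d, n1, n2, conf=0.95):
            """Exact CI for a standardized mean difference by inverting
            the noncentral t distribution."""
            df, k = n1 + n2 - 2, np.sqrt(n1 * n2 / (n1 + n2))
            t, alpha = d * k, 1 - conf
            # noncentrality parameters whose tail areas equal alpha/2
            lo = brentq(lambda nc: nct.sf(t, df, nc) - alpha / 2, -50, t)
            hi = brentq(lambda nc: nct.cdf(t, df, nc) - alpha / 2, t, 50)
            return lo / k, hi / k

        print(ci_cohens_d(0.5, 20, 20))   # approx (-0.13, 1.13)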

  2. Fitting a distribution to censored contamination data using Markov Chain Monte Carlo methods and samples selected with unequal probabilities.

    Science.gov (United States)

    Williams, Michael S; Ebel, Eric D

    2014-11-18

    The fitting of statistical distributions to chemical and microbial contamination data is a common application in risk assessment. These distributions are used to make inferences regarding even the most pedestrian of statistics, such as the population mean. The reason for the heavy reliance on a fitted distribution is the presence of left-, right-, and interval-censored observations in the data sets, with censored observations being the result of nondetects in an assay, the use of screening tests, and other practical limitations. Considerable effort has been expended to develop statistical distributions and fitting techniques for a wide variety of applications. Of the various fitting methods, Markov Chain Monte Carlo methods are common. An underlying assumption for many of the proposed Markov Chain Monte Carlo methods is that the data represent independent and identically distributed (iid) observations from an assumed distribution. This condition is satisfied when samples are collected using a simple random sampling design. Unfortunately, samples of food commodities are generally not collected in accordance with a strict probability design. Nevertheless, pseudosystematic sampling efforts (e.g., collection of a sample hourly or weekly) from a single location in the farm-to-table continuum are reasonable approximations of a simple random sample. The assumption that the data represent an iid sample from a single distribution is more difficult to defend if samples are collected at multiple locations in the farm-to-table continuum or risk-based sampling methods are employed to preferentially select samples that are more likely to be contaminated. This paper develops a weighted bootstrap estimation framework that is appropriate for fitting a distribution to microbiological samples that are collected with unequal probabilities of selection. An example based on microbial data, derived by the Most Probable Number technique, demonstrates the method and highlights the
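
    A toy sketch of a weighted bootstrap with inverse-selection-probability weights; the paper refits a censored distribution inside each replicate, whereas this sketch only re-estimates a mean, and all data are synthetic:

        import numpy as np

        rng = np.random.default_rng(42)
        x = rng.normal(1.0, 0.8, size=60)     # log-scale contamination values
        pi = rng.uniform(0.2, 1.0, size=60)   # unequal selection probabilities
        w = (1.0 / pi) / (1.0 / pi).sum()     # normalized bootstrap weights

        stats = []
        for _ in range(2000):
            idx = rng.choice(len(x), size=len(x), replace=True, p=w)
            stats.append(x[idx].mean())       # refit the distribution here
        lo, hi = np.percentile(stats, [2.5, 97.5])
        print(f"weighted-bootstrap 95% interval: ({lo:.2f}, {hi:.2f})")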

  3. An experiment with content distribution methods in touchscreen mobile devices.

    Science.gov (United States)

    Garcia-Lopez, Eva; Garcia-Cabot, Antonio; de-Marcos, Luis

    2015-09-01

    This paper compares the usability of three different content distribution methods (scrolling, paging and internal links) in touchscreen mobile devices as means to display web documents. Usability is operationalized in terms of effectiveness, efficiency and user satisfaction. These dimensions are then measured in an experiment (N = 23) in which users are required to find words in regular-length web documents. Results suggest that scrolling is statistically better in terms of efficiency and user satisfaction. It is also found to be more effective but results were not significant. Our findings are also compared with existing literature to propose the following guideline: "try to use vertical scrolling in web pages for mobile devices instead of paging or internal links, except when the content is too large, then paging is recommended". With an ever increasing number of touchscreen web-enabled mobile devices, this new guideline can be relevant for content developers targeting the mobile web as well as institutions trying to improve the usability of their content for mobile platforms.

  4. 车辆跟驰安全距离的区间分析方法%Interval Analysis Method for Safety Distance of Car-following

    Institute of Scientific and Technical Information of China (English)

    余朝蓬; 王营; 高峰

    2009-01-01

    In the car-following process, there are some uncertain parameters related to the vehicle brake system and the driver, such as the brake deceleration, the action time of the brake and the response time of the driver. Generally, the ranges of these uncertain parameters are easy to determine, but predicting their effect on the car-following process is one of the issues that existing car-following models handle poorly. This paper used interval numbers to describe the uncertain parameters in the car-following process. Based on the approximate computational method of the safety distance, interval analysis was employed to calculate the safety distance of vehicles under two typical conditions. The results show that the safety distance calculated by the interval analysis method is not a single value but an interval, i.e., a range of variation. Therefore, the interval analysis method can more realistically forecast the effects of the uncertain parameters on the safety distance in the car-following process.
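
    A minimal numeric sketch of the interval computation, with an illustrative stopping-distance formula and invented parameter ranges (the paper's exact safety-distance expression is not reproduced):

        # d = v*(t_r + t_b) + v**2/(2*a), with reaction time t_r, brake
        # actuation time t_b and deceleration a given as intervals
        v = 20.0              # following-vehicle speed, m/s
        t_r = (0.6, 1.2)      # driver reaction time, s
        t_b = (0.1, 0.3)      # brake actuation time, s
        a = (5.5, 7.5)        # achievable deceleration, m/s^2

        # d is increasing in t_r, t_b and decreasing in a, so the interval
        # endpoints follow directly from the parameter endpoints
        d_lo = v * (t_r[0] + t_b[0]) + v**2 / (2 * a[1])
        d_hi = v * (t_r[1] + t_b[1]) + v**2 / (2 * a[0])
        print(f"safety distance interval: [{d_lo:.1f}, {d_hi:.1f}] m")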

  5. Applying sequential Monte Carlo methods into a distributed hydrologic model: lagged particle filtering approach with regularization

    Directory of Open Access Journals (Sweden)

    S. J. Noh

    2011-10-01

    Full Text Available Data assimilation techniques have received growing attention due to their capability to improve prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", are a Bayesian learning process that has the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to consider different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. The regularization with an additional move step based on the Markov chain Monte Carlo (MCMC) methods is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, water and energy transfer processes (WEP), is implemented for the sequential data assimilation through the updating of state variables. The lagged regularized particle filter (LRPF) and the sequential importance resampling (SIR) particle filter are implemented for hindcasting of streamflow at the Katsura catchment, Japan. Control state variables for filtering are soil moisture content and overland flow. Streamflow measurements are used for data assimilation. LRPF shows consistent forecasts regardless of the process noise assumption, while SIR has different values of optimal process noise and shows sensitive variation of confidence intervals, depending on the process noise. Improvement of LRPF forecasts compared to SIR is particularly found for rapidly varied high flows due to preservation of sample diversity from the kernel, even if particle impoverishment takes place.

  6. Applications of interval computations

    CERN Document Server

    Kreinovich, Vladik

    1996-01-01

    Primary Audience for the Book • Specialists in numerical computations who are interested in algorithms with automatic result verification. • Engineers, scientists, and practitioners who desire results with automatic verification and who would therefore benefit from the experience of successful applications. • Students in applied mathematics and computer science who want to learn these methods. Goal Of the Book This book contains surveys of applications of interval computations, i.e., applications of numerical methods with automatic result verification, that were presented at an international workshop on the subject in El Paso, Texas, February 23-25, 1995. The purpose of this book is to disseminate detailed and surveyed information about existing and potential applications of this new growing field. Brief Description of the Papers At the most fundamental level, interval arithmetic operations work with sets: The result of a single arithmetic operation is the set of all possible results as the o...

  7. Fitting statistical distributions the generalized lambda distribution and generalized bootstrap methods

    CERN Document Server

    Karian, Zaven A

    2000-01-01

    Throughout the physical and social sciences, researchers face the challenge of fitting statistical distributions to their data. Although the study of statistical modelling has made great strides in recent years, the number and variety of distributions to choose from, all with their own formulas, tables, diagrams, and general properties, continue to create problems. For a specific application, which of the dozens of distributions should one use? What if none of them fit well? Fitting Statistical Distributions helps answer those questions. Focusing on techniques used successfully across many fields, the authors present all of the relevant results related to the Generalized Lambda Distribution (GLD), the Generalized Bootstrap (GB), and Monte Carlo simulation (MC). They provide the tables, algorithms, and computer programs needed for fitting continuous probability distributions to data in a wide variety of circumstances, covering bivariate as well as univariate distributions, and including situations where moments do...

  8. Interval ridge regression (iRR) as a fast and robust method for quantitative prediction and variable selection applied to edible oil adulteration.

    Science.gov (United States)

    Jović, Ozren; Smrečki, Neven; Popović, Zora

    2016-04-01

    A novel quantitative prediction and variable selection method called interval ridge regression (iRR) is studied in this work. The method is performed on six data sets of FTIR, two data sets of UV-vis and one data set of DSC. The obtained results show that models built with ridge regression on optimal variables selected with iRR significantly outperform models built with ridge regression on all variables in both calibration (6 out of 9 cases) and validation (2 out of 9 cases). In this study, iRR is also compared with interval partial least squares regression (iPLS); iRR outperformed iPLS in validation (insignificantly in 6 out of 9 cases and significantly in 1 out of 9 cases). Hempseed (H) oil, a well-known health-beneficial nutrient, is studied in this work by mixing it with cheap and widely used oils such as soybean (So) oil, rapeseed (R) oil and sunflower (Su) oil. Binary mixture sets of hempseed oil with these three oils (HSo, HR and HSu) and a ternary mixture set of H oil, R oil and Su oil (HRSu) were considered. The obtained accuracy indicates that using iRR on FTIR and UV-vis data, each particular oil can be very successfully quantified (in all 8 cases R(2)>0.99).
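
    A hedged sketch of the interval idea: fit ridge regression on each contiguous variable block and rank blocks by cross-validated error. The published iRR selection rule may differ in detail, and the spectra here are synthetic:

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import cross_val_score

        def interval_ridge_select(X, y, n_intervals=10, alpha=1.0):
            """Rank contiguous variable blocks by cross-validated RMSE
            of a ridge model fitted on that block alone."""
            blocks = np.array_split(np.arange(X.shape[1]), n_intervals)
            rmse = [-cross_val_score(Ridge(alpha=alpha), X[:, b], y, cv=5,
                                     scoring="neg_root_mean_squared_error"
                                     ).mean()
                    for b in blocks]
            return blocks[int(np.argmin(rmse))], rmse

        # toy spectra: only variables 40-59 carry signal
        rng = np.random.default_rng(0)
        X = rng.normal(size=(80, 100))
        y = X[:, 40:60].sum(axis=1) + 0.1 * rng.normal(size=80)
        best_block, scores = interval_ridge_select(X, y)
        print(best_block)   # indices within the informative region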

  9. In Defense of Intervals

    Science.gov (United States)

    1988-01-01

    such as an original distribution of probability over the sentences of a language prior to any evidence (a la Carnap [1950]), or a prior distribution... Bibliography: Carnap, Rudolf: The Logical Foundations of Probability, University of Chicago Press, Chicago, 1950. Fisher, Ronald A.: Statistical Methods and...

  10. Cut off values of laser fluorescence for different storage methods at different time intervals in comparison to frozen condition: A 1 year in vitro study

    Science.gov (United States)

    Kaul, Rudra; Kaul, Vibhuti; Farooq, Riyaz; Wazir, Nikhil Dev; Khateeb, Shafayat Ullah; Malik, Altaf H; Masoodi, Ajaz Amin

    2014-01-01

    Aims: The aim of the following study is to evaluate the change in laser fluorescence (LF) values for extracted teeth stored in different solutions over a 1 year period, to give cut-off values for different storage media at different time intervals to get them at par with the in vivo conditions and to see which medium gives best results with the least change in LF values, while enhancing the validity of DIAGNOdent in research. Materials and Methods: Ninety extracted teeth, selected from a pool of frozen teeth, were divided into nine groups of 10 each. Specimens in Groups 1-8 were stored in 1% chloramine, 10% formalin, 10% buffered formalin, 0.02% thymol, 0.12% chlorhexidine, 3% sodium hypochlorite, a commercially available saliva substitute-Wet Mouth (ICPA Pharmaceuticals) and normal saline respectively at 4°C. The last group was stored under frozen condition at −20°C without contact with any storage solution. DIAGNOdent was used to measure the change in LF values at day 30, 45, 60, 160 and 365. Statistical Analysis Used: The mean change in LF values in different storage media at different time intervals was compared using two-way ANOVA. Results: At the end of 1 year, a significant decrease in fluorescence (P < 0.05) was observed in Groups 1-8. The maximum drop in LF values occurred between day 1 and 30. Group 9 (frozen specimens) did not significantly change their fluorescence response. Conclusions: An inevitable change in LF takes place due to various storage media commonly used in dental research at different time intervals. The values obtained from our study can remove the bias caused by the storage media, and the values of LF thus obtained can hence be conveniently extrapolated to the in vivo condition. PMID:24778506

  11. Cut off values of laser fluorescence for different storage methods at different time intervals in comparison to frozen condition: A 1 year in vitro study

    Directory of Open Access Journals (Sweden)

    Rudra Kaul

    2014-01-01

    Full Text Available Aims: The aim of the following study is to evaluate the change in laser fluorescence (LF) values for extracted teeth stored in different solutions over a 1 year period, to give cut-off values for different storage media at different time intervals to get them at par with the in vivo conditions and to see which medium gives best results with the least change in LF values, while enhancing the validity of DIAGNOdent in research. Materials and Methods: Ninety extracted teeth, selected from a pool of frozen teeth, were divided into nine groups of 10 each. Specimens in Groups 1-8 were stored in 1% chloramine, 10% formalin, 10% buffered formalin, 0.02% thymol, 0.12% chlorhexidine, 3% sodium hypochlorite, a commercially available saliva substitute-Wet Mouth (ICPA Pharmaceuticals) and normal saline respectively at 4°C. The last group was stored under frozen condition at −20°C without contact with any storage solution. DIAGNOdent was used to measure the change in LF values at day 30, 45, 60, 160 and 365. Statistical Analysis Used: The mean change in LF values in different storage media at different time intervals was compared using two-way ANOVA. Results: At the end of 1 year, a significant decrease in fluorescence (P < 0.05) was observed in Groups 1-8. The maximum drop in LF values occurred between day 1 and 30. Group 9 (frozen specimens) did not significantly change their fluorescence response. Conclusions: An inevitable change in LF takes place due to various storage media commonly used in dental research at different time intervals. The values obtained from our study can remove the bias caused by the storage media, and the values of LF thus obtained can hence be conveniently extrapolated to the in vivo condition.

  12. GML data index method based on element interval coding

    Institute of Scientific and Technical Information of China (English)

    於时才; 郭润牛; 吴衍智

    2013-01-01

    To meet the needs of GML data query, a GML indexing method based on extended element interval coding is proposed, building on an analysis of XML document coding and spatial indexing techniques. First, the extended interval coding method encodes the elements, attributes, text and geometric objects in a GML document. Then, based on the element coding algorithm, non-spatial nodes, spatial nodes and element nodes are separated from the GML document tree to generate an element coding sequence. On this basis, according to node type, a B+ tree index is built for attribute and text nodes to support value queries, and an R tree index is built for geometry nodes to support spatial analysis; a query optimization algorithm avoids unnecessary traversal of nodes during query processing, further improving query efficiency. Experimental results show that the indexing method based on element interval coding is feasible and efficient.
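
    A toy sketch of interval (region) coding for a document tree: one depth-first pass assigns every node a [start, end] pair, after which ancestor-descendant tests reduce to interval containment. The paper's extended coding, which also distinguishes attribute, text and geometry nodes and feeds the B+/R tree indexes, is richer than this:

        def assign_intervals(tree, node, counter=None, codes=None):
            """Assign [start, end] codes by one depth-first traversal."""
            if counter is None:
                counter, codes = [0], {}
            counter[0] += 1
            start = counter[0]
            for child in tree.get(node, []):
                assign_intervals(tree, child, counter, codes)
            counter[0] += 1
            codes[node] = (start, counter[0])
            return codes

        tree = {"FeatureCollection": ["member1", "member2"],
                "member1": ["geometry1"], "member2": []}
        codes = assign_intervals(tree, "FeatureCollection")
        anc, desc = codes["FeatureCollection"], codes["geometry1"]
        print(anc[0] < desc[0] and desc[1] < anc[1])   # True: ancestry test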

  13. Calculation method of reflectance distributions for computer-generated holograms using the finite-difference time-domain method.

    Science.gov (United States)

    Ichikawa, Tsubasa; Sakamoto, Yuji; Subagyo, Agus; Sueoka, Kazuhisa

    2011-12-01

    The research on reflectance distributions in computer-generated holograms (CGHs) is particularly sparse, and the textures of materials are not expressed. Thus, we propose a method for calculating reflectance distributions in CGHs that uses the finite-difference time-domain method. In this method, reflected light from an uneven surface made on a computer is analyzed by finite-difference time-domain simulation, and the reflected light distribution is applied to the CGH as an object light. We report the relations between the surface roughness of the objects and the reflectance distributions, and show that the reflectance distributions are given to CGHs by imaging simulation.

  14. Method of preparing mercury with an arbitrary isotopic distribution

    Science.gov (United States)

    Grossman, M.W.; George, W.A.

    1986-12-16

    This invention provides for a process for preparing mercury with a predetermined, arbitrary, isotopic distribution. In one embodiment, different isotopic types of Hg₂Cl₂, corresponding to the predetermined isotopic distribution of Hg desired, are placed in an electrolyte solution of HCl and H₂O. The resulting mercurous ions are then electrolytically plated onto a cathode wire, producing mercury containing the predetermined isotopic distribution. In a similar fashion, Hg with a predetermined isotopic distribution is obtained from different isotopic types of HgO. In this embodiment, the HgO is dissolved in an electrolytic solution of glacial acetic acid and H₂O. The isotope-specific Hg is then electrolytically plated onto a cathode and then recovered. 1 fig.

  15. The Comparison of Two Methods of Exercise (intense interval training and concurrent resistance- endurance training on Fasting Sugar, Insulin and Insulin Resistance in Women with Mellitus Diabetes

    Directory of Open Access Journals (Sweden)

    F Bazyar

    2016-05-01

    Full Text Available Background & aim: Exercise is an important component of health and an integral approach to the management of diabetes mellitus. The purpose of this study was to compare the effects of intense interval training and concurrent resistance- endurance training on fasting sugar, insulin and insulin resistance in women with mellitus diabetes.   Methods: Fifty-two overweight female diabetic type 2 patients (aged 45-60 years old with fasting blood glucose≥ 126 mg/dl were selected to participate in the present study. Participants were assigned to intense interval training group (N=17, concurrent resistance- endurance training group (N=17 and control group (N=18. The exercises incorporated 10 weeks of concurrent resistance- endurance training and intense interval training. Fasting blood sugar, serum insulin concentrations levels were measured. Concurrent training group trained eight weeks, three times a week of endurance training at 60% of maximum heart rate (MHR and two resistance training sessions per week with 70% of one repetition maximum (1-RM. Intense interval training group trained for eight weeks, three sessions per week for 4 to 10 repeats Wingate test on the ergometer 30s performed with maximum effort. The control group did no systematic exercise. At the end of experiment 42 subjects were succeed and completed the study period, and 10 subjects were removed due to illness and absence in the exercise sessions. Fasting blood sugar and insulin levels 24 hours before and 48 hours after the last training session was measured.   Results: The findings indicated that in periodic fasting, the blood sugar in intensive training group had a marked decrease (p= 0.000 however, the fasting blood sugar of exercise and power stamina groups reduced significantly (p=0.062. The results showed no significant difference between the groups (171/0 p =0.171. Fasting insulin (p <0.001 and insulin resistance (0001/0 = p=0.001 in periodic intensive training group were

  16. Automatic Error Analysis Using Intervals

    Science.gov (United States)

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
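
    A minimal interval type showing the mechanics (INTLAB, which the article uses, automates this in MATLAB; this Python stand-in is only a sketch, and because R1 appears twice in the formula it over-encloses slightly, the classic dependency effect):

        class Interval:
            def __init__(self, lo, hi):
                self.lo, self.hi = lo, hi
            def __add__(self, o):
                return Interval(self.lo + o.lo, self.hi + o.hi)
            def __mul__(self, o):
                p = [self.lo * o.lo, self.lo * o.hi,
                     self.hi * o.lo, self.hi * o.hi]
                return Interval(min(p), max(p))
            def __truediv__(self, o):
                assert o.lo > 0 or o.hi < 0, "divisor must exclude 0"
                return self * Interval(1 / o.hi, 1 / o.lo)
            def __repr__(self):
                return f"[{self.lo:.6g}, {self.hi:.6g}]"

        # parallel resistance R1*R2/(R1+R2) with ~1% component tolerances
        R1, R2 = Interval(99.0, 101.0), Interval(198.0, 202.0)
        print(R1 * R2 / (R1 + R2))   # encloses every possible true value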

  17. Minimax confidence intervals in geomagnetism

    Science.gov (United States)

    Stark, Philip B.

    1992-01-01

    The present paper uses theory of Donoho (1989) to find lower bounds on the lengths of optimally short fixed-length confidence intervals (minimax confidence intervals) for Gauss coefficients of the field of degree 1-12 using the heat flow constraint. The bounds on optimal minimax intervals are about 40 percent shorter than Backus' intervals: no procedure for producing fixed-length confidence intervals, linear or nonlinear, can give intervals shorter than about 60 percent the length of Backus' in this problem. While both methods rigorously account for the fact that core field models are infinite-dimensional, the application of the techniques to the geomagnetic problem involves approximations and counterfactual assumptions about the data errors, and so these results are likely to be extremely optimistic estimates of the actual uncertainty in Gauss coefficients.

  18. Resampling methods for particle filtering: identical distribution, a new method, and comparable study

    Institute of Scientific and Technical Information of China (English)

    Tian-cheng LI; Gabriel VILLARRUBIA; Shu-dong SUN; Juan M CORCHADO; Javier BAJO

    2015-01-01

    Resampling is a critical procedure that is of both theoretical and practical significance for efficient implementation of the particle filter. To gain an insight of the resampling process and the filter, this paper contributes in three further respects as a sequel to the tutorial (Li et al., 2015). First, identical distribution (ID) is established as a general principle for the resampling design, which requires the distribution of particles before and after resampling to be statistically identical. Three consistent metrics including the (symmetrical) Kullback-Leibler divergence, Kolmogorov-Smirnov statistic, and the sampling variance are introduced for assessment of the ID attribute of resampling, and a corresponding, qualitative ID analysis of representative resampling methods is given. Second, a novel resampling scheme that obtains the optimal ID attribute in the sense of minimum sampling variance is proposed. Third, more than a dozen typical resampling methods are compared via simulations in terms of sample size variation, sampling variance, computing speed, and estimation accuracy. These form a more comprehensive understanding of the algorithm, providing solid guidelines for either selection of existing resampling methods or new implementations.
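
    As one low-variance member of this family, a sketch of systematic resampling: a single uniform draw and stratified positions keep the resampled set identically distributed with the weighted one while minimizing randomness:

        import numpy as np

        def systematic_resample(weights, rng):
            """Return indices of particles surviving systematic resampling."""
            N = len(weights)
            positions = (rng.uniform() + np.arange(N)) / N
            return np.searchsorted(np.cumsum(weights), positions)

        rng = np.random.default_rng(3)
        w = rng.uniform(size=1000)
        w /= w.sum()
        idx = systematic_resample(w, rng)   # resampled particle indices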

  19. Method of ranking three-parameter interval grey numbers and its application in the interval DEA model

    Institute of Scientific and Technical Information of China (English)

    王洁方; 刘思峰

    2011-01-01

    This article proposes a definition of the relative superiority degree between a three-parameter interval grey number and a real number, and gives algebraic expressions for two typical types of such comparisons. Based on the relative superiority degree, ranking steps for three-parameter interval grey numbers are set up and applied to the data envelopment analysis (DEA) model whose inputs and outputs are three-parameter interval grey numbers, and a numerical example is given to illustrate the method's effectiveness.

  20. Uncertainty analysis of a model of an energy distribution system with solar panel generation by Time-Varying Data Analysis, Monte Carlo Simulation and Fuzzy Interval Analysis

    OpenAIRE

    Ferrario, Elisa; Pini, Alessia

    2013-01-01

    The uncertainties in the model of an energy distribution system made of a solar panel, a storage energy system and loads (power demanded by the end-users) are investigated, treating the epistemic variables as possibilistic and the aleatory ones as probabilistic. In particular, time-varying probabilistic distributions of the solar irradiation and the power demanded by the end-users are inferred from historical data. Then a computational framework for the joint propagatio...

  1. Distribution of cotton roots in the upper soil layers at three different time intervals

    Directory of Open Access Journals (Sweden)

    A. C. Magalhães

    1962-01-01

    Full Text Available The distribution of the cotton plant root system in the upper 20 cm layer of soil was studied at three different times (42, 61 and 81-day-old plants). These studies were carried out in a cotton field of the variety IAC 12-57/566 planted on a "terra-roxa-misturada" type of soil. The spacing was 80 cm between rows and 15 cm between plants in the row. The method employed consisted in excavating a ditch at a right angle to the plant rows, including four plants, and then removing the soil as blocks. Five layers of soil blocks were taken: the first and second were 3 cm thick; the third, 4 cm thick; and the fourth and fifth, 5 cm thick. After washing off the soil of each block, the roots in it were air dried and weighed. A representation of the root distribution as encountered is given in figure 2. For the cotton field studied, most of the roots were found between 3 and 15 cm of depth and up to a distance of approximately 25 cm laterally from the plants. Root growth was most intense from the 42nd to the 61st day after germination; careless use of agricultural implements during this critical period may therefore cause serious damage to the crop, especially if deep cultivation is employed.

  2. Comparing interval estimates for small sample ordinal CFA models.

    Science.gov (United States)

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively biased than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.

  3. Examining the spatial distribution of flower thrips in southern highbush blueberries by utilizing geostatistical methods.

    Science.gov (United States)

    Rhodes, Elena M; Liburd, Oscar E; Grunwald, Sabine

    2011-08-01

    Flower thrips (Frankliniella spp.) are one of the key pests of southern highbush blueberries (Vaccinium corymbosum L. x V. darrowii Camp), a high-value crop in Florida. Thrips' feeding and oviposition injury to flowers can result in fruit scarring that renders the fruit unmarketable. Flower thrips often form areas of high population, termed "hot spots", in blueberry plantings. The objective of this study was to model thrips spatial distribution patterns with geostatistical techniques. Semivariogram models were used to determine optimum trap spacing and two commonly used interpolation methods, inverse distance weighting (IDW) and ordinary kriging (OK), were compared for their ability to model thrips spatial patterns. The experimental design consisted of a grid of 100 white sticky traps spaced at 15.24-m and 7.61-m intervals in 2008 and 2009, respectively. Thirty additional traps were placed randomly throughout the sampling area to collect information on distances shorter than the grid spacing. The semivariogram analysis indicated that, in most cases, spacing traps at least 28.8 m apart would result in spatially independent samples. Also, the 7.61-m grid spacing captured more of the thrips spatial variability than the 15.24-m grid spacing. IDW and OK produced maps with similar accuracy in both years, which indicates that thrips spatial distribution patterns, including "hot spots," can be modeled using either interpolation method. Future studies can use this information to determine if the formation of "hot spots" can be predicted using flower density, temperature, and other environmental factors. If so, this development would allow growers to spot treat the "hot spots" rather than their entire field.
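
    A compact sketch of IDW, one of the two interpolators compared; the power parameter and the toy "hot spot" data are assumptions:

        import numpy as np

        def idw(xy_known, z_known, xy_query, power=2.0):
            """Inverse distance weighting: each prediction is a distance-
            weighted average of the observed trap counts."""
            d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :],
                               axis=2)
            d = np.maximum(d, 1e-12)        # guard exact coincidences
            w = 1.0 / d ** power
            return (w @ z_known) / w.sum(axis=1)

        # toy 10 x 10 trap grid at 7.61 m spacing with one "hot spot"
        g = np.arange(10) * 7.61
        traps = np.array([(x, y) for x in g for y in g])
        counts = 50 * np.exp(-((traps - 30.0) ** 2).sum(axis=1) / 200.0)
        print(idw(traps, counts, np.array([[25.0, 25.0], [60.0, 10.0]])))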

  4. Scalable Optimization Methods for Distribution Networks with High PV Integration

    Energy Technology Data Exchange (ETDEWEB)

    Guggilam, Swaroop S.; Dall'Anese, Emiliano; Chen, Yu Christine; Dhople, Sairaj V.; Giannakis, Georgios B.

    2016-07-01

    This paper proposes a suite of algorithms to determine the active- and reactive-power setpoints for photovoltaic (PV) inverters in distribution networks. The objective is to optimize the operation of the distribution feeder according to a variety of performance objectives and ensure voltage regulation. In general, these algorithms take the form of the widely studied ac optimal power flow (OPF) problem. For the envisioned application domain, nonlinear power-flow constraints render pertinent OPF problems nonconvex and computationally intensive for large systems. To address these concerns, we formulate a quadratically constrained quadratic program (QCQP) by leveraging a linear approximation of the algebraic power-flow equations. Furthermore, a simplification from QCQP to a linearly constrained quadratic program is provided under certain conditions. The merits of the proposed approach are demonstrated with simulation results that utilize realistic PV-generation and load-profile data for illustrative distribution-system test feeders.

  5. Planning and Optimization Methods for Active Distribution Systems

    DEFF Research Database (Denmark)

    Abbey, Chad; Baitch, Alex; Bak-Jensen, Birgitte

    This report assesses the various requirements to facilitate the transition towards active distribution systems (ADSs). Specifically, the report starts from a survey of requirements of planning methodologies and identifies a new framework and methodologies for short, medium and long term models for active... in the distribution business, since the exploitation of existing assets with Advanced Automation and Control may be a valuable alternative to network expansion or reinforcement. Information and communication technology (ICT) cannot be considered as a simple add-on of the power system, and simultaneous analysis (co...

  6. Probabilistic robust stabilization of fractional order systems with interval uncertainty.

    Science.gov (United States)

    Alagoz, Baris Baykant; Yeroglu, Celaleddin; Senol, Bilal; Ates, Abdullah

    2015-07-01

    This study investigates the effects of fractional order perturbation on the robust stability of linear time invariant systems with interval uncertainty. For this purpose, a probabilistic stability analysis method based on characteristic root region accommodation in the first Riemann sheet is developed for interval systems. The stability probability distribution is calculated with respect to the value of the fractional order. Thus, we can figure out the fractional order interval which makes the system robustly stable. Moreover, the dependence of robust stability on the fractional order perturbation is analyzed by calculating the order sensitivity of characteristic polynomials. This probabilistic approach is also used to develop a robust stabilization algorithm based on a parametric perturbation strategy. We present numerical examples demonstrating utilization of the stability probability distribution in robust stabilization problems of interval uncertain systems.

  7. Uncertainty Management of Dynamic Tariff Method for Congestion Management in Distribution Networks

    DEFF Research Database (Denmark)

    Huang, Shaojun; Wu, Qiuwei; Cheng, Lin

    2016-01-01

    The dynamic tariff (DT) method is designed for the distribution system operator (DSO) to alleviate congestions that might occur in a distribution network with high penetration of distributed energy resources (DERs). Uncertainty management is required for the decentralized DT method because the DT...

  8. Distributional Monte Carlo Methods for the Boltzmann Equation

    Science.gov (United States)

    2013-03-01

    become the first to possess non-Maxwellian distributions, and therefore become the only location where 112 collisions are required to be calculated... is equivalent to assuming that millions of actual particles all share the exact velocity vector. This assumption is non-physical in the sense that...

  9. DISTRIBUTED ELECTRICAL POWER PRODUCTION SYSTEM AND METHOD OF CONTROL THEREOF

    DEFF Research Database (Denmark)

    2010-01-01

    The present invention relates to a distributed electrical power production system wherein two or more electrical power units comprise respective sets of power supply attributes. Each set of power supply attributes is associated with a dynamic operating state of a particular electrical power unit....

  10. A robust fusion method for multiview distributed video coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Ascenso, Joao; Brites, Catarina;

    2014-01-01

    Distributed video coding (DVC) is a coding paradigm which exploits the redundancy of the source (video) at the decoder side, as opposed to predictive coding, where the encoder leverages the redundancy. To exploit the correlation between views, multiview predictive video codecs require the encoder...

  11. Synchronization Methods for Three Phase Distributed Power Generation Systems

    DEFF Research Database (Denmark)

    Timbus, Adrian Vasile; Teodorescu, Remus; Blaabjerg, Frede

    2005-01-01

    Nowadays, it is a general trend to increase the electricity production using Distributed Power Generation Systems (DPGS) based on renewable energy resources such as wind, sun or hydrogen. If these systems are not properly controlled, their connection to the utility network can generate problems o...

  12. A TRUST REGION METHOD FOR SOLVING DISTRIBUTED PARAMETER IDENTIFICATION PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    Yan-fei Wang; Ya-xiang Yuan

    2003-01-01

    This paper is concerned with the ill-posed problem of identifying a parameter in an elliptic equation, which appears in many applications in science and industry. Its solution is obtained by applying the trust region method to a nonlinear least squares error problem. The trust region method has long been a popular method for well-posed problems. This paper indicates that it is also suitable for ill-posed problems. A numerical experiment is given to compare the trust region method with the Tikhonov regularization method. It seems that the trust region method is more promising.

  13. Chaos on the interval

    CERN Document Server

    Ruette, Sylvie

    2017-01-01

    The aim of this book is to survey the relations between the various kinds of chaos and related notions for continuous interval maps from a topological point of view. The papers on this topic are numerous and widely scattered in the literature; some of them are little known, difficult to find, or originally published in Russian, Ukrainian, or Chinese. Dynamical systems given by the iteration of a continuous map on an interval have been broadly studied because they are simple but nevertheless exhibit complex behaviors. They also allow numerical simulations, which enabled the discovery of some chaotic phenomena. Moreover, the "most interesting" part of some higher-dimensional systems can be of lower dimension, which allows, in some cases, boiling it down to systems in dimension one. Some of the more recent developments such as distributional chaos, the relation between entropy and Li-Yorke chaos, sequence entropy, and maps with infinitely many branches are presented in book form for the first time. The author gi...

  14. Spectral Statistics of RR Intervals in ECG

    CERN Document Server

    Martinis, M; Knezevic, A; Crnugelj, J

    2003-01-01

    The statistical properties (fluctuations) of heartbeat intervals (RR intervals) in ECG are studied and compared with the predictions of Random Matrix Theory (RMT). It is found that heartbeat intervals only locally exhibit the fluctuation patterns (universality) predicted by the RMT. This finding shows that heartbeat dynamics is of the mixed type, where regular and irregular (chaotic) regimes coexist and the Berry-Robnik theory can be applied. It is also observed that the distribution of heartbeat intervals is well described by the one-parameter Brody distribution. The parameter $\beta$ of the Brody distribution is seen to be connected with the dynamical state of the heart.

  15. A Distributed Bio-Inspired Method for Multisite Grid Mapping

    Directory of Open Access Journals (Sweden)

    I. De Falco

    2010-01-01

    Full Text Available Computational grids assemble multisite and multiowner resources and represent the most promising solutions for processing distributed computationally intensive applications, each composed of a collection of communicating tasks. The execution of an application on a grid presumes three successive steps: the localization of the available resources together with their characteristics and status; the mapping, which selects the resources that, during the estimated running time, better support this execution; and, lastly, the scheduling of the tasks. These operations are very difficult both because the availability and workload of grid resources change dynamically and because, in many cases, multisite mapping must be adopted to exploit all the possible benefits. As the mapping problem in parallel systems, already known to be NP-complete, becomes even harder in distributed heterogeneous environments such as grids, evolutionary techniques can be adopted to find near-optimal solutions. In this paper an effective and efficient multisite mapping, based on a distributed Differential Evolution algorithm, is proposed. The aim is to minimize the time required to complete the execution of the application, selecting from among all the potential solutions the one which reduces the use of the grid resources. The proposed mapper is tested on different scenarios.

  16. Interval arithmetic in calculations

    Science.gov (United States)

    Bairbekova, Gaziza; Mazakov, Talgat; Djomartova, Sholpan; Nugmanova, Salima

    2016-10-01

    Interval arithmetic is the mathematical structure which, for real intervals, defines operations analogous to ordinary arithmetic ones. This field of mathematics is also called interval analysis or interval calculations. The given math model is convenient for investigating various applied objects: quantities whose approximate values are known; quantities obtained during calculations whose values are not exact because of rounding errors; and random quantities. As a whole, the idea of interval calculations is the use of intervals as basic data objects. In this paper, we considered the definition of interval mathematics, investigated its properties, proved a theorem, and showed the efficiency of the new interval arithmetic. Besides, we briefly reviewed the works devoted to interval analysis and observed basic tendencies in the development of interval analysis and interval calculations.
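
    A minimal illustration of the basic data object described above, an interval with endpoint arithmetic, might look as follows in Python. This sketch deliberately ignores outward rounding, which a rigorous interval library would apply so that enclosures remain guaranteed under floating-point error.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Subtraction swaps the other interval's endpoints.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # The product range is bounded by the four endpoint products.
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

    def __truediv__(self, other):
        if other.lo <= 0 <= other.hi:
            raise ZeroDivisionError("divisor interval contains zero")
        return self * Interval(1 / other.hi, 1 / other.lo)

x, y = Interval(1.0, 2.0), Interval(0.5, 1.5)
print(x + y, x * y, x / Interval(2.0, 4.0))
```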

  17. Interval Arithmetic for Nonlinear Problem Solving

    OpenAIRE

    2013-01-01

    Implementation of interval arithmetic in complex problems has been hampered by the tedious programming exercise needed to develop a particular implementation. In order to improve productivity, the use of interval mathematics is demonstrated using the computing platform INTLAB that allows for the development of interval-arithmetic-based programs more efficiently than with previous interval-arithmetic libraries. An interval-Newton Generalized-Bisection (IN/GB) method is developed in this platfo...
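
    INTLAB itself is a MATLAB toolbox, but the interval-Newton/generalized-bisection (IN/GB) idea the abstract refers to can be sketched independently. The one-dimensional Python sketch below is not the paper's implementation: it assumes the caller supplies bounds on f' over a box, contracts each box with the interval Newton operator, and falls back to bisection whenever the operator is undefined or makes little progress.

```python
def interval_newton_roots(f, fprime_bounds, lo, hi, tol=1e-10):
    """Enclose the roots of f in [lo, hi] with a 1-D interval-Newton /
    generalized-bisection scheme; fprime_bounds(a, b) returns bounds
    (dlo, dhi) on f' over [a, b]."""
    roots, work = [], [(lo, hi)]
    while work:
        a, b = work.pop()
        m, fm = 0.5 * (a + b), f(0.5 * (a + b))
        dlo, dhi = fprime_bounds(a, b)
        # Mean-value-form range test: f over the box lies within
        # fm +/- max|f'| * radius, so discard boxes that exclude zero.
        if abs(fm) > max(abs(dlo), abs(dhi)) * 0.5 * (b - a):
            continue
        if b - a < tol:
            roots.append((a, b))
            continue
        if dlo <= 0.0 <= dhi:                 # Newton step undefined: bisect
            work += [(a, m), (m, b)]
            continue
        q = sorted((fm / dlo, fm / dhi))
        na, nb = max(a, m - q[1]), min(b, m - q[0])   # N(X) intersect X
        if na > nb:                           # empty intersection: no root here
            continue
        if (nb - na) > 0.9 * (b - a):         # slow contraction: bisect instead
            work += [(a, m), (m, b)]
        else:
            work.append((na, nb))
    return roots

# Enclose the root of x**2 - 2 on [0, 4]; f' = 2x is bounded by [2a, 2b].
print(interval_newton_roots(lambda x: x * x - 2,
                            lambda a, b: (2 * a, 2 * b), 0.0, 4.0))
```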

  18. Some new methods and results in examination of distribution of rare strongest events

    Science.gov (United States)

    Pisarenko, Vladilen; Rodkin, Mikhail

    2016-04-01

    In the study of disaster statistics, the examination of the distribution tail - the range of rare strongest events - appears to be the most difficult and the most important problem. We discuss this problem using two different approaches. In the first, we use the limit distributions of extreme value theory to parameterize the behavior of the distribution tail. Our method consists in estimating the maximum size Mmax(T) (e.g. magnitude, earthquake energy, PGA value, victims or economic losses from a catastrophe, etc.) that will occur in a prescribed future time interval T. In this particular case we combine historical earthquake catalogs with instrumental ones, since historical catalogs cover much longer time periods and thus can essentially improve seismic statistics in the higher magnitude domain. We apply this technique to two historical Japan catalogs (the Usami earthquake catalog, 599-1884, and the Utsu catalog, 1885-1925) and to the instrumental JMA catalog (1926-2014). We have compared the parameters of the historical catalogs with ones derived from the instrumental JMA catalog and have found that the Usami catalog is incompatible with the instrumental one, whereas the Utsu catalog is statistically compatible in the higher magnitude domain with the JMA catalog. In all examined cases the effect of the "bending down" of the graph of strong earthquake recurrence was found to be typical of the seismic regime. Another method is connected with the use of the multiplicative cascade model (which in some aspects is an analogue of the ETAS model). It is known that the ordinary Gutenberg-Richter law of earthquake recurrence can be imitated within the scheme of a multiplicative cascade, in which the seismic regime is treated as a sequence of a large number of episodes of avalanche-like relaxation, randomly occurring on a set of metastable subsystems. This model simulates such well known regularity of the seismic regime as a decrease in b-value in

  19. Developments of entropy-stable residual distribution methods for conservation laws I: Scalar problems

    Science.gov (United States)

    Ismail, Farzad; Chizari, Hossain

    2017-02-01

    This paper presents preliminary developments of entropy-stable residual distribution methods for scalar problems. Controlling entropy generation is achieved by formulating an entropy conserved signals distribution coupled with an entropy-stable signals distribution. Numerical results of the entropy-stable residual distribution methods are accurate and comparable with the classic residual distribution methods for steady-state problems. High order accurate extensions for the new method on steady-state problems are also demonstrated. Moreover, the new method preserves second order accuracy on unsteady problems using an explicit time integration scheme. The idea of the multi-dimensional entropy-stable residual distribution method is generic enough to be extended to the system of hyperbolic equations, which will be presented in the sequel of this paper.

  20. Circular Interval Arithmetic Applied on LDMT for Linear Interval System

    Directory of Open Access Journals (Sweden)

    Stephen Ehidiamhen Uwamusi

    2014-07-01

    Full Text Available The paper considers the LDMT factorization of a general nxn matrix arising from a system of interval linear equations. We paid special emphasis to interval Cholesky factorization. The basic computational tool used is the square root method of circular interval arithmetic, in a sense analogous to Gargantini and Henrici, as well as the generalized square root method due to Petkovic, which enables the construction of the square root of the resulting diagonal matrix. We also made use of Rump's method for multiplying two intervals expressed in midpoint-radius form. A numerical example of matrix factorization in this regard is given, which forms the basis of discussion. It is shown that even though LDMT is a numerically stable method for any diagonally dominant matrix, it can also lead to excess width of the solution set. It is also pointed out that, in spite of the above mentioned objection to interval LDMT, it has in addition the advantage that, in the presence of several solution sets sharing the same interval matrix, the LDMT factorization needs to be computed only once, which helps in saving substantial computational time. This may be found applicable in the development of military hardware which requires shooting at a single point but produces multiple broadcast at all other points.

  1. Method for uncertain multi-attribute decision-making with preference information in the form of interval numbers complementary judgment matrix

    Institute of Scientific and Technical Information of China (English)

    Zhou Hong'an; Liu Sanyang; Fang Xiangrong

    2007-01-01

    Uncertain multi-attribute decision-making problems, in which the information about attribute weights is only partly known and the decision maker's preference information on alternatives takes the form of an interval numbers complementary judgment matrix, are investigated. First, the decision-making information, based on the subjective uncertain complementary preference matrix on alternatives, is made uniform by using a translation function, and then an objective programming model is established. The attribute weights are obtained by solving the model, and the overall values of the alternatives are gained by using the additive weighting method. Second, the alternatives are ranked by using the continuous ordered weighted averaging (C-OWA) operator. A new approach to uncertain multi-attribute decision-making problems with uncertain preference information on alternatives is proposed. It is characterized by simple operations and can be easily implemented on a computer. Finally, a practical example is illustrated to show the feasibility and availability of the developed method.

  2. Modelling volatility recurrence intervals in the Chinese commodity futures market

    Science.gov (United States)

    Zhou, Weijie; Wang, Zhengxin; Guo, Haiming

    2016-09-01

    The law of occurrence of extreme events attracts much research. The volatility recurrence intervals of Chinese commodity futures market prices are studied: the results show that the probability distributions of the scaled volatility recurrence intervals have a uniform scaling curve for different thresholds q, so we can deduce the probability distribution of extreme events from that of normal events. The tail of the scaling curve can be well fitted by a Weibull form, which is significance-tested by KS measures. Both short-term and long-term memories are present in the recurrence intervals with different thresholds q, which indicates that the recurrence intervals can be predicted. In addition, similar to volatility, volatility recurrence intervals also have clustering features. Through Monte Carlo simulation, we artificially synthesise ARMA and GARCH-class sequences similar to the original data, and find out the reason behind the clustering: the larger the parameter d of the FIGARCH model, the stronger the clustering effect is. Finally, we use the Fractionally Integrated Autoregressive Conditional Duration (FIACD) model to analyse the recurrence interval characteristics. The results indicate that the FIACD model may provide a method to analyse volatility recurrence intervals.
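
    As a rough illustration of the procedure described above (not the authors' code or data), the following Python sketch extracts recurrence intervals above a threshold q, rescales them by their mean, fits a Weibull form, and applies a KS test; a synthetic Student-t series stands in for a real volatility series.

```python
import numpy as np
from scipy import stats

def recurrence_intervals(volatility, q):
    """Waiting times (in samples) between successive exceedances of q."""
    t = np.flatnonzero(volatility > q)
    return np.diff(t)

# Synthetic stand-in for an absolute-return (volatility) series.
rng = np.random.default_rng(1)
vol = np.abs(rng.standard_t(df=4, size=100_000))

for p in (90, 95, 99):                      # thresholds as upper percentiles
    q = np.percentile(vol, p)
    r = recurrence_intervals(vol, q)
    scaled = r / r.mean()                   # scaling collapse across thresholds
    shape, loc, scale = stats.weibull_min.fit(scaled, floc=0)
    ks = stats.kstest(scaled, 'weibull_min', args=(shape, loc, scale))
    print(f"q = {p}th pct: Weibull shape {shape:.2f}, KS p-value {ks.pvalue:.3f}")
```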

  3. Simulation of Interval Censored Data in Medical and Biological Studies

    Science.gov (United States)

    Kiani, Kaveh; Arasan, Jayanthi

    This research looks at the simulation of interval censored data when the survivor function of the survival time is known and the attendance probability of the subjects at follow-ups can take any value between 0 and 1. Interval censored data often arise in medical and biological follow-up studies, where the event of interest occurs somewhere between two known times. Regardless of the methods used to analyze these types of data, simulation of interval censored data is an important and challenging step toward model building and prediction of survival time. The simulation itself is rather tedious and very computer intensive due to the interval monitoring of subjects at prescheduled times and the subjects' incomplete attendance at follow-ups. In this paper the data simulated by the proposed method were assessed using the bias, standard error and root mean square error (RMSE) of the parameter estimates, where the survival time T is assumed to follow the Gompertz distribution function.
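
    A minimal sketch of such a simulation, under the stated Gompertz assumption, might proceed as follows; the parameter names, visit schedule, and attendance probability are illustrative, not the paper's.

```python
import numpy as np

def simulate_interval_censored(n, shape, rate, visit_gap, n_visits, p_attend, seed=0):
    """Simulate interval-censored event times under a Gompertz survival model.

    Event times are drawn by inverting the Gompertz survivor function
    S(t) = exp(-(rate/shape) * (exp(shape*t) - 1)); each subject is monitored
    at prescheduled visits and attends each one with probability p_attend,
    so the event is only known to lie between the last attended visit
    before it and the first attended visit after it.
    """
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)
    t = np.log(1.0 - shape / rate * np.log(u)) / shape   # inverse-CDF sampling
    schedule = visit_gap * np.arange(1, n_visits + 1)
    intervals = []
    for ti in t:
        attended = schedule[rng.uniform(size=n_visits) < p_attend]
        left = max([v for v in attended if v < ti], default=0.0)
        right = min([v for v in attended if v >= ti], default=np.inf)  # inf = right-censored
        intervals.append((left, right))
    return t, intervals

true_t, obs = simulate_interval_censored(5, shape=0.2, rate=0.05,
                                         visit_gap=1.0, n_visits=10, p_attend=0.8)
for ti, (l, r) in zip(true_t, obs):
    print(f"true {ti:5.2f}  observed in ({l:4.1f}, {r:4.1f}]")
```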

  4. Determination of the post mortem interval in skeletal remains by the comparative use of different physico-chemical methods: Are they reliable as an alternative to (14)C?

    Science.gov (United States)

    Amadasi, Alberto; Cappella, Annalisa; Cattaneo, Cristina; Cofrancesco, Pacifico; Cucca, Lucia; Merli, Daniele; Milanese, Chiara; Pinto, Andrea; Profumo, Antonella; Scarpulla, Valentina; Sguazza, Emanuela

    2017-05-01

    The determination of the post-mortem interval (PMI) of skeletal remains is a challenging aspect of the forensic field. Previous studies focused their attention on different macroscopic and morphological aspects, but a thorough and complete evaluation of the potential of chemical and physical analyses in this field of research has not been performed. In addition to the luminol test and Oxford histology index (OHI) reported in a recent paper, widely spread and accessible methods based on the physical aspect and chemical characteristics of skeletal remains have been investigated as potential alternatives to dating by determination of (14)C. The investigation was performed on a total of 24 archeological and forensic bone samples with known PMI, with inductively coupled plasma optical emission spectrometry (ICP-OES), inductively coupled plasma quadrupole mass spectrometry (ICP-MS), Fourier transform infrared (FT-IR) spectroscopy, energy dispersive X-ray analysis (EDX), powder X-ray diffraction analysis (XRPD) and scanning electron microscopy (SEM). Finally, the feasibility of such alternative methods was discussed. Some results, such as the carbonates/phosphates ratio from FT-IR, the amounts of organic and inorganic matter by EDX, crystallite sizes with XRPD, and surface morphology obtained by SEM, showed significant trends along with PMI. Though cut-off values and gold-standard methods still present challenges from a chemical point of view, different techniques taken together can provide useful information toward the assessment of the PMI of skeletal remains. It is however clear that in a hypothetical flowchart those methods may be placed at practically the same level, and a choice should always consider the evaluation of results by each technique, execution times and a cost/benefit relationship.

  5. Interval Appraisal Method Based on Grey Relation

    Institute of Scientific and Technical Information of China (English)

    李伟军; 叶飞

    2001-01-01

    In social economic systems, because of many uncertain factors, evaluators find it difficult to give precise data for a certain index, but they can give an appraisal interval. For this problem, reference [1] gives an ideal-point-based interval appraisal method, and references [2,3] also put forward some methods for this kind of problem. Building on reference [1], this paper gives another method, named the grey relation appraisal method, and uses the example from reference [1] to verify the scientific validity and feasibility of the method.

  6. Introducing the event related fixed interval area (ERFIA) multilevel technique: a method to analyze the complete epoch of event-related potentials at single trial level.

    Directory of Open Access Journals (Sweden)

    Catherine J Vossen

    Full Text Available In analyzing time-locked event-related potentials (ERPs), many studies have focused on specific peaks and their differences between experimental conditions. In theory, each latency point after a stimulus contains potentially meaningful information, regardless of whether it is peak-related. Based on this assumption, we introduce a new concept which allows for flexible investigation of the whole epoch and does not primarily focus on peaks and their corresponding latencies. For each trial, the entire epoch is partitioned into event-related fixed-interval areas under the curve (ERFIAs). These ERFIAs, obtained at single trial level, act as dependent variables in a multilevel random regression analysis. The ERFIA multilevel method was tested in an existing ERP dataset of 85 healthy subjects, who underwent a rating paradigm of 150 painful and non-painful somatosensory electrical stimuli. We modeled the variability of each consecutive ERFIA with a set of predictor variables among which were stimulus intensity and stimulus number. Furthermore, we corrected for latency variations of the P2 (260 ms). With respect to known relationships between stimulus intensity, habituation, and pain-related somatosensory ERP, the ERFIA method generated highly comparable results to those of commonly used methods. Notably, effects on stimulus intensity and habituation were also observed in non-peak-related latency ranges. Further, cortical processing of actual stimulus intensity depended on the intensity of the previous stimulus, which may reflect pain-memory processing. In conclusion, the ERFIA multilevel method is a promising tool that can be used to study event-related cortical processing.

  7. Introducing the event related fixed interval area (ERFIA) multilevel technique: a method to analyze the complete epoch of event-related potentials at single trial level.

    Science.gov (United States)

    Vossen, Catherine J; Vossen, Helen G M; Marcus, Marco A E; van Os, Jim; Lousberg, Richel

    2013-01-01

    In analyzing time-locked event-related potentials (ERPs), many studies have focused on specific peaks and their differences between experimental conditions. In theory, each latency point after a stimulus contains potentially meaningful information, regardless of whether it is peak-related. Based on this assumption, we introduce a new concept which allows for flexible investigation of the whole epoch and does not primarily focus on peaks and their corresponding latencies. For each trial, the entire epoch is partitioned into event-related fixed-interval areas under the curve (ERFIAs). These ERFIAs, obtained at single trial level, act as dependent variables in a multilevel random regression analysis. The ERFIA multilevel method was tested in an existing ERP dataset of 85 healthy subjects, who underwent a rating paradigm of 150 painful and non-painful somatosensory electrical stimuli. We modeled the variability of each consecutive ERFIA with a set of predictor variables among which were stimulus intensity and stimulus number. Furthermore, we corrected for latency variations of the P2 (260 ms). With respect to known relationships between stimulus intensity, habituation, and pain-related somatosensory ERP, the ERFIA method generated highly comparable results to those of commonly used methods. Notably, effects on stimulus intensity and habituation were also observed in non-peak-related latency ranges. Further, cortical processing of actual stimulus intensity depended on the intensity of the previous stimulus, which may reflect pain-memory processing. In conclusion, the ERFIA multilevel method is a promising tool that can be used to study event-related cortical processing.
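
    The partitioning step described in both versions of this abstract is straightforward to sketch; the following Python fragment (an illustration, not the authors' code) computes the area of the signal over consecutive fixed-width windows for each trial.

```python
import numpy as np

def erfia(epochs, fs, window_ms=20.0):
    """Partition each single-trial epoch into consecutive fixed-interval
    areas under the curve (ERFIAs): the signed area of the signal in each
    window, returned as (n_trials, n_windows) so the areas can serve as
    dependent variables in a multilevel regression."""
    n_trials, n_samples = epochs.shape
    step = int(round(window_ms / 1000.0 * fs))      # samples per window
    n_windows = n_samples // step
    usable = epochs[:, :n_windows * step].reshape(n_trials, n_windows, step)
    return usable.sum(axis=2) / fs                  # rectangle-rule area per window

# 150 trials of a 1 s epoch sampled at 500 Hz (synthetic noise stands in
# for baseline-corrected EEG).
rng = np.random.default_rng(4)
print(erfia(rng.normal(size=(150, 500)), fs=500).shape)   # -> (150, 50)
```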

  8. Distributed Research Project Scheduling Based on Multi-Agent Methods

    Directory of Open Access Journals (Sweden)

    Constanta Nicoleta Bodea

    2011-01-01

    Full Text Available Different project planning and scheduling approaches have been developed. Operational Research (OR) provides two major planning techniques: CPM (Critical Path Method) and PERT (Program Evaluation and Review Technique). Due to projects' complexity and the difficulty of using classical methods, new approaches were developed. Artificial Intelligence (AI) initially promoted the automatic planner concept, but model-based planning and scheduling methods emerged later on. The paper addresses the project scheduling optimization problem, when projects are seen as Complex Adaptive Systems (CAS). Taking into consideration two different approaches for project scheduling optimization, TCPSP (Time-Constrained Project Scheduling) and RCPSP (Resource-Constrained Project Scheduling), the paper focuses on a multi-agent implementation in MATLAB for TCPSP. Using a research project as a case study, the paper includes a comparison between two multi-agent methods: Genetic Algorithm (GA) and Ant Colony Algorithm (ACO).

  9. Space charge distribution measurement methods and particle loaded insulating materials

    Energy Technology Data Exchange (ETDEWEB)

    Hole, S [Laboratoire des Instruments et Systemes d' Ile de France, Universite Pierre et Marie Curie-Paris6, 10 rue Vauquelin, 75005 Paris (France); Sylvestre, A [Laboratoire d' Electrostatique et des Materiaux Dielectriques, CNRS UMR5517, 25 avenue des Martyrs, BP 166, 38042 Grenoble cedex 9 (France); Lavallee, O Gallot [Laboratoire d' Etude Aerodynamiques, CNRS UMR6609, boulevard Marie et Pierre Curie, Teleport 2, BP 30179, 86962 Futuroscope, Chasseneuil (France); Guillermin, C [Schneider Electric Industries SAS, 22 rue Henry Tarze, 38000 Grenoble (France); Rain, P [Laboratoire d' Electrostatique et des Materiaux Dielectriques, CNRS UMR5517, 25 avenue des Martyrs, BP 166, 38042 Grenoble cedex 9 (France); Rowe, S [Schneider Electric Industries SAS, 22 rue Henry Tarze, 38000 Grenoble (France)

    2006-03-07

    In this paper the authors discuss the effects of particles (fillers) mixed into a composite polymer on space charge measurement techniques. The origin of particle-induced spurious signals is determined, and silica-filled epoxy resin is analysed using the laser-induced-pressure-pulse (LIPP) method, the pulsed-electro-acoustic (PEA) method and the laser-induced-thermal-pulse (LITP) method. A spurious signal, identified as the consequence of a piezoelectric effect of some silica particles, is visible for all the methods. Moreover, space charges are clearly detected at the epoxy/silica interface after a 10 kV/mm poling at room temperature for 2 h.

  10. Method for measuring the size distribution of airborne rhinovirus

    Energy Technology Data Exchange (ETDEWEB)

    Russell, M.L.; Goth-Goldstein, R.; Apte, M.G.; Fisk, W.J.

    2002-01-01

    About 50% of viral-induced respiratory illnesses are caused by the human rhinovirus (HRV). Measurements of the concentrations and sizes of bioaerosols are critical for research on building characteristics, aerosol transport, and mitigation measures. We developed a quantitative reverse transcription-coupled polymerase chain reaction (RT-PCR) assay for HRV and verified that this assay detects HRV in nasal lavage samples. A quantitation standard was used to determine a detection limit of 5 fg of HRV RNA with a linear range over 1000-fold. To measure the size distribution of HRV aerosols, volunteers with a head cold spent two hours in a ventilated research chamber. Airborne particles from the chamber were collected using an Andersen Six-Stage Cascade Impactor. Each stage of the impactor was analyzed by quantitative RT-PCR for HRV. For the first two volunteers with confirmed HRV infection, but with mild symptoms, we were unable to detect HRV on any stage of the impactor.

  11. Development of methods for DSM and distribution automation planning

    Energy Technology Data Exchange (ETDEWEB)

    Lehtonen, M.; Seppaelae, A.; Kekkonen, V.; Koreneff, G. [VTT Energy, Espoo (Finland)

    1996-12-31

    In the de-regulated electricity market, the power trading companies have to face new problems. The biggest challenges are caused by the uncertainty in the load magnitudes. In order to minimize the risks in power purchase and also in retail sales, the power traders should have as reliable and accurate estimates for hourly demands of their customers as possible. New tools have been developed for the distribution load estimation and for the management of energy balances of the trading companies. These tools are based on the flexible combination of the information available from several sources, like direct customer measurements, network measurements, load models and statistical data. These functions also serve as an information source for higher level activities of the electricity selling companies. These activities and the associated functions have been studied in the prototype system called DEM, which is now being developed for the operation of Finnish utilities in the newly de-regulated power market.

  12. Iterative methods for distributed parameter estimation in parabolic PDE

    Energy Technology Data Exchange (ETDEWEB)

    Vogel, C.R. [Montana State Univ., Bozeman, MT (United States); Wade, J.G. [Bowling Green State Univ., OH (United States)

    1994-12-31

    The goal of the work presented is the development of effective iterative techniques for large-scale inverse or parameter estimation problems. In this extended abstract, a detailed description of the mathematical framework in which the authors view these problems is presented, followed by an outline of the ideas and algorithms developed. Distributed parameter estimation problems often arise in mathematical modeling with partial differential equations. They can be viewed as inverse problems; the 'forward problem' is that of using the fully specified model to predict the behavior of the system. The inverse or parameter estimation problem is: given the form of the model and some observed data from the system being modeled, determine the unknown parameters of the model. These problems are of great practical and mathematical interest, and the development of efficient computational algorithms is an active area of study.

  13. Offset quantum-well method for tunable distributed Bragg reflector lasers and electro-absorption modulated distributed feedback lasers

    Institute of Scientific and Technical Information of China (English)

    Qiang Kan; Ying Ding; Lingjuan Zhao; Hongliang Zhu; Fan Zhou; Lufeng Wang; Baojun Wang; Wei Wang

    2005-01-01

    A two-section offset quantum-well structure tunable laser with a tuning range of 7 nm was fabricated using the offset quantum-well method. The distributed Bragg reflector (DBR) was realized simply by selectively wet etching the multiquantum-well (MQW) layer above the quaternary lower waveguide. A threshold current of 32 mA and an output power of 9 mW at 100 mA were achieved. Furthermore, with this offset structure method, a distributed feedback (DFB) laser was integrated with an electro-absorption modulator (EAM), which was capable of producing 20 dB of optical extinction.

  14. Load forecasting method considering temperature effect for distribution network

    Directory of Open Access Journals (Sweden)

    Meng Xiao Fang

    2016-01-01

    Full Text Available To improve the accuracy of load forecasting, the temperature factor was introduced into load forecasting in this paper. This paper analyzed the characteristics of power load variation, and researched the rule relating the load to temperature change. Based on linear regression analysis, a mathematical model of load forecasting considering the temperature effect was presented, and the steps of load forecasting were given. Using MATLAB, the temperature regression coefficient was calculated. Using the load forecasting model, full-day load forecasting and time-sharing load forecasting were carried out. By comparing and analyzing the forecast errors, the results showed that the error of the time-sharing load forecasting method was small. The forecasting method is an effective way to improve the accuracy of load forecasting.
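
    As a toy illustration of the regression step described above (with made-up numbers, not the paper's data), the temperature regression coefficient can be estimated by ordinary least squares:

```python
import numpy as np

# Daily peak loads (MW) and matching temperatures (deg C) for one past week
# (synthetic stand-in data); fit load = a + b * temperature by least squares.
temps = np.array([22.0, 25.0, 28.0, 31.0, 30.0, 26.0, 24.0])
loads = np.array([510.0, 545.0, 590.0, 640.0, 625.0, 560.0, 535.0])

b, a = np.polyfit(temps, loads, deg=1)       # b is the temperature regression coefficient
print(f"load ~ {a:.1f} + {b:.2f} * T")

t_forecast = 29.0                            # forecast-day temperature
print(f"forecast load: {a + b * t_forecast:.0f} MW")
```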

  15. Analytical Model Analysis Of Distributed Cooperative Spectrum Sensing Method

    Directory of Open Access Journals (Sweden)

    Ravi Prakash Shukla

    2012-05-01

    Full Text Available Spectrum sensing is a key function of cognitive radio to prevent harmful interference with licensed users and identify the available spectrum for improving the spectrum's utilization. Various methods for spectrum sensing control, such as deciding which sensors should perform sensing simultaneously and finding the appropriate trade-off between the probability of misdetection and the false alarm rate, are described. However, detection performance in practice is often compromised by multipath fading, shadowing and receiver uncertainty issues. To mitigate the impact of these issues, cooperative spectrum sensing has been shown to be an effective method to improve detection performance by exploiting spatial diversity.

  16. Reducing the width of confidence intervals for the difference between two population means by inverting adaptive tests.

    Science.gov (United States)

    O'Gorman, Thomas W

    2016-08-08

    In the last decade, it has been shown that an adaptive testing method could be used, along with the Robbins-Monro search procedure, to obtain confidence intervals that are often narrower than traditional confidence intervals. However, these confidence interval limits require a great deal of computation and some familiarity with stochastic search methods. We propose a method for estimating the limits of confidence intervals that uses only a few tests of significance. We compare these limits to those obtained by a lengthy Robbins-Monro stochastic search and find that the proposed method is nearly as accurate as the Robbins-Monro search. Adaptive confidence intervals that are produced by the proposed method are often narrower than traditional confidence intervals when the distributions are long-tailed, skewed, or bimodal. Moreover, the proposed method of estimating confidence interval limits is easy to understand, because it is based solely on the p-values from a few tests of significance.
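
    The proposed limits can be sketched generically: find the shift d at which a two-sample test of H0: mu_x - mu_y = d returns a p-value equal to alpha. The Python sketch below substitutes a standard t-test for the paper's adaptive test and plain bisection for the Robbins-Monro search, so it only illustrates the inversion idea, not the authors' procedure.

```python
import numpy as np
from scipy import stats

def ci_by_test_inversion(x, y, alpha=0.05, test=stats.ttest_ind):
    """Confidence interval for mu_x - mu_y found by inverting a two-sample
    test: the limits are the shifts d at which H0: mu_x - mu_y = d stops
    being rejected at level alpha, located by bisection on the p-value."""
    def pvalue(d):
        return test(x - d, y).pvalue        # test H0 after shifting x by d

    d_hat = x.mean() - y.mean()
    span = 10 * np.sqrt(x.var() / len(x) + y.var() / len(y))

    def crossing(a, b):
        # p(a) < alpha (rejected), p(b) > alpha (not rejected): bisect.
        for _ in range(80):
            m = 0.5 * (a + b)
            a, b = (m, b) if pvalue(m) < alpha else (a, m)
        return 0.5 * (a + b)

    return crossing(d_hat - span, d_hat), crossing(d_hat + span, d_hat)

rng = np.random.default_rng(2)
x, y = rng.lognormal(size=40), rng.lognormal(0.5, size=40)   # skewed data
print(ci_by_test_inversion(x, y))
```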

  17. A New Fusion Method of Conflicting Interval Evidence Based on the Similarity Measure of Evidence

    Institute of Scientific and Technical Information of China (English)

    冯海山; 徐晓滨; 文成林

    2012-01-01

    Based on the similarity measure of evidence, a new method for combining conflicting interval evidence is proposed. First, interval evidence is transformed into interval-valued Pignistic probability by using an extended Pignistic probability function. Using the normalized Euclidean distance of interval-valued fuzzy sets, the similarities between the interval-valued Pignistic probabilities are obtained and a pairwise similarity matrix is constructed, from which the credibility degrees (weights) of the interval evidence are derived. Second, based on these credibility degrees, new interval evidence is obtained by weighted averaging of the original interval evidence, and the fusion result is obtained by combining the new interval evidence with the Dempster interval evidence combination rule. The proposed method can effectively weaken the effect of highly conflicting interval evidence in the combination, reducing the width of the combined interval evidence and hence the uncertainty in decision making. Finally, classical numerical examples verify that fusing interval evidence after this conflict handling produces more reasonable and reliable results than direct fusion.
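
    A simplified sketch of this pipeline, reduced to point-valued (non-interval) evidence over singleton hypotheses so that Dempster's rule stays short, might look as follows; it illustrates the similarity-weighting idea only, not the paper's interval-evidence procedure.

```python
import numpy as np

def dempster(m1, m2):
    """Dempster's rule for BPAs over singleton hypotheses (no compound sets)."""
    joint = np.outer(m1, m2)
    agreement = np.trace(joint)             # mass on identical singletons
    return np.diag(joint) / agreement       # normalize out the conflict

def fuse_by_credibility(bpas):
    """Similarity-weighted fusion: credibility weights from the pairwise
    similarity matrix, weighted-average evidence, then Dempster combination."""
    bpas = np.asarray(bpas)
    n = len(bpas)
    dist = np.sqrt(((bpas[:, None, :] - bpas[None, :, :]) ** 2).mean(-1))
    sim = 1.0 - dist                        # similarity matrix
    support = sim.sum(1) - 1.0              # support from the other sources
    cred = support / support.sum()          # credibility degrees (weights)
    avg = cred @ bpas                       # weighted-average evidence
    fused = avg
    for _ in range(n - 1):                  # combine the average n-1 times
        fused = dempster(fused, avg)
    return cred, fused

# Three sources over hypotheses {A, B, C}; source 2 conflicts with the rest.
bpas = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.65, 0.25, 0.10]]
cred, fused = fuse_by_credibility(bpas)
print("credibility:", cred.round(3), "fused:", fused.round(3))
```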

  18. Determination of Soil Evaporation Fluxes Using Distributed Temperature Sensing Methods

    Science.gov (United States)

    Munoz, J.; Serna, J. L.; Suarez, F. I.

    2015-12-01

    Evaporation is the main process for water vapor exchange between the land surface and the atmosphere. Evaporation from shallow groundwater tables is important in arid zones and is influenced by the water table depth and by the soil's hydrodynamic characteristics. Measuring evaporation, however, is still challenging. Thus, it is important to develop new measuring techniques that can better determine evaporation fluxes. The aim of this work is to investigate the feasibility of using distributed-temperature-sensing (DTS) to study the processes that control evaporation from soils with shallow water tables. To achieve this objective, an experimental column was instrumented with traditional temperature probes, time-domain-reflectometry probes, and an armored fiber-optic cable that allowed the application of heat pulses to estimate the soil moisture profile. The experimental setup also allowed the water table to be fixed at different depths and evaporation rates to be measured at the daily scale. Experiments with different groundwater table depths were carried out. For each experiment, the evaporation rates were measured and the moisture profile was determined using heat pulses all through the DTS cable. These pulses allowed estimation of the moisture content with errors smaller than 0.045 m3/m3 and with a spatial resolution of ~6.5 mm. The high spatial resolution of the moisture profile combined with mathematical modeling made it possible to investigate the processes that control evaporation from bare soils with shallow groundwater tables.

  19. Economic evaluation of smoke alarm distribution methods in Baltimore, Maryland.

    Science.gov (United States)

    Diamond-Smith, Nadia; Bishai, David; Perry, Elise; Shields, Wendy; Gielen, Andrea

    2014-08-01

    This paper analyses costs and potential lives saved from a door-to-door smoke alarm distribution programme using data from a programme run by the Baltimore City Fire Department in 2010-2011. We evaluate the impact of a standard home visit programme and an enhanced home visit programme that includes having community health workers provide advance notice, promote the programme, and accompany fire department personnel on the day of the home visit, compared with each other and with an option of not having a home visit programme (control). Study data show that the home visit programme increased by 10% the number of homes that went from having no working alarm to having any working alarm, and the enhanced programme added an additional 1% to the number of homes protected. We use published reports on the relative risk of death in homes with and without a working smoke alarm to show that the standard programme would save an additional 0.24 lives per 10,000 homes over 10 years, compared with control areas, and the enhanced home visit programme saved an additional 0.07 lives compared with the standard programme. The incremental cost of each life saved for the standard programme compared with control was $28,252 per death averted and $284,501 per additional death averted for the enhanced compared with the standard. Following the US guidelines for the value of a life, both programmes are cost effective; however, the standard programme may offer a better value in terms of dollars per death averted. The study also highlights the need for better data on the benefits of current smoke alarm recommendations and their impact on injury, death and property damage.

  20. Big Boss Interval Games

    NARCIS (Netherlands)

    Alparslan-Gok, S.Z.; Brânzei, R.; Tijs, S.H.

    2008-01-01

    In this paper big boss interval games are introduced and various characterizations are given. The structure of the core of a big boss interval game is explicitly described and plays an important role relative to interval-type bi-monotonic allocation schemes for such games. Specifically, each element

  1. Calculation method for particle mean diameter and particle size distribution function under dependent model algorithm

    Institute of Scientific and Technical Information of China (English)

    Hong Tang; Xiaogang Sun; Guibin Yuan

    2007-01-01

    In the total light scattering particle sizing technique, the relationship among the Sauter mean diameter D32, the mean extinction efficiency Q, and the particle size distribution function is studied in order to invert for the mean diameter and particle size distribution simply. We propose a method which utilizes the mean extinction efficiency ratio at only two selected wavelengths to solve for D32, and then to invert for the particle size distribution associated with Q and D32. Numerical simulation results show that the particle size distribution is inverted accurately with this method, and the number of wavelengths used is reduced to the greatest extent within the measurement range. The calculation method has the advantages of simplicity and rapidness.
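
    The inversion step, solving the two-wavelength efficiency ratio for a diameter, can be sketched if a closed-form extinction model is substituted for the full Mie calculation and distribution averaging the paper would use. The sketch below assumes the anomalous diffraction approximation, a monodisperse stand-in for the distribution-averaged efficiency, and an illustrative refractive index of 1.33, so the numbers are purely illustrative.

```python
import numpy as np
from scipy.optimize import brentq

def q_ext_ada(d, lam, m=1.33):
    """Extinction efficiency of a sphere of diameter d at wavelength lam
    (same units), in the anomalous diffraction approximation."""
    rho = 2 * np.pi * d / lam * (m - 1)     # phase-shift parameter
    return 2 - (4 / rho) * np.sin(rho) + (4 / rho**2) * (1 - np.cos(rho))

def d32_from_ratio(ratio, lam1, lam2, d_lo=0.1, d_hi=5.0):
    """Invert the two-wavelength extinction-efficiency ratio for the
    (here monodisperse) diameter, in micrometres."""
    f = lambda d: q_ext_ada(d, lam1) / q_ext_ada(d, lam2) - ratio
    return brentq(f, d_lo, d_hi)

# Forward-simulate a measurement for a 1.2 um particle, then invert it.
lam1, lam2 = 0.532, 1.064                   # um
true_ratio = q_ext_ada(1.2, lam1) / q_ext_ada(1.2, lam2)
print(d32_from_ratio(true_ratio, lam1, lam2))
```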

  2. Interval probabilistic neural network.

    Science.gov (United States)

    Kowalski, Piotr A; Kulczycki, Piotr

    2017-01-01

    Automated classification systems have allowed for the rapid development of exploratory data analysis. Such systems increase the independence of human intervention in obtaining the analysis results, especially when inaccurate information is under consideration. The aim of this paper is to present a novel neural network approach to classifying interval information. The presented methodology is a generalization of the probabilistic neural network to interval data processing. The simple structure of this neural classification algorithm makes it applicable for research purposes. The procedure is based on the Bayes approach, ensuring minimal potential losses arising from classification errors. In this article, the topological structure of the network and the learning process are described in detail. The correctness of the procedure proposed here has been verified by way of numerical tests. These tests include examples of both synthetic data and benchmark instances. The results of numerical verification, carried out for different shapes of data sets, as well as a comparative analysis with other methods of similar conditioning, have validated both the concept presented here and its positive features.
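
    One way to make a Bayes rule for interval inputs concrete (an interpretation of the general idea, not the authors' published procedure) is to score each class by the kernel probability mass falling inside the interval; all parameter names below are illustrative.

```python
import numpy as np
from scipy.stats import norm

def interval_pnn_classify(train_X, train_y, interval_lo, interval_hi, h=0.5):
    """Classify an interval-valued observation [lo, hi] (per feature) with a
    probabilistic-neural-network-style Bayes rule: for each class, average
    over its training patterns the probability mass a Gaussian kernel
    centred at the pattern assigns to the interval, then pick the class
    with the largest prior-weighted score."""
    scores = {}
    for c in np.unique(train_y):
        patterns = train_X[train_y == c]
        # Per kernel: prod_d [Phi((hi-m)/h) - Phi((lo-m)/h)] over dimensions d.
        mass = (norm.cdf((interval_hi - patterns) / h)
                - norm.cdf((interval_lo - patterns) / h)).prod(axis=1)
        prior = len(patterns) / len(train_y)
        scores[c] = prior * mass.mean()
    return max(scores, key=scores.get), scores

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.repeat([0, 1], 50)
label, scores = interval_pnn_classify(X, y, np.array([2.0, 2.5]), np.array([3.0, 3.5]))
print(label, scores)
```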

  3. GAP-ANALYSIS METHODS USAGE FOR DISTRIBUTION LOGISTICS SYSTEM EFFICIENCY APPRAISAL

    OpenAIRE

    Markovskiy Vladimir Andreevich

    2012-01-01

    The article is dedicated to the problem of appraising distribution logistics system efficiency, a problem induced by a lack of coordination among the system's elements. The aim of the article is to determine criteria for the lack of efficiency of logistics distribution systems. The novelty of the article lies in applying the GAP-analysis method to logistics distribution macrosystems, which makes it possible to propose a calculation method for signalling the presence of gaps in the logistics system. It is revea...

  4. Theoretical method for determining the three-particle distribution function of classical systems

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, E.

    1979-01-01

    Equilibrium statistical mechanics is considered. A method that should yield accurate three-particle distribution functions is presented. None of the current methods is successful. It appears that the new equation presented may be used with the first and second equations in the YBG hierarchy to obtain exact single-particle, pair, and triplet distribution functions. It should be easy to generalize the results to the n-particle distribution function.

  5. Distributed calculation method for large-pixel-number holograms by decomposition of object and hologram planes.

    Science.gov (United States)

    Jackin, Boaz Jessie; Miyata, Hiroaki; Ohkawa, Takeshi; Ootsu, Kanemitsu; Yokota, Takashi; Hayasaki, Yoshio; Yatagai, Toyohiko; Baba, Takanobu

    2014-12-15

    A method has been proposed to reduce the communication overhead in computer-generated hologram (CGH) calculations on parallel and distributed computing devices. The method uses the shifting property of the Fourier transform to decompose calculations, thereby avoiding data dependency and communication. This enables the full potential of parallel and distributed computing devices to be exploited. The proposed method is verified by simulation and optical experiments and can achieve a 20-fold speed improvement compared to conventional methods, while using large data sizes.

  6. A Construction Method for Interval-Valued Intuitionistic Fuzzy Entropy

    Institute of Scientific and Technical Information of China (English)

    王培; 魏翠萍

    2011-01-01

    This paper investigates interval-valued intuitionistic fuzzy entropy. It is pointed out that three formulas of intuitionistic fuzzy entropy are equivalent. By extending the formula of intuitionistic fuzzy entropy to interval-valued intuitionistic fuzzy sets, a formula of interval-valued intuitionistic fuzzy entropy is introduced, and it satisfies the four axiomatic requirements of the definition of interval-valued intuitionistic fuzzy entropy.

  7. An Efficient Method for Distributing Animated Slides of Web Presentations

    Directory of Open Access Journals (Sweden)

    Yusuke Niwa

    2016-01-01

    Full Text Available Attention control of the audience is required for successful presentations; therefore, giving a presentation with immediate reaction, called a reactive presentation, to unexpected changes in the context given by the audience is important. Examples of functions for the reactive presentation are shape animation effects on slides and slide transition effects. Understanding the functions that realize the reactive presentation on the Web can be useful. In this work, we present an effective method for synchronizing shape animation effects on the Web, such as moving objects and changing the size and color of shape objects. The main idea is to make a video of animated slides, called Web Slide Media, including the page information of the slides as movie chapter information for synchronization. Moreover, we explain a method to reduce the file size of the Web slide media by removing all shape animation effects and slide transition effects from a Web slide media item, called Sparse Web Slide Media. We demonstrate that the performance of the system is sufficient for practical use and that the file size of the Sparse Web Slide Media is smaller than the file size of the Web Slide Media.

  8. An Approach of Time Interval-Divided Multi-Fault Dynamic Restoration for Distribution Network Based on Blackboard Model

    Institute of Scientific and Technical Information of China (English)

    卢志刚; 叶治格; 杨丽君

    2012-01-01

    To better implement time interval-divided dynamic restoration of multiple faults during the urgent repair of a distribution network, a time interval-divided dynamic restoration approach based on the blackboard model is given. First, a double-layer optimization model, which takes the maximum restoration of lost load and the minimum network loss as objectives, is built. Then, following the principle of the blackboard model, each fault restoration is handled by a working agent; all agents perform distributed parallel computation, and an improved discrete bacterial colony chemotaxis (DBCC) algorithm is used to find the optimal solution of each agent. By controlling interruptible loads, the coordination mechanism ensures preferential restoration of important loads and reduces the number of switching operations. Simulation results of a modified IEEE 69-bus system verify the effectiveness of the proposed approach.

  9. Management Methods In Sla-Aware Distributed Storage Systems

    Directory of Open Access Journals (Sweden)

    Darin Nikolow

    2012-01-01

    Full Text Available Traditional data storage systems provide access to user's data on the “best effort” basis. While this paradigm is sufficient in many use cases, it becomes an obstacle for applications with Quality of Service (QoS) constraints. A Service Level Agreement (SLA) is a part of the contract agreed between the service provider and the client and contains a set of well defined QoS requirements regarding the provided service and the penalties applied in case of violations. In the paper we propose a set of SLA parameters and QoS metrics relevant to data storage processes and the management methods necessary for avoiding SLA violations. A key assumption in the proposed approach is that the underlying distributed storage system does not provide functionality for resource or bandwidth reservation for a given client request.

  10. Higher Order Analogues of Tracy-Widom Distributions via the Lax Method

    CERN Document Server

    Akemann, Gernot

    2012-01-01

    We study the distribution of the largest eigenvalue in formal Hermitian one-matrix models at multicriticality, where the spectral density acquires an extra number of k-1 zeros at the edge. The distributions are directly expressed through the norms of orthogonal polynomials on a semi-infinite interval, as an alternative to using Fredholm determinants. They satisfy non-linear recurrence relations which we show form a Lax pair, making contact with the string literature of the early 1990s. The technique of pseudo-differential operators allows us to give compact expressions for the logarithm of the gap probability in terms of the Painleve XXXIV hierarchy. These are the higher order analogues of the Tracy-Widom distribution, which has k=1. Using known Backlund transformations we show how to simplify earlier equivalent results that are derived from Fredholm determinant theory, valid for even k in terms of the Painleve II hierarchy.

  11. A computational method for planning complex compound distributions under container, liquid handler, and assay constraints.

    Science.gov (United States)

    Russo, Mark F; Wild, Daniel; Hoffman, Steve; Paulson, James; Neil, William; Nirschl, David S

    2013-10-01

    A systematic method for assembling and solving complex compound distribution problems is presented in detail. The method is based on a model problem that enumerates the mathematical equations and constraints describing a source container, liquid handler, and three types of destination containers involved in a set of compound distributions. One source container and one liquid handler are permitted in any given problem formulation, although any number of compound distributions may be specified. The relative importance of all distributions is expressed by assigning weights, which are factored into the final mathematical problem specification. A computer program was created that automatically assembles and solves a complete compound distribution problem given the parameters that describe the source container, liquid handler, and any number and type of compound distributions. Business rules are accommodated by adjusting weighting factors assigned to each distribution. An example problem, presented and explored in detail, demonstrates complex and nonintuitive solution behavior.

  12. Distributed data organization and parallel data retrieval methods for huge laser scanner point clouds

    Science.gov (United States)

    Hongchao, Ma; Wang, Zongyue

    2011-02-01

    This paper proposes a novel method for distributed data organization and parallel data retrieval from huge volume point clouds generated by airborne Light Detection and Ranging (LiDAR) technology under a cluster computing environment, in order to allow fast analysis, processing, and visualization of the point clouds within a given area. The proposed method is suitable for both grid and quadtree data structures. As for the distribution strategy, cross distribution of the dataset is more efficient than serial distribution for non-redundant datasets, since a dataset is more uniformly distributed in the former arrangement. However, redundant datasets are necessary in order to meet the frequent need for input and output operations in multi-client scenarios: the first copy is distributed by a cross distribution strategy, while the second (and later) copies are distributed by an iterated exchanging distribution strategy. Such a distribution strategy distributes datasets more uniformly to each data server. In data retrieval, when a data block to be retrieved is stored on multiple data servers, a greedy algorithm allocates the query task to the data server whose computing load is lightest. Experiments show that the method proposed in this paper can satisfy the demands of frequent and fast data query.
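
    The greedy retrieval rule described above is easy to sketch: each queried block goes to the currently least-loaded server among those holding a replica of it. The server and block names in this Python fragment are illustrative.

```python
from collections import defaultdict

def allocate_queries(block_replicas, queries):
    """Greedy allocation: each queried block is assigned to the currently
    least-loaded data server among those holding a replica of the block."""
    load = defaultdict(int)                 # server -> number of assigned tasks
    plan = []
    for block in queries:
        server = min(block_replicas[block], key=lambda s: load[s])
        load[server] += 1
        plan.append((block, server))
    return plan, dict(load)

# Blocks cross-distributed (with one redundant copy) over three servers.
replicas = {0: ("s1", "s2"), 1: ("s2", "s3"), 2: ("s3", "s1"), 3: ("s1", "s2")}
plan, load = allocate_queries(replicas, [0, 1, 2, 3, 0, 2])
print(plan)
print(load)
```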

  13. Electrocardiographic PR-interval duration and cardiovascular risk

    DEFF Research Database (Denmark)

    Rasmussen, Peter Vibe; Nielsen, Jonas Bille; Skov, Morten Wagner

    2017-01-01

    BACKGROUND: Because of ambiguous reports in the literature, we aimed to investigate the association between the PR interval and the risk of all-cause and cardiovascular death, heart failure, and pacemaker implantation, allowing for a nonlinear relationship. METHODS: We included 293,111 individuals...... into 7 groups based on the population PR interval distribution. Cox models were used, with reference to a PR interval between 152 and 161 ms (40th to <60th percentile). We identified 34,783 deaths from all causes, 9867 cardiovascular deaths, 9526 cases of incident heart...... failure, and 1805 pacemaker implantations. A short PR interval as well as a long PR interval (>200 ms; HR, 1.23; 95% CI, 1.14-1.32) was associated with an increased risk of cardiovascular death after...

  14. Agent-based method for distributed clustering of textual information

    Science.gov (United States)

    Potok, Thomas E [Oak Ridge, TN; Reed, Joel W [Knoxville, TN; Elmore, Mark T [Oak Ridge, TN; Treadwell, Jim N [Louisville, TN

    2010-09-28

    A computer method and system for storing, retrieving and displaying information has a multiplexing agent (20) that calculates a new document vector (25) for a new document (21) to be added to the system and transmits the new document vector (25) to master cluster agents (22) and cluster agents (23) for evaluation. These agents (22, 23) perform the evaluation and return values upstream to the multiplexing agent (20) based on the similarity of the document to documents stored under their control. The multiplexing agent (20) then sends the document (21) and the document vector (25) to the master cluster agent (22), which then forwards it to a cluster agent (23) or creates a new cluster agent (23) to manage the document (21). The system also searches for stored documents according to a search query having at least one term and identifying the documents found in the search, and displays the documents in a clustering display (80) of similarity so as to indicate similarity of the documents to each other.

  15. A method to dynamic stochastic multicriteria decision making with log-normally distributed random variables.

    Science.gov (United States)

    Wang, Xin-Fan; Wang, Jian-Qiang; Deng, Sheng-Yue

    2013-01-01

    We investigate dynamic stochastic multicriteria decision making (SMCDM) problems, in which the criterion values take the form of log-normally distributed random variables and the argument information is collected from different periods. We propose two new geometric aggregation operators, namely the log-normal distribution weighted geometric (LNDWG) operator and the dynamic log-normal distribution weighted geometric (DLNDWG) operator, and develop a method for dynamic SMCDM with log-normally distributed random variables. This method uses the DLNDWG operator and the LNDWG operator to aggregate the log-normally distributed criterion values, utilizes the entropy model of Shannon to generate the time weight vector, and utilizes the expectation values and variances of log-normal distributions to rank the alternatives and select the best one. Finally, an example is given to illustrate the feasibility and effectiveness of this developed method.

  16. A Method to Dynamic Stochastic Multicriteria Decision Making with Log-Normally Distributed Random Variables

    Directory of Open Access Journals (Sweden)

    Xin-Fan Wang

    2013-01-01

    Full Text Available We investigate dynamic stochastic multicriteria decision making (SMCDM) problems, in which the criterion values take the form of log-normally distributed random variables and the argument information is collected from different periods. We propose two new geometric aggregation operators, namely the log-normal distribution weighted geometric (LNDWG) operator and the dynamic log-normal distribution weighted geometric (DLNDWG) operator, and develop a method for dynamic SMCDM with log-normally distributed random variables. This method uses the DLNDWG operator and the LNDWG operator to aggregate the log-normally distributed criterion values, utilizes the entropy model of Shannon to generate the time weight vector, and utilizes the expectation values and variances of log-normal distributions to rank the alternatives and select the best one. Finally, an example is given to illustrate the feasibility and effectiveness of this developed method.
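
    The geometric aggregation both versions of this abstract describe rests on a standard fact: a weighted geometric mean of independent log-normal variables is again log-normal, with parameters mu = sum(w_i * mu_i) and sigma^2 = sum((w_i * sigma_i)^2). The Python sketch below uses that fact and ranks alternatives by the resulting expectation; the weights and parameters are made up for illustration, and the paper's full method additionally weights periods via Shannon entropy.

```python
import math

def lndwg(params, weights):
    """Log-normal distribution weighted geometric (LNDWG-style) aggregation.

    Each criterion value is a log-normal variable given as (mu, sigma) of the
    underlying normal; the weighted geometric mean of independent log-normals
    is log-normal with mu = sum(w*mu_i) and sigma^2 = sum((w*sigma_i)^2).
    """
    mu = sum(w * m for (m, s), w in zip(params, weights))
    var = sum((w * s) ** 2 for (m, s), w in zip(params, weights))
    return mu, math.sqrt(var)

def expectation(mu, sigma):
    """Mean of a log-normal variable, used here as the ranking score."""
    return math.exp(mu + 0.5 * sigma ** 2)

# Two alternatives rated on three criteria, with criterion weights w.
w = [0.5, 0.3, 0.2]
a1 = [(0.2, 0.10), (0.5, 0.20), (0.1, 0.05)]
a2 = [(0.3, 0.25), (0.4, 0.10), (0.2, 0.10)]
for name, a in (("A1", a1), ("A2", a2)):
    mu, sigma = lndwg(a, w)
    print(name, "score:", round(expectation(mu, sigma), 4))
```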

  17. PROGRAMMING OF METHODS FOR THE NEEDS OF LOGISTICS DISTRIBUTION SOLVING PROBLEMS

    Directory of Open Access Journals (Sweden)

    Andrea Štangová

    2014-06-01

    Full Text Available Logistics has become one of the dominant factors affecting the successful management, competitiveness and mentality of the global economy. Distribution logistics materializes the connection of production and the consumer market. It uses different methodologies and methods of multicriterial evaluation and allocation. This thesis addresses the problem of the costs of securing the distribution of a product. It was therefore relevant to design a software product that would be helpful in solving the problems related to distribution logistics. Elodis – an electronic distribution logistics program – was designed on the basis of a theoretical analysis of the issue of distribution logistics and on an analysis of the software products market. The program uses multicriterial evaluation methods to determine the appropriate type, and mathematical and geometrical methods to determine an appropriate allocation, of the distribution center, warehouse and company.

  18. Predictive Distribution of the Dirichlet Mixture Model by the Local Variational Inference Method

    DEFF Research Database (Denmark)

    Ma, Zhanyu; Leijon, Arne; Tan, Zheng-Hua;

    2014-01-01

    In Bayesian analysis of a statistical model, the predictive distribution is obtained by marginalizing over the parameters with their posterior distributions. Compared to the frequently used point estimate plug-in method, the predictive distribution leads to a more reliable result in calculating the predictive likelihood of the new upcoming data, especially when the amount of training data is small. The Bayesian estimation of a Dirichlet mixture model (DMM) is, in general, not analytically tractable. In our previous work, we have proposed a global variational inference-based method for approximately calculating the posterior distributions of the parameters in the DMM analytically. In this paper, we extend our previous study for the DMM and propose an algorithm to calculate the predictive distribution of the DMM with the local variational inference (LVI) method. The true predictive distribution of the DMM...

  19. Method of producing ceramic distribution members for solid state electrolyte cells

    Science.gov (United States)

    Clark, Douglas J. (Inventor); Galica, Leo M. (Inventor); Losey, Robert W. (Inventor); Suitor, Jerry W. (Inventor)

    1995-01-01

    A solid state electrolyte cell apparatus and a method of producing it are disclosed. The apparatus can be used for separating oxygen from an oxygen-containing feedstock or as a fuel cell for reacting fluids. Cells can be stacked so that fluids can be introduced and removed from the apparatus through ceramic distribution members having ports designed for distributing the fluids in parallel flow to and from each cell. The distribution members can also serve as electrodes to membranes or as membrane members between electrodes. The distribution member design does not contain any horizontal internal ports, which allows the member to be thin. A method of tape casting in combination with an embossing method allows intricate radial ribs and bosses to be formed on each distribution member. The bosses serve as seals for the ports and allow the distribution members to be made without any horizontal internal ports.

  20. Hermite-distributed approximating functional-based formulation of multiconfiguration time-dependent Hartree method: A case study of quantum tunnelling in a coupled double-well system

    Indian Academy of Sciences (India)

    KAUSHIK MAJI

    2016-08-01

    We propose a variant of the multiconfiguration time-dependent Hartree (MCTDH) method within the framework of the Hermite-distributed approximating functional (HDAF) method. The discretized Hamiltonian is a highly banded Toeplitz matrix, which significantly reduces computational cost in terms of both storage and number of operations. The proposed method is employed to study the tunnelling dynamics of two coupled double-well oscillators. We have calculated the orthogonality time τ, which is a measure of the time interval for an initial state to evolve into its orthogonal state. It is observed that the coupling has a significant effect on τ.

  1. Determination of reference intervals and comparison of venous blood gas parameters using standard and non-standard collection methods in 24 cats.

    Science.gov (United States)

    Bachmann, Karin; Kutter, Annette Pn; Schefer, Rahel Jud; Marly-Voquer, Charlotte; Sigrist, Nadja

    2017-08-01

    Objectives The aim of this study was to determine in-house reference intervals (RIs) for venous blood analysis with the RAPIDPoint 500 blood gas analyser using blood gas syringes (BGSs) and to determine whether immediate analysis of venous blood collected into lithium heparin (LH) tubes can replace anaerobic blood sampling into BGSs. Methods Venous blood was collected from 24 healthy cats and directly transferred into a BGS and an LH tube. The BGS was immediately analysed on the RAPIDPoint 500, followed by the LH tube. The BGSs and LH tubes were compared using the paired t-test or Wilcoxon matched-pairs signed-rank test, Bland-Altman and Passing-Bablok analyses. To assess clinical relevance, the bias or percentage bias between BGSs and LH tubes was compared with the allowable total error (TEa) recommended for the respective parameter. Results Based on the values obtained from the BGSs, RIs were calculated for the evaluated parameters, including blood gases, electrolytes, glucose and lactate. Values derived from LH tubes showed no significant difference for standard bicarbonate, whole blood base excess, haematocrit, total haemoglobin, sodium, potassium, chloride, glucose and lactate, while pH, partial pressure of carbon dioxide and oxygen, actual bicarbonate, extracellular base excess, ionised calcium and anion gap were significantly different from the samples collected in BGSs (P < 0.05). Furthermore, pH, partial pressure of carbon dioxide and oxygen, extracellular base excess, ionised calcium and anion gap exceeded the recommended TEa. Conclusions and relevance Assessment of actual and standard bicarbonate, whole blood base excess, haematocrit, total haemoglobin, sodium, potassium, chloride, glucose and lactate can be made based on blood collected in LH tubes and analysed within 5 mins. For pH, partial pressure of carbon dioxide and oxygen, extracellular base excess, anion gap and ionised calcium the clinically relevant alterations have to be considered if analysed in LH tubes.

  2. Iteration method for the inversion of simulated multiwavelength lidar signals to determine aerosol size distribution

    Institute of Scientific and Technical Information of China (English)

    Tao Zong-Ming; Zhang Yin-Chao; Liu Xiao-Qin; Tan Kun; Shao Shi-Sheng; Hu Huan-Ling; Zhang Gai-Xia; Lü Yong-Hui

    2004-01-01

    A new method is proposed to derive the size distribution of aerosols from simulated multiwavelength lidar extinction coefficients. The basis of this iteration is to consider the extinction efficiency factor of particles as a set of weighting functions covering the entire radius region of a distribution. The weighting functions are calculated exactly from Mie theory. This method extends the inversion region by subtracting some extinction coefficients. The radius range of the simulated size distribution is 0.1-10.0 μm and the inversion radius range is 0.1-2.0 μm, but the inverted size distributions are in good agreement with the simulated one.
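
    The abstract does not give the iteration formula, so the sketch below shows a generic Chahine/Twomey-style multiplicative relaxation for the discretized kernel equation g = K f, which is the usual shape of such inversions; the wavelength and radius grids and the smooth kernel standing in for the Mie-theory weighting functions are purely illustrative assumptions, not the authors' scheme.

```python
import numpy as np

def multiplicative_invert(K, g, f0, n_iter=200):
    """Generic multiplicative relaxation for g = K @ f with K, f, g >= 0.

    Each size bin is corrected by the kernel-weighted ratio of measured to
    predicted extinction, which keeps the retrieved distribution nonnegative.
    """
    f = f0.copy()
    col_sum = np.maximum(K.T @ np.ones(K.shape[0]), 1e-30)
    for _ in range(n_iter):
        g_hat = K @ f
        ratio = g / np.maximum(g_hat, 1e-30)
        f *= (K.T @ ratio) / col_sum
    return f

# Toy setup: 8 "wavelengths", 40 radius bins, smooth hypothetical kernel.
r = np.linspace(0.1, 2.0, 40)                      # radius grid (um)
wav = np.linspace(0.35, 1.1, 8)                    # wavelengths (um)
K = np.exp(-np.subtract.outer(wav * 2.0, r) ** 2)  # stand-in weighting functions
f_true = np.exp(-0.5 * ((r - 0.8) / 0.3) ** 2)     # assumed size distribution
g = K @ f_true                                     # simulated extinction data

f_rec = multiplicative_invert(K, g, np.ones_like(r))
print("relative residual:", np.linalg.norm(K @ f_rec - g) / np.linalg.norm(g))
```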

  3. Optimal reactive power and voltage control in distribution networks with distributed generators by fuzzy adaptive hybrid particle swarm optimisation method

    DEFF Research Database (Denmark)

    Chen, Shuheng; Hu, Weihao; Su, Chi

    2015-01-01

    A new and efficient methodology for optimal reactive power and voltage control of distribution networks with distributed generators based on fuzzy adaptive hybrid PSO (FAHPSO) is proposed. The objective is to minimize comprehensive cost, consisting of power loss and operation cost of transformers and capacitors, subject to constraints such as minimum and maximum reactive power limits of distributed generators, maximum deviation of bus voltages, and maximum allowable daily switching operation number (MADSON). Particle swarm optimization (PSO) is used to solve the corresponding mixed integer non-linear programming problem (MINLP), and the hybrid PSO method (HPSO), consisting of three PSO variants, is presented. In order to mitigate the local convergence problem, fuzzy adaptive inference is used to improve the searching process, and the final fuzzy adaptive inference based hybrid PSO is proposed. The proposed...

  4. Soft Sensing of Overflow Particle Size Distributions in Hydrocyclones Using a Combined Method

    Institute of Scientific and Technical Information of China (English)

    SUN Zhe; WANG Huangang; ZHANG Zengke

    2008-01-01

    Precise, real-time measurements of overflow particle size distributions in hydrocyclones are necessary for accurate control of the comminution circuits. Soft sensing measurements provide real-time, flexible, and low-cost measurements appropriate for the overflow particle size distributions in hydrocyclones. Three soft sensing methods were investigated for measuring the overflow particle size distributions in hydrocyclones. Simulations show that these methods have various advantages and disadvantages. Optimal Bayesian estimation fusion was then used to combine the three methods, with the fusion parameters determined according to the performance of each method on validation samples. The combined method compensates for the disadvantages of each method for more precise measurements. Simulations using real operating data show that the absolute root mean square measurement error of the combined method was always about 2% and the method provides the necessary accuracy for beneficiation plants.
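
    Under Gaussian assumptions, the optimal Bayesian fusion of independent, unbiased estimates reduces to inverse-variance weighting, which is a reasonable stand-in for the combination step described above; the sensor estimates and validation variances below are made up for illustration, and the paper's exact fusion rule may differ.

```python
import numpy as np

def fuse(estimates, variances):
    """Minimum-variance linear fusion of independent unbiased estimates:
    weights proportional to the inverse of each estimator's variance."""
    v = np.asarray(variances, dtype=float)
    w = (1.0 / v) / np.sum(1.0 / v)
    fused = np.dot(w, estimates)
    fused_var = 1.0 / np.sum(1.0 / v)   # always <= min(variances)
    return fused, fused_var, w

# Three soft sensors reporting % passing at a reference size, with
# variances estimated from validation samples (hypothetical numbers).
est = [72.1, 70.4, 71.3]
var = [1.5, 0.8, 2.4]
print(fuse(est, var))
```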

  5. Electron density distribution in Si and Ge using multipole, maximum entropy method and pair distribution function analysis

    Indian Academy of Sciences (India)

    R Saravanan; K S Syed Ali; S Israel

    2008-04-01

    The local, average and electronic structure of the semiconducting materials Si and Ge has been studied using multipole, maximum entropy method (MEM) and pair distribution function (PDF) analyses, using X-ray powder data. The covalent nature of bonding and the interaction between the atoms are clearly revealed by the two-dimensional MEM maps plotted on (1 0 0) and (1 1 0) planes and one-dimensional density along [1 0 0], [1 1 0] and [1 1 1] directions. The mid-bond electron densities between the atoms are 0.554 e/Å3 and 0.187 e/Å3 for Si and Ge respectively. In this work, the local structural information has also been obtained by analyzing the atomic pair distribution function. An attempt has been made in the present work to utilize the X-ray powder data sets to refine the structure and electron density distribution using the currently available versatile methods, MEM, multipole analysis and determination of pair distribution function for these two systems.

  6. The interval ordering problem

    CERN Document Server

    Dürr, Christoph; Spieksma, Frits C R; Nobibon, Fabrice Talla; Woeginger, Gerhard J

    2011-01-01

    For a given set of intervals on the real line, we consider the problem of ordering the intervals with the goal of minimizing an objective function that depends on the exposed interval pieces (that is, the pieces that are not covered by earlier intervals in the ordering). This problem is motivated by an application in molecular biology that concerns the determination of the structure of the backbone of a protein. We present polynomial-time algorithms for several natural special cases of the problem that cover the situation where the interval boundaries are agreeably ordered and the situation where the interval set is laminar. Also the bottleneck variant of the problem is shown to be solvable in polynomial time. Finally we prove that the general problem is NP-hard, and that the existence of a constant-factor-approximation algorithm is unlikely.
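
    The objective manipulated in this problem is easy to state concretely: under a given ordering, each interval contributes only the pieces not covered by earlier intervals. A minimal sketch that computes these exposed lengths (interval data invented for illustration):

```python
def exposed_lengths(intervals):
    """For an ordering of closed intervals [a, b], return the length of each
    interval that is not covered by intervals appearing earlier."""
    covered = []                 # disjoint, sorted pieces covered so far
    out = []
    for a, b in intervals:
        exposed = b - a
        for c, d in covered:     # subtract overlap with each disjoint piece
            lo, hi = max(a, c), min(b, d)
            if lo < hi:
                exposed -= hi - lo
        out.append(exposed)
        # merge the new interval into the disjoint covered pieces
        pieces = sorted(covered + [(a, b)])
        covered = []
        for c, d in pieces:
            if covered and c <= covered[-1][1]:
                covered[-1] = (covered[-1][0], max(covered[-1][1], d))
            else:
                covered.append((c, d))
    return out

order = [(0, 4), (2, 6), (5, 7)]
print(exposed_lengths(order))    # [4, 2, 1]
```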

  7. Increasing the Confidence in Student's $t$ Interval

    OpenAIRE

    Goutis, Constantinos; Casella, George

    1992-01-01

    The usual confidence interval, based on Student's $t$ distribution, has conditional confidence that is larger than the nominal confidence level. Although this fact is known, along with the fact that increased conditional confidence can be used to improve a confidence assertion, the confidence assertion of Student's $t$ interval has never been critically examined. We do so here, and construct a confidence estimator that allows uniformly higher confidence in the interval and is closer (than $1 ...
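
    For reference, the interval under discussion is the classical Student t interval, computed below with SciPy on an invented sample; the paper's conditional-confidence estimator is a refinement on top of this baseline and is not reproduced here.

```python
import numpy as np
from scipy import stats

def t_interval(x, level=0.95):
    """Classical two-sided Student t confidence interval for a mean."""
    x = np.asarray(x, dtype=float)
    n = x.size
    m = x.mean()
    se = x.std(ddof=1) / np.sqrt(n)
    t = stats.t.ppf(0.5 + level / 2.0, df=n - 1)
    return m - t * se, m + t * se

print(t_interval([4.9, 5.1, 5.6, 4.7, 5.3]))
```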

  9. RESUSPENSION METHOD FOR ROAD SURFACE DUST COLLECTION AND AERODYNAMIC SIZE DISTRIBUTION CHARACTERIZATION

    Institute of Scientific and Technical Information of China (English)

    Jianhua Chen; Hongfeng Zheng; Wei Wang; Hongjie Liu; Ling Lu; Linfa Bao; Lihong Ren

    2006-01-01

    Traffic-generated fugitive dust is a source of urban atmospheric particulate pollution in Beijing. This paper introduces the resuspension method, recommended by the US EPA in AP-42 documents, for collecting Beijing road-surface dust. Analysis shows a single-peak distribution in the number size distribution and a double-peak mode for mass size distribution of the road surface dust. The median diameter of the mass concentration distribution of the road dust on a high-grade road was higher than that on a low-grade road. The ratio of PM2.5 to PM10 was consistent with that obtained in a similar study for Hong Kong. For the two selected road samples, the average relative deviation of the size distribution was 10.9% and 11.9%. All results indicate that the method introduced in this paper can effectively determine the size distribution of fugitive dust from traffic.

  10. Methods to Determine Recommended Feeder-Wide Advanced Inverter Settings for Improving Distribution System Performance

    Energy Technology Data Exchange (ETDEWEB)

    Rylander, Matthew; Reno, Matthew J.; Quiroz, Jimmy E.; Ding, Fei; Li, Huijuan; Broderick, Robert J.; Mather, Barry; Smith, Jeff

    2016-11-21

    This paper describes methods that a distribution engineer could use to determine advanced inverter settings to improve distribution system performance. These settings are for fixed power factor, volt-var, and volt-watt functionality. Depending on the level of detail that is desired, different methods are proposed to determine single settings applicable for all advanced inverters on a feeder or unique settings for each individual inverter. Seven distinctly different utility distribution feeders are analyzed to simulate the potential benefit in terms of hosting capacity, system losses, and reactive power attained with each method to determine the advanced inverter settings.

  11. Nonlinear Fitting Method of Long-Term Distributions for Statistical Analysis of Extreme Negative Surge Elevations

    Institute of Scientific and Technical Information of China (English)

    DONG Sheng; LI Fengli; JIAO Guiying

    2003-01-01

    Hydrologic frequency analysis plays an important role in coastal and ocean engineering for structural design and disaster prevention in coastal areas. This paper proposes a Nonlinear Least Squares Method (NLSM), which estimates the three unknown parameters of the Weibull distribution simultaneously by an iteration method. Statistical tests show that the NLSM fits each data sample well. The effects of different parameter-fitting methods, distribution models, and threshold values are also discussed in the statistical analysis of storm set-down elevation. The best-fitting probability distribution is given and the corresponding return values are estimated for engineering design.
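
    A comparable (though not necessarily identical) way to estimate all three Weibull parameters simultaneously is nonlinear least squares between the model CDF and empirical plotting positions; the sketch below assumes Benard's plotting positions and synthetic data, so the starting values and bounds are our own choices.

```python
import numpy as np
from scipy.optimize import least_squares

def weibull_cdf(x, beta, eta, gamma):
    """Three-parameter Weibull CDF: shape beta, scale eta, location gamma."""
    z = np.clip((x - gamma) / eta, 0.0, None)
    return 1.0 - np.exp(-z ** beta)

def fit_weibull3(sample):
    """Fit shape, scale and location simultaneously by nonlinear least
    squares against median-rank (Benard) plotting positions."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    p = (np.arange(1, n + 1) - 0.3) / (n + 0.4)

    def resid(theta):
        beta, eta, gamma = theta
        return weibull_cdf(x, beta, eta, gamma) - p

    theta0 = [1.5, 1.2 * x.std(), 0.9 * x.min()]
    bounds = ([0.1, 1e-6, -np.inf], [20.0, np.inf, x.min()])
    return least_squares(resid, theta0, bounds=bounds).x

rng = np.random.default_rng(1)
data = 0.5 + 2.0 * rng.weibull(1.8, size=200)   # gamma=0.5, eta=2.0, beta=1.8
print(fit_weibull3(data))                        # recovers roughly (1.8, 2.0, 0.5)
```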

  12. Distributed anonymous data perturbation method for privacy-preserving data mining

    Institute of Scientific and Technical Information of China (English)

    Feng LI; Jin MA; Jian-hua LI

    2009-01-01

    Privacy is a critical requirement in distributed data mining. Cryptography-based secure multiparty computation is a main approach for privacy preserving. However, it shows poor performance in large scale distributed systems. Meanwhile, data perturbation techniques are comparatively efficient but are mainly used in centralized privacy-preserving data mining (PPDM). In this paper, we propose a light-weight anonymous data perturbation method for efficient privacy preserving in distributed data mining. We first define the privacy constraints for data perturbation based PPDM in a semi-honest distributed environment. Two protocols are proposed to address these constraints and protect data statistics and the randomization process against collusion attacks: the adaptive privacy-preserving summary protocol and the anonymous exchange protocol. Finally, a distributed data perturbation framework based on these protocols is proposed to realize distributed PPDM. Experimental results show that our approach achieves a high security level and is very efficient in a large scale distributed environment.

  13. Optimal Design of Stochastic Distributed Order Linear SISO Systems Using Hybrid Spectral Method

    OpenAIRE

    2015-01-01

    The distributed order concept, which is a parallel connection of fractional order integrals and derivatives taken to the infinitesimal limit in delta order, has been the main focus in many engineering areas recently. On the other hand, there are few numerical methods available for analyzing distributed order systems, particularly under stochastic forcing. This paper proposes a novel numerical scheme for analyzing the behavior of a distributed order linear single input single output control s...

  15. A Multi-level Fuzzy Evaluation Method for Smart Distribution Network Based on Entropy Weight

    Science.gov (United States)

    Li, Jianfang; Song, Xiaohui; Gao, Fei; Zhang, Yu

    2017-05-01

    Smart distribution networks are considered the future trend of distribution networks. In order to comprehensively evaluate the construction level of smart distribution networks and give guidance to the practice of smart distribution construction, a multi-level fuzzy evaluation method based on entropy weight is proposed. Firstly, focusing on both the conventional characteristics of distribution networks and new characteristics of smart distribution networks such as self-healing and interaction, a multi-level evaluation index system which contains power supply capability, power quality, economy, reliability and interaction is established. Then, a combination weighting method based on the Delphi method and the entropy weight method is put forward, which takes into account not only the importance of the evaluation indices in the experts' subjective view, but also the objective and differentiating information in the index values. Thirdly, a multi-level evaluation method based on fuzzy theory is put forward. Lastly, an example is conducted based on the statistical data of some cities' distribution networks, and the evaluation method is proved effective and rational.
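
    The entropy-weight ingredient is standard and can be shown concretely: each index's weight grows with the diversification (one minus entropy) of its normalized values across the evaluated networks. The index matrix below is hypothetical; a Delphi (subjective) weight vector could then be blended in, for example as a convex combination, which is one common way to implement the combination the abstract mentions.

```python
import numpy as np

def entropy_weights(X):
    """Objective criterion weights by the entropy weight method.

    X: m alternatives x n indices, all entries positive, larger = better.
    """
    m = X.shape[0]
    P = X / X.sum(axis=0)                      # share of each alternative
    plnp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plnp.sum(axis=0) / np.log(m)          # entropy per index, in [0, 1]
    d = 1.0 - e                                # degree of diversification
    return d / d.sum()

# Hypothetical scores of 3 networks on 3 indices
# (e.g. supply capability, power quality, interaction).
X = np.array([[0.82, 0.95, 0.70],
              [0.74, 0.90, 0.88],
              [0.91, 0.93, 0.60]])
w_entropy = entropy_weights(X)
w_delphi = np.array([0.5, 0.3, 0.2])           # assumed expert weights
print(0.5 * w_delphi + 0.5 * w_entropy)        # combined weights
```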

  16. A Method of Visualizing Three-Dimensional Distribution of Yeast in Bread Dough

    Science.gov (United States)

    Maeda, Tatsurou; Do, Gab-Soo; Sugiyama, Junichi; Oguchi, Kosei; Shiraga, Seizaburou; Ueda, Mitsuyoshi; Takeya, Koji; Endo, Shigeru

    A novel technique was developed to monitor the change in the three-dimensional (3D) distribution of yeast in frozen bread dough samples in accordance with the progress of the mixing process. Application of a surface engineering technology allowed the identification of yeast in bread dough by bonding EGFP (Enhanced Green Fluorescent Protein) to the surface of yeast cells. The fluorescent yeast (a biomarker) was recognized as bright spots at a wavelength of 520 nm. A Micro-Slicer Image Processing System (MSIPS) with a fluorescence microscope was utilized to acquire cross-sectional images of frozen dough samples sliced at intervals of 1 μm. A set of successive two-dimensional images was reconstructed to analyze the 3D distribution of yeast. Samples were taken from each of four normal mixing stages (i.e., pick up, clean up, development, and final stages) and also from the over-mixing stage. In the pick up stage, the yeast distribution was uneven, with local areas of dense yeast. As the mixing progressed from the clean up to the final stage, the yeast became more evenly distributed throughout the dough sample. However, the uniformity of the yeast distribution was lost in the over-mixing stage, possibly due to the breakdown of the gluten structure within the dough sample.

  17. A Study on the optimal reclosing method of the transmission and distribution line

    Energy Technology Data Exchange (ETDEWEB)

    Kim, I.D.; Han, K.N. [Korea Electric Power Research Institute, Taejeon (Korea, Republic of)

    1998-09-01

    We studied Auto-Reclosing (AR) schemes of the transmission and distribution line in various power systems. Major results of this study are: investigation of overseas AR schemes; analysis of existing AR schemes in Korea; an optimal reclosing method for the transmission and distribution line; and a study of the 765 kV AR scheme related to HSGS. (author). 142 refs., 113 figs., 37 tabs.

  18. A study of the up-and-down method for non-normal distribution functions

    DEFF Research Database (Denmark)

    Vibholm, Svend; Thyregod, Poul

    1988-01-01

    The assessment of breakdown probabilities is examined by the up-and-down method. The exact maximum-likelihood estimates for a number of response patterns are calculated for three different distribution functions and are compared with the estimates corresponding to the normal distribution. Estimat...

  19. A cable position sorting method for the balance of current distribution of parallel connected cables

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S.Y. [Northern Taiwan Inst. of Science and Technology, Taipei, Taiwan (China); Yu, C.S. [National Defence Univ., Taoyuan, Taiwan (China); Wang, S.C. [Lung Hwa Univ. of Science and Technology, Taoyuan, Taiwan (China); Chen, Y.L. [MingChi Univ. of Technology, Taipei, Taiwan (China)

    2006-07-01

    In order to meet the high ampacity requirement of a low voltage main feeder, single-core power cables are usually connected in parallel in Taiwan's industrial and commercial power distribution systems. However, parallel connected cables can be problematic due to unequal current distributions among cables of the same phase, causing excessive temperature rise in the heavier loading cables, thus reducing the life expectancy of cable insulation. One of the most effective and economical methods of balancing current distributions is a properly designed cable position arrangement. This paper proposed a cable position sorting method for the balance of current distribution of parallel connected cables. A current distribution calculation method was developed based on mutual inductance theorem and the numerical iteration technique. In order to implement the sorting algorithm, two current distribution indices were proposed for the power loss of all cables and for the largest cable current value. The index values of different cable arrangement patterns generated by a novel permutation reduction method were determined and sorted and 3 cable configurations were studied. Recommendations for the arrangement of cable positions, aiming for more balanced current distributions, were also presented. It was concluded that dividing the cables into subgroups, including only one cable per phase in a subgroup, and arranging the cables in symmetric form can achieve a very balanced current distribution. 5 refs., 14 tabs., 7 figs.

  20. Proceedings 8th International Workshop on Parallel and Distributed Methods in verifiCation

    CERN Document Server

    Brim, Lubos; 10.4204/EPTCS.14

    2009-01-01

    The 8th International Workshop on Parallel and Distributed Methods in verifiCation (PDMC 2009) took place on November 4, 2009 at the Eindhoven University of Technology, in conjunction with Formal Methods 2009 and other related events for the first time under the heading of Formal Methods Week. This volume contains the final workshop proceedings.

  1. Interval Scheduling: A Survey

    NARCIS (Netherlands)

    Kolen, A.W.J.; Lenstra, J.K.; Papadimitriou, C.H.; Spieksma, F.C.R.

    2007-01-01

    In interval scheduling, not only the processing times of the jobs but also their starting times are given. This article surveys the area of interval scheduling and presents proofs of results that have been known within the community for some time. We first review the complexity and approximability o

  2. Estimating duration intervals

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); B.L.K. Vroomen (Björn)

    2003-01-01

    Duration intervals measure the dynamic impact of advertising on sales. More precisely, the p per cent duration interval measures the time lag between the advertising impulse and the moment that p per cent of its effect has decayed. In this paper, we derive an expression for the duration

  3. Simultaneous Interval Graphs

    CERN Document Server

    Jampani, Krishnam Raju

    2010-01-01

    In a recent paper, we introduced the simultaneous representation problem (defined for any graph class C) and studied the problem for chordal, comparability and permutation graphs. For interval graphs, the problem is defined as follows. Two interval graphs G_1 and G_2, sharing some vertices I (and the corresponding induced edges), are said to be `simultaneous interval graphs' if there exist interval representations R_1 and R_2 of G_1 and G_2, such that any vertex of I is mapped to the same interval in both R_1 and R_2. Equivalently, G_1 and G_2 are simultaneous interval graphs if there exist edges E' between G_1-I and G_2-I such that G_1 \\cup G_2 \\cup E' is an interval graph. Simultaneous representation problems are related to simultaneous planar embeddings, and have applications in any situation where it is desirable to consistently represent two related graphs, for example: interval graphs capturing overlaps of DNA fragments of two similar organisms; or graphs connected in time, where one is an updated versi...

  4. Analytical method for reconstruction pin to pin of the nuclear power density distribution

    Energy Technology Data Exchange (ETDEWEB)

    Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S., E-mail: ppessoa@con.ufrj.br, E-mail: fernando@con.ufrj.br, E-mail: aquilino@imp.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil)

    2013-07-01

    An accurate and efficient method for pin-to-pin reconstruction of the nuclear power density distribution, involving the analytical solution of the two-dimensional diffusion equation for the neutron energy groups in homogeneous nodes, is presented. The boundary conditions used for the analytical solution are the four currents or fluxes on the surface of the node, which are obtained by the Nodal Expansion Method (known as NEM), and the four fluxes at the vertices of the node calculated using the finite difference method. The analytical solution found is the homogeneous distribution of the neutron flux. Detailed pin-to-pin distributions inside a fuel assembly are estimated by the product of the homogeneous flux distribution and a local heterogeneous form function. Furthermore, the form functions of flux and power are used. The results obtained with this method have good accuracy when compared with reference values. (author)

  5. Advanced airflow distribution methods for reduction of personal exposure to indoor pollutants

    DEFF Research Database (Denmark)

    Cao, Guangyu; Kosonen, Risto; Melikov, Arsen

    2016-01-01

    The main objective of this study is to recognize possible airflow distribution methods to protect the occupants from exposure to various indoor pollutants. The fact of the increasing exposure of occupants to various indoor pollutants shows that there is an urgent need to develop advanced airflow distribution methods to reduce indoor exposure to various indoor pollutants. This article presents some of the latest developments in advanced airflow distribution methods to reduce indoor exposure in various types of buildings.

  6. Distributed AC power flow method for AC and AC-DC hybrid ...

    African Journals Online (AJOL)

    DR OKE

    Hence the distribution power flow models to be developed were supposed to include both mesh and DG modeling (Sedghi ... Newton-Raphson based power flow methods, namely Newton ... Equation (1) is the mathematical realization of this ...

  7. An alternative method for modeling the size distribution of top wealth

    Science.gov (United States)

    Wang, Yuanjun; You, Shibing

    2016-09-01

    The Pareto distribution has been widely applied in modeling the distribution of wealth, as well as top incomes, cities and firms. However, recent evidence has shown that the Pareto distribution is not consistent with many situations in which it was previously considered applicable. We propose an alternative method for estimating the upper tail distribution of wealth and suggest a new Lorenz curve for building models to provide such estimates. Applying our new models to the Forbes World's Billionaire Lists, we show that they significantly outperform the Pareto Lorenz curve as well as some other popular alternatives.
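
    As a baseline for what this paper argues against, the maximum-likelihood (Hill) estimate of a Pareto tail exponent is a one-liner; the sketch below applies it to a synthetic "billionaire list" drawn from an exact Pareto law (the paper's alternative Lorenz-curve model is not reproduced here).

```python
import numpy as np

def hill_alpha(wealth, wmin):
    """Maximum-likelihood (Hill) estimate of the exponent alpha in the
    Pareto density p(w) ~ w**(-alpha) for observations w >= wmin."""
    tail = np.asarray([w for w in wealth if w >= wmin], dtype=float)
    return 1.0 + tail.size / np.sum(np.log(tail / wmin))

# Synthetic wealths (in $bn): inverse-CDF sample from Pareto with alpha = 2.3.
rng = np.random.default_rng(7)
wealth = 1.0 * (1.0 - rng.random(500)) ** (-1.0 / 1.3)
print(hill_alpha(wealth, wmin=1.0))   # should come out close to 2.3
```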

  8. A new Color Feature Extraction method Based on Dynamic Color Distribution Entropy of Neighbourhoods

    Directory of Open Access Journals (Sweden)

    Fatemeh Alamdar

    2011-09-01

    Full Text Available One of the important requirements in image retrieval, indexing, classification, clustering, etc. is extracting efficient features from images. The color feature is one of the most widely used visual features. Use of the color histogram is the most common way of representing the color feature. One disadvantage of the color histogram is that it does not take the spatial distribution of color into consideration. In this paper, a dynamic color distribution entropy of neighbourhoods method based on color distribution entropy is presented, which effectively describes the spatial information of colors. The image retrieval results, compared to improved color distribution entropy, show the acceptable efficiency of this approach.

  9. Control Method of Single-phase Inverter Based Grounding System in Distribution Networks

    DEFF Research Database (Denmark)

    Wang, Wen; Yan, L.; Zeng, X.

    2016-01-01

    The asymmetry of the inherent distributed capacitances causes the rise of neutral-to-ground voltage in ungrounded systems or high resistance grounded systems. Overvoltage may occur in resonant grounded systems if the Petersen coil is resonant with the distributed capacitances. Thus, the restraint of neutral-to-ground voltage ... of the control method is presented in detail. Experimental results prove the effectiveness and novelty of the proposed grounding system and control method.

  10. General analytical method to determine the residential building insolation interval by using the stick sunlight shadow chart

    Institute of Scientific and Technical Information of China (English)

    黄农; 姚金宝; 瞿伟

    2001-01-01

    Residential building insolation interval analysis is an important job in the field of urban residential district planning and design. The insolation interval of buildings facing due south can be obtained from the local standard insolation interval coefficient for south-facing housing. But for buildings to be built in old city zones or densely built-up areas, or facing directions other than due south, the insolation interval is difficult to determine. This paper proposes a general analytical method to determine the residential building insolation interval by using the stick sunlight shadow chart. The method is proved to be reliable and practical.

  11. A Simulation Study of Lightning Surge Characteristics of a Distribution Line Using the FDTD Method

    Science.gov (United States)

    Matsuura, Susumu; Tatematsu, Akiyoshi; Noda, Taku; Yokoyama, Shigeru

    Recently, numerical electromagnetic field computation methods, which solve Maxwell's equations, have become a practical choice for the lightning surge analysis of power systems. Studies of the lightning surge response of a transmission tower and of lightning-induced voltages on a distribution line have already been carried out using numerical electromagnetic field computation methods. However, a direct lightning stroke to a distribution line has not yet been studied. The authors have previously carried out pulse tests using a reduced-scale distribution line model which simulates a direct lightning stroke to a distribution line. In this paper, first, the pulse test results are simulated using the FDTD (Finite Difference Time Domain) method, one of the numerical electromagnetic field computation methods, and comparisons are shown to validate the application of the FDTD method. Secondly, we present lightning surge characteristics of an actual-scale distribution line obtained by the FDTD method. The FDTD simulation takes into account the following items: (i) the detailed structure of the distribution line; (ii) the resistivity of the ground soil; (iii) the propagation speed of the lightning return stroke.

  12. Estimating the Confidence Interval of Composite Reliability of a Multidimensional Test with the Delta Method

    Institute of Scientific and Technical Information of China (English)

    叶宝娟; 温忠麟

    2012-01-01

    Reliability is very important in evaluating the quality of a test. Based on confirmatory factor analysis, composite reliability is a good index to estimate the test reliability for general applications. As is well known, a point estimate contains limited information about a population parameter and cannot indicate how far it can be from the population parameter. The confidence interval of the parameter can provide more information. In evaluating the quality of a test, the confidence interval of composite reliability has received attention in recent years. There are three approaches to estimating the confidence interval of composite reliability of a unidimensional test: the Bootstrap method, the Delta method, and the direct use of the standard error of a software output (e.g., LISREL). The Bootstrap method provides empirical results of the standard error, and is the most credible method. But it needs data simulation techniques, and its computation process is rather complex. The Delta method computes the standard error of composite reliability by approximate calculation. It is simpler than the Bootstrap method. The LISREL software can directly prompt the standard error, and it is the easiest among the three methods. By simulation study, it had been found that the interval estimates obtained by the Delta method and the Bootstrap method were almost identical, whereas the results obtained by LISREL and by the Bootstrap method were substantially different (Ye & Wen, 2011). The Delta method is recommended when the confidence interval of composite reliability of a unidimensional test is estimated, because the Delta method is simpler than the Bootstrap method. There was little research about how to compute the confidence interval of composite reliability of a multidimensional test. We deduced a formula by using the Delta method for computing the standard error of composite reliability of a multidimensional test. Based on the standard error, the
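
    The record breaks off before the deduced formula, but the generic delta-method recipe it builds on can be shown safely: propagate the covariance of the parameter estimates through the gradient of the reliability function. The sketch below uses the unidimensional composite reliability, (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances), with hypothetical estimates and a hypothetical covariance matrix; the paper's multidimensional formula differs.

```python
import numpy as np

def delta_se(f, theta, V, eps=1e-6):
    """First-order (delta method) standard error of f(theta), given the
    asymptotic covariance matrix V of the parameter estimates."""
    theta = np.asarray(theta, dtype=float)
    g = np.empty_like(theta)
    for i in range(theta.size):           # central-difference gradient
        tp, tm = theta.copy(), theta.copy()
        tp[i] += eps
        tm[i] -= eps
        g[i] = (f(tp) - f(tm)) / (2.0 * eps)
    return float(np.sqrt(g @ V @ g))

def omega(theta):
    """Composite reliability: three loadings followed by three error variances."""
    lam, psi = theta[:3], theta[3:]
    s = lam.sum()
    return s * s / (s * s + psi.sum())

theta = np.array([0.7, 0.6, 0.8, 0.51, 0.64, 0.36])      # hypothetical estimates
V = np.diag([0.004, 0.004, 0.004, 0.002, 0.002, 0.002])  # hypothetical covariance
est = omega(theta)
se = delta_se(omega, theta, V)
print(est, (est - 1.96 * se, est + 1.96 * se))           # 95% confidence interval
```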

  13. Transformation of an empirical distribution to normal distribution by the use of Johnson system of translation and symmetrical quantile method

    OpenAIRE

    Ludvík Friebel; Jana Friebelová

    2006-01-01

    This article deals with the approximation of an empirical distribution to the standard normal distribution using the Johnson transformation. This transformation enables us to approximate a wide spectrum of continuous distributions with a normal distribution. The estimation of the parameters of the transformation formulas is based on percentiles of the empirical distribution. Theoretical probability distribution functions are derived for the random variable obtained on the basis of the backward transformation of the standard normal ...

  14. [Spatial distribution pattern of Chilo suppressalis analyzed by classical method and geostatistics].

    Science.gov (United States)

    Yuan, Zheming; Fu, Wei; Li, Fangyi

    2004-04-01

    Two original samples of Chilo suppressalis and their grid, random and sequence samples were analyzed by the classical method and geostatistics to characterize the spatial distribution pattern of C. suppressalis. The limitations of spatial distribution analysis with the classical method, especially its sensitivity to the original position of the grid, were summarized rather completely. On the contrary, geostatistics characterized well the spatial distribution pattern, aggregation intensity and spatial heterogeneity of C. suppressalis. According to geostatistics, the population followed a Poisson distribution at low density. As for the higher density population, its distribution was aggregative, and the aggregation intensity and dependence range were 0.1056 and 193 cm, respectively. Spatial heterogeneity was also found in the higher density population. Its spatial correlation in the line direction was closer than that in the row direction, and the dependence ranges in the line and row directions were 115 and 264 cm, respectively.
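
    The geostatistical quantities mentioned here (dependence range, direction-dependent spatial correlation) are read off an empirical semivariogram. A minimal sketch, with a hypothetical sampling grid and Poisson counts standing in for larva counts; for spatially structured data the curve rises with lag distance and levels off at a sill, and the lag where it levels off is the dependence range.

```python
import numpy as np

def semivariogram(coords, values, lags, tol):
    """Empirical semivariance: half the mean squared difference over all
    point pairs whose separation distance falls in each lag bin."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)      # each pair counted once
    d, sq = d[iu], sq[iu]
    return np.array([sq[(d > h - tol) & (d <= h + tol)].mean() / 2.0
                     for h in lags])

# Hypothetical 10 x 10 grid with 50 cm spacing and Poisson counts.
rng = np.random.default_rng(3)
xy = np.array([(i * 50.0, j * 50.0) for i in range(10) for j in range(10)])
z = rng.poisson(4, size=100).astype(float)
print(semivariogram(xy, z, lags=[50, 100, 150, 200], tol=25))
```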

  15. Confidence Intervals from One Observation

    CERN Document Server

    Rodriguez, Carlos C

    2008-01-01

    Robert Machol's surprising result, that from a single observation it is possible to have finite length confidence intervals for the parameters of location-scale models, is reproduced and extended. Two previously unpublished modifications are included. First, Herbert Robbins' nonparametric confidence interval is obtained. Second, I introduce a technique for obtaining confidence intervals for the scale parameter of finite length in the logarithmic metric. Keywords: Theory/Foundations, Estimation, Prior Distributions, Non-parametrics & Semi-parametrics, Geometry of Inference, Confidence Intervals, Location-Scale models

  16. An Application of the MP Method for Solving the Problem of Distribution

    Directory of Open Access Journals (Sweden)

    Tunjo Perić

    2015-02-01

    Full Text Available In this paper, we present an application of a method for solving the multi-objective programming problem (the MP method), which was introduced in [1]. This method is used to solve the problem of distribution (the problem of cost/profit allocation). The method is based on the principles of cooperative games and linear programming. In the paper, we consider the standard case (proportional distribution) and the generalized case, in which the basic ideas of coalitions have been incorporated. The presented theory is applied and explained on an investment model for economic recovery.

  17. Coupled double-distribution-function lattice Boltzmann method for the compressible Navier-Stokes equations.

    Science.gov (United States)

    Li, Q; He, Y L; Wang, Y; Tao, W Q

    2007-11-01

    A coupled double-distribution-function lattice Boltzmann method is developed for the compressible Navier-Stokes equations. Different from existing thermal lattice Boltzmann methods, this method can recover the compressible Navier-Stokes equations with a flexible specific-heat ratio and Prandtl number. In the method, a density distribution function based on a multispeed lattice is used to recover the compressible continuity and momentum equations, while the compressible energy equation is recovered by an energy distribution function. The energy distribution function is then coupled to the density distribution function via the thermal equation of state. In order to obtain an adjustable specific-heat ratio, a constant related to the specific-heat ratio is introduced into the equilibrium energy distribution function. Two different coupled double-distribution-function lattice Boltzmann models are also proposed in the paper. Numerical simulations are performed for the Riemann problem, the double-Mach-reflection problem, and the Couette flow with a range of specific-heat ratios and Prandtl numbers. The numerical results are found to be in excellent agreement with analytical and/or other solutions.

  18. Canonical-Dissipative Nonequilibrium Energy Distributions: Parameter Estimation via Implicit Moment Method, Implementation and Application

    Science.gov (United States)

    Frank, T. D.; Kim, S.; Dotov, D. G.

    2013-11-01

    Canonical-dissipative nonequilibrium energy distributions play an important role in the life sciences. In one of the most fundamental forms, such energy distributions correspond to two-parametric normal distributions truncated to the left. We present an implicit moment method involving the first and second energy moments to estimate the distribution parameters. It is shown that the method is consistent with Cohen's 1949 formula. The implementation of the algorithm is discussed and the range of admissible parameter values is identified. In addition, an application to an earlier study on human oscillatory hand movements is presented. In this earlier study, energy was conceptualized as the energy of a Hamiltonian oscillator model. The canonical-dissipative approach allows for studying the systematic change of the model parameters with oscillation frequency. It is shown that the results obtained with the implicit moment method are consistent with those derived in the earlier study by other means.
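
    A minimal sketch of an implicit moment fit for a left-truncated normal, assuming truncation at zero (the paper's parameterization and its Cohen-consistent formulas may differ): solve the two moment-matching equations numerically with SciPy's truncated-normal moments.

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import truncnorm

def fit_left_truncated_normal(sample_mean, sample_var):
    """Implicit moment method: find (mu, sigma) of a normal truncated to
    [0, inf) whose first two moments match the sample moments."""
    def eqs(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)            # keeps sigma positive
        a = (0.0 - mu) / sigma               # standardized truncation point
        m, v = truncnorm.stats(a, np.inf, loc=mu, scale=sigma, moments="mv")
        return [m - sample_mean, v - sample_var]
    mu0, s0 = sample_mean, np.sqrt(sample_var)
    mu, log_sigma = fsolve(eqs, [mu0, np.log(s0)])
    return mu, np.exp(log_sigma)

# Check on synthetic "energies": N(1.0, 0.8^2) truncated to nonnegative values.
a0 = (0.0 - 1.0) / 0.8
m, v = truncnorm.stats(a0, np.inf, loc=1.0, scale=0.8, moments="mv")
print(fit_left_truncated_normal(float(m), float(v)))   # ~ (1.0, 0.8)
```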

  19. Optimal Design of Stochastic Distributed Order Linear SISO Systems Using Hybrid Spectral Method

    Directory of Open Access Journals (Sweden)

    Pham Luu Trung Duong

    2015-01-01

    Full Text Available The distributed order concept, which is a parallel connection of fractional order integrals and derivatives taken to the infinitesimal limit in delta order, has been the main focus in many engineering areas recently. On the other hand, there are few numerical methods available for analyzing distributed order systems, particularly under stochastic forcing. This paper proposes a novel numerical scheme for analyzing the behavior of a distributed order linear single input single output control system under random forcing. The method is based on the operational matrix technique to handle stochastic distributed order systems. The existing Monte Carlo, polynomial chaos, and frequency methods were first adapted to the stochastic distributed order system for comparison. Numerical examples were used to illustrate the accuracy and computational efficiency of the proposed method for the analysis of stochastic distributed order systems. The stability of the systems under stochastic perturbations can also be inferred easily from the moment of random output obtained using the proposed method. Based on the hybrid spectral framework, the optimal design was elaborated on by minimizing the suitably defined constrained-optimization problem.

  20. NEW METHOD TO ESTIMATE SCALING OF POWER-LAW DEGREE DISTRIBUTION AND HIERARCHICAL NETWORKS

    Institute of Scientific and Technical Information of China (English)

    YANG Bo; DUAN Wen-qi; CHEN Zhong

    2006-01-01

    A new method and corresponding numerical procedure are introduced to estimate the scaling exponents of the power-law degree distribution and the hierarchical clustering function for complex networks. This method can overcome the biased and inaccurate faults of the graphical linear fitting methods commonly used in current network research. Furthermore, it is verified to have higher goodness-of-fit than the graphical methods by comparing the KS (Kolmogorov-Smirnov) test statistics for 10 CNN (Connecting Nearest-Neighbor) networks.
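
    The record does not spell out the estimator, but the standard alternative to graphical linear fitting is the continuous power-law maximum-likelihood estimate together with a Kolmogorov-Smirnov distance as the goodness-of-fit measure; a sketch in that spirit, on synthetic degree data (the paper's specific procedure may differ):

```python
import numpy as np

def fit_power_law(x, xmin):
    """MLE of alpha in p(x) ~ x**(-alpha) for x >= xmin, plus the KS
    distance between the fitted and empirical tail CDFs."""
    tail = np.sort(np.asarray([v for v in x if v >= xmin], dtype=float))
    n = tail.size
    alpha = 1.0 + n / np.sum(np.log(tail / xmin))
    cdf_fit = 1.0 - (xmin / tail) ** (alpha - 1.0)
    cdf_emp = np.arange(1, n + 1) / n
    return alpha, np.max(np.abs(cdf_fit - cdf_emp))

# Synthetic "degrees" from an exact power law with alpha = 2.5.
rng = np.random.default_rng(0)
degrees = (1.0 - rng.random(2000)) ** (-1.0 / 1.5)
print(fit_power_law(degrees, xmin=1.0))
```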

  1. Regaining confidence in confidence intervals for the mean treatment effect.

    Science.gov (United States)

    O'Gorman, Thomas W

    2014-09-28

    In many experiments, it is necessary to evaluate the effectiveness of a treatment by comparing the responses of two groups of subjects. This evaluation is often performed by using a confidence interval for the difference between the population means. To compute the limits of this confidence interval, researchers usually use the pooled t formulas, which are derived by assuming normally distributed errors. When the normality assumption does not seem reasonable, the researcher may have little confidence in the confidence interval because the actual one-sided coverage probability may not be close to the nominal coverage probability. This problem can be avoided by using the Robbins-Monro iterative search method to calculate the limits. One problem with this iterative procedure is that it is not clear when the procedure produces a sufficiently accurate estimate of a limit. In this paper, we describe a multiple search method that allows the user to specify the accuracy of the limits. We also give guidance concerning the number of iterations that would typically be needed to achieve a specified accuracy. This multiple iterative search method will produce limits for one-sided and two-sided confidence intervals that maintain their coverage probabilities with non-normal distributions.
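
    The Robbins-Monro idea can be shown on a toy version of the problem: stochastic approximation of a quantile when each iteration only reveals whether the current limit is too high or too low. The sketch below finds a bootstrap upper confidence limit this way; the starting point and step constant are tuning assumptions, and O'Gorman's multiple-search refinement with its accuracy guarantee is not reproduced.

```python
import numpy as np

def rm_quantile(draw, p, x0, n_iter=20000, c=1.0):
    """Robbins-Monro search for the p-quantile of the distribution sampled
    by draw(): x <- x + (c/k) * (p - 1{draw() <= x})."""
    x = x0
    for k in range(1, n_iter + 1):
        x += (c / k) * (p - (draw() <= x))
    return x

rng = np.random.default_rng(42)
data = rng.lognormal(0.0, 0.75, size=30)      # a skewed, non-normal sample

def bootstrap_mean():
    return rng.choice(data, size=data.size, replace=True).mean()

# Approximate 95% upper confidence limit for the mean, as the 0.95
# quantile of the bootstrap distribution of the sample mean.
print(rm_quantile(bootstrap_mean, 0.95, x0=data.mean()))
```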

  2. A new method for optimum dose distribution determination taking tumour mobility into account

    Science.gov (United States)

    Stavrev, P. V.; Stavreva, N. A.; Round, W. H.

    1996-09-01

    A method for determining the optimum dose distribution in the planning target volume is proposed when target volumes are deliberately enlarged to account for tumour mobility in external beam radiotherapy. The optimum dose distribution is a dose distribution that will result in an acceptable level of tumour control probability (TCP) in most of the arising cases of tumour dislocation. An assumption is made that the possible shifts of the tumour are subject to a Gaussian distribution with mean zero and known variance. The idea of a reduced (mean in ensemble) tumour cell density is introduced. On this basis, the target volume and dose distribution in it are determined. The tumour control probability as a function of the shift of the tumour has been calculated. The Monte Carlo method has been used to simulate TCP distributions corresponding to tumour mobility characterized by different variances. The obtained TCP distributions are independent of the variance of the mobility because the dose distribution in the planning target volume is prescribed so that the mobility variance is taken into account. For simplicity a one-dimensional model is used but three-dimensional generalization can be done.

  3. Synthesis of Water Utilization System Using Concentration Interval Analysis Method (II): Discontinuous Process

    Institute of Scientific and Technical Information of China (English)

    刘永健; 袁希钢; 罗神青

    2007-01-01

    The first part of this series proposed a systematic method for the synthesis of continuous water-using systems involving both non-mass-transfer-based and mass-transfer-based operations. This article, by extending the method, proposes a time-dependent concentration interval analysis (CIA) method to solve the problems associated with the synthesis of discontinuous or batch water-using systems involving both non-mass-transfer-based and mass-transfer-based operations. This method can effectively identify the possibility of water reuse and the amount of water reused under time constraints for minimizing the consumption of freshwater in single or repeated batch/discontinuous water-using systems. Moreover, on the basis of the heuristic method adapted from the concentration interval analysis method for continuous process network design, the network design for the discontinuous or batch process can be obtained through the designs for every time interval. A case study illustrates that the method presented in this article can simultaneously minimize the freshwater consumption in single or repeated batch/discontinuous water systems and can determine a preferable storage tank capacity for some problems.

  4. Study on the distribution and reference interval of serum bilirubin in a physical examination population

    Institute of Scientific and Technical Information of China (English)

    刘安楠; 朱玲

    2013-01-01

    Objective To select healthy reference individuals and test serum total bilirubin (TBIL) and direct bilirubin (DBIL) in order to provide a reference for establishing an appropriate reference interval for this region. Methods From October to December 2009, 314 subjects from the physical examination population of Beijing Hospital were selected by questionnaire and laboratory tests, excluding liver and gallbladder diseases and metabolic diseases. Two different detection systems, using Roche and Prodia reagents, were applied to test TBIL and DBIL. Reference intervals were calculated by sex group and compared with the existing reference interval. Results The level of TBIL differed significantly between genders (P < 0.01) and between the two kinds of reagents (P < 0.01). Using the Roche reagent, the reference interval of TBIL was 7.1-27.2 μmol/L for men and 4.8-20.9 μmol/L for women, and the reference interval of DBIL was 1.4-6.8 μmol/L for men and 0.9-5.7 μmol/L for women. Using the Prodia reagent, the reference interval of TBIL was 9.5-35.7 μmol/L for men and 6.8-28.9 μmol/L for women, and the reference interval of DBIL was 1.3-7.0 μmol/L for men and 1.0-6.6 μmol/L for women. Conclusions The levels of TBIL and DBIL in the physical examination population were higher than the existing reference interval. It is necessary to modify the existing reference interval and establish reasonable reference intervals for different regions and genders.

  5. Distributed Generation Islanding Effect on Distribution Networks and End User Loads Using the Load Sharing Islanding Method

    Directory of Open Access Journals (Sweden)

    Maen Z. Kreishan

    2016-11-01

    Full Text Available In this paper a realistic medium voltage (MV) network with four different distributed generation technologies (diesel, gas, hydro and wind) along with their excitation and governor control systems is modelled and simulated. Moreover, an exponential model was used to represent the loads in the network. The dynamic and steady state behavior of the four distributed generation technologies was investigated during grid-connected operation and two transition modes to the islanding situation, planned and unplanned. This study aims to address the feasibility of planned islanding operation and to investigate the effect of unplanned islanding. The load sharing islanding method has been used for controlling the distributed generation units during grid-connected and islanding operation. The simulation results were validated through various case studies and have shown that properly planned islanding transitions could provide support to critical loads in the event of utility outages. However, a reliable protection scheme would be required to mitigate the adverse effects of unplanned islanding, as all unplanned sub-cases returned severe negative results.

  6. Environmental DNA method for estimating salamander distribution in headwater streams, and a comparison of water sampling methods.

    Science.gov (United States)

    Katano, Izumi; Harada, Ken; Doi, Hideyuki; Souma, Rio; Minamoto, Toshifumi

    2017-01-01

    Environmental DNA (eDNA) has recently been used for detecting the distribution of macroorganisms in various aquatic habitats. In this study, we applied an eDNA method to estimate the distribution of the Japanese clawed salamander, Onychodactylus japonicus, in headwater streams. Additionally, we compared the detection of eDNA and hand-capturing methods used for determining the distribution of O. japonicus. For eDNA detection, we designed a qPCR primer/probe set for O. japonicus using the 12S rRNA region. We detected the eDNA of O. japonicus at all sites (with the exception of one), where we also observed them by hand-capturing. Additionally, we detected eDNA at two sites where we were unable to observe individuals using the hand-capturing method. Moreover, we found that eDNA concentrations and detection rates of the two water sampling areas (stream surface and under stones) were not significantly different, although the eDNA concentration in the water under stones was more varied than that on the surface. We, therefore, conclude that eDNA methods could be used to determine the distribution of macroorganisms inhabiting headwater systems by using samples collected from the surface of the water.

  7. Verifying the distributed temperature sensing Bowen ratio method for measuring evaporation

    Science.gov (United States)

    Schilperoort, Bart; Coenders-Gerrits, Miriam; Luxemburg, Willem; Cisneros Vaca, César; Ucer, Murat

    2016-04-01

    Evaporation is an important process in the hydrological cycle, therefore measuring evaporation accurately is essential for water resource management, hydrological management and climate change models. Current techniques to measure evaporation, like eddy covariance systems, scintillometers, or lysimeters, have their limitations and therefore cannot always be used to estimate evaporation correctly. Also the conventional Bowen ratio surface energy balance method has as drawback that two sensors are used, which results in large measuring errors. In Euser et al. (2014) a new method was introduced, the DTS-based Bowen ratio (BR-DTS), that overcomes this drawback. It uses a distributed temperature sensing technique (DTS) whereby a fibre optic cable is placed vertically, going up and down along a measurement tower. One stretch of the cable is dry, the other wrapped with cloth and kept wet, akin to a psychrometer. Using this, the wet and dry bulb temperatures are determined every 12.5 cm over the height, from which the Bowen ratio can be determined. As radiation and wind have an effect on the cooling and heating of the cable's sheath as well, the DTS cables do not necessarily always measure dry and wet bulb temperature of the air accurately. In this study the accuracy in representing the dry and wet bulb temperatures of the cable are verified, and evaporation observations of the BR-DTS method are compared to Eddy Covariance (EC) measurements. Two ways to correct for errors due to wind and solar radiation warming up the DTS cables are presented: one for the dry cable and one for the wet cable. The measurements were carried out in a pine forest near Garderen (The Netherlands), along a 46-meter tall scaffold tower (15 meters above the canopy). Both the wet (Twet) and dry (Tdry) temperature of the DTS cable were compared to temperature and humidity (from which Twet is derived) observations from sensors placed along the height of the tower. Underneath the canopy, where there was

  8. Sensitivity analysis of soil parameters based on interval

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Interval analysis is a new uncertainty analysis method for engineering structures. In this paper, a new sensitivity analysis method is presented by introducing interval analysis, which can expand the applications of the interval analysis method. The interval analysis process of the sensitivity factor matrix of soil parameters is given. A method of parameter intervals and decision-making target intervals is given according to the interval analysis method. With FEM, secondary developments are done for Marc and the Duncan-Chang nonlinear elastic model. Mutual transfer between FORTRAN and Marc is implemented. With practical examples, rationality and feasibility are validated. Comparison is made with some published results.

  9. An Optimization-Based Approach to Calculate Confidence Interval on Mean Value with Interval Data

    Directory of Open Access Journals (Sweden)

    Kais Zaman

    2014-01-01

    Full Text Available In this paper, we propose a methodology for the construction of confidence intervals on mean values with interval data for input variables in uncertainty analysis and design optimization problems. The construction of a confidence interval with interval data is known to be a combinatorial optimization problem. Finding confidence bounds on the mean with interval data has generally been considered NP-hard, because it includes a search among the combinations of multiple values of the variables, including interval endpoints. In this paper, we present efficient algorithms based on continuous optimization to find the confidence interval on mean values with interval data. With numerical experimentation, we show that the proposed confidence bound algorithms scale polynomially with an increasing number of intervals. Several sets of interval data with different numbers of intervals and types of overlap are presented to demonstrate the proposed methods. In contrast to the current practice for design optimization with interval data, which typically implements the constraints on interval variables through the computation of bounds on mean values from the sampled data, the proposed construction of confidence intervals enables a more complete implementation of design optimization under interval uncertainty.
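
    The combinatorial structure described above can be seen in a small worked example. The sketch below (illustrative only, not the authors' algorithm) computes the exact range of the sample mean from interval endpoints and encloses a 95% t-interval by brute-force enumeration of endpoint corners; the corner search is valid here because the lower t-limit is concave, and the upper t-limit convex, in the data vector, while the 2^n loop shows why the problem grows combinatorially and why continuous-optimization algorithms are attractive.

```python
import numpy as np
from scipy import stats

# Hypothetical interval data: each observation is known only as [lo, hi].
lo = np.array([2.1, 3.4, 1.8, 4.0, 2.9])
hi = np.array([2.7, 3.9, 2.5, 4.8, 3.3])

# The sample mean is monotone in each observation, so its exact range is:
mean_bounds = (lo.mean(), hi.mean())

# Enclosing 95% t-interval: the sample std is convex in the data vector,
# so the extreme CI limits occur at interval endpoints ("corners"), of
# which there are 2^n -- the combinatorial core of the problem.
alpha, n = 0.05, len(lo)
tcrit = stats.t.ppf(1 - alpha / 2, n - 1)
ci_lo, ci_hi = np.inf, -np.inf
for corner in range(2 ** n):
    x = np.where([(corner >> i) & 1 for i in range(n)], hi, lo)
    half = tcrit * x.std(ddof=1) / np.sqrt(n)
    ci_lo = min(ci_lo, x.mean() - half)
    ci_hi = max(ci_hi, x.mean() + half)

print("range of the sample mean:", mean_bounds)
print("enclosing 95% CI on the mean:", (ci_lo, ci_hi))
```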

  10. Uniform distribution and quasi-Monte Carlo methods discrepancy, integration and applications

    CERN Document Server

    Kritzer, Peter; Pillichshammer, Friedrich; Winterhof, Arne

    2014-01-01

    The survey articles in this book focus on number theoretic point constructions, uniform distribution theory, and quasi-Monte Carlo methods. As deterministic versions of the Monte Carlo method, quasi-Monte Carlo rules enjoy increasing popularity, with many fruitful applications in mathematical practice, as for example in finance, computer graphics, and biology.
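
    As a concrete illustration of why low-discrepancy constructions pay off, the sketch below (not from the book) compares plain Monte Carlo with a 2-D Halton sequence on a toy integral whose exact value is 1/4; at this sample size the QMC error is typically one to two orders of magnitude smaller.

```python
import numpy as np

def van_der_corput(n, base):
    """First n points of the radical-inverse sequence in the given base."""
    seq = np.zeros(n)
    for i in range(n):
        f, k, x = 1.0, i + 1, 0.0
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i] = x
    return seq

def halton(n, bases=(2, 3)):
    """2-D Halton points: one van der Corput sequence per coordinate."""
    return np.column_stack([van_der_corput(n, b) for b in bases])

# Estimate the integral of x*y over the unit square; exact value is 0.25.
f = lambda p: p[:, 0] * p[:, 1]
n = 4096
rng = np.random.default_rng(0)
mc  = f(rng.random((n, 2))).mean()        # plain Monte Carlo
qmc = f(halton(n)).mean()                 # quasi-Monte Carlo
print(f"MC error: {abs(mc - 0.25):.2e}, QMC error: {abs(qmc - 0.25):.2e}")
```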

  11. Calculation Method for Reliability of Agricultural Distribution Power Networks while Applying Functions of Boolean Algebra

    Directory of Open Access Journals (Sweden)

    V. Rusan

    2012-01-01

    Full Text Available The paper considers calculation methods for the reliability of agricultural distribution power networks using Boolean algebra functions and an analytical method. The reliability of 10 kV overhead line circuits with automatic sectionalizing points and automatic standby activation has been investigated in the paper.
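
    For independent components, the Boolean-algebra approach reduces to evaluating the network's structure function. A minimal sketch (with assumed availability figures, not the paper's data) for a main supply path backed up by an automatically switched standby tie:

```python
# Reliability of a feeder section from its Boolean structure function,
# assuming independent components. All availabilities are illustrative.

def series(*ps):    # all components must work
    r = 1.0
    for p in ps:
        r *= p
    return r

def parallel(*ps):  # at least one path must work
    q = 1.0
    for p in ps:
        q *= (1.0 - p)
    return 1.0 - q

# Main line in series with a transformer, backed up by a standby tie
# switched in by an automatic transfer switch:
p_line, p_tx, p_tie, p_ats = 0.995, 0.998, 0.990, 0.97
main_path   = series(p_line, p_tx)
backup_path = series(p_tie, p_ats)
print("availability:", parallel(main_path, backup_path))
```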

  12. Spectral distribution method for neutrinoless double beta decay: Results for $^{82}$Se and $^{76}$Ge

    CERN Document Server

    Kota, V K B

    2016-01-01

    A statistical spectral distribution method based on the shell model and random matrix theory is developed for calculating neutrinoless double beta decay nuclear transition matrix elements. The first results, obtained for $^{82}$Se and $^{76}$Ge using the spectral method, are close to the available shell model results.

  13. Why liquid displacement methods are sometimes wrong in estimating the pore-size distribution

    NARCIS (Netherlands)

    Gijsbertsen-Abrahamse, A.J.; Boom, R.M.; Padt, van der A.

    2004-01-01

    The liquid displacement method is a commonly used method to determine the pore size distribution of micro- and ultrafiltration membranes. One of the assumptions for the calculation of the pore sizes is that the pores are parallel and thus are not interconnected. To show that the estimated pore size

  14. Distribution of invasive ants and methods for their control in Hawai'i Volcanoes National Park

    Science.gov (United States)

    Peck, Robert W.; Banko, Paul C.; Snook, Kirsten; Euaparadorn, Melody

    2013-01-01

    , upper boundaries of big-headed ants coincided with lower boundaries of Argentine ants. Significantly, Wasmannia auropunctata (little fire ant) was not detected during our surveys. Formicidal baits tested for controlling Argentine ants included Xstinguish™ (containing fipronil at 0.01%), Maxforce® (hydramethylnon 1.0%), and Australian Distance® (pyriproxyfen 0.5%). Each bait was distributed evenly over four 2500 m² replicate plots. Applications were repeated approximately four weeks after the initial treatment. Plots were subdivided into 25 subplots and ants monitored within each subplot using paper cards containing tuna bait at approximately one-week intervals for about 14 weeks. All treatments reduced ant numbers, but none eradicated ants on any of the plots. Xstinguish™ produced a strong and lasting effect, depressing ant abundance below 1% of control plot levels within the first week and for about eight weeks afterward. Maxforce® was slower to attain maximum effectiveness, reducing ants to 8% of control levels after one week and 3% after six weeks. Australian Distance® was least effective, decreasing ant abundance to 19% of control levels after one week with numbers subsequently rebounding to 40% of controls at four weeks and 72% at 10 weeks. In measurements of the proportion of bait cards at which ants were detected, Xstinguish™ clearly out-performed Maxforce®, reaching a minimum detection rate of 3% of bait cards at one week compared to a low of 19% for Maxforce® two weeks following the second treatment. Although ant abundances were dramatically reduced on Xstinguish™ plots, it is not currently registered for use in the USA. Our results suggest that ant abundance can be greatly reduced using registered baits, but further research is needed before even small-scale eradication of Argentine ants can be achieved. Formicidal baits tested to control big-headed ants included Amdro® (hydramethylnon 0.75%), Xstinguish™ (fipronil 0.01%), Extinguish® Plus (a

  15. A method for computing random chord length distributions in geometrical objects.

    Science.gov (United States)

    Borak, T B

    1994-03-01

    A method is described that uses a Monte Carlo approach for computing the distribution of random chord lengths in objects traversed by rays originating uniformly in space (μ-randomness). The resulting distributions converge identically to the analytical solutions for a sphere and satisfy the Cauchy relationship for mean chord lengths in circular cylinders. The method can easily be applied to geometrical shapes that are not convex, such as the region between nested cylinders, to simulate the sensitive volume of a detector. Comparisons with other computational methods are presented.
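
    For the sphere, the convergence claimed above is easy to reproduce, because μ-randomness chords of a sphere can be sampled directly from the impact parameter. A minimal sketch (not the paper's code) that also checks the Cauchy relation mean chord = 4V/S = 4R/3:

```python
import numpy as np

# A uniform isotropic ray field hits a sphere of radius R with impact
# parameter b distributed with density ~ b on [0, R] (uniform over the
# disk cross-section), so b = R*sqrt(u) with u ~ U(0,1); the chord
# length is then 2*sqrt(R^2 - b^2).
rng = np.random.default_rng(1)
R, n = 1.0, 1_000_000
b = R * np.sqrt(rng.random(n))
chords = 2.0 * np.sqrt(R**2 - b**2)

# Cauchy's theorem gives mean chord = 4V/S = 4R/3 for a sphere.
print(chords.mean(), "vs analytic", 4 * R / 3)
```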

  16. MixBC Method: a New Approach for Distribution of Indirect Costs and Expenses to Products

    Directory of Open Access Journals (Sweden)

    Carlos Alberto Pereira Soares

    2013-02-01

    Full Text Available In cost management, product cost is valuable and necessary information. Nevertheless, distributing indirect costs and expenses to products may involve several uncertainties, which can lead to imprecise results and poor decisions. The aim of this paper is to construct a method that reduces the uncertainties found in current costing processes, by modelling and providing the analytical deduction of the MixBC (Mix Based Costing) method. An example of construction project costing using MixBC is then presented. By analysing different production scenarios, this method permits indirect costs and expenses to be distributed among products without the need for arbitrary apportionment.

  17. An Experimental Study on Voltage Compensation Method using Autonomous Decentralized Control of Distributed Generators

    Science.gov (United States)

    Tanaka, Shunsuke; Suzuki, Hirokazu

    When many distributed generators (DGs) are connected to a distribution line, the upward power flow from the DGs makes line voltage regulation difficult. As a countermeasure, we propose several methods to control the line voltage by use of the DGs' reactive power outputs. These methods, using only the DGs' reactive power, are implemented in an autonomous decentralized way. DGs with a function to estimate the line impedance provide the power system with reactive power according to the estimated impedance value, and so regulate the line voltage. We evaluate the effect of the proposed methods for voltage compensation by experimental studies using commercial grid-connected inverters for PV systems.
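
    The reactive-power approach rests on the standard feeder approximation dV ~ (R*P + X*Q)/V, so a DG that knows (or estimates) the line impedance can absorb just enough reactive power to cancel the voltage rise caused by its own active-power export. A minimal numeric sketch with illustrative values (not the authors' experimental parameters):

```python
# Standard approximation for the voltage change caused by a DG injecting
# P (W) and Q (var) through a line of impedance R + jX at voltage V:
#     dV ~ (R*P + X*Q) / V
# Setting dV = 0 gives the compensating reactive power Q = -R*P/X.
R, X, V = 0.5, 0.4, 6600.0          # ohms, ohms, volts (assumed 6.6 kV line)
P = 50e3                            # 50 kW active power export
Q_comp = -R * P / X                 # reactive power needed to cancel the rise
print("dV uncompensated:", (R * P) / V, "V")
print("Q to absorb:", Q_comp / 1e3, "kvar")
print("dV compensated:", (R * P + X * Q_comp) / V, "V")
```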

  18. An extension of the immersed boundary method based on the distributed Lagrange multiplier approach

    Science.gov (United States)

    Feldman, Yuri; Gulberg, Yosef

    2016-10-01

    An extended formulation of the immersed boundary method, which facilitates simulation of incompressible isothermal and natural convection flows around immersed bodies and which may be applied for linear stability analysis of the flows, is presented. The Lagrangian forces and heat sources are distributed on the fluid-structure interface. The method treats pressure, the Lagrangian forces, and heat sources as distributed Lagrange multipliers, thereby implicitly providing the kinematic constraints of no-slip and the corresponding thermal boundary conditions for immersed surfaces. Extensive verification of the developed method for both isothermal and natural convection 2D flows is provided. Strategies for adapting the developed approach to realistic 3D configurations are discussed.

  19. Closed-form fiducial confidence intervals for some functions of independent binomial parameters with comparisons.

    Science.gov (United States)

    Krishnamoorthy, K; Lee, Meesook; Zhang, Dan

    2017-02-01

    Approximate closed-form confidence intervals (CIs) for estimating the difference, relative risk, odds ratio, and linear combination of proportions are proposed. These CIs are developed using the fiducial approach and the modified normal-based approximation to the percentiles of a linear combination of independent random variables. These confidence intervals are easy to calculate as the computation requires only the percentiles of beta distributions. The proposed confidence intervals are compared with the popular score confidence intervals with respect to coverage probabilities and expected widths. Comparison studies indicate that the proposed confidence intervals are comparable with the corresponding score confidence intervals, and better in some cases, for all the problems considered. The methods are illustrated using several examples.
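
    As a flavour of the beta-percentile computations involved, the sketch below evaluates a Jeffreys-style fiducial interval for a single proportion; it illustrates the ingredients only, not the paper's closed-form constructions for differences, relative risks, or odds ratios.

```python
from scipy import stats

# For x successes in n trials, a common fiducial-type distribution of p
# is Beta(x + 1/2, n - x + 1/2), so a 95% interval needs only beta
# percentiles, in the spirit of the paper's closed-form constructions.
x, n, alpha = 12, 50, 0.05
lo = stats.beta.ppf(alpha / 2,     x + 0.5, n - x + 0.5)
hi = stats.beta.ppf(1 - alpha / 2, x + 0.5, n - x + 0.5)
print(f"95% fiducial interval for p: ({lo:.3f}, {hi:.3f})")
```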

  20. Structure in the 3D Galaxy Distribution: I. Methods and Example Results

    CERN Document Server

    Way, M J; Scargle, Jeffrey D

    2010-01-01

    Three methods for detecting and characterizing structure in point data, such as that generated by redshift surveys, are described: classification using self-organizing maps, segmentation using Bayesian blocks, and density estimation using adaptive kernels. The first two methods are new, and allow detection and characterization of structures of arbitrary shape and at a wide range of spatial scales. They elucidate not only clusters, but also sheets, filaments, and the even more general morphologies comprising the Cosmic Web. The methods are demonstrated and compared in application to three data sets: a carefully selected volume-limited sample from the Sloan Digital Sky Survey (SDSS) redshift data, a similarly selected sample from the Millennium Simulation, and a set of points independently drawn from a uniform probability distribution -- a so-called Poisson distribution. We demonstrate a few of the many ways in which these methods elucidate large scale structure in the distribution of galaxies in the nearby Uni...

  1. The importance of the direction distribution of neutron fluence, and methods of determination

    Energy Technology Data Exchange (ETDEWEB)

    Bartlett, D.T. E-mail: david.bartlett@nrpb.org.uk; Drake, P.; D'Errico, F.; Luszik-Bhadra, M.; Matzke, M.; Tanner, R.J

    2002-01-01

    For the estimation of non-isotropic quantities such as personal dose equivalent and effective dose, and for the interpretation of the readings of personal dosemeters, it is necessary to determine both the energy and direction distributions of the neutron fluence. In fact, for workplace fields, the fluence and dose-equivalent responses of dosemeters and the relationships of operational and protection quantities are frequently more dependent on the direction than on the energy distribution. In general, the direction distribution will not be independent of the energy distribution, and simultaneous determination of both may be required, which becomes a complex problem. The extent to which detailed information can be obtained depends on the spectrometric properties and on the angle dependence of the response of the detectors used. Methods for the determination of direction distributions of workplace fields are described.

  2. A Capacity Design Method of Distributed Battery Storage for Controlling Power Variation with Large-Scale Photovoltaic Sources in Distribution Network

    Science.gov (United States)

    Kobayashi, Yasuhiro; Sawa, Toshiyuki; Gunji, Keiko; Yamazaki, Jun; Watanabe, Masahiro

    A design method for distributed battery storage capacity has been developed for evaluating the advantage of battery storage in demand-supply imbalance control in distribution systems to which large-scale home photovoltaic generation is connected. The proposed method is based on a linear storage-capacity minimization model with design-basis demand load and photovoltaic output time series, subject to battery management constraints. The design method has been experimentally applied to a sample distribution system with substation storage and terminal-area storage. From the numerical results, the developed method successfully clarifies the charge-discharge control and stored power variation, satisfies the peak-cut requirement, and pinpoints the minimum distributed storage capacity.

  3. Nanoparticles and metrology: a comparison of methods for the determination of particle size distributions

    Science.gov (United States)

    Coleman, Victoria A.; Jämting, Åsa K.; Catchpoole, Heather J.; Roy, Maitreyee; Herrmann, Jan

    2011-10-01

    Nanoparticles and products incorporating nanoparticles are a growing branch of nanotechnology industry. They have found a broad market, including the cosmetic, health care and energy sectors. Accurate and representative determination of particle size distributions in such products is critical at all stages of the product lifecycle, extending from quality control at point of manufacture to environmental fate at the point of disposal. Determination of particle size distributions is non-trivial, and is complicated by the fact that different techniques measure different quantities, leading to differences in the measured size distributions. In this study we use both mono- and multi-modal dispersions of nanoparticle reference materials to compare and contrast traditional and novel methods for particle size distribution determination. The methods investigated include ensemble techniques such as dynamic light scattering (DLS) and differential centrifugal sedimentation (DCS), as well as single particle techniques such as transmission electron microscopy (TEM) and microchannel resonator (ultra high-resolution mass sensor).

  4. Sediment spatial distribution evaluated by three methods and its relation to some soil properties

    Energy Technology Data Exchange (ETDEWEB)

    Bacchi, O.O.S. [Centro de Energia Nuclear na Agricultura-CENA/USP, Laboratorio de Fisica do Solo, Piracicaba, SP (Brazil)]. E-mail: osny@ccna.usp.br; Reichardt, K. [Centro de Energia Nuclear na Agricultura-CENA/USP, Laboratorio de Fisica do Solo, Piracicaba, SP (Brazil); Departamento de Ciencias Exatas, Escola Superior de Agricultura 'Luiz de Queiroz' ESALQ/USP, Piracicaba, SP (Brazil); Sparovek, G. [Departamento de Solos e Nutricao de Plantas, Escola Superior de Agricultura 'Luiz de Queiroz' ESALQ/USP, Piracicaba, SP (Brazil)

    2003-02-15

    An investigation of rates and spatial distribution of sediments on an agricultural field cultivated with sugarcane was undertaken using the ¹³⁷Cs technique and the USLE and WEPP models. The study was carried out on the Ceveiro watershed of the Piracicaba river basin, state of Sao Paulo, Brazil, which is experiencing severe soil degradation due to soil erosion. The objectives of the study were to compare the spatial distribution of sediments evaluated by the three methods and its relation to some soil properties. Erosion and sedimentation rates and their spatial distribution estimated by the three methods were completely different. Although not able to show sediment deposition, the spatial distribution of erosion rates evaluated by USLE presented the best correlation with the other studied soil properties. (author)

  5. Intervals in evolutionary algorithms for global optimization

    Energy Technology Data Exchange (ETDEWEB)

    Patil, R.B.

    1995-05-01

    Optimization is of central concern to a number of disciplines. Interval arithmetic methods for global optimization provide us with (guaranteed) verified results. These methods are mainly restricted to classes of objective functions that are twice differentiable, and use a simple strategy of eliminating and splitting larger regions of the search space in the global optimization process. An efficient approach is proposed that combines the efficient strategy of interval global optimization methods with the robustness of evolutionary algorithms. In the proposed approach, search begins with randomly created interval vectors with interval widths equal to the whole domain. Before the beginning of the evolutionary process, the fitness of these interval parameter vectors is defined by evaluating the objective function at the center of the initial interval vectors. In the subsequent evolutionary process, the local optimization process returns an estimate of the bounds of the objective function over the interval vectors. Though these bounds may not be correct at the beginning, due to large interval widths and complicated function properties, the process of reducing interval widths over time and a selection approach similar to simulated annealing help in estimating reasonably correct bounds as the population evolves. The interval parameter vectors at these estimated bounds (local optima) are then subjected to crossover and mutation operators. This evolutionary process continues for a predetermined number of generations in search of the global optimum.
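
    The interval strategy referred to above, which bounds the objective on a box, discards boxes that provably cannot contain the minimum, and splits the rest, can be shown in a few lines. The sketch below is a plain interval branch-and-bound on a fixed polynomial, without the evolutionary operators the abstract adds on top:

```python
def imul(a, b):
    """Interval multiplication: enclosure of {x*y : x in a, y in b}."""
    ps = (a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1])
    return (min(ps), max(ps))

def f_point(x):
    return x**4 - 4.0*x**2 + x

def f_box(box):
    """Natural interval extension of f(x) = x^4 - 4x^2 + x (a valid
    enclosure, though widened by interval dependency)."""
    x2 = imul(box, box)
    x4 = imul(x2, x2)
    return (x4[0] - 4.0*x2[1] + box[0], x4[1] - 4.0*x2[0] + box[1])

boxes = [(-3.0, 3.0)]
best = f_point(0.0)                    # verified upper bound on the minimum
for _ in range(30):
    nxt = []
    for lo, hi in boxes:
        if f_box((lo, hi))[0] > best:  # cannot contain the global minimum
            continue
        mid = 0.5 * (lo + hi)
        best = min(best, f_point(mid))
        nxt += [(lo, mid), (mid, hi)]
    boxes = nxt
print("surviving boxes:", len(boxes), " f* <=", round(best, 6))
print("minimizer lies in:", (min(b[0] for b in boxes), max(b[1] for b in boxes)))
```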

  6. BIRTH INTERVAL AMONG NOMAD WOMEN

    Directory of Open Access Journals (Sweden)

    E.Keyvan

    1976-06-01

    Full Text Available This study was carried out to examine the relation between the length of the birth interval, lactation, and birth control programs. The material for the analysis was the fertility history of nomad women in their reproductive period (15-44 years), gathered through a health survey. The main sample comprised 2,165 qualified women, of whom 49 were excluded for previous or current use of contraceptive methods and 10 for lack of sufficient data. The purpose of the analysis was to relate the number of live births and pregnancies to the total duration of married life (in other words, the total months during which the women were at risk of pregnancy). The 2,106 women whose fertility history was analyzed had a total of 272,502 months of married life. During this time 8,520 live births occurred, giving a birth interval of 32 months. As a pregnancy can end in a live birth, a still birth, or an abortion (induced or spontaneous), summing all outcomes gives the number of pregnancies that occurred during this period (8,520 + 124 + 328 = 8,972), with an average interpregnancy interval of 30.3 months. The components of the birth interval are: post-partum amenorrhea, which depends upon lactation; anovulatory cycles (2 months); ovulatory exposure in the absence of contraceptive methods (5 months); and pregnancy (9 months). The difference between the birth interval and the sum of the last three components (2 + 5 + 9 = 16 months) gives the duration of post-partum amenorrhea (32 - 16 = 16 months), in other words the duration of breast feeding among the nomad women. It was found that, in order to reduce births by 50%, a contraceptive method with 87% effectiveness is needed.

  7. A Conceptual Design Method of Disc Brake Systems for Reducing Brake Squeal Considering Pressure Distribution Variations

    OpenAIRE

    松島, 徹; 泉井, 一浩; 西脇, 眞二

    2011-01-01

    This paper proposes a design optimization method for disc brake systems that specifically aims to reduce brake squeal, with robustness against changes in contact surface pressure distribution, based on the concept of First Order Analysis. First, a simplified analysis model is constructed in which a pressure distribution parameter is introduced, and the relationships between the occurrence of brake squeal and the characteristics of various components are then clarified, using the simplified mod...

  8. Load Modeling and State Estimation Methods for Power Distribution Systems: Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Tom McDermott

    2010-05-07

    The project objective was to provide robust state estimation for distribution systems, comparable to what has been available on transmission systems for decades. This project used an algorithm called Branch Current State Estimation (BCSE), which is more effective than classical methods because it decouples the three phases of a distribution system, and uses branch current instead of node voltage as a state variable, which is a better match to current measurement.

  9. Generalized Cross Entropy Method for estimating joint distribution from incomplete information

    Science.gov (United States)

    Xu, Hai-Yan; Kuo, Shyh-Hao; Li, Guoqi; Legara, Erika Fille T.; Zhao, Daxuan; Monterola, Christopher P.

    2016-07-01

    Obtaining a full joint distribution from individual marginal distributions with incomplete information is a non-trivial task that continues to challenge researchers from various domains including economics, demography, and statistics. In this work, we develop a new methodology referred to as "Generalized Cross Entropy Method" (GCEM) that is aimed at addressing the issue. The objective function is proposed to be a weighted sum of divergences between joint distributions and various references. We show that the solution of the GCEM is unique and globally optimal. Furthermore, we illustrate the applicability and validity of the method by utilizing it to recover the joint distribution of a household profile of a given administrative region. In particular, we estimate the joint distribution of the household size, household dwelling type, and household home ownership in Singapore. Results show a high-accuracy estimation of the full joint distribution of the household profile under study. Finally, the impact of constraints and weight on the estimation of joint distribution is explored.
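
    A classical special case of this minimum-divergence idea is iterative proportional fitting, which finds the joint distribution closest in KL divergence to a seed while matching two fixed marginals; the paper's GCEM generalizes this to weighted sums of divergences against multiple references. A minimal sketch with invented numbers:

```python
import numpy as np

def ipf(seed, row_marginal, col_marginal, iters=200):
    """Iterative proportional fitting: rescale rows and columns in turn
    until the joint matches both target marginals."""
    p = seed / seed.sum()
    for _ in range(iters):
        p *= (row_marginal / p.sum(axis=1))[:, None]   # match row sums
        p *= (col_marginal / p.sum(axis=0))[None, :]   # match column sums
    return p

# Toy example: household size (rows) vs. dwelling type (columns);
# the seed and marginals are invented for illustration.
seed = np.array([[4., 1., 1.],
                 [2., 3., 1.],
                 [1., 2., 5.]])
size_marginal = np.array([0.5, 0.3, 0.2])
dwelling_marginal = np.array([0.4, 0.35, 0.25])
joint = ipf(seed, size_marginal, dwelling_marginal)
print(joint.round(4), joint.sum())
```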

  10. Effects of different irrigation methods on micro-environments and root distribution in winter wheat fields

    Institute of Scientific and Technical Information of China (English)

    L Guo-hua; SONG Ji-qing; BAI Wen-bo; WU Yong-feng; LIU Yuan; KANG Yao-hu

    2015-01-01

    The irrigation method used in winter wheat fields affects micro-environment factors, such as relative humidity (RH) within the canopy, soil temperature, topsoil bulk density, soil matric potential, and soil nutrients, and these changes may affect plant root growth. An experiment was carried out to explore the effects of irrigation method on micro-environments and root distribution in a winter wheat field in the 2007-2008 and 2008-2009 growing seasons. The results showed that border irrigation (BI), sprinkler irrigation (SI), and surface drip irrigation (SDI) had no significant effects on soil temperature. Topsoil bulk density, RH within the canopy, soil available N distribution, and soil matric potential were significantly affected by the three treatments. The change in soil matric potential was the key reason for the altered root profile distribution patterns. Additionally, more fine roots were produced in the BI treatment when soil water content was low and topsoil bulk density was high. Root growth was most stimulated in the top soil layers and inhibited in the deep layers in the SDI treatment, followed by SI and BI, which was due to the different water application frequencies. As a result, the root profile distribution differed depending on the irrigation method used. The root distribution pattern changes could be described by the power level variation in the exponential function. A good knowledge of root distribution patterns is important when attempting to model water and nutrient movements and when studying soil-plant interactions.

  11. Comparison of Two Methods Used to Model Shape Parameters of Pareto Distributions

    Science.gov (United States)

    Liu, C.; Charpentier, R.R.; Su, J.

    2011-01-01

    Two methods are compared for estimating the shape parameters of Pareto field-size (or pool-size) distributions for petroleum resource assessment. Both methods assume mature exploration in which most of the larger fields have been discovered. Both methods use the sizes of larger discovered fields to estimate the numbers and sizes of smaller fields: (1) the tail-truncated method uses a plot of field size versus size rank, and (2) the log-geometric method uses data binned in field-size classes and the ratios of adjacent bin counts. Simulation experiments were conducted using discovered oil and gas pool-size distributions from four petroleum systems in Alberta, Canada and using Pareto distributions generated by Monte Carlo simulation. The estimates of the shape parameters of the Pareto distributions, calculated by both the tail-truncated and log-geometric methods, generally stabilize where discovered pool numbers are greater than 100. However, with fewer than 100 discoveries, these estimates can vary greatly with each new discovery. The estimated shape parameters of the tail-truncated method are more stable and larger than those of the log-geometric method where the number of discovered pools is more than 100. Both methods, however, tend to underestimate the shape parameter. Monte Carlo simulation was also used to create sequences of discovered pool sizes by sampling from a Pareto distribution with a discovery process model using a defined exploration efficiency (in order to show how biased the sampling was in favor of larger fields being discovered first). A higher (more biased) exploration efficiency gives better estimates of the Pareto shape parameters. © 2011 International Association for Mathematical Geosciences.
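
    The log-geometric idea is easy to state: for a Pareto distribution, the expected counts in geometric size classes [c·b^k, c·b^(k+1)) decay by the constant factor b^(-alpha), so the shape parameter can be read off the ratios of adjacent bin counts. A minimal sketch on simulated data (the paper's estimators differ in detail, notably in their treatment of truncation):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha_true, b, n = 1.2, 2.0, 5000
sizes = rng.pareto(alpha_true, n) + 1.0        # classical Pareto, x_min = 1

# Bin index k such that sizes fall in [b^k, b^(k+1)); for a Pareto the
# bin probabilities are b^(-k*alpha) * (1 - b^(-alpha)), so the ratio of
# adjacent bin counts estimates b^(-alpha).
k = np.floor(np.log(sizes) / np.log(b)).astype(int)
counts = np.bincount(k)
ok = (counts[:-1] >= 50) & (counts[1:] > 0)    # skip sparsely filled bins
ratios = counts[1:][ok] / counts[:-1][ok]
alpha_hat = -np.log(ratios).mean() / np.log(b)
print("estimated shape:", round(alpha_hat, 3), "true:", alpha_true)
```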

  12. A model of synaptic vesicle-pool depletion and replenishment can account for the interspike interval distributions and nonrenewal properties of spontaneous spike trains of auditory-nerve fibers.

    Science.gov (United States)

    Peterson, Adam J; Irvine, Dexter R F; Heil, Peter

    2014-11-05

    In mammalian auditory systems, the spiking characteristics of each primary afferent (type I auditory-nerve fiber; ANF) are mainly determined by a single ribbon synapse in a single receptor cell (inner hair cell; IHC). ANF spike trains therefore provide a window into the operation of these synapses and cells. It was demonstrated previously (Heil et al., 2007) that the distribution of interspike intervals (ISIs) of cat ANFs during spontaneous activity can be modeled as resulting from refractoriness operating on a non-Poisson stochastic point process of excitation (transmitter release events from the IHC). Here, we investigate nonrenewal properties of these cat-ANF spontaneous spike trains, manifest as negative serial ISI correlations and reduced spike-count variability over short timescales. A previously discussed excitatory process, the constrained failure of events from a homogeneous Poisson point process, can account for these properties, but does not offer a parsimonious explanation for certain trends in the data. We then investigate a three-parameter model of vesicle-pool depletion and replenishment and find that it accounts for all experimental observations, including the ISI distributions, with only the release probability varying between spike trains. The maximum number of units (single vesicles or groups of simultaneously released vesicles) in the readily releasable pool and their replenishment time constant can be assumed to be constant (∼4 and 13.5 ms, respectively). We suggest that the organization of the IHC ribbon synapses not only enables sustained release of neurotransmitter but also imposes temporal regularity on the release process, particularly when operating at high rates. Copyright © 2014 the authors.
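
    The depletion-replenishment mechanism can be sketched as a two-channel Gillespie simulation using the pool size (~4 units) and replenishment time constant (13.5 ms) quoted above; the per-unit release rate below is an assumed stand-in for the paper's release probability, and refractoriness is omitted. Short intervals deplete the pool and lengthen the following interval, which is the origin of the negative serial ISI correlations the paper reports.

```python
import numpy as np

rng = np.random.default_rng(2)
M, tau, k_rel = 4, 13.5e-3, 20.0      # pool capacity, s, releases/unit/s
n, t, T = M, 0.0, 600.0               # start with a full pool; 10 min run
events = []
while t < T:
    r_rel, r_fill = k_rel * n, (M - n) / tau   # the two reaction channels
    total = r_rel + r_fill
    t += rng.exponential(1.0 / total)          # time to the next reaction
    if rng.random() < r_rel / total:           # a release event -> "spike"
        n -= 1
        events.append(t)
    else:                                      # one vacancy is refilled
        n += 1

isi = np.diff(events)
rho1 = np.corrcoef(isi[:-1], isi[1:])[0, 1]    # lag-1 serial correlation
print(f"rate: {len(events)/T:.1f}/s  mean ISI: {isi.mean()*1e3:.1f} ms  "
      f"lag-1 ISI corr: {rho1:.3f}")           # typically negative
```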

  13. Estimation of Power/Energy Losses in Electric Distribution Systems based on an Efficient Method

    Directory of Open Access Journals (Sweden)

    Gheorghe Grigoras

    2013-09-01

    Full Text Available Estimation of power/energy losses constitutes an important tool for efficient planning and operation of electric distribution systems, especially in a free energy market environment. For the further development of energy-loss reduction plans and for determining the implementation priorities of different measures and investment projects, an analysis of the nature and causes of losses in the system and in its different parts is needed. In this paper, an efficient method for the power flow problem of medium-voltage distribution networks, under conditions of missing information about the nodal loads, is presented. Using this method, the power/energy losses in power transformers and lines can be obtained. The test results, obtained for a real 20 kV distribution network in Romania, confirmed the validity of the proposed method.

  14. A new method for geochemical anomaly separation based on the distribution patterns of singularity indices

    Science.gov (United States)

    Liu, Yue; Zhou, Kefa; Cheng, Qiuming

    2017-08-01

    Singularity analysis is one of the most important models in the fractal/multifractal family that has been demonstrated as an efficient tool for identifying hybrid distribution patterns of geochemical data, such as normal and multifractal distributions. However, the question of how to appropriately separate these patterns using reasonable thresholds has not been well answered. In the present study, a new method termed singularity-quantile (S-Q) analysis was proposed to separate multiple geochemical anomaly populations based on integrating singularity analysis and quantile-quantile plot (QQ-plot) analysis. The new method provides excellent abilities for characterizing frequency distribution patterns of singularity indices by plotting singularity index quantiles vs. standard normal quantiles. From a perspective of geochemical element enrichment processes, distribution patterns of singularity indices can be evidently separated into three groups by means of the new method, corresponding to element enrichment, element generality and element depletion, respectively. A case study for chromitite exploration based on geochemical data in the western Junggar region (China) was employed to examine the potential application of the new method. The results revealed that the proposed method was very sensitive to the changes of singularity indices with three segments when it was applied to characterize geochemical element enrichment processes. Hence, the S-Q method can be considered as an efficient and powerful tool for separating hybrid geochemical anomalies on the basis of statistical and inherent fractal/multifractal properties.

  15. Ensemble-Biased Metadynamics: A Molecular Simulation Method to Sample Experimental Distributions.

    Science.gov (United States)

    Marinelli, Fabrizio; Faraldo-Gómez, José D

    2015-06-16

    We introduce an enhanced-sampling method for molecular dynamics (MD) simulations referred to as ensemble-biased metadynamics (EBMetaD). The method biases a conventional MD simulation to sample a molecular ensemble that is consistent with one or more probability distributions known a priori, e.g., experimental intramolecular distance distributions obtained by double electron-electron resonance or other spectroscopic techniques. To this end, EBMetaD adds an adaptive biasing potential throughout the simulation that discourages sampling of configurations inconsistent with the target probability distributions. The bias introduced is the minimum necessary to fulfill the target distributions, i.e., EBMetaD satisfies the maximum-entropy principle. Unlike other methods, EBMetaD does not require multiple simulation replicas or the introduction of Lagrange multipliers, and is therefore computationally efficient and straightforward in practice. We demonstrate the performance and accuracy of the method for a model system as well as for spin-labeled T4 lysozyme in explicit water, and show how EBMetaD reproduces three double electron-electron resonance distance distributions concurrently within a few tens of nanoseconds of simulation time. EBMetaD is integrated in the open-source PLUMED plug-in (www.plumed-code.org), and can be therefore readily used with multiple MD engines.

  16. Non-probabilistic fuzzy reliability analysis of pile foundation stability by interval theory

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Randomness and fuzziness are among the attributes of the factors influencing the stability assessment of pile foundations. According to these two characteristics, the triangular fuzzy number analysis approach was introduced to determine the probability distribution function of the mechanical parameters. Then the performance function for reliability analysis was constructed based on a study of the bearing mechanism of pile foundations, and a way to calculate interval values of this function was developed using an improved interval-truncation approach and the operation rules of interval numbers. Afterwards, the non-probabilistic fuzzy reliability analysis method was applied to assessing the pile foundation, yielding a method for non-probabilistic fuzzy reliability analysis of pile foundation stability by interval theory. Finally, the probability distribution curve of the non-probabilistic fuzzy reliability indexes of a practical pile foundation was obtained. Its failure possibility is 0.91%, which shows that the pile foundation is stable and reliable.

  17. Investigation of diffusion length distribution on polycrystalline silicon wafers via photoluminescence methods

    Science.gov (United States)

    Lou, Shishu; Zhu, Huishi; Hu, Shaoxu; Zhao, Chunhua; Han, Peide

    2015-01-01

    Characterization of the diffusion length of solar cells in space has been widely studied using various methods, but few studies have focused on a fast, simple way to obtain the quantified diffusion length distribution on a silicon wafer. In this work, we present two different facile methods of doing this by fitting photoluminescence images taken in two different wavelength ranges or from different sides. These methods, which are based on measuring the ratio of two photoluminescence images, yield absolute values of the diffusion length and are less sensitive to the inhomogeneity of the incident laser beam. A theoretical simulation and experimental demonstration of this method are presented. The diffusion length distributions on a polycrystalline silicon wafer obtained by the two methods show good agreement.

  18. A Traffic Forecasting Method with Function to Control Residual Error Distribution for IP Access Networks

    Science.gov (United States)

    Kitahara, Takeshi; Furuya, Hiroki; Nakamura, Hajime

    Since traffic in IP access networks is less aggregated than in backbone networks, its variance could be significant and its distribution may be long-tailed rather than Gaussian in nature. Such characteristics make it difficult to forecast traffic volume in IP access networks for appropriate capacity planning. This paper proposes a traffic forecasting method that includes a function to control residual error distribution in IP access networks. The objective of the proposed method is to grasp the statistical characteristics of peak traffic variations, while conventional methods focus on average rather than peak values. In the proposed method, a neural network model is built recursively while weighting residual errors around the peaks. This enables network operators to control the trade-off between underestimation and overestimation errors according to their planning policy. Evaluation with a total of 136 daily traffic volume data sequences measured in actual IP access networks demonstrates the performance of the proposed method.
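
    One generic way to implement such an asymmetric treatment of residuals is to train with the pinball (quantile) loss, whose parameter tau sets the ratio of the penalties on under- and overestimation; the paper's method instead reweights residuals around traffic peaks when fitting its neural network. A minimal sketch:

```python
import numpy as np

def pinball(residual, tau):
    """Quantile loss: residual > 0 (underestimation) weighted by tau,
    residual < 0 (overestimation) weighted by 1 - tau."""
    return np.where(residual >= 0, tau * residual, (tau - 1.0) * residual)

rng = np.random.default_rng(6)
y = rng.lognormal(mean=3.0, sigma=0.5, size=2000)   # long-tailed "traffic"

# The constant forecast c minimizing the pinball loss is the tau-quantile,
# so raising tau biases the forecast upward, toward the peaks.
for tau in (0.5, 0.9):
    grid = np.linspace(y.min(), y.max(), 2000)
    losses = [pinball(y - c, tau).mean() for c in grid]
    c_best = grid[int(np.argmin(losses))]
    print(f"tau={tau}: best constant forecast {c_best:.1f} "
          f"(empirical quantile {np.quantile(y, tau):.1f})")
```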

  19. A Novel Method to Obtain Wires Distribution Considering the Shape of Generated Electromagnetic Field

    CERN Document Server

    Hong, Tianqi

    2016-01-01

    This paper proposes a method to calculate the wire distribution for generating a required electromagnetic field. Instead of solving the distribution of wires directly, we formulate the problem as a zero-one program. By applying the proposed algorithm to solve the zero-one programming problem, a practical solution can be obtained. Two practical examples are presented to illustrate the detailed calculation steps of the novel method. A comparison between the binary particle swarm optimization search algorithm and the proposed algorithm is provided and discussed. All the design results are validated against FEM calculation results.

  20. A Study of Transmission Control Method for Distributed Parameters Measurement in Large Factories and Storehouses

    Directory of Open Access Journals (Sweden)

    Shujing Su

    2015-01-01

    Full Text Available Given the dispersion of parameters in large factories, storehouses, and other applications, a distributed parameter measurement system based on a ring network is designed. The structure of the system and the circuit design of the master-slave nodes are described briefly. The basic protocol architecture for transmission communication is introduced, and two kinds of distributed transmission control methods are proposed. Finally, the reliability, extendibility, and control characteristics of these two methods are tested through a series of experiments, and the measurement results are compared and discussed.

  1. Review and possible development direction of the methods for modeling of soil pollutants spatial distribution

    Science.gov (United States)

    Tarasov, D. A.; Medvedev, A. N.; Sergeev, A. P.; Buevich, A. G.

    2017-07-01

    Forecasting the spatial distribution of environmental pollutants is a significant field of research in view of the current concerns regarding the environment all over the world. Due to the danger to health and the environment associated with increasing pollution of air, soil, water and the biosphere, it is very important to have models that are capable of describing the current distribution of contaminants and of forecasting the dynamics of their spread across different territories. This article addresses the methods most often applied in this field, with an emphasis on soil pollution. Possible directions for the further development of such methods are suggested.

  2. SoC-Based Droop Method for Distributed Energy Storage in DC Microgrid Applications

    DEFF Research Database (Denmark)

    Lu, Xiaonan; Sun, Kai; Guerrero, Josep M.

    2012-01-01

    With the progress of distributed generation nowadays, microgrids are employed to integrate different renewable energy sources into a certain area. Since several kinds of renewable sources have DC outputs, the DC microgrid has drawn more attention recently. Meanwhile, to deal with the uncertainty in the output of the microgrid system, distributed energy storage is usually adopted. Considering that the state-of-charge (SoC) of each battery may not be the same, a decentralized droop control method based on SoC is shown in this paper to reach proportional load power sharing. With this method, the battery

  3. Methods for fitting a parametric probability distribution to most probable number data.

    Science.gov (United States)

    Williams, Michael S; Ebel, Eric D

    2012-07-01

    Every year hundreds of thousands, if not millions, of samples are collected and analyzed to assess microbial contamination in food and water. The concentration of pathogenic organisms at the end of the production process is low for most commodities, so a highly sensitive screening test is used to determine whether the organism of interest is present in a sample. In some applications, samples that test positive are subjected to quantitation. The most probable number (MPN) technique is a common method to quantify the level of contamination in a sample because it is able to provide estimates at low concentrations. This technique uses a series of dilution count experiments to derive estimates of the concentration of the microorganism of interest. An application for these data is food-safety risk assessment, where the MPN concentration estimates can be fitted to a parametric distribution to summarize the range of potential exposures to the contaminant. Many different methods (e.g., substitution methods, maximum likelihood and regression on order statistics) have been proposed to fit microbial contamination data to a distribution, but the development of these methods rarely considers how the MPN technique influences the choice of distribution function and fitting method. An often overlooked aspect when applying these methods is whether the data represent actual measurements of the average concentration of microorganism per milliliter or the data are real-valued estimates of the average concentration, as is the case with MPN data. In this study, we propose two methods for fitting MPN data to a probability distribution. The first method uses a maximum likelihood estimator that takes average concentration values as the data inputs. The second is a Bayesian latent variable method that uses the counts of the number of positive tubes at each dilution to estimate the parameters of the contamination distribution. The performance of the two fitting methods is compared for two
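
    The dilution-count likelihood at the heart of the MPN technique is compact enough to show directly: with n_i tubes at inoculum volume v_i and x_i positives, a tube is positive with probability 1 - exp(-c·v_i). A minimal sketch of the maximum-likelihood point estimate for a single sample (the paper's concern is the further step of fitting a distribution across many such estimates):

```python
import numpy as np
from scipy.optimize import minimize_scalar

v = np.array([10.0, 1.0, 0.1])        # classical 3-dilution design (mL)
n = np.array([3, 3, 3])               # tubes per dilution
x = np.array([3, 2, 0])               # observed positive tubes

def neg_log_lik(log_c):
    # P(tube positive) = 1 - exp(-c * v) for concentration c (orgs/mL)
    p = 1.0 - np.exp(-np.exp(log_c) * v)
    p = np.clip(p, 1e-12, 1 - 1e-12)  # guard the log at the extremes
    return -np.sum(x * np.log(p) + (n - x) * np.log(1.0 - p))

res = minimize_scalar(neg_log_lik, bounds=(-10, 10), method="bounded")
print(f"MPN estimate: {np.exp(res.x):.2f} organisms/mL")
```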

  4. Application of DVC-FISH method in tracking Escherichia coli in drinking water distribution networks

    Directory of Open Access Journals (Sweden)

    L. Mezule

    2013-04-01

    Full Text Available Sporadic detection of live (viable) Escherichia coli in drinking water and biofilm with molecular methods, but not with standard plate counts, has raised concerns about the reliability of this indicator in the surveillance of drinking water safety. The aim of this study was to determine the spatial distribution of different viability forms of E. coli in a drinking water distribution system which complies with the European Drinking Water Directive (98/83/EC). For two years, coupons (two weeks old) and pre-concentrated (100 times, with ultrafilters) water samples were collected after the treatment plants and from four sites in the distribution network at several distances. The samples were analyzed for total, viable (able to divide, as DVC-FISH positive) and cultivable E. coli. The results showed that low numbers of E. coli enter the distribution system from the treatment plants and tend to accumulate in the biofilm of the water distribution system. Almost all of the samples contained metabolically active E. coli in the range of 1 to 50 cells per litre or cm², which represented approximately 53% of all E. coli detected. The amount of viable E. coli increased significantly into the network, irrespective of the season. The study has shown that the DVC-FISH method, in combination with water pre-concentration and biofilm sampling, makes it possible to better understand the behaviour of E. coli in water distribution networks, and thus provides new evidence for water safety control.

  5. Flood frequency analysis using multi-objective optimization based interval estimation approach

    Science.gov (United States)

    Kasiviswanathan, K. S.; He, Jianxun; Tay, Joo-Hwa

    2017-02-01

    Flood frequency analysis (FFA) is a necessary tool for water resources management and water infrastructure design. Owing to the existence of variability in sample representation, distribution selection, and distribution parameter estimation, flood quantile estimation is subject to various levels of uncertainty, which are neither negligible nor avoidable. Hence, alternative methods to the conventional approach of FFA are desired for quantifying the uncertainty, for example in the form of a prediction interval. The primary focus of this paper is to develop a novel approach to quantify and optimize the prediction interval resulting from the non-stationarity of the data set, which is reflected in the estimated distribution parameters, in FFA. This paper proposes the combination of a multi-objective optimization approach and an ensemble simulation technique to determine the optimal perturbations of distribution parameters for constructing the prediction interval of flood quantiles in FFA. To demonstrate the proposed approach, annual maximum daily flow data collected from two gauge stations on the Bow River, Alberta, Canada, were used. The results suggest that the proposed method can successfully capture the uncertainty in quantile estimates qualitatively using the prediction interval, as the number of observations falling within the constructed prediction interval is approximately maximized while the prediction interval is minimized.
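
    A simple baseline for interval estimates of flood quantiles, refitting the distribution to resampled records and reading off the spread of the quantile, is sketched below on synthetic GEV data; the paper replaces this plain resampling with multi-objective optimization of the distribution-parameter perturbations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Synthetic 60-year record of annual maxima (parameters are invented).
amax = stats.genextreme.rvs(c=-0.1, loc=300.0, scale=80.0,
                            size=60, random_state=rng)

T = 100                               # return period in years
q = []
for _ in range(500):                  # bootstrap-refit ensemble
    sample = rng.choice(amax, size=amax.size, replace=True)
    c, loc, scale = stats.genextreme.fit(sample)
    q.append(stats.genextreme.ppf(1 - 1 / T, c, loc, scale))

lo, hi = np.percentile(q, [2.5, 97.5])
print(f"100-year flood: {np.median(q):.0f}  95% interval: ({lo:.0f}, {hi:.0f})")
```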

  6. The generalized method of moments as applied to the generalized gamma distribution

    Science.gov (United States)

    Ashkar, F.; Bobée, B.; Leroux, D.; Morisette, D.

    1988-09-01

    The generalized gamma (GG) distribution has a density function that can take on many possible forms commonly encountered in hydrologic applications. This fact has led many authors to study the properties of the distribution and to propose various estimation techniques (method of moments, mixed moments, maximum likelihood etc.). We discuss some of the most important properties of this flexible distribution and present a flexible method of parameter estimation, called the “generalized method of moments” (GMM) which combines any three moments of the GG distribution. The main advantage of this general method is that it has many of the previously proposed methods of estimation as special cases. We also give a general formula for the variance of the T-year event X T obtained by the GMM along with a general formula for the parameter estimates and also for the covariances and correlation coefficients between any pair of such estimates. By applying the GMM and carefully choosing the order of the moments that are used in the estimation one can significantly reduce the variance of T-year events for the range of return periods that are of interest.
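
    A generic three-moment match for the GG distribution can be set up in a few lines; the sketch below is illustrative only (it solves the moment equations numerically and omits the paper's closed-form variance formulas and its freedom in choosing the three moment orders).

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(4)
data = stats.gengamma.rvs(a=2.0, c=0.7, scale=10.0, size=5000,
                          random_state=rng)

orders = (1, 2, 3)                       # which raw moments to match
target = [np.mean(data**k) for k in orders]

def residuals(theta):
    a, c, scale = np.exp(theta)          # log-parameters stay positive
    # E[X^k] = scale^k * Gamma(a + k/c) / Gamma(a) for the GG family
    m = [stats.gengamma.moment(k, a, c, scale=scale) for k in orders]
    return np.log(m) - np.log(target)    # relative moment errors

sol = optimize.least_squares(residuals, x0=np.log([1.0, 1.0, np.mean(data)]))
print("fitted (a, c, scale):", np.exp(sol.x).round(3))
```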

  7. A Determination Method of the Restoration Configuration Considering Many Connections of Distributed Generators

    Science.gov (United States)

    Takano, Hirotaka; Hayashi, Yasuhiro; Matsuki, Junya; Sugaya, Shuhei

    In the field of electrical power system, various approaches, such as utilization of renewable energy, loss reduction, and so on, have been taken to reduce CO2 emission. So as to work toward this goal, the total number of distributed generators (DGs) using renewable energy connected into 6.6 kV distribution system has been increasing rapidly. However, when a fault occurs such as distribution line faults and bank faults, DGs connecting outage sections are disconnected simultaneously. Since the output of DGs influences feeder current and node voltage of distribution system, it is necessary to determine the optimal system configuration considering simultaneous disconnection and reconnection of DGs. In this paper, the authors propose a computation method to determine the optimal restoration configuration considering many connections of DGs. The feature of determined restoration configurations is prevention of the violation of operational constraints by disconnection and reconnection of DGs. Numerical simulations are carried out for a real scale distribution system model with 4 distribution substations, 72 distribution feeders, 252 sectionalizing switches (2^252 configuration candidates) and 23.2 MW DGs (which is 14% of total load) in order to examine the validity of the proposed algorithm.

  8. ON ESTIMATION AND HYPOTHESIS TESTING OF THE GRAIN SIZE DISTRIBUTION BY THE SALTYKOV METHOD

    Directory of Open Access Journals (Sweden)

    Yuri Gulbin

    2011-05-01

    Full Text Available The paper considers the problem of the validity of unfolding the grain size distribution with the back-substitution method. Due to the ill-conditioned nature of unfolding matrices, it is necessary to evaluate the accuracy and precision of parameter estimation and to verify the possibility of testing the expected grain size distribution on the basis of intersection size histogram data. In order to review these questions, computer modeling was used to compare size distributions obtained stereologically with those possessed by three-dimensional model aggregates of grains of specified shape and random size. Results of the simulations are reported and ways of improving the conventional stereological techniques are suggested. It is shown that new improvements in estimating and testing procedures enable grain size distributions to be unfolded more efficiently.

  9. Application of microdosimetric methods for the determination of energy deposition distributions by inhaled actinides

    Energy Technology Data Exchange (ETDEWEB)

    Aubineau-Laniece, I.; Castellan, G.; Caswell, R.S.; Guezingar, F.; Henge-Napoli, M.H.; Li, W.B.; Pihet, P

    1998-07-01

    The respiratory tract dosimetry model of ICRP Publication 66 takes into account the morphometry of lung tissues for the determination of the average energy deposited by α emitters. However, it assumes a uniform distribution of radioactive material. The statistical fluctuations in the frequency of cells hit and of energy deposited in individual target cells depend significantly on the real distribution of radioactive material, including possible high local concentrations. This paper is aimed at investigating the application of two established analytic methods, which have been combined to determine single- and multi-event energy deposition distributions in epithelial cells of bronchiolar airways exposed to 5.15 MeV α particles (²³⁹Pu). The relative importance of multi-event occurrence on the shape of the specific energy distributions is discussed. (author)

  10. Method for acquiring part load distribution coefficient of air conditioning system

    Institute of Scientific and Technical Information of China (English)

    丁勇; 李百战; 谭颖

    2009-01-01

    This paper presents a method to acquire the runtime distribution ratio of a building air conditioning system under part load conditions (the part load coefficient of the system) from practical energy consumption data. Utilizing monthly energy consumption data for an entire year as the analysis object, the paper identifies the data distribution, verifies its characteristics and analyzes its probability density for the running time distribution ratio of the air conditioning system in part load zones over the whole operation period, thus providing a basis for an overall analysis of the energy efficiency of the air conditioning system. Drawing on the general survey of public building energy consumption carried out by the government of Chongqing, this paper takes governmental office buildings as an example: the part load ratio coefficients corresponding to the practical running of the air conditioning systems of governmental office buildings in Chongqing are obtained by utilizing the above probability analysis and the solving method of the probability density function. Utilizing the ratio coefficient obtained with this method, the part load coefficient at any running ratio of the air conditioning system can be obtained as required, which can be used at any load ratio for analyzing the running energy efficiency of the air conditioning system.

  11. Thermodynamic method for generating random stress distributions on an earthquake fault

    Science.gov (United States)

    Barall, Michael; Harris, Ruth A.

    2012-01-01

    This report presents a new method for generating random stress distributions on an earthquake fault, suitable for use as initial conditions in a dynamic rupture simulation. The method employs concepts from thermodynamics and statistical mechanics. A pattern of fault slip is considered to be analogous to a micro-state of a thermodynamic system. The energy of the micro-state is taken to be the elastic energy stored in the surrounding medium. Then, the Boltzmann distribution gives the probability of a given pattern of fault slip and stress. We show how to decompose the system into independent degrees of freedom, which makes it computationally feasible to select a random state. However, due to the equipartition theorem, straightforward application of the Boltzmann distribution leads to a divergence which predicts infinite stress. To avoid equipartition, we show that the finite strength of the fault acts to restrict the possible states of the system. By analyzing a set of earthquake scaling relations, we derive a new formula for the expected power spectral density of the stress distribution, which allows us to construct a computer algorithm free of infinities. We then present a new technique for controlling the extent of the rupture by generating a random stress distribution thousands of times larger than the fault surface, and selecting a portion which, by chance, has a positive stress perturbation of the desired size. Finally, we present a new two-stage nucleation method that combines a small zone of forced rupture with a larger zone of reduced fracture energy.
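
    The core numerical ingredient, a random field with a prescribed power spectral density, is conveniently generated by assigning random phases to spectral amplitudes and inverting the FFT. A 1-D minimal sketch with an assumed k^(-2) power law (the paper derives its own spectrum from earthquake scaling relations):

```python
import numpy as np

rng = np.random.default_rng(5)
n, L = 4096, 100e3                      # samples, fault length in metres
k = np.fft.rfftfreq(n, d=L / n)         # spatial frequencies (cycles/m)

amp = np.zeros_like(k)
amp[1:] = k[1:] ** (-1.0)               # amplitude ~ sqrt(PSD), PSD ~ k^-2
phase = rng.uniform(0, 2 * np.pi, k.size)
spec = amp * np.exp(1j * phase)
spec[0] = 0.0                           # zero-mean stress perturbation

stress = np.fft.irfft(spec, n)          # random profile with the target PSD
stress *= 5e6 / stress.std()            # scale to a 5 MPa RMS (illustrative)
print(stress[:5])
```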

  12. A numerical method to determine the steady state distribution of passive contaminant in generic ventilation systems.

    Science.gov (United States)

    Li, Xianting; Shao, Xiaoliang; Ma, Xiaojun; Zhang, Yuanhui; Cai, Hao

    2011-08-15

    Ventilation systems with air recirculation are designed to conserve energy, yet at the same time may transport hazardous substances among different rooms in the same building, which is a concern in indoor air quality control. There is a lack of effective methods to predict indoor contaminant distribution, primarily because of uncertainty in the contaminant concentration of the supply air, which in turn is due to the mixing ratio of fresh and recirculated air. In this paper, a versatile numerical method to determine the pollutant distribution of a ventilation system with recirculation at steady state is proposed, based on typical ventilation systems with accessibility of supply air (ASA) and accessibility of contaminant source (ACS). The relationship is established between the contaminant concentrations of supply air and return air in a ventilated room or zone. The concentrations of supply air and the contaminant distribution in each room can be determined using parameters such as ASA and ACS. The proposed method is validated by both experimental data and numerical simulation results. The computing speed of the proposed method is compared with that of the iteration method, and comparisons between the proposed method and the lumped parameter model are also conducted. The advantages of the proposed method in terms of accuracy, speed and versatility make it attractive for air quality control of complex ventilation systems with recirculation.
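
    The reason a direct (non-iterative) solution exists is that, at steady state, the room concentrations satisfy a linear mass balance once the recirculation ratio is fixed. A minimal sketch for three rooms served by one air handler, with illustrative values and equal return flows assumed (this is not the paper's ASA/ACS formulation):

```python
import numpy as np

Q = 1.0                          # airflow through each of 3 rooms (m^3/s)
r = 0.4                          # recirculated fraction of the return air
S = np.array([0.0, 2e-3, 0.0])   # contaminant source in room 2 (kg/s)

# Mass balance per room i: Q*C_i = Q*C_supply + S_i, with the supply
# concentration C_supply = r * mean(C) for equal return flows. In matrix
# form: (Q*I - (Q*r/n) * ones) @ C = S, solved directly.
n = len(S)
A = Q * np.eye(n) - (Q * r / n) * np.ones((n, n))
C = np.linalg.solve(A, S)
print("steady-state concentrations (kg/m^3):", C)
```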

  13. Comparison of Geostatistical Methods to Determine the Best Bioclimatic Data Interpolation Method for Modelling Species Distribution in Central Iran

    Directory of Open Access Journals (Sweden)

    R. Khosravi

    2014-09-01

    Full Text Available Climatic change can impose physiological constraints on species and can therefore affect species distribution. Bioclimatic predictors, including annual trends, regimes, thresholds and bio-limiting factors, are the most important independent variables in species distribution models. Water and temperature are the most limiting factors in the arid ecosystems of central Iran; mapping of climatic factors for species distribution models is therefore necessary. In this study, we describe the extraction of 20 important bioclimatic variables from climatic data and compare different interpolation methods, including inverse distance weighting, ordinary kriging, kriging with external trend, cokriging, and five radial basis functions. Normal climatic data (1950-2010) from 26 synoptic stations in central Iran were used to extract the bioclimatic data. Spatial correlation, heterogeneity and trend in the data were evaluated using three semivariogram models (spherical, exponential and Gaussian) and the best model was selected using cross validation. The optimum model for each bioclimatic variable was assessed based on the root mean square error and mean bias error. The exponential model was the best-fitting mathematical model for the empirical semivariogram. IDW and cokriging were recognised as the best interpolation methods for average annual temperature and annual precipitation, respectively. The use of elevation as an auxiliary variable appeared necessary for optimizing the interpolation of climatic and bioclimatic variables.
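
    Of the interpolators compared, IDW is the simplest to state: each grid point receives a distance-weighted mean of the station values. A minimal sketch with invented station data:

```python
import numpy as np

def idw(stations_xy, values, targets_xy, p=2.0, eps=1e-12):
    """Inverse distance weighting: weights 1/d^p, normalized per target."""
    d = np.linalg.norm(targets_xy[:, None, :] - stations_xy[None, :, :],
                       axis=2)
    w = 1.0 / (d ** p + eps)             # eps avoids division by zero
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Toy data: (x, y) station coordinates and mean annual temperature (°C).
xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
temp = np.array([17.2, 18.1, 15.9, 16.4])
grid = np.array([[5.0, 5.0], [2.0, 8.0]])
print(idw(xy, temp, grid))
```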

  14. High definition in minimally invasive surgery: a review of methods for recording, editing, and distributing video.

    Science.gov (United States)

    Kelly, Christopher R; Hogle, Nancy J; Landman, Jaime; Fowler, Dennis L

    2008-09-01

    The use of high-definition cameras and monitors during minimally invasive procedures can provide the surgeon and operating team with more than twice the resolution of standard-definition systems. Although this dramatic improvement in visualization offers numerous advantages, adopting high-definition cameras in the operating room can be challenging because new recording equipment must be purchased and several new technologies are required to edit and distribute the video. The purpose of this review article is to provide an overview of the popular methods for recording, editing, and distributing high-definition video. It discusses the essential technical concepts of high-definition video, reviews the kinds of equipment and methods most often used for recording, and describes several options for video distribution.

  15. Distributed Solutions for Loosely Coupled Feasibility Problems Using Proximal Splitting Methods

    DEFF Research Database (Denmark)

    Pakazad, Sina Khoshfetrat; Andersen, Martin Skovgaard; Hansson, Anders

    2014-01-01

    In this paper, we consider convex feasibility problems (CFPs) where the underlying sets are loosely coupled, and we propose several algorithms to solve such problems in a distributed manner. These algorithms are obtained by applying proximal splitting methods to convex minimization reformulations of CFPs. We also put forth distributed convergence tests which enable us to establish feasibility or infeasibility of the problem in a distributed way, and we provide convergence rate results. Under the assumption that the problem is feasible and boundedly linearly regular, these convergence results are given in terms of the distance of the iterates to the feasible set, and are similar to those of classical projection methods. In case the feasibility problem is infeasible, we provide convergence rate results that concern the convergence of certain error bounds.
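    As a point of reference for the classical projection methods mentioned above, a minimal alternating-projections sketch for a two-set CFP (a halfspace and a ball); this is the baseline, not the proposed proximal splitting algorithms:

```python
# Sketch of the classical alternating-projection baseline: find a point in
# the intersection of two convex sets by projecting back and forth.
import numpy as np

def proj_halfspace(x, a, b):          # {x : a.x <= b}
    v = a @ x - b
    return x if v <= 0 else x - v * a / (a @ a)

def proj_ball(x, c, r):               # {x : |x - c| <= r}
    d = np.linalg.norm(x - c)
    return x if d <= r else c + r * (x - c) / d

a, b = np.array([1.0, 1.0]), 1.0
c, r = np.array([2.0, 0.0]), 1.8
x = np.array([5.0, 5.0])
for _ in range(100):
    x = proj_halfspace(proj_ball(x, c, r), a, b)
print(x)  # converges to a point in the intersection (if nonempty)
```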

  16. Distributed memory compiler methods for irregular problems: Data copy reuse and runtime partitioning

    Science.gov (United States)

    Das, Raja; Ponnusamy, Ravi; Saltz, Joel; Mavriplis, Dimitri

    1991-01-01

    Outlined here are two methods which we believe will play an important role in any distributed memory compiler able to handle sparse and unstructured problems. We describe how to link runtime partitioners to distributed memory compilers. In our scheme, programmers can implicitly specify how data and loop iterations are to be distributed between processors. This insulates users from having to deal explicitly with potentially complex algorithms that carry out work and data partitioning. We also describe a viable mechanism for tracking and reusing copies of off-processor data. In many programs, several loops access the same off-processor memory locations. As long as it can be verified that the values assigned to off-processor memory locations remain unmodified, we show that we can effectively reuse stored off-processor data. We present experimental data from a 3-D unstructured Euler solver run on the iPSC/860 to demonstrate the usefulness of our methods.
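    The copy-reuse idea can be pictured as a version-tagged cache of off-processor values. A toy Python sketch; the fetch callback and version counters are illustrative stand-ins for the runtime system, which the abstract does not describe at this level:

```python
# Toy sketch of copy reuse: cache off-processor values fetched for one loop
# and reuse them in later loops while the owner has not written to them.
class HaloCache:
    def __init__(self, fetch):
        self.fetch = fetch            # fetch(array_id, index) -> value
        self.cache = {}               # (array_id, index) -> (version, value)
        self.version = {}             # array_id -> write counter

    def write(self, array_id):
        # owning processor modified the array: invalidate cached copies
        self.version[array_id] = self.version.get(array_id, 0) + 1

    def read(self, array_id, index):
        v = self.version.get(array_id, 0)
        key = (array_id, index)
        if key in self.cache and self.cache[key][0] == v:
            return self.cache[key][1]            # reuse stored copy
        value = self.fetch(array_id, index)      # off-processor access
        self.cache[key] = (v, value)
        return value

remote = {("x", 7): 3.14}
cache = HaloCache(lambda a, i: remote[(a, i)])
print(cache.read("x", 7), cache.read("x", 7))   # second read hits the cache
```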

  17. Method of Images for the Fast Calculation of Temperature Distributions in Packaged VLSI Chips

    CERN Document Server

    Hériz, Virginia Martín; Kemper, T.; Kang, S.-M.; Shakouri, A.

    2008-01-01

    Thermal-aware routing and placement algorithms are important in industry. Currently, there are reasonably fast Green's function based algorithms that calculate the temperature distribution in a chip made from a stack of different materials. However, the layers are all assumed to have the same size, which neglects the important fact that the thermal mounts placed underneath the chip can be significantly larger than the chip itself. In an earlier publication, we showed that the image blurring technique can be used to quickly calculate the temperature distribution in realistic packages. For that method to be effective, the temperature distributions for several point heat sources at the center, corners, and edges of the chip must be calculated using finite element analysis (FEA) or measured. In addition, more accurate results require correction by a weighting function that needs several FEA simulations. In this paper, we introduce the method of images, which takes advantage of the symmetry of the thermal boundary...
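    The core of an image method is summing a point-source kernel over mirror images of the source across the reflecting walls. A minimal sketch, using an illustrative free-space 1/(4*pi*k*r) kernel and made-up chip dimensions rather than the paper's packaged-chip Green's function:

```python
# Sketch of the image-source construction: a point source inside a
# rectangular domain with adiabatic (mirror) side walls is replaced by a
# lattice of image sources; temperature is the truncated sum of a point
# kernel over all images.  All parameter values are illustrative.
import numpy as np

def image_sum(px, py, x0=0.3e-3, y0=0.4e-3, q=0.1, k=150.0,
              W=1e-3, H=1e-3, n=20, z=20e-6):
    T = 0.0
    for m in range(-n, n + 1):
        for xs in (2 * m * W + x0, 2 * m * W - x0):   # x-reflections
            for j in range(-n, n + 1):
                for ys in (2 * j * H + y0, 2 * j * H - y0):  # y-reflections
                    r = np.sqrt((px - xs) ** 2 + (py - ys) ** 2 + z ** 2)
                    T += q / (4 * np.pi * k * r)
    return T

print(image_sum(0.5e-3, 0.5e-3))   # temperature rise near the chip centre
```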

  18. Distributed sampling measurement method of network traffic in high-speed IPv6 networks

    Institute of Scientific and Technical Information of China (English)

    Pan Qiao; Pei Changxing

    2007-01-01

    With the advent of large-scale and high-speed IPv6 network technology, effective multi-point traffic sampling is becoming a necessity. A distributed multi-point traffic sampling method that provides an accurate and efficient solution for measuring IPv6 traffic is proposed. The method samples IPv6 traffic based on an analysis of the bit randomness of each byte in the packet header. It offers a way to consistently select the same subset of packets at each measurement point, which satisfies the requirement of distributed multi-point measurement. Finally, using real IPv6 traffic traces, it is shown that the sampled traffic data have good uniformity, satisfy the requirement of sampling randomness, and correctly reflect the packet size distribution of the full packet trace.
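    Consistent selection across monitors can be achieved by hashing header bytes judged sufficiently random and keeping packets whose hash falls below a threshold. A sketch with assumed byte offsets and sampling rate; the paper's exact byte selection comes from its randomness analysis:

```python
# Sketch of consistent multi-point sampling: every measurement point hashes
# the same header bytes of a packet and keeps the packet when the hash
# falls below a threshold, so all points select the same subset.
import hashlib

SAMPLE_RATE = 0.10
THRESHOLD = int(SAMPLE_RATE * 2**32)

def keep(packet: bytes) -> bool:
    # byte offsets are illustrative stand-ins for the fields the paper's
    # per-byte randomness analysis would select
    key = packet[4:8] + packet[38:40]
    h = hashlib.sha1(key).digest()
    return int.from_bytes(h[:4], "big") < THRESHOLD

pkt = bytes(range(60))                  # stand-in IPv6 header + payload bytes
print(keep(pkt))                        # same verdict at every monitor
```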

  19. Cellular Neural Network-Based Methods for Distributed Network Intrusion Detection

    Directory of Open Access Journals (Sweden)

    Kang Xie

    2015-01-01

    Full Text Available To address the problems of current distributed intrusion detection systems (DIDS), a new online distributed intrusion detection model based on cellular neural networks (CNNs) is proposed, in which a discrete-time CNN (DTCNN) is used as the weak classifier in each local node and a state-controlled CNN (SCCNN) is used as the global detection method. We further propose a new method for designing the template parameters of the SCCNN by solving a linear matrix inequality. Experimental results based on the KDD CUP 99 dataset show the model's feasibility and effectiveness. Emerging evidence indicates that this approach is amenable to parallel and analog very large scale integration (VLSI) implementation, which allows the distributed intrusion detection to be performed better.

  20. Simulation of Temperature Distribution In a Rectangular Cavity using Finite Element Method

    CERN Document Server

    Naa, Christian

    2013-01-01

    This paper presents the study and implementation of the finite element method to find the temperature distribution in a rectangular cavity containing a fluid. The fluid motion is driven by a sudden temperature difference applied to two opposite side walls of the cavity; the remaining walls are considered adiabatic, and the fluid is assumed incompressible. The problem is approached by two-dimensional transient conduction applied at the heated sidewall and a one-dimensional steady-state convection-diffusion equation applied inside the cavity. The parameters investigated are time and velocity, which are computed together with the boundary conditions to give the temperature distribution in the cavity. The finite element implementation results in a system of algebraic equations in vector and matrix form, which is solved with MATLAB programs. The final temperature distribution results are presented as contour maps within the re...
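    The 1-D steady convection-diffusion balance inside the cavity leads to a tridiagonal system. A minimal central-difference sketch with illustrative parameters; a linear-element Galerkin FEM on a uniform mesh yields the same system:

```python
# Sketch of the 1-D steady convection-diffusion equation
# u*dT/dx = alpha*d2T/dx2 with fixed end temperatures, discretised with
# central differences.  Parameter values are illustrative (cell Peclet < 2).
import numpy as np

n, L = 51, 1.0
u, alpha = 1e-3, 1.5e-5               # velocity, thermal diffusivity
T_hot, T_cold = 330.0, 300.0
h = L / (n - 1)

A = np.zeros((n, n)); b = np.zeros(n)
for i in range(1, n - 1):
    A[i, i - 1] = -alpha / h**2 - u / (2 * h)
    A[i, i]     =  2 * alpha / h**2
    A[i, i + 1] = -alpha / h**2 + u / (2 * h)
A[0, 0] = A[-1, -1] = 1.0             # Dirichlet boundary rows
b[0], b[-1] = T_hot, T_cold
T = np.linalg.solve(A, b)
print(T[::10])                        # temperature profile samples
```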

  1. A computed torque method based attitude control with optimal force distribution for articulated body mobile robots

    Energy Technology Data Exchange (ETDEWEB)

    Fukushima, Edwardo F.; Hirose, Shigeo [Tokyo Inst. of Tech. (Japan)

    2000-05-01

    This paper introduces an attitude control scheme based on optimal force distribution using quadratic programming, which minimizes joint energy consumption. The method shares similarities with force distribution for multifingered hands, multiple coordinated manipulators, and legged walking robots. In particular, an attitude control scheme was introduced inside the force distribution problem and successfully implemented for control of the articulated body mobile robot KR-II, an actual mobile robot composed of cylindrical segments linked in series by prismatic joints, with a long snake-like appearance. The prismatic joints are force controlled so that each segment's vertical motion can automatically follow terrain irregularities. Attitude control is necessary because the system acts like a chain of wheeled inverted pendulum carts connected in series, which is unstable by nature. The validity and effectiveness of the proposed method are verified by computer simulation and by experiments with the robot KR-II. (author)
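    The force distribution step is an equality-constrained quadratic program, which has a closed-form KKT solution. A minimal sketch with a made-up three-joint geometry, not KR-II parameters:

```python
# Sketch of optimal force distribution: minimise a quadratic joint-energy
# proxy subject to the resultant force/moment balance, solved through the
# KKT system.  Weights, arms and resultant are illustrative assumptions.
import numpy as np

W = np.diag([1.0, 2.0, 1.0])          # per-joint energy weights
A = np.array([[1.0, 1.0, 1.0],        # total vertical force balance
              [-1.0, 0.0, 1.0]])      # moment balance (arms -1, 0, +1 m)
w = np.array([98.1, 0.0])             # required resultant: weight, zero moment

# KKT system:  [W  A^T] [f     ]   [0]
#              [A  0  ] [lambda] = [w]
n, m = W.shape[0], A.shape[0]
K = np.block([[W, A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([np.zeros(n), w])
f = np.linalg.solve(K, rhs)[:n]
print(f, A @ f)                        # joint forces and resultant check
```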

  2. Perturbation methods for structural-acoustic coupled systems with interval parameters

    Institute of Scientific and Technical Information of China (English)

    牛明涛; 李昌盛; 陈利源

    2015-01-01

    Based on perturbation theory, two interval analysis methods, the first-order interval parameter perturbation method (FIPPM) and the high-order interval parameter perturbation method (HIPPM), are proposed for predicting the response of structural-acoustic coupled systems, which are common in practical engineering, with interval uncertainties in both the system parameters and the external loads. The discrete structural-acoustic equilibrium equations are established with the finite element method, and interval variables are introduced to quantitatively describe the uncertain parameters under limited information. Using the first-order Taylor series and first-order perturbation theory, FIPPM quickly estimates the lower and upper bounds of the system response. HIPPM employs a modified Taylor series to approximate the non-linear interval matrix and vector, and retains part of the higher-order terms of the Neumann expansion to compute the interval matrix inverse, which effectively improves the accuracy of the computed response ranges. Taking a cuboid enclosed cavity as the study object, the results of the proposed methods are compared with those of the traditional Monte Carlo method, verifying that the proposed methods are feasible and effective for predicting the sound pressure ranges of structural-acoustic coupled systems with interval parameters.
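    The first-order step of FIPPM can be illustrated on a toy system: expand the response of K(a)u = f around the interval midpoint and bound it with the first Taylor term. A sketch with an illustrative 2-DOF stiffness model, not the paper's structural-acoustic FE system:

```python
# Minimal sketch of a first-order interval parameter perturbation: for
# K(a) u = f with an interval parameter a = a0 +/- da, expand u to first
# order and bound the response.
import numpy as np

def K(a):                              # parameter-dependent stiffness
    return np.array([[2 * a, -a], [-a, a]])

f = np.array([1.0, 0.5])
a0, da = 100.0, 5.0                    # interval midpoint and radius

u0 = np.linalg.solve(K(a0), f)
dK = np.array([[2.0, -1.0], [-1.0, 1.0]])   # dK/da (constant here)
du = np.linalg.solve(K(a0), -dK @ u0)       # du/da = -K^{-1} (dK/da) u0
u_lo, u_hi = u0 - np.abs(du) * da, u0 + np.abs(du) * da
print(u_lo, u_hi)                      # first-order response bounds
```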

  3. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition of generic run-time error types, design of methods of observing application software behavior during execution, and design of methods of evaluating run-time constraints. In the definition of error types it is attempted to cover all relevant aspects of the application software behavior. Methods of observation and constraint evaluation are designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software.

  4. Tide forecasting method based on dynamic weight distribution for operational evaluation

    Institute of Scientific and Technical Information of China (English)

    Shao-wei QIU; Zeng-chuan DONG; Fen XU; Li SUN; Sheng CHEN

    2009-01-01

    Through analysis of the operational evaluation factors for tide forecasting, the relationship between the evaluation factors and the weights of forecasters was examined. A tide forecasting method based on dynamic weight distribution for operational evaluation was developed, and synchronous forecasting by multiple forecasters was realized while avoiding the instability caused by relying on a single forecaster. Weights are distributed to the forecasters according to each one's forecast precision, and an evaluation criterion for the professional level of the forecasters was also built. The eligibility rates of the forecast results demonstrate the skill of the forecasters and the stability of their forecasts. With the developed method, the precision and reasonableness of tide forecasting are improved. Its application to tide forecasting at the Huangpu Park tidal station demonstrates the validity of the method.
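    A minimal sketch of the weighting idea: weights proportional to each forecaster's recent precision (here, inverse mean absolute error; the paper's operational evaluation factors are richer), applied to the current forecasts:

```python
# Sketch of dynamic weight distribution: refresh each forecaster's weight
# from recent forecast precision and combine the current forecasts.
# All numbers are illustrative.
import numpy as np

past_errors = np.array([[0.08, 0.12, 0.05],    # forecaster 1 recent |errors| [m]
                        [0.15, 0.10, 0.20],    # forecaster 2
                        [0.06, 0.07, 0.09]])   # forecaster 3
new_forecasts = np.array([2.41, 2.55, 2.38])   # current tide forecasts [m]

precision = 1.0 / past_errors.mean(axis=1)     # higher precision -> larger weight
weights = precision / precision.sum()
print("weights:", weights.round(3))
print("combined forecast:", float(weights @ new_forecasts))
```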

  5. Methods to estimate distribution and range extent of grizzly bears in the Greater Yellowstone Ecosystem

    Science.gov (United States)

    Haroldson, Mark A.; Schwartz, Charles C.; , Daniel D. Bjornlie; , Daniel J. Thompson; , Kerry A. Gunther; , Steven L. Cain; , Daniel B. Tyers; Frey, Kevin L.; Aber, Bryan C.

    2014-01-01

    The distribution of the Greater Yellowstone Ecosystem grizzly bear (Ursus arctos) population has expanded into areas unoccupied since the early 20th century. Up-to-date information on the area and extent of this distribution is crucial for federal, state, and tribal wildlife and land managers to make informed decisions regarding grizzly bear management. The most recent estimate of grizzly bear distribution (2004) utilized fixed-kernel density estimators to describe distribution. This method was complex and computationally time consuming and excluded observations of unmarked bears. Our objective was to develop a technique to estimate grizzly bear distribution that would allow for the use of all verified grizzly bear location data, as well as provide the simplicity to be updated more frequently. We placed all verified grizzly bear locations from all sources from 1990 to 2004 and 1990 to 2010 onto a 3-km × 3-km grid and used zonal analysis and ordinary kriging to develop a predicted surface of grizzly bear distribution. We compared the area and extent of the 2004 kriging surface with the previous 2004 effort and evaluated changes in grizzly bear distribution from 2004 to 2010. The 2004 kriging surface was 2.4% smaller than the previous fixed-kernel estimate, but more closely represented the data. Grizzly bear distribution increased 38.3% from 2004 to 2010, with most expansion in the northern and southern regions of the range. This technique can be used to provide a current estimate of grizzly bear distribution for management and conservation applications.

  6. Application of the anisotropy field distribution method to arrays of magnetic nanowires

    OpenAIRE

    De La Torre Medina, Joaquin; Darques, Michaël; Piraux, Luc; Encinas, Armando

    2009-01-01

    The applicability of the anisotropy field distribution method and the conditions required for an accurate determination of the effective anisotropy field in arrays of magnetic nanowires have been evaluated. In arrays of magnetic nanowires that behave as ideal uniaxial systems having only magnetostatic contributions to the effective anisotropy field, i.e., shape anisotropy and magnetostatic coupling, the method yields accurate values of the average anisotropy field at low-moderate dipolar coup...

  7. Improving the efficiency of the loop method for the simulation of water distribution networks

    OpenAIRE

    2015-01-01

    Efficiency of hydraulic solvers for the simulation of flows and pressures in water distribution systems (WDSs) is very important, especially in the context of optimization and risk analysis problems, where the hydraulic simulation has to be repeated many times. Among the methods used for hydraulic solvers, the most prominent nowadays is the global gradient algorithm (GGA), based on a hybrid node-loop formulation. Previously, another method based just on loop flow equations was proposed, which...

  8. Conventional and Alternative Disinfection Methods of Legionella in Water Distribution Systems – Review

    Directory of Open Access Journals (Sweden)

    Pūle Daina

    2016-12-01

    Full Text Available The prevalence of Legionella in drinking water distribution systems is a widespread problem. Outbreaks of Legionella-caused diseases occur even though various disinfectants are used to control Legionella. Conventional methods such as thermal disinfection, silver/copper ionization, ultraviolet irradiation and chlorine-based disinfection have not been effective in the long term for controlling biofilm bacteria. Therefore, research to develop more effective disinfection methods is still necessary.

  9. Description of the particle distribution in the space of composite suspension casting by statistical methods

    Directory of Open Access Journals (Sweden)

    J. Grabian

    2011-01-01

    Full Text Available This article presents a description of the reinforcement phase distribution in the space of a composite suspension casting. The statistical methods used include Spearman's rank correlation coefficient combined with a significance test, the chi-square test of independence, and the contingency test. The reinforcement phase consisted of SiC particles (15% by weight), and the matrix was AlSi11 alloy. The composites were made by the mechanical stir casting method.
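    For illustration, a chi-square test of independence applied to made-up particle counts per region of a casting cross-section, using scipy:

```python
# Sketch of the chi-square independence check: cross-tabulate particle
# counts by region of the casting and test whether counts depend on
# position.  The counts below are made-up.
import numpy as np
from scipy.stats import chi2_contingency

counts = np.array([[52, 48, 50],      # particle counts in a 3x3 grid
                   [47, 55, 49],      # of regions of the cross-section
                   [51, 46, 53]])
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# large p: no evidence that particle frequency depends on region
```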

  10. A comparison between different methods for determining grain distribution in coarse channel beds

    Institute of Scientific and Technical Information of China (English)

    Alessio Cislaghi; Enrico Antonio Chiaradia; Gian Battista Bischetti

    2016-01-01

    The determination of grain size distribution in alluvial channels plays a crucial role in understanding fluvial dynamics and processes (e.g., hydraulic resistance, sediment transport and erosion, and habitat suitability). However, to determine an accurate distribution, tremendous field efforts are often required. Traditionally, the grain size distribution of channel beds has been obtained by manually counting a set of randomly selected stones (the "pebble count"). Based on this elementary principle, many authors have proposed different adaptations to overcome weaknesses and problems with the original method; with the development of digital technology, photographic methods have been developed in order to significantly reduce the time spent in the field. Two of these "image-assisted" methods are Automated Grain Sizing (AGS) and Manual Photo Sieving (MPS). In this study, AGS and MPS were applied under ideal laboratory conditions, to be used as a reference, and in two field conditions with different degrees of difficulty in terms of visual determination of the grain size distribution: an artificial unlined channel and two natural mountainous streams. The results were compared with those obtained with the pebble-count method. In general, strong agreement between the methods was found when they were applied under favorable conditions ("the laboratory"), and the differences between the image-assisted and pebble-count methods were similar to those found in previous studies. Despite being more time consuming, MPS was deemed preferable to AGS when conditions are not optimal; in these cases, the time spent on image elaboration increased significantly in the AGS method (approximately three-fold), but the estimation error of the median grain size decreased by approximately 37%. The use of image-assisted analysis has proven to be robust for characterizing sediment in watercourse beds and reducing fieldwork time, but because field conditions can
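    A toy sketch in the spirit of automated grain sizing: segment grains in a binary image, convert labelled-region areas to equivalent diameters, and report the median size (D50). The synthetic image stands in for a real bed photograph:

```python
# Toy automated-grain-sizing sketch: label connected "grains" and report
# the median equivalent diameter.  The synthetic image is illustrative.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = np.zeros((200, 200), bool)
for _ in range(40):                       # paint random circular "grains"
    cx, cy, r = rng.integers(10, 190, 2).tolist() + [rng.integers(3, 9)]
    yy, xx = np.ogrid[:200, :200]
    img |= (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2

labels, n = ndimage.label(img)
areas = ndimage.sum(img, labels, index=range(1, n + 1))
diameters = 2 * np.sqrt(areas / np.pi)    # equivalent circle diameters [px]
print("grains:", n, " D50 [px]:", round(float(np.median(diameters)), 1))
```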

  11. An Identification Method of Magnetizing Inrush Current Phenomena in Distribution System

    Science.gov (United States)

    Dou, Naoki; Toyama, Atushi; Satoh, Kohki; Naitoh, Tadashi; Masaki, Kazuyuki

    In high-voltage distribution systems, there are many power quality problems due to voltage dips, and magnetizing inrush currents are one cause of such dips. To suppress voltage dips, it is necessary to identify magnetizing inrush current phenomena. In this paper, the authors propose a new identification method. The principles are that the flux at the start and end of saturation is equal and that a characteristic inrush current pattern exists. To avoid interference when saturation areas overlap, the rectangular coordinate method is adopted.

  12. Air method measurements of apple vessel length distributions with improved apparatus and theory

    Science.gov (United States)

    Shabtal Cohen; John Bennink; Mel Tyree

    2003-01-01

    Studies showing that rootstock dwarfing potential is related to plant hydraulic conductance led to the hypothesis that xylem properties are also related. Vessel length distribution and other properties of apple wood from a series of varieties were measured using the 'air method' in order to test this hypothesis. Apparatus was built to measure and monitor...

  13. Research on Gaussian distribution preprocess method of infrared multispectral image background clutter

    Institute of Scientific and Technical Information of China (English)

    张伟; 武春风; 邓盼; 范宁

    2004-01-01

    This paper introduces a sliding-window mean removal high-pass filter by which the background clutter of an infrared multispectral image is obtained. The optimum size of the sliding window is selected based on the skewness-kurtosis test. Finally, a multivariate Gaussian distribution expression for the background clutter image is given.
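    A minimal sketch of the preprocessing chain: subtract the sliding-window local mean to obtain the clutter image, then score candidate window sizes by how Gaussian the residual looks (skewness and excess kurtosis near zero). The synthetic IR band is illustrative:

```python
# Sketch: sliding-window mean removal high-pass filter plus a
# skewness-kurtosis scan over candidate window sizes.
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 128)
background = 40 * np.exp(-((x[:, None] - 0.5) ** 2 + (x[None, :] - 0.5) ** 2))
image = background + rng.normal(0, 1.0, (128, 128))   # synthetic IR band

def clutter(image, w):
    return image - uniform_filter(image, size=w)       # mean-removal HPF

for w in (5, 9, 17, 33):
    c = clutter(image, w).ravel()
    print(f"w={w:2d}  skew={skew(c):+.3f}  kurtosis={kurtosis(c):+.3f}")
# pick the window whose residual is closest to Gaussian (both stats near 0)
```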

  14. Method for Quantitative Determination of Spatial Polymer Distribution in Alginate Beads Using Raman Spectroscopy

    NARCIS (Netherlands)

    Heinemann, Matthias; Meinberg, Holger; Büchs, Jochen; Koß, Hans-Jürgen; Ansorge-Schumacher, Marion B.

    2005-01-01

    A new method based on Raman spectroscopy is presented for non-invasive, quantitative determination of the spatial polymer distribution in alginate beads of approximately 4 mm diameter. With the experimental setup, a two-dimensional image is created along a thin measuring line through the bead.

  16. Networked and Distributed Control Method with Optimal Power Dispatch for Islanded Microgrids

    DEFF Research Database (Denmark)

    Li, Qiang; Peng, Congbo; Chen, Minyou;

    2017-01-01

    In this paper, a two-layer networked and distributed control method is proposed, in which a top-layer communication network operates over a bottom-layer microgrid. The communication network consists of two subgraphs, the first composed of all agents, while the second is only composed of co...

  17. An efficient Markov chain Monte Carlo method for distributions with intractable normalising constants

    DEFF Research Database (Denmark)

    Møller, Jesper; Pettitt, A. N.; Reeves, R.

    2006-01-01

    A method is presented which requires only that independent samples can be drawn from the unnormalised density at any particular parameter value. The proposal distribution is constructed so that the normalising constant cancels from the Metropolis–Hastings ratio. The method is illustrated by producing posterior samples for parameters of the Ising model given a particular lattice realisation.
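    The cancelling-constant idea can be sketched with the closely related exchange algorithm (the paper's own auxiliary-variable construction differs in detail): an auxiliary data set drawn exactly at the proposed parameter makes the normalising constants drop out of the acceptance ratio. A toy exponential model with a flat prior, chosen so that exact simulation is trivial:

```python
# Hedged sketch of an exchange-type MCMC step where normalising constants
# cancel.  Toy model: q(y|t) = exp(-t*y) on y > 0, so exact sampling is
# available; for the Ising case the auxiliary draw would need perfect
# simulation.
import numpy as np

rng = np.random.default_rng(3)
y = rng.exponential(1 / 2.0, 50)          # observed data, true t = 2

def log_q(data, t):                        # unnormalised log density
    return -t * np.sum(data)

t, chain = 1.0, []
for _ in range(5000):
    t_new = abs(t + rng.normal(0, 0.3))    # symmetric (reflected) proposal
    aux = rng.exponential(1 / t_new, y.size)   # exact draw at t_new
    # normalising constants Z(t), Z(t_new) cancel in this ratio:
    log_r = (log_q(y, t_new) + log_q(aux, t)) - (log_q(y, t) + log_q(aux, t_new))
    if np.log(rng.uniform()) < log_r:
        t = t_new
    chain.append(t)
print("posterior mean of t:", np.mean(chain[1000:]))
```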

  18. The simulation of skin temperature distributions by means of a relaxation method (applied to IR thermography)

    NARCIS (Netherlands)

    Vermey, G.F.

    1975-01-01

    To solve the differential equation for the heat in a two-layer, rectangular piece of skin tissue, a relaxation method based on a finite difference technique is used. The temperature distributions on the skin surface are calculated, and the results are used to derive a criterion for the resolution of IR thermography.
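    A minimal relaxation sketch for steady two-layer conduction: Gauss-Seidel sweeps on a five-point stencil with harmonic-mean face conductivities; geometry, conductivities and boundary temperatures are illustrative, not tissue values from the paper:

```python
# Sketch of the relaxation approach: iterative sweeps on a finite
# difference grid for steady conduction in a two-layer slab.
import numpy as np

nx, ny = 40, 20
k = np.full((ny, nx), 0.5)             # upper layer conductivity [W/mK]
k[ny // 2:, :] = 0.2                   # deeper layer, lower conductivity
T = np.full((ny, nx), 33.0)
T[0, :], T[-1, :] = 37.0, 30.0         # core side hot, surface side cooler

for sweep in range(2000):              # relaxation iterations
    for i in range(1, ny - 1):
        for j in range(1, nx - 1):
            # harmonic-mean face conductivities for the 5-point stencil
            kn = 2 * k[i, j] * k[i - 1, j] / (k[i, j] + k[i - 1, j])
            ks = 2 * k[i, j] * k[i + 1, j] / (k[i, j] + k[i + 1, j])
            kw = 2 * k[i, j] * k[i, j - 1] / (k[i, j] + k[i, j - 1])
            ke = 2 * k[i, j] * k[i, j + 1] / (k[i, j] + k[i, j + 1])
            T[i, j] = (kn * T[i - 1, j] + ks * T[i + 1, j] +
                       kw * T[i, j - 1] + ke * T[i, j + 1]) / (kn + ks + kw + ke)
print(T[ny // 2, ::8])                 # temperatures along the mid-depth row
```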

  19. Evaluation for Confidence Interval of Reliability of Rolling Bearing Lifetime with Type I Censoring

    Directory of Open Access Journals (Sweden)

    Xintao Xia

    2013-06-01

    Full Text Available In the lifetime testing of rolling bearings under type I censoring with a small sample, the confidence interval of reliability needs to be evaluated to ensure the safe and reliable operation of systems such as aerospace systems. This requires the probability density function of the Weibull distribution parameters. Owing to the very limited test data and the lack of prior knowledge, this function is difficult to obtain with prevailing methods such as the moment method, the maximum likelihood method, and the Harris method. To this end, the bootstrap likelihood maximum-entropy method is proposed by fusing the bootstrap method, the maximum likelihood method, and the Harris method. The small-sample lifetime test data are expanded into a large sample of simulated parameter data in order to obtain the probability density function of the parameters. The confidence intervals of the Weibull distribution parameters are estimated, and the confidence interval of reliability is calculated. Tests with complete large-sample data, complete small-sample data, and incomplete small-sample data are conducted to prove the effectiveness of the proposed method. The results show that the proposed method can assess the confidence interval of reliability without any prior information on the Weibull distribution parameters.
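    The bootstrap step can be sketched on complete (uncensored) data: refit the Weibull on resamples, evaluate the reliability R(t) = exp(-(t/eta)^beta) for each refit, and take percentile bounds. The paper's full method additionally handles type I censoring and fuses in a maximum-entropy step:

```python
# Sketch: percentile-bootstrap confidence interval for Weibull reliability
# from a small complete sample (censoring is ignored here for brevity).
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(4)
life = weibull_min.rvs(2.0, scale=100.0, size=15, random_state=rng)  # small sample

t0, B, R = 50.0, 500, []
for _ in range(B):
    s = rng.choice(life, size=life.size, replace=True)   # bootstrap resample
    beta, _, eta = weibull_min.fit(s, floc=0)            # shape, loc, scale
    R.append(np.exp(-(t0 / eta) ** beta))                # reliability at t0
lo, hi = np.percentile(R, [2.5, 97.5])
print(f"R({t0:g}) 95% bootstrap CI: [{lo:.3f}, {hi:.3f}]")
```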

  20. Distance Determination Method for Normally Distributed Obstacle Avoidance of Mobile Robots in Stochastic Environments

    Directory of Open Access Journals (Sweden)

    Jinhong Noh

    2016-04-01

    Full Text Available Obstacle avoidance methods require knowledge of the distance between a mobile robot and the obstacles in its environment. However, in stochastic environments, distance determination is difficult because object positions are uncertain. The purpose of this paper is to determine the distance between a robot and obstacles represented by probability distributions. Distance determination for obstacle avoidance should account for position uncertainty, computational cost, and collision probability; the proposed method considers all of these conditions, unlike conventional methods. It determines the obstacle region using a collision probability density threshold, defines a minimum distance function to the boundary of the obstacle region with a Lagrange multiplier method, and computes the distance numerically. Simulations were executed to compare the performance of distance determination methods, and our method demonstrated faster and more accurate performance than conventional methods. It may help overcome position uncertainty issues in obstacle avoidance, such as low-accuracy sensors, environments with poor visibility, or unpredictable obstacle motion.
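    The construction in the abstract can be sketched directly: threshold the Gaussian position density to obtain an elliptical obstacle region, then minimise the distance to the robot subject to staying inside the region (the Lagrange multiplier condition is what the constrained solver enforces). Parameter values are illustrative:

```python
# Sketch: distance from a robot to a Gaussian obstacle region defined by a
# density threshold, computed by constrained minimisation.
import numpy as np
from scipy.optimize import minimize

mu = np.array([3.0, 1.0])                  # obstacle mean position
S = np.array([[0.30, 0.05], [0.05, 0.20]]) # position covariance
Sinv = np.linalg.inv(S)
rho = 0.2                                   # density threshold
# density >= rho  <=>  Mahalanobis^2 <= m2
m2 = -2 * np.log(rho * 2 * np.pi * np.sqrt(np.linalg.det(S)))

robot = np.array([0.0, 0.0])
res = minimize(lambda x: np.linalg.norm(x - robot),
               x0=mu,
               constraints=[{"type": "ineq",
                             "fun": lambda x: m2 - (x - mu) @ Sinv @ (x - mu)}])
print("distance to obstacle region:", round(float(res.fun), 3))
```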