WorldWideScience

Sample records for interval distribution method

  1. Measurement of subcritical multiplication by the interval distribution method

    International Nuclear Information System (INIS)

    Nelson, G.W.

    1985-01-01

    The prompt decay constant or the subcritical neutron multiplication may be determined by measuring the distribution of the time intervals between successive neutron counts. The distribution data are analyzed by least-squares fitting to a theoretical distribution function derived from a point reactor probability model. Published results of measurements with one- and two-detector systems are discussed. Data collection times are shorter, and statistical errors are smaller, the nearer the system is to delayed critical. Several of the measurements indicate that a shorter data collection time and higher accuracy are possible with the interval distribution method than with the Feynman variance method.
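
A minimal sketch of this kind of analysis, with synthetic data: intervals between counts are histogrammed and least-squares fitted to a two-exponential interval model. The mixture weights, decay constants, and the simple model form are all invented for illustration; the theoretical distribution function in the paper is derived from a point reactor probability model and is more involved.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Synthetic time intervals between successive counts: a crude stand-in for
# detector data, drawn from a two-component exponential mixture.
intervals = np.concatenate([
    rng.exponential(scale=1.0, size=8000),    # uncorrelated (background) component
    rng.exponential(scale=0.05, size=2000),   # fast "prompt-decay" component
])

# Histogram the intervals and least-squares fit a two-exponential model,
# in the spirit of fitting a theoretical interval-distribution function.
counts, edges = np.histogram(intervals, bins=200, range=(0, 5))
t = 0.5 * (edges[:-1] + edges[1:])

def model(t, A, lam1, B, lam2):
    return A * np.exp(-lam1 * t) + B * np.exp(-lam2 * t)

mask = counts > 0
popt, pcov = curve_fit(model, t[mask], counts[mask],
                       p0=(100.0, 1.0, 100.0, 10.0),
                       sigma=np.sqrt(counts[mask]))  # Poisson weighting
print("fitted decay constants:", popt[1], popt[3])
print("1-sigma errors:", np.sqrt(np.diag(pcov))[[1, 3]])
```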

  2. An innovative method for offshore wind farm site selection based on the interval number with probability distribution

    Science.gov (United States)

    Wu, Yunna; Chen, Kaifeng; Xu, Hu; Xu, Chuanbo; Zhang, Haobo; Yang, Meng

    2017-12-01

    There is insufficient research relating to offshore wind farm site selection in China, and the current methods for site selection have several defects. First, information loss arises from two sources: the implicit assumption that the probability distribution on the interval number is uniform, and the neglect of the value of decision makers' (DMs') common opinion in evaluating the criteria information. Second, the differences in DMs' utility functions have not received attention. An innovative method is proposed in this article to address these drawbacks. First, a new form of interval number and its weighted operator are proposed to reflect the uncertainty and reduce information loss. Second, a new stochastic dominance degree is proposed to quantify the interval number with a probability distribution. Third, a two-stage method integrating the weighted operator with the stochastic dominance degree is proposed to evaluate the alternatives. Finally, a case from China demonstrates the effectiveness of this method.

  3. Probability Distribution for Flowing Interval Spacing

    International Nuclear Information System (INIS)

    Kuzio, S.

    2001-01-01

    The purpose of this analysis is to develop a probability distribution for flowing interval spacing. A flowing interval is defined as a fractured zone that transmits flow in the Saturated Zone (SZ), as identified through borehole flow meter surveys (Figure 1). This analysis uses the term ''flowing interval spacing'' as opposed to fracture spacing, which is typically used in the literature. The term fracture spacing was not used in this analysis because the data used identify a zone (or a flowing interval) that contains fluid-conducting fractures but do not distinguish how many or which fractures comprise the flowing interval. The flowing interval spacing is measured between the midpoints of each flowing interval. Fracture spacing within the SZ is defined as the spacing between fractures, with no regard to which fractures are carrying flow. The Development Plan associated with this analysis is entitled ''Probability Distribution for Flowing Interval Spacing'' (CRWMS M and O 2000a). The parameter from this analysis may be used in the TSPA SR/LA Saturated Zone Flow and Transport Work Direction and Planning Documents: (1) ''Abstraction of Matrix Diffusion for SZ Flow and Transport Analyses'' (CRWMS M and O 1999a) and (2) ''Incorporation of Heterogeneity in SZ Flow and Transport Analyses'' (CRWMS M and O 1999b). A limitation of this analysis is that the probability distribution of flowing interval spacing may underestimate the effect of incorporating matrix diffusion processes in the SZ transport model because of the possible overestimation of the flowing interval spacing. Larger flowing interval spacing results in a decrease in the matrix diffusion processes. This analysis may overestimate the flowing interval spacing because the number of fractures that contribute to a flowing interval cannot be determined from the data. Because each flowing interval probably has more than one fracture contributing to it, the true flowing interval spacing could be

  4. The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution

    Science.gov (United States)

    Shin, H.; Heo, J.; Kim, T.; Jung, Y.

    2007-12-01

    The generalized logistic (GL) distribution has been widely used for frequency analysis. However, few studies have addressed the confidence intervals that indicate the prediction accuracy of the fitted GL distribution. In this paper, the estimation of the confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of the sample sizes, return periods, and parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals of quantiles. The results show that the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase as the return period increases and decrease as the sample size increases. PWM performs better than the other methods in terms of RRMSE when the data are almost symmetric, while ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data. The results show little difference in the estimated quantiles between ML and PWM, while MOM shows distinct differences.
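
A hedged sketch of one way to attach a confidence interval to a quantile estimate: a parametric bootstrap around an ML fit. Note that scipy's genlogistic is only one of several "generalized logistic" parameterizations and differs from the GLO form common in hydrology; the parameter values and sample size below are invented, and the paper's asymptotic-variance formulas are not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic "annual maximum" sample from an assumed generalized logistic.
c_true, loc, scale = 2.0, 50.0, 10.0
sample = stats.genlogistic.rvs(c_true, loc=loc, scale=scale, size=60,
                               random_state=rng)

T = 100                      # return period (years)
p = 1.0 - 1.0 / T            # non-exceedance probability

params = stats.genlogistic.fit(sample)          # ML estimation
q_hat = stats.genlogistic.ppf(p, *params)

# Parametric bootstrap: refit on samples drawn from the fitted model.
boot = []
for _ in range(1000):
    resamp = stats.genlogistic.rvs(*params, size=sample.size, random_state=rng)
    boot.append(stats.genlogistic.ppf(p, *stats.genlogistic.fit(resamp)))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"100-yr quantile: {q_hat:.1f}, 95% CI: ({lo:.1f}, {hi:.1f})")
```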

  5. Confidence Intervals for True Scores Using the Skew-Normal Distribution

    Science.gov (United States)

    Garcia-Perez, Miguel A.

    2010-01-01

    A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…

  6. Interval sampling methods and measurement error: a computer simulation.

    Science.gov (United States)

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
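
A toy version of such a simulation, assuming made-up bout lengths and interval durations; it scores the same three methods against the true cumulative duration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a 600-s observation period at 1-s resolution: behavior is "on"
# in randomly placed bouts, then score it with three interval methods.
period, bout_len, n_bouts = 600, 8, 12
state = np.zeros(period, dtype=bool)
for start in rng.integers(0, period - bout_len, size=n_bouts):
    state[start:start + bout_len] = True
true_prop = state.mean()

interval = 10                                    # interval duration (s)
chunks = state.reshape(-1, interval)             # one row per interval

mts = chunks[:, -1].mean()                       # momentary time sampling: last instant
pir = chunks.any(axis=1).mean()                  # partial-interval: any occurrence
wir = chunks.all(axis=1).mean()                  # whole-interval: full occupancy

print(f"true {true_prop:.3f}  MTS {mts:.3f}  PIR {pir:.3f}  WIR {wir:.3f}")
# PIR typically overestimates and WIR underestimates cumulative duration,
# consistent with the error patterns such simulations report.
```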

  7. Rationalizing method of replacement intervals by using Bayesian statistics

    International Nuclear Information System (INIS)

    Kasai, Masao; Notoya, Junichi; Kusakari, Yoshiyuki

    2007-01-01

    This study presents formulations for rationalizing the replacement intervals of equipment and/or parts, taking into account the probability density functions (PDF) of the parameters of failure distribution functions (FDF), and compares the intervals optimized by these formulations with those obtained by conventional formulations, which use only representative values of the FDF parameters instead of their PDFs. The failure data are generated by Monte Carlo simulations, since real failure data were not available. The PDFs of the FDF parameters are obtained by the Bayesian method, and the representative values are obtained by likelihood estimation and the Bayesian method. We found that the method using the PDFs obtained by the Bayesian method yields longer replacement intervals than the one using the representative values of the parameters. (author)

  8. The Distribution of the Product Explains Normal Theory Mediation Confidence Interval Estimation.

    Science.gov (United States)

    Kisbu-Sakarya, Yasemin; MacKinnon, David P; Miočević, Milica

    2014-05-01

    The distribution of the product has several useful applications. One of these applications is its use to form confidence intervals for the indirect effect as the product of 2 regression coefficients. The purpose of this article is to investigate how the moments of the distribution of the product explain normal theory mediation confidence interval coverage and imbalance. Values of the critical ratio for each random variable are used to demonstrate how the moments of the distribution of the product change across values of the critical ratio observed in research studies. Results of the simulation study showed that as skewness in absolute value increases, coverage decreases. And as skewness in absolute value and kurtosis increases, imbalance increases. The difference between testing the significance of the indirect effect using the normal theory versus the asymmetric distribution of the product is further illustrated with a real data example. This article is the first study to show the direct link between the distribution of the product and indirect effect confidence intervals and clarifies the results of previous simulation studies by showing why normal theory confidence intervals for indirect effects are often less accurate than those obtained from the asymmetric distribution of the product or from resampling methods.
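
The distribution of the product has no simple closed form, but it is easy to approximate by Monte Carlo; the sketch below (with invented path estimates and standard errors) contrasts the resulting asymmetric interval with the symmetric normal-theory one.

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo version of the distribution-of-the-product CI for an indirect
# effect a*b (values below are made up for illustration).
a_hat, se_a = 0.39, 0.14      # X -> M path estimate and standard error
b_hat, se_b = 0.47, 0.16      # M -> Y path estimate and standard error

draws = rng.normal(a_hat, se_a, 200_000) * rng.normal(b_hat, se_b, 200_000)
lo, hi = np.percentile(draws, [2.5, 97.5])

# Normal-theory (Sobel) interval for comparison; its forced symmetry is
# what the asymmetric product distribution corrects.
se_ab = np.sqrt(b_hat**2 * se_a**2 + a_hat**2 * se_b**2)
print(f"product-distribution CI: ({lo:.3f}, {hi:.3f})")
print(f"normal-theory CI:        ({a_hat*b_hat - 1.96*se_ab:.3f}, "
      f"{a_hat*b_hat + 1.96*se_ab:.3f})")
```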

  9. Probability Distribution for Flowing Interval Spacing

    International Nuclear Information System (INIS)

    S. Kuzio

    2004-01-01

    Fracture spacing is a key hydrologic parameter in analyses of matrix diffusion. Although the individual fractures that transmit flow in the saturated zone (SZ) cannot be identified directly, it is possible to determine the fractured zones that transmit flow from flow meter survey observations. The fractured zones that transmit flow as identified through borehole flow meter surveys have been defined in this report as flowing intervals. The flowing interval spacing is measured between the midpoints of each flowing interval. The determination of flowing interval spacing is important because the flowing interval spacing parameter is a key hydrologic parameter in SZ transport modeling, which impacts the extent of matrix diffusion in the SZ volcanic matrix. The output of this report is input to the ''Saturated Zone Flow and Transport Model Abstraction'' (BSC 2004 [DIRS 170042]). Specifically, the analysis of data and development of a data distribution reported herein is used to develop the uncertainty distribution for the flowing interval spacing parameter for the SZ transport abstraction model. Figure 1-1 shows the relationship of this report to other model reports that also pertain to flow and transport in the SZ. Figure 1-1 also shows the flow of key information among the SZ reports. It should be noted that Figure 1-1 does not contain a complete representation of the data and parameter inputs and outputs of all SZ reports, nor does it show inputs external to this suite of SZ reports. Use of the developed flowing interval spacing probability distribution is subject to the limitations of the assumptions discussed in Sections 5 and 6 of this analysis report. The number of fractures in a flowing interval is not known. Therefore, the flowing intervals are assumed to be composed of one flowing zone in the transport simulations. This analysis may overestimate the flowing interval spacing because the number of fractures that contribute to a flowing interval cannot be

  10. Resampling methods in Microsoft Excel® for estimating reference intervals.

    Science.gov (United States)

    Theodorsson, Elvar

    2015-01-01

    Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes native functions which lend themselves well to this purpose, including recommended interpolation procedures for estimating the 2.5 and 97.5 percentiles. The purpose of this paper is to introduce the reader to resampling estimation techniques in general, and to the use of Microsoft Excel® 2010 for estimating reference intervals in particular. Parametric methods are preferable to resampling methods when the distribution of observations in the reference samples is Gaussian or can be transformed to that distribution, even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and the number of reference individuals and corresponding samples is of the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples.
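
The same resampling logic the paper implements in Excel®, sketched here in Python with simulated reference values; the sample size and distribution are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Bootstrap estimate of a 95% reference interval (2.5th and 97.5th
# percentiles) from a skewed reference sample.
ref = rng.lognormal(mean=1.0, sigma=0.4, size=120)   # stand-in reference values

n_boot = 1000
lows, highs = np.empty(n_boot), np.empty(n_boot)
for i in range(n_boot):
    resamp = rng.choice(ref, size=ref.size, replace=True)
    lows[i], highs[i] = np.percentile(resamp, [2.5, 97.5])

print(f"reference interval: {lows.mean():.2f} - {highs.mean():.2f}")
# Percentiles of `lows` and `highs` also give confidence intervals
# around each reference limit.
print(f"90% CI of lower limit: {np.percentile(lows, [5, 95])}")
```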

  11. Neutron generation time of the reactor 'Crocus' by an interval distribution method for counts collected by two detectors

    International Nuclear Information System (INIS)

    Haldy, P.-A.; Chikouche, M.

    1975-01-01

    The distribution is considered of the time intervals between a count in one neutron detector and the consequent event registered in a second one. A 'four interval' probability generating function was derived, by means of which an expression could be obtained for the distribution of the time intervals lasting from a triggering detection in the first detector to the subsequent count in the second one. The experimental work was conducted in the zero-power thermal reactor Crocus, using a neutron source provided by spontaneous fission, a BF3 counter as the first detector and a He-3 detector as the second instrument. (U.K.)

  12. Time interval approach to the pulsed neutron logging method

    International Nuclear Information System (INIS)

    Zhao Jingwu; Su Weining

    1994-01-01

    The time interval of neighbouring neutrons emitted from a steady-state neutron source can be treated as that from a time-dependent neutron source. In the rock space, the neutron flux is given by the neutron diffusion equation and is composed of an infinite number of terms, each composed of two die-away curves. The delay action is discussed and used to measure the time interval with only one detector in the experiment. Nuclear reactions with the time distribution due to different types of radiations observed in neutron well-logging methods are presented, with a view to obtaining the rock nuclear parameters from the time interval technique

  13. Interval methods: An introduction

    DEFF Research Database (Denmark)

    Achenie, L.E.K.; Kreinovich, V.; Madsen, Kaj

    2006-01-01

    This chapter contains selected papers presented at the Minisymposium on Interval Methods of the PARA'04 Workshop ''State-of-the-Art in Scientific Computing''. The emphasis of the workshop was on high-performance computing (HPC). The ongoing development of ever more advanced computers provides the potential for solving increasingly difficult computational problems. However, given the complexity of modern computer architectures, the task of realizing this potential needs careful attention. A main concern of HPC is the development of software that optimizes the performance of a given computer. An important characteristic of the computer performance in scientific computing is the accuracy of the computation results. Often, we can estimate this accuracy by using traditional statistical techniques. However, in many practical situations, we do not know the probability distributions of different…

  14. Binomial Distribution Sample Confidence Intervals Estimation 1. Sampling and Medical Key Parameters Calculation

    Directory of Open Access Journals (Sweden)

    Tudor DRUGAN

    2003-08-01

    Full Text Available The aim of the paper was to present the usefulness of the binomial distribution in the study of contingency tables, and the problems of the approximation of the binomial distribution to normality (its limits, advantages, and disadvantages). The classification of the medical key parameters reported in the medical literature, and their expression in terms of contingency table units based on their mathematical expressions, restrict the discussion of confidence intervals from 34 parameters to 9 mathematical expressions. The problem of obtaining different information starting from the computed confidence interval for a specified method, such as the confidence interval boundaries, the percentages of experimental errors, the standard deviation of the experimental errors, and the deviation relative to the significance level, was solved through the implementation of original algorithms in the PHP programming language. The cases of expressions containing two binomial variables were treated separately. An original method of computing the confidence interval for the case of a two-variable expression was proposed and implemented. The graphical representation of an expression of two binomial variables, in which the variation domain of one variable depends on the other, was a real problem, because most software uses interpolation in graphical representation, producing quadratic rather than triangular surface maps. Based on an original algorithm, a module was implemented in PHP to represent triangular surface plots graphically. All the implementations described above were used in computing the confidence intervals and estimating their performance for binomial distribution sample sizes and variables.
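
The paper's PHP algorithms are not reproduced here; as a small illustration of why the normal approximation is problematic for extreme proportions, the sketch below compares the Wald interval with the Wilson score interval.

```python
import math

def wald_ci(x, n, z=1.96):
    """Normal-approximation (Wald) interval for a binomial proportion."""
    p = x / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

def wilson_ci(x, n, z=1.96):
    """Wilson score interval; behaves much better near 0 and 1."""
    p = x / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# The approximation to normality breaks down for extreme proportions:
for x, n in [(1, 20), (10, 20), (19, 20)]:
    print(x, n, "Wald:", wald_ci(x, n), "Wilson:", wilson_ci(x, n))
```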

  15. Chosen interval methods for solving linear interval systems with special type of matrix

    Science.gov (United States)

    Szyszka, Barbara

    2013-10-01

    The paper is devoted to chosen direct interval methods for solving linear interval systems with a special type of matrix, a band matrix with a parameter, obtained from a finite difference problem. Such linear systems occur while solving the one-dimensional wave equation (a partial differential equation of hyperbolic type) by using the central difference interval method of the second order. Interval methods are constructed so that the errors of the method are enclosed in the obtained results; therefore, the presented linear interval systems contain elements that determine the errors of the difference method. The chosen direct algorithms have been applied for solving the linear systems because they have no errors of method. All calculations were performed in floating-point interval arithmetic.
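
A toy illustration of floating-point interval arithmetic, assuming a tiny hand-rolled interval type and a 2x2 system solved by Cramer's rule. Production interval libraries additionally use outward (directed) rounding, omitted here, and the paper's band-matrix algorithms are not reproduced.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float
    def __add__(self, o):  return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):  return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))
    def __truediv__(self, o):
        assert o.lo > 0 or o.hi < 0, "divisor interval must not contain 0"
        return self * Interval(1 / o.hi, 1 / o.lo)

# Solve a 2x2 interval system [[a, b], [c, d]] x = [e, f] by Cramer's rule.
# The coefficient radii stand in for enclosed discretization errors.
a, b = Interval(3.9, 4.1), Interval(0.9, 1.1)
c, d = Interval(0.9, 1.1), Interval(2.9, 3.1)
e, f = Interval(5.9, 6.1), Interval(4.9, 5.1)

det = a * d - b * c
x1 = (e * d - b * f) / det
x2 = (a * f - e * c) / det
print(f"x1 in [{x1.lo:.4f}, {x1.hi:.4f}], x2 in [{x2.lo:.4f}, {x2.hi:.4f}]")
```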

  16. [Nonparametric method of estimating survival functions containing right-censored and interval-censored data].

    Science.gov (United States)

    Xu, Yonghong; Gao, Xiaohuan; Wang, Zhengxi

    2014-04-01

    Missing data represent a general problem in many scientific fields, especially in medical survival analysis. Interpolation is one of the important methods for dealing with censored data. However, most interpolation methods replace the censored data with exact data, which distorts the real distribution of the censored data and reduces the probability of the real data falling into the interpolation data. In order to solve this problem, we propose in this paper a nonparametric method of estimating the survival function of right-censored and interval-censored data and compare its performance to the SC (self-consistent) algorithm. Compared to the average interpolation and the nearest-neighbor interpolation methods, the proposed method replaces the right-censored data with interval-censored data, and greatly improves the probability of the real data falling into the imputation interval. It then uses empirical distribution theory to estimate the survival function of right-censored and interval-censored data. The results of numerical examples and a real breast cancer data set demonstrated that the proposed method had higher accuracy and better robustness for different proportions of censored data. This paper provides a good method to compare the performance of clinical treatments by estimating the survival data of the patients, which is of some help to medical survival data analysis.

  17. On a linear method in bootstrap confidence intervals

    Directory of Open Access Journals (Sweden)

    Andrea Pallini

    2007-10-01

    Full Text Available A linear method for the construction of asymptotic bootstrap confidence intervals is proposed. We approximate asymptotically pivotal and non-pivotal quantities, which are smooth functions of means of n independent and identically distributed random variables, by using a sum of n independent smooth functions of the same analytical form. Errors are of order Op(n^(-3/2)) and Op(n^(-2)), respectively. The linear method allows a straightforward approximation of bootstrap cumulants, by considering the set of n independent smooth functions as an original random sample to be resampled with replacement.

  18. Confidence intervals for the lognormal probability distribution

    International Nuclear Information System (INIS)

    Smith, D.L.; Naberejnev, D.G.

    2004-01-01

    The present communication addresses the topic of symmetric confidence intervals for the lognormal probability distribution. This distribution is frequently utilized to characterize inherently positive, continuous random variables that are selected to represent many physical quantities in applied nuclear science and technology. The basic formalism is outlined herein and a conjured numerical example is provided for illustration. It is demonstrated that when the uncertainty reflected in a lognormal probability distribution is large, the use of a confidence interval provides much more useful information about the variable used to represent a particular physical quantity than can be had by adhering to the notion that the mean value and standard deviation of the distribution ought to be interpreted as best value and corresponding error, respectively. Furthermore, it is shown that if the uncertainty is very large a disturbing anomaly can arise when one insists on interpreting the mean value and standard deviation as the best value and corresponding error, respectively. Reliance on using the mode and median as alternative parameters to represent the best available knowledge of a variable with large uncertainties is also shown to entail limitations. Finally, a realistic physical example involving the decay of radioactivity over a time period that spans many half-lives is presented and analyzed to further illustrate the concepts discussed in this communication.
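
A quick numerical illustration of the anomaly, with parameters chosen arbitrarily so that the "mean minus standard deviation" reading breaks down for a strictly positive variable:

```python
import numpy as np
from scipy import stats

# A lognormal with large log-scale sigma: the "best value +/- error"
# reading of mean and standard deviation becomes misleading.
mu, sigma = 0.0, 1.5
dist = stats.lognorm(s=sigma, scale=np.exp(mu))

mean, sd = dist.mean(), dist.std()
print(f"mean = {mean:.2f}, sd = {sd:.2f}")
print(f"mean - sd = {mean - sd:.2f}  (negative: impossible for a positive variable)")

# A symmetric-in-probability 95% confidence interval is far more informative:
lo, hi = dist.ppf(0.025), dist.ppf(0.975)
print(f"95% interval: ({lo:.3f}, {hi:.3f}), median = {dist.median():.3f}")
```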

  19. Interval estimation methods of the mean in small sample situation and the results' comparison

    International Nuclear Information System (INIS)

    Wu Changli; Guo Chunying; Jiang Meng; Lin Yuangen

    2009-01-01

    The following methods for interval estimation of the sample mean are described: the classical method, the Bootstrap method, the Bayesian Bootstrap method, the Jackknife method, and the spread method of the empirical characteristic distribution function. Numerical calculations of the mean intervals are carried out for sample sizes of 4, 5 and 6. The results indicate that the Bootstrap method and the Bayesian Bootstrap method are much more appropriate than the others in small sample situations. (authors)
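
Three of the listed methods sketched on a made-up sample of size 5; the data and replication counts are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
sample = np.array([1.02, 0.94, 1.11, 0.98, 1.07])   # n = 5, invented data

# Classical interval: t distribution with n-1 degrees of freedom.
m, se = sample.mean(), sample.std(ddof=1) / np.sqrt(sample.size)
t = stats.t.ppf(0.975, df=sample.size - 1)
print(f"classical: ({m - t*se:.3f}, {m + t*se:.3f})")

# Bootstrap percentile interval.
boot = [rng.choice(sample, size=sample.size, replace=True).mean()
        for _ in range(5000)]
print("bootstrap:", np.round(np.percentile(boot, [2.5, 97.5]), 3))

# Bayesian bootstrap: weights drawn from a flat Dirichlet instead of
# multinomial resampling.
bayes = [rng.dirichlet(np.ones(sample.size)) @ sample for _ in range(5000)]
print("Bayesian bootstrap:", np.round(np.percentile(bayes, [2.5, 97.5]), 3))
```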

  20. Analytical method for determining the channel-temperature distribution

    International Nuclear Information System (INIS)

    Kurbatov, I.M.

    1992-01-01

    The distribution of the predicted temperature over the volume or cross section of the active zone is important for thermal calculations of reactors that take random deviations into account. This requires a laborious calculation which includes the following steps: separation of the nominal temperature field, within the temperature range, into intervals, in each of which the temperature is set equal to its average value in the interval; determination of the number of channels whose temperature falls within each interval; construction of the channel-temperature distribution in each interval in accordance with the weighted error function; and summation of the number of channels with the same temperature over all intervals. This procedure can be greatly simplified with the help of methods which eliminate numerous variant calculations when the nominal temperature field is ''refined'' up to the optimal field according to different criteria. In the present paper a universal analytical method is proposed for determining, by changing the coefficients in the channel-temperature distribution function, the form of this function that reflects all conditions of operation of the elements in the active zone. The problem is solved for the temperature of the coolant at the outlet from the reactor channels.

  1. Optimal methods for fitting probability distributions to propagule retention time in studies of zoochorous dispersal.

    Science.gov (United States)

    Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi

    2016-02-01

    Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. We
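
A sketch of the best-performing approach as described, fitting a lognormal CDF to the cumulative proportions at the interval upper bounds via non-linear least squares; the retention times are simulated and the sampling schedule is an assumption.

```python
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

rng = np.random.default_rng(6)

# Simulate gut retention times (h) from a known lognormal, then discretize
# them into the sampling intervals typical of feeding trials.
true = stats.lognorm(s=0.5, scale=8.0)
times = true.rvs(size=300, random_state=rng)
edges = np.array([0, 1, 2, 4, 8, 12, 24, 48])          # sampling times (h)
counts, _ = np.histogram(times, bins=edges)

# Fit the lognormal CDF to the observed cumulative proportions at the
# interval upper bounds (non-linear least squares).
cum_prop = np.cumsum(counts) / counts.sum()

def lognorm_cdf(t, s, scale):
    return stats.lognorm.cdf(t, s, scale=scale)

(s_hat, scale_hat), _ = curve_fit(lognorm_cdf, edges[1:], cum_prop,
                                  p0=(1.0, 5.0),
                                  bounds=([0.01, 0.1], [5.0, 100.0]))
print(f"true: s=0.5, scale=8.0   fitted: s={s_hat:.2f}, scale={scale_hat:.2f}")
```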

  2. Indirect methods for reference interval determination - review and recommendations.

    Science.gov (United States)

    Jones, Graham R D; Haeckel, Rainer; Loh, Tze Ping; Sikaris, Ken; Streichert, Thomas; Katayev, Alex; Barth, Julian H; Ozarda, Yesim

    2018-04-19

    Reference intervals are a vital part of the information supplied by clinical laboratories to support interpretation of numerical pathology results such as are produced in clinical chemistry and hematology laboratories. The traditional method for establishing reference intervals, known as the direct approach, is based on collecting samples from members of a preselected reference population, making the measurements and then determining the intervals. An alternative approach is to perform analysis of results generated as part of routine pathology testing and using appropriate statistical techniques to determine reference intervals. This is known as the indirect approach. This paper from a working group of the International Federation of Clinical Chemistry (IFCC) Committee on Reference Intervals and Decision Limits (C-RIDL) aims to summarize current thinking on indirect approaches to reference intervals. The indirect approach has some major potential advantages compared with direct methods. The processes are faster, cheaper and do not involve patient inconvenience, discomfort or the risks associated with generating new patient health information. Indirect methods also use the same preanalytical and analytical techniques used for patient management and can provide very large numbers for assessment. Limitations to the indirect methods include possible effects of diseased subpopulations on the derived interval. The IFCC C-RIDL aims to encourage the use of indirect methods to establish and verify reference intervals, to promote publication of such intervals with clear explanation of the process used and also to support the development of improved statistical techniques for these studies.

  3. Monitoring molecular interactions using photon arrival-time interval distribution analysis

    Science.gov (United States)

    Laurence, Ted A [Livermore, CA]; Weiss, Shimon [Los Angeles, CA]

    2009-10-06

    A method for analyzing/monitoring the properties of species that are labeled with fluorophores. A detector is used to detect photons emitted from species that are labeled with one or more fluorophores and located in a confocal detection volume. The arrival time of each of the photons is determined. The interval of time between various photon pairs is then determined to provide photon pair intervals. The number of photons that have arrival times within the photon pair intervals is also determined. The photon pair intervals are then used in combination with the corresponding counts of intervening photons to analyze properties and interactions of the molecules including brightness, concentration, coincidence and transit time. The method can be used for analyzing single photon streams and multiple photon streams.
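
A toy computation of photon-pair intervals in the spirit of the patent's description, assuming a Poisson photon stream as stand-in data; pairs separated by k counts have exactly k-1 intervening photons.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy photon stream: exponential inter-arrival times (a Poisson process
# stands in for real detector data).
arrivals = np.cumsum(rng.exponential(scale=1e-5, size=10_000))  # seconds

def pair_intervals(arrivals, k):
    """Interval between each photon and the k-th photon after it,
    i.e. photon pairs with exactly k-1 intervening photons."""
    return arrivals[k:] - arrivals[:-k]

# Distributions of photon-pair time intervals for several separations k;
# analyzing these jointly with the intervening-photon counts is the core
# of the arrival-time interval distribution idea.
for k in (1, 2, 5):
    iv = pair_intervals(arrivals, k)
    print(f"k={k}: mean interval {iv.mean()*1e6:.1f} us, "
          f"median {np.median(iv)*1e6:.1f} us")
```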

  4. The time interval distribution of sand–dust storms in theory: testing with observational data for Yanchi, China

    International Nuclear Information System (INIS)

    Liu, Guoliang; Zhang, Feng; Hao, Lizhen

    2012-01-01

    We previously introduced a time record model for use in studying the duration of sand–dust storms. In the model, X is the normalized wind speed and Xr is the normalized wind speed threshold for the sand–dust storm. X is represented by a random signal with a normal Gaussian distribution. The storms occur when X ≥ Xr. From this model, the time interval distribution of N = Aexp(−bt) can be deduced, wherein N is the number of time intervals with length greater than t, A and b are constants, and b is related to Xr. In this study, sand–dust storm data recorded in spring at the Yanchi meteorological station in China were analysed to verify whether the time interval distribution of the sand–dust storms agrees with the above time interval distribution. We found that the distribution of the time interval between successive sand–dust storms in April agrees well with the above exponential equation. However, the interval distribution for the sand–dust storm data for the entire spring period displayed a better fit to the Weibull equation and depended on the variation of the sand–dust storm threshold wind speed. (paper)

  5. Interval MULTIMOORA method with target values of attributes based on interval distance and preference degree: biomaterials selection

    Science.gov (United States)

    Hafezalkotob, Arian; Hafezalkotob, Ashkan

    2017-06-01

    A target-based MADM method covers beneficial and non-beneficial attributes as well as target values for some attributes. Such techniques are considered comprehensive forms of MADM approaches. Target-based MADM methods can also be used in traditional decision-making problems in which only beneficial and non-beneficial attributes exist. In many practical selection problems, some attributes have given target values. The values of the decision matrix and the target-based attributes can be provided as intervals in some such problems. Some target-based decision-making methods have recently been developed; however, a research gap exists in the area of MADM techniques with target-based attributes under uncertainty of information. We extend the MULTIMOORA method for solving practical material selection problems in which material properties and their target values are given as interval numbers. We employ various concepts of interval computations to reduce the degeneration of uncertain data. In this regard, we use interval arithmetic and introduce an innovative formula for the interval distance of interval numbers to create an interval target-based normalization technique. Furthermore, we use a pairwise preference matrix based on the concept of the degree of preference of interval numbers to calculate the maximum, minimum, and ranking of these numbers. Two decision-making problems regarding the selection of biomaterials for hip and knee prostheses are discussed. Preference degree-based ranking lists for the subordinate parts of the extended MULTIMOORA method are generated by calculating the relative degrees of preference for the arranged assessment values of the biomaterials. The resultant rankings for the problem are compared with the outcomes of other target-based models in the literature.

  6. A text zero-watermarking method based on keyword dense interval

    Science.gov (United States)

    Yang, Fan; Zhu, Yuesheng; Jiang, Yifeng; Qing, Yin

    2017-07-01

    Digital watermarking has been recognized as a useful technology for the copyright protection and authentication of digital information. However, former methods have rarely focused on the key content of the digital carrier. The idea of protecting the key content is more targeted and can be applied to different kinds of digital information, including text, image and video. In this paper, we use text as the research object and propose a text zero-watermarking method which uses the keyword dense interval (KDI) as the key content. First, we construct the zero-watermarking model by introducing the concept of KDI and giving the method of KDI extraction. Second, we design the detection model, which includes the secondary generation of the zero-watermark and the similarity computing method for the keyword distribution. In addition, experiments are carried out, and the results show that the proposed method gives better performance than other available methods, especially against attacks of sentence transformation and synonym substitution.

  7. Counting Raindrops and the Distribution of Intervals Between Them.

    Science.gov (United States)

    Van De Giesen, N.; Ten Veldhuis, M. C.; Hut, R.; Pape, J. J.

    2017-12-01

    Drop size distributions are often assumed to follow a generalized gamma function, characterized by one parameter, Λ [1]. In principle, this Λ can be estimated by measuring the arrival rate of raindrops. The arrival rate should follow a Poisson distribution. By measuring the distribution of the time intervals between drops arriving at a certain surface area, one should not only be able to estimate the arrival rate but also the robustness of the underlying steady-state assumption. It is important to note that many rainfall radar systems also assume fixed drop size distributions, and associated arrival rates, to derive rainfall rates. By testing these relationships with a simple device, we will be able to improve both land-based and space-based radar rainfall estimates. Here, an open-hardware sensor design is presented, consisting of a 3D-printed housing for a piezoelectric element, some simple electronics and an Arduino. The target audience for this device is citizen scientists who want to contribute to collecting rainfall information beyond the standard rain gauge. The core of the sensor is a simple piezo-buzzer, as found in many devices such as watches and fire alarms. When a raindrop falls on a piezo-buzzer, a small voltage is generated, which can be used to register the drop's arrival time. By registering the intervals between raindrops, the associated Poisson distribution can be estimated. In addition to the hardware, we will present the first results of a measuring campaign in Myanmar that will have run from August to October 2017. All design files and descriptions are available through GitHub: https://github.com/nvandegiesen/Intervalometer. This research is partially supported through the TWIGA project, funded by the European Commission's H2020 program under call SC5-18-2017 'Novel in-situ observation systems'. Reference [1]: Uijlenhoet, R., and J. N. M. Stricker. "A consistent rainfall parameterization based on the exponential raindrop size
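
A small sketch of the rate estimation the device enables: for Poisson arrivals the inter-drop intervals are exponential, so the rate follows from the mean interval. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(8)

# For a Poisson arrival process, inter-drop intervals are exponential,
# so the arrival rate is estimated by 1 / mean(interval).
true_rate = 20.0                                   # drops per second
intervals = rng.exponential(1.0 / true_rate, size=500)

rate_hat = 1.0 / intervals.mean()
print(f"estimated rate: {rate_hat:.1f} drops/s")

# A quick check of the Poisson assumption: for exponential intervals the
# coefficient of variation should be close to 1.
cv = intervals.std(ddof=1) / intervals.mean()
print(f"coefficient of variation: {cv:.2f} (approx. 1 under steady rain)")
```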

  8. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks.

    Science.gov (United States)

    Shi, Chaoyang; Chen, Bi Yu; Lam, William H K; Li, Qingquan

    2017-12-06

    Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.

  9. Robust stability analysis for Markovian jumping interval neural networks with discrete and distributed time-varying delays

    International Nuclear Information System (INIS)

    Balasubramaniam, P.; Lakshmanan, S.; Manivannan, A.

    2012-01-01

    Highlights: Robust stability analysis for Markovian jumping interval neural networks is considered. Both linear fractional and interval uncertainties are considered. A new LKF is constructed with triple integral terms. The MATLAB LMI control toolbox is used to validate the theoretical results. Numerical examples are given to illustrate the effectiveness of the proposed method. Abstract: This paper investigates robust stability analysis for Markovian jumping interval neural networks with discrete and distributed time-varying delays. The parameter uncertainties are assumed to be bounded in given compact sets. The delay is assumed to be time-varying and to belong to a given interval, which means that the lower and upper bounds of the interval time-varying delays are available. Based on a new Lyapunov-Krasovskii functional (LKF), some inequality techniques and stochastic stability theory, new delay-dependent stability criteria have been obtained in terms of linear matrix inequalities (LMIs). Finally, two numerical examples are given to illustrate the reduced conservatism and effectiveness of the theoretical results.

  10. Trajectory Optimization Based on Multi-Interval Mesh Refinement Method

    Directory of Open Access Journals (Sweden)

    Ningbo Li

    2017-01-01

    Full Text Available In order to improve the optimization accuracy and convergence rate for trajectory optimization of an air-to-air missile, a multi-interval mesh refinement Radau pseudospectral method is introduced. This method makes the mesh endpoints converge to the practical nonsmooth points and decreases the overall number of collocation points, improving the convergence rate and computational efficiency. The trajectory is divided into four phases according to the working time of the engine and the handover of midcourse and terminal guidance, and the optimization model is then built. The multi-interval mesh refinement Radau pseudospectral method, with different collocation points in each mesh interval, is used to solve the trajectory optimization model. Moreover, this method is compared with the traditional h method. Simulation results show that this method can decrease the dimensionality of the nonlinear programming (NLP) problem and therefore improve the efficiency of pseudospectral methods for solving trajectory optimization problems.

  11. A nonparametric statistical method for determination of a confidence interval for the mean of a set of results obtained in a laboratory intercomparison

    International Nuclear Information System (INIS)

    Veglia, A.

    1981-08-01

    In cases where sets of data are obviously not normally distributed, the application of a nonparametric method for the estimation of a confidence interval for the mean seems more suitable than other methods, because such a method requires few assumptions about the population of data. A two-step statistical method is proposed which can be applied to any set of analytical results: elimination of outliers by a nonparametric method based on Tchebycheff's inequality, and determination of a confidence interval for the mean by a nonparametric method based on the binomial distribution. The method is appropriate only for samples of size n>=10.
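
The abstract does not give the exact construction, so the sketch below shows a standard distribution-free interval from order statistics whose coverage is computed from the binomial distribution (formally it covers the median, which serves as the location parameter here):

```python
import numpy as np
from scipy import stats

def median_ci(data, conf=0.95):
    """Distribution-free confidence interval from order statistics:
    [x_(k), x_(n-k+1)] covers the median with probability
    1 - 2*BinomCDF(k-1; n, 1/2)."""
    x = np.sort(np.asarray(data))
    n = x.size
    # find the largest k whose interval still reaches the requested confidence
    k = 1
    while 1 - 2 * stats.binom.cdf(k, n, 0.5) >= conf:
        k += 1
    cov = 1 - 2 * stats.binom.cdf(k - 1, n, 0.5)
    return x[k - 1], x[n - k], cov

data = [4.1, 5.2, 3.8, 4.4, 6.0, 4.9, 5.5, 4.0, 5.1, 4.6, 5.8, 4.3]
lo, hi, cov = median_ci(data)
print(f"interval ({lo}, {hi}) with actual coverage {cov:.3f}")
```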

  12. Measurements of the charged particle multiplicity distribution in restricted rapidity intervals

    CERN Document Server

    Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; Pepé-Altarelli, M; Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; Schune, M H; Simion, S; Veillet, J J; 
Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; Triggiani, G; Vannini, C; Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, Z; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G

    1995-01-01

    Charged particle multiplicity distributions have been measured with the ALEPH detector in restricted rapidity intervals |Y| ≤ 0.5, 1.0, 1.5, 2.0 along the thrust axis and also without restriction on rapidity. The distribution for the full range can be parametrized by a log-normal distribution. For smaller windows one finds a more complicated structure, which is understood to arise from perturbative effects. The negative-binomial distribution fails to describe the data both with and without the restriction on rapidity. The JETSET model is found to describe all aspects of the data, while the width predicted by HERWIG is in significant disagreement.

  13. A probabilistic approach for representation of interval uncertainty

    International Nuclear Information System (INIS)

    Zaman, Kais; Rangavajhala, Sirisha; McDonald, Mark P.; Mahadevan, Sankaran

    2011-01-01

    In this paper, we propose a probabilistic approach to represent interval data for input variables in reliability and uncertainty analysis problems, using flexible families of continuous Johnson distributions. Such a probabilistic representation of interval data facilitates a unified framework for handling aleatory and epistemic uncertainty. For fitting probability distributions, methods such as moment matching are commonly used in the literature. However, unlike point data where single estimates for the moments of data can be calculated, moments of interval data can only be computed in terms of upper and lower bounds. Finding bounds on the moments of interval data has been generally considered an NP-hard problem because it includes a search among the combinations of multiple values of the variables, including interval endpoints. In this paper, we present efficient algorithms based on continuous optimization to find the bounds on second and higher moments of interval data. With numerical examples, we show that the proposed bounding algorithms are scalable in polynomial time with respect to increasing number of intervals. Using the bounds on moments computed using the proposed approach, we fit a family of Johnson distributions to interval data. Furthermore, using an optimization approach based on percentiles, we find the bounding envelopes of the family of distributions, termed as a Johnson p-box. The idea of bounding envelopes for the family of Johnson distributions is analogous to the notion of empirical p-box in the literature. Several sets of interval data with different numbers of intervals and type of overlap are presented to demonstrate the proposed methods. As against the computationally expensive nested analysis that is typically required in the presence of interval variables, the proposed probabilistic representation enables inexpensive optimization-based strategies to estimate bounds on an output quantity of interest.

  14. Confidence intervals for experiments with background and small numbers of events

    International Nuclear Information System (INIS)

    Bruechle, W.

    2003-01-01

    Methods to find a confidence interval for Poisson distributed variables are illuminated, especially for the case of poor statistics. The application of 'central' and 'highest probability density' confidence intervals is compared for the case of low count-rates. A method to determine realistic estimates of the confidence intervals for Poisson distributed variables affected with background, and their ratios, is given. (orig.)
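
One standard construction for the no-background part of this problem is the central (Garwood) interval from the chi-square relation; the background handling below is deliberately crude and only flags the issue that dedicated constructions treat properly near the physical boundary.

```python
from scipy import stats

def poisson_central_ci(n, conf=0.95):
    """Central (Garwood) confidence interval for a Poisson mean
    given n observed counts, via the chi-square relation."""
    alpha = 1 - conf
    lo = 0.0 if n == 0 else stats.chi2.ppf(alpha / 2, 2 * n) / 2
    hi = stats.chi2.ppf(1 - alpha / 2, 2 * (n + 1)) / 2
    return lo, hi

# Poor statistics: a handful of counts, some attributable to background.
n_obs, bkg = 5, 1.3
lo, hi = poisson_central_ci(n_obs)
print(f"signal+background: ({lo:.2f}, {hi:.2f})")
# Naive background subtraction of the bounds; a rigorous treatment near
# the physical boundary requires dedicated constructions.
print(f"crude signal interval: ({max(0.0, lo - bkg):.2f}, {hi - bkg:.2f})")
```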

  15. Confidence intervals for experiments with background and small numbers of events

    International Nuclear Information System (INIS)

    Bruechle, W.

    2002-07-01

    Methods to find a confidence interval for Poisson distributed variables are illuminated, especially for the case of poor statistics. The application of 'central' and 'highest probability density' confidence intervals is compared for the case of low count-rates. A method to determine realistic estimates of the confidence intervals for Poisson distributed variables affected with background, and their ratios, is given. (orig.)

  16. Robotic fish tracking method based on suboptimal interval Kalman filter

    Science.gov (United States)

    Tong, Xiaohong; Tang, Chao

    2017-11-01

    Autonomous Underwater Vehicle (AUV) research has focused on tracking and positioning, precise guidance and return to dock, among other fields. The robotic fish, as an AUV, has become a popular application in intelligent education, as well as in civil and military contexts. In nonlinear tracking analysis of robotic fish, it was found that the interval Kalman filter algorithm contains all possible filter results, but the range is wide and relatively conservative, and the interval data vector is uncertain before implementation. This paper proposes an optimized suboptimal interval Kalman filter algorithm. The suboptimal interval Kalman filter scheme replaces the interval inverse matrix with its worst-case inverse, approximates the nonlinear state and measurement equations more closely than the standard interval Kalman filter, increases the accuracy of the nominal dynamic system model, and improves the speed and precision of the tracking system. Monte Carlo simulation results show that the trajectory produced by the suboptimal interval Kalman filter algorithm is better than that of the interval Kalman filter method and the standard filter method.

  17. A method to elicit beliefs as most likely intervals

    NARCIS (Netherlands)

    Schlag, K.H.; van der Weele, J.J.

    2015-01-01

    We show how to elicit the beliefs of an expert in the form of a "most likely interval", a set of future outcomes that are deemed more likely than any other outcome. Our method, called the Most Likely Interval elicitation rule (MLI), asks the expert for an interval and pays according to how well the

  18. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks

    Directory of Open Access Journals (Sweden)

    Chaoyang Shi

    2017-12-01

    Full Text Available Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.

  19. Charged particle multiplicity distributions in restricted rapidity intervals in Z0 hadronic decays

    International Nuclear Information System (INIS)

    Uvarov, V.

    1991-01-01

    The multiplicity distributions of charged particles in restricted rapidity intervals in Z0 hadronic decays measured by the DELPHI detector are presented. The data reveal a shoulder structure, best visible for intervals of intermediate size, i.e. for rapidity limits around ±1.5. The whole set of distributions, including the shoulder structure, is reproduced by the Lund Parton Shower model. The structure is found to be due to important contributions from 3- and 4-jet events with a hard gluon jet. A different model, based on the concept of independently produced groups of particles, 'clans', fluctuating both in number per event and particle content per clan, has also been used to analyse the present data. The results show that for each interval of rapidity the average number of clans per event is approximately the same as at lower energies. (author) 11 refs., 3 figs

  20. Preventive maintenance and the interval availability distribution of an unreliable production system

    International Nuclear Information System (INIS)

    Dijkhuizen, G. van; Heijden, M. van der

    1999-01-01

    Traditionally, the optimal preventive maintenance interval for an unreliable production system has been determined by maximizing its limiting availability. Nowadays, it is widely recognized that this performance measure does not always provide relevant information for practical purposes. This is particularly true for order-driven manufacturing systems, in which due-date performance has become a more important, and even a competitive, factor. Under these circumstances, the so-called interval availability distribution is often seen as a more appropriate performance measure. Surprisingly enough, the relation between preventive maintenance and interval availability has received little attention in the existing literature. In this article, a series of mathematical models and optimization techniques is presented with which the optimal preventive maintenance interval can be determined from an interval availability point of view, rather than from a limiting availability perspective. Computational results for a class of representative test problems indicate that significant improvements of up to 30% in the guaranteed interval availability can be obtained by increasing preventive maintenance frequencies by somewhere between 10 and 70%.

  1. Recurrence interval analysis of trading volumes.

    Science.gov (United States)

    Ren, Fei; Zhou, Wei-Xing

    2010-06-01

    We study the statistical properties of the recurrence intervals τ between successive trading volumes exceeding a certain threshold q. The recurrence interval analysis is carried out for the 20 liquid Chinese stocks covering a period from January 2000 to May 2009, and two Chinese indices from January 2003 to April 2009. Similar to the recurrence interval distribution of the price returns, the tail of the recurrence interval distribution of the trading volumes follows a power-law scaling, and the results are verified by the goodness-of-fit tests using the Kolmogorov-Smirnov (KS) statistic, the weighted KS statistic and the Cramér-von Mises criterion. The measurements of the conditional probability distribution and the detrended fluctuation function show that both short-term and long-term memory effects exist in the recurrence intervals between trading volumes. We further study the relationship between trading volumes and price returns based on the recurrence interval analysis method. It is found that large trading volumes are more likely to occur following large price returns, and the comovement between trading volumes and price returns is more pronounced for large trading volumes.
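    The first steps of such an analysis are easy to reproduce in outline. The Python sketch below extracts recurrence intervals above a threshold from a synthetic volume series and runs a Kolmogorov-Smirnov goodness-of-fit test; the series, the threshold quantile, and the candidate distribution (a Weibull stand-in rather than the paper's power-law tail) are all placeholder choices.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        volumes = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)  # synthetic "trading volumes"
        q = np.quantile(volumes, 0.95)                             # exceedance threshold

        # Recurrence intervals: gaps (in time steps) between successive exceedances.
        exceed_times = np.flatnonzero(volumes > q)
        tau = np.diff(exceed_times)

        # Goodness-of-fit check of a candidate interval distribution via the KS test.
        c, loc, scale = stats.weibull_min.fit(tau, floc=0)
        ks_stat, p_value = stats.kstest(tau, "weibull_min", args=(c, loc, scale))
        print(f"mean recurrence interval: {tau.mean():.1f} steps, KS p-value: {p_value:.3f}")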

  2. Analyzing Big Data with the Hybrid Interval Regression Methods

    Directory of Open Access Journals (Sweden)

    Chia-Hui Huang

    2014-01-01

    Full Text Available Big data is a new trend at present, forcing significant impacts on information technologies. In big data applications, one of the most important issues is dealing with large-scale data sets that often require computation resources provided by public cloud services. How to analyze big data efficiently becomes a big challenge. In this paper, we combine interval regression with the smooth support vector machine (SSVM) to analyze big data. The SSVM was recently proposed as an alternative to the standard SVM and has been proved more efficient than the traditional SVM in processing large-scale data. In addition, a soft margin method is proposed to adjust the excursion of the separation margin and to remain effective in the gray zone, where the distribution of the data becomes hard to describe and the separation margin between classes is unclear.

  3. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks

    OpenAIRE

    Chaoyang Shi; Bi Yu Chen; William H. K. Lam; Qingquan Li

    2017-01-01

    Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are f...

  4. Interval-Censored Time-to-Event Data Methods and Applications

    CERN Document Server

    Chen, Ding-Geng

    2012-01-01

    Interval-Censored Time-to-Event Data: Methods and Applications collects the most recent techniques, models, and computational tools for interval-censored time-to-event data. Top biostatisticians from academia, biopharmaceutical industries, and government agencies discuss how these advances are impacting clinical trials and biomedical research. Divided into three parts, the book begins with an overview of interval-censored data modeling, including nonparametric estimation, survival functions, regression analysis, multivariate data analysis, competing risks analysis, and other models for interval-censored data.

  5. Confidence Intervals for Asbestos Fiber Counts: Approximate Negative Binomial Distribution.

    Science.gov (United States)

    Bartley, David; Slaven, James; Harper, Martin

    2017-03-01

    The negative binomial distribution is adopted for analyzing asbestos fiber counts so as to account for both the sampling errors in capturing only a finite number of fibers and the inevitable human variation in identifying and counting sampled fibers. A simple approximation to this distribution is developed for the derivation of quantiles and approximate confidence limits. The success of the approximation depends critically on the use of Stirling's expansion to sufficient order, on exact normalization of the approximating distribution, on reasonable perturbation of quantities from the normal distribution, and on accurately approximating sums by inverse-trapezoidal integration. Accuracy of the approximation developed is checked through simulation and also by comparison to traditional approximate confidence intervals in the specific case that the negative binomial distribution approaches the Poisson distribution. The resulting statistics are shown to relate directly to early research into the accuracy of asbestos sampling and analysis. Uncertainty in estimating mean asbestos fiber concentrations given only a single count is derived. Decision limits (limits of detection) and detection limits are considered for controlling false-positive and false-negative detection assertions and are compared to traditional limits computed assuming normal distributions. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2017.
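    As a numerical point of reference (not the paper's closed-form approximation), quantiles of a negative binomial count model can be computed directly from a statistics library; the mean count and overdispersion parameter below are hypothetical.

        from scipy.stats import nbinom

        # Hypothetical fiber-count model: mean mu with overdispersion k,
        # i.e. variance = mu + mu**2 / k, re-expressed in scipy's (n, p) form.
        mu, k = 20.0, 8.0
        n, p = k, k / (k + mu)

        lower = nbinom.ppf(0.025, n, p)   # 2.5% quantile of the count distribution
        upper = nbinom.ppf(0.975, n, p)   # 97.5% quantile
        print(f"central 95% interval for the fiber count: [{lower:.0f}, {upper:.0f}]")

    The paper's contribution is an analytical approximation to precisely such quantiles, useful when confidence limits must be derived in closed form rather than numerically.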

  6. A note on birth interval distributions

    International Nuclear Information System (INIS)

    Shrestha, G.

    1989-08-01

    A considerable amount of work has been done regarding the birth interval analysis in mathematical demography. This paper is prepared with the intention of reviewing some probability models related to interlive birth intervals proposed by different researchers. (author). 14 refs

  7. VIKOR Method for Interval Neutrosophic Multiple Attribute Group Decision-Making

    Directory of Open Access Journals (Sweden)

    Yu-Han Huang

    2017-11-01

    Full Text Available In this paper, we will extend the VIKOR (VIsekriterijumska optimizacija i KOmpromisno Resenje method to multiple attribute group decision-making (MAGDM with interval neutrosophic numbers (INNs. Firstly, the basic concepts of INNs are briefly presented. The method first aggregates all individual decision-makers’ assessment information based on an interval neutrosophic weighted averaging (INWA operator, and then employs the extended classical VIKOR method to solve MAGDM problems with INNs. The validity and stability of this method are verified by example analysis and sensitivity analysis, and its superiority is illustrated by a comparison with the existing methods.

  8. Optimal interval for major maintenance actions in electricity distribution networks

    Energy Technology Data Exchange (ETDEWEB)

    Louit, Darko; Pascual, Rodrigo [Centro de Mineria, Pontificia Universidad Catolica de Chile, Av. Vicuna MacKenna, 4860 Santiago (Chile); Banjevic, Dragan [Centre for Maintenance Optimization and Reliability Engineering, University of Toronto, 5 King' s College Rd., Toronto, Ontario (Canada)

    2009-09-15

    Many systems require the periodic undertaking of major (preventive) maintenance actions (MMAs) such as overhauls in mechanical equipment, reconditioning of train lines, resurfacing of roads, etc. In the long term, these actions contribute to achieving a lower rate of occurrence of failures, though in many cases they increase the intensity of the failure process shortly after performed, resulting in a non-monotonic trend for failure intensity. Also, in the special case of distributed assets such as communications and energy networks, pipelines, etc., it is likely that the maintenance action takes place sequentially over an extended period of time, implying that different sections of the network underwent the MMAs at different periods. This forces the development of a model based on a relative time scale (i.e. time since last major maintenance event) and the combination of data from different sections of a grid, under a normalization scheme. Additionally, extended maintenance times and sequential execution of the MMAs make it difficult to identify failures occurring before and after the preventive maintenance action. This results in the loss of important information for the characterization of the failure process. A simple model is introduced to determine the optimal MMA interval considering such restrictions. Furthermore, a case study illustrates the optimal tree trimming interval around an electricity distribution network. (author)

  9. Risky Group Decision-Making Method for Distribution Grid Planning

    Science.gov (United States)

    Li, Cunbin; Yuan, Jiahang; Qi, Zhiqiang

    2015-12-01

    With the rapid growth of electricity consumption and the increase of renewable energy, more and more research has paid attention to distribution grid planning. To address the drawbacks of existing research, this paper proposes a new risky group decision-making method for distribution grid planning. Firstly, a mixed index system with qualitative and quantitative indices is built. Considering the fuzziness of linguistic evaluation, a cloud model is chosen to realize the "quantitative to qualitative" transformation, and interval-number decision matrices are constructed according to the "3En" principle. An m-dimensional interval-number decision vector is regarded as a super-cuboid in the m-dimensional attribute space, and a two-level orthogonal experiment is used to arrange points uniformly and dispersedly within it. The number of points is determined by the run sizes of the two-level orthogonal arrays, and these points form a distribution point set that represents a decision-making alternative. In order to eliminate the influence of correlation among indices, the Mahalanobis distance is used to calculate the distance from each solution to the others, so that the dynamic solutions serve as the reference. Secondly, because the decision maker's attitude can affect the results, this paper defines a prospect value function based on the signal-to-noise ratio (SNR) from the Mahalanobis-Taguchi system and obtains the comprehensive prospect value and ranking of each alternative. Finally, the validity and reliability of the method are illustrated by examples, which show that it is more valuable and superior than existing alternatives.

  10. Process control and optimization with simple interval calculation method

    DEFF Research Database (Denmark)

    Pomerantsev, A.; Rodionova, O.; Høskuldsson, Agnar

    2006-01-01

    Methods of process control and optimization are presented and illustrated with a real world example. The optimization methods are based on the PLS block modeling as well as on the simple interval calculation (SIC) methods of interval prediction and object status classification. It is proposed to employ the series of expanding PLS/SIC models in order to support the on-line process improvements. This method helps to predict the effect of planned actions on the product quality and thus enables passive quality control. We have also considered an optimization approach that proposes the correcting actions for the quality improvement in the course of production. The latter is an active quality optimization, which takes into account the actual history of the process. The advocated approach is allied to the conventional method of multivariate statistical process control (MSPC), as it also employs the historical process data.

  11. A modified hybrid uncertain analysis method for dynamic response field of the LSOAAC with random and interval parameters

    Science.gov (United States)

    Zi, Bin; Zhou, Bin

    2016-07-01

    For the prediction of dynamic response field of the luffing system of an automobile crane (LSOAAC) with random and interval parameters, a hybrid uncertain model is introduced. In the hybrid uncertain model, the parameters with certain probability distribution are modeled as random variables, whereas the parameters with lower and upper bounds are modeled as interval variables instead of given precise values. Based on the hybrid uncertain model, the hybrid uncertain dynamic response equilibrium equation, in which different random and interval parameters are simultaneously included in input and output terms, is constructed. Then a modified hybrid uncertain analysis method (MHUAM) is proposed. In the MHUAM, based on the random interval perturbation method, the first-order Taylor series expansion and the first-order Neumann series, the dynamic response expression of the LSOAAC is developed. Moreover, the mathematical characteristics of extrema of bounds of dynamic response are determined by the random interval moment method and monotonic analysis technique. Compared with the hybrid Monte Carlo method (HMCM) and interval perturbation method (IPM), numerical results show the feasibility and efficiency of the MHUAM for solving the hybrid LSOAAC problems. The effects of different uncertain models and parameters on the LSOAAC response field are also investigated deeply, and numerical results indicate that the impact made by the randomness in the thrust of the luffing cylinder F is larger than that made by the gravity of the weight in suspension Q. In addition, the impact made by the uncertainty in the displacement between the lower end of the lifting arm and the luffing cylinder a is larger than that made by the length of the lifting arm L.

  12. Solving the interval type-2 fuzzy polynomial equation using the ranking method

    Science.gov (United States)

    Rahman, Nurhakimah Ab.; Abdullah, Lazim

    2014-07-01

    Polynomial equations with trapezoidal and triangular fuzzy numbers have attracted some interest among researchers in mathematics, engineering and the social sciences. Several methods have been developed to solve these equations. In this study we introduce the interval type-2 fuzzy polynomial equation and solve it using the ranking method of fuzzy numbers. The ranking method concept was first proposed to find the real roots of fuzzy polynomial equations. Here, the ranking method is applied to find the real roots of the interval type-2 fuzzy polynomial equation. We transform the interval type-2 fuzzy polynomial equation into a system of crisp interval type-2 fuzzy polynomial equations. This transformation is performed using the ranking method of fuzzy numbers based on three parameters, namely value, ambiguity and fuzziness. Finally, we illustrate our approach with a numerical example.

  13. In-Hospital Basic Life Support: Major Differences in Duration, Retraining Intervals, and Training Methods - A Danish Nationwide Study

    DEFF Research Database (Denmark)

    Rasmussen, Ditte K; Glerup Lauridsen, Kasper; Staerk, Mathilde

    2017-01-01

    Introduction: High-quality chest compressions and early defibrillation are essential to improve survival following in-hospital cardiac arrest. Efficient training in basic life support (BLS) for clinical staff is therefore important. This study aimed to investigate duration, training methods and retraining intervals for BLS training of clinical staff in Danish hospitals. Methods: We included all public, somatic hospitals in Denmark with a cardiac arrest team. Online questionnaires were distributed to resuscitation officers in each hospital. Questionnaires inquired information on: A) Course duration and retraining interval, and B) Training methods and setting. Results: In total, 44 hospitals replied (response rate: 96%). BLS training for clinical staff was conducted in 41 hospitals (93%). Median (Q1;Q3) course duration was 1.5 (1;2.5) hours. Retraining was conducted every year (17%), every second year (56...

  14. Adaptive Kalman Filter Based on Adjustable Sampling Interval in Burst Detection for Water Distribution System

    Directory of Open Access Journals (Sweden)

    Doo Yong Choi

    2016-04-01

    Full Text Available Rapid detection of bursts and leaks in water distribution systems (WDSs can reduce the social and economic costs incurred through direct loss of water into the ground, additional energy demand for water supply, and service interruptions. Many real-time burst detection models have been developed in accordance with the use of supervisory control and data acquisition (SCADA systems and the establishment of district meter areas (DMAs. Nonetheless, no consideration has been given to how frequently a flow meter measures and transmits data for predicting breaks and leaks in pipes. This paper analyzes the effect of sampling interval when an adaptive Kalman filter is used for detecting bursts in a WDS. A new sampling algorithm is presented that adjusts the sampling interval depending on the normalized residuals of flow after filtering. The proposed algorithm is applied to a virtual sinusoidal flow curve and real DMA flow data obtained from Jeongeup city in South Korea. The simulation results prove that the self-adjusting algorithm for determining the sampling interval is efficient and maintains reasonable accuracy in burst detection. The proposed sampling method has a significant potential for water utilities to build and operate real-time DMA monitoring systems combined with smart customer metering systems.
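    A toy version of the interval-adjustment logic might look like the following Python sketch, assuming a scalar random-walk flow model; the thresholds, noise variances, and halving/doubling rule are hypothetical stand-ins for the constants a real DMA deployment would tune.

        import numpy as np

        def adaptive_sampling(flows, base_dt=15, q=0.5, r=4.0):
            """Scalar Kalman filter whose next sampling interval shrinks when the
            normalized residual grows (possible burst) and relaxes otherwise."""
            x, P = flows[0], 1.0              # state estimate and its variance
            dt = base_dt                      # current sampling interval (minutes)
            schedule = []
            for z in flows[1:]:
                P += q * dt                   # predict: uncertainty grows with the interval
                S = P + r                     # innovation variance
                resid = (z - x) / np.sqrt(S)  # normalized residual
                K = P / S                     # Kalman gain
                x, P = x + K * (z - x), (1 - K) * P
                if abs(resid) > 2.0:          # hypothetical burst threshold
                    dt = max(1, dt // 2)      # sample faster when a burst is suspected
                elif abs(resid) < 0.5:
                    dt = min(60, dt * 2)      # relax when the flow looks normal
                schedule.append((dt, float(resid)))
            return schedule

        flows = 100 + np.random.default_rng(1).normal(0, 2, 200)
        flows[150:] += 25                     # synthetic burst
        print(adaptive_sampling(flows)[148:153])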

  15. Multifractal distribution of spike intervals for two oscillators coupled by unreliable pulses

    International Nuclear Information System (INIS)

    Kestler, Johannes; Kinzel, Wolfgang

    2006-01-01

    Two neurons coupled by unreliable synapses are modelled by leaky integrate-and-fire neurons and stochastic on-off synapses. The dynamics is mapped to an iterated function system. Numerical calculations yield a multifractal distribution of interspike intervals. The covering, information and correlation dimensions are calculated as a function of synaptic strength and transmission probability. (letter to the editor)

  16. A note on Nonparametric Confidence Interval for a Shift Parameter ...

    African Journals Online (AJOL)

    The method is illustrated using the Cauchy distribution as a location model. The kernel-based method is found to have a shorter interval for the shift parameter between two Cauchy distributions than the one based on the Mann-Whitney test statistic. Keywords: Best Asymptotic Normal; Cauchy distribution; Kernel estimates; ...

  17. Method of high precision interval measurement in pulse laser ranging system

    Science.gov (United States)

    Wang, Zhen; Lv, Xin-yuan; Mao, Jin-jin; Liu, Wei; Yang, Dong

    2013-09-01

    Laser ranging offers high measuring precision, fast measuring speed, no need for cooperative targets, and strong resistance to electromagnetic interference; the accuracy of range measurement is a key parameter affecting the performance of the whole system. The precision of a pulsed laser ranging system is determined by the precision of its time interval measurement. This paper introduces the principal structure of a laser ranging system and establishes a method of high-precision time interval measurement for pulsed laser ranging. Based on an analysis of the factors affecting ranging precision, a rising-edge discriminator was adopted to produce the timing marks for start-stop time discrimination, and a high-precision interval measurement system based on the TDC-GP2 and a TMS320F2812 DSP was designed to improve measurement precision. Experimental results indicate that the proposed time interval measurement method achieves higher range accuracy. Compared with traditional time interval measurement systems, the method simplifies the system design, reduces the influence of bad weather conditions, and satisfies requirements for low cost and miniaturization.

  18. Electrocardiographic PR-interval duration and cardiovascular risk

    DEFF Research Database (Denmark)

    Rasmussen, Peter Vibe; Nielsen, Jonas Bille; Skov, Morten Wagner

    2017-01-01

    Background: Because of ambiguous reports in the literature, we aimed to investigate the association between PR interval and the risk of all-cause and cardiovascular death, heart failure, and pacemaker implantation, allowing for a nonlinear relationship. Methods: We included 293,111 individuals ... into 7 groups based on the population PR interval distribution. Cox models were used, with reference to a PR interval between 152 and 161 ms (40th to ... heart failure ... adjustment. A long PR interval conferred an increased risk of heart failure (>200 ms; HR, 1.31; 95% CI, 1.22-1.42; P ... 200 ms (HR, 3...

  19. Global Robust Stability of Switched Interval Neural Networks with Discrete and Distributed Time-Varying Delays of Neural Type

    Directory of Open Access Journals (Sweden)

    Huaiqin Wu

    2012-01-01

    Full Text Available By combining the theories of switched systems and interval neural networks, a mathematical model of switched interval neural networks with discrete and distributed time-varying delays of neural type is presented. A set of interval parameter uncertainty neural networks with discrete and distributed time-varying delays of neural type are used as the individual subsystems, and an arbitrary switching rule is assumed to coordinate the switching between these networks. By applying the augmented Lyapunov-Krasovskii functional approach and linear matrix inequality (LMI) techniques, a delay-dependent criterion is achieved which ensures that such switched interval neural networks are globally asymptotically robustly stable, in terms of LMIs. The unknown gain matrix is determined by solving these delay-dependent LMIs. Finally, an illustrative example is given to demonstrate the validity of the theoretical results.

  20. A New Method Based on TOPSIS and Response Surface Method for MCDM Problems with Interval Numbers

    Directory of Open Access Journals (Sweden)

    Peng Wang

    2015-01-01

    Full Text Available As the preference of the decision maker (DM) is always ambiguous, we have to face many multiple criteria decision-making (MCDM) problems with interval numbers in our daily life. Though some methods have been applied to solve this sort of problem, they are often complex to comprehend and sometimes difficult to implement. The calculation processes are also inefficient when a new alternative is added or removed. In view of such weaknesses, this paper presents a new method based on TOPSIS and the response surface method (RSM) for MCDM problems with interval numbers, RSM-TOPSIS-IN for short. The key point of this approach is the application of the deviation degree matrix, which ensures that the DM can get a simple response surface (RS) model to rank the alternatives. In order to demonstrate the feasibility and effectiveness of the proposed method, three illustrative MCDM problems with interval numbers are analysed, including (a) selection of an investment program, (b) selection of a right partner, and (c) assessment of road transport technologies. The comparison of ranking results shows that the RSM-TOPSIS-IN method is in good agreement with those derived by earlier researchers, indicating it is suitable for solving MCDM problems with interval numbers.
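    A bare-bones interval TOPSIS step (without the response-surface part, and not the authors' deviation-degree formulation) can be sketched in Python by scoring interval-valued alternatives against interval-valued ideal points; the decision matrix and weights are made up.

        import numpy as np

        # Decision matrix: alternatives x criteria, each entry an interval [lo, hi].
        # Values are hypothetical and all criteria are treated as benefit criteria.
        A = np.array([
            [[0.6, 0.8], [0.5, 0.7]],
            [[0.4, 0.6], [0.7, 0.9]],
            [[0.7, 0.9], [0.4, 0.5]],
        ])
        w = np.array([0.6, 0.4])                          # criteria weights

        lo, hi = A[..., 0], A[..., 1]
        pos_lo, pos_hi = lo.max(axis=0), hi.max(axis=0)   # positive ideal interval
        neg_lo, neg_hi = lo.min(axis=0), hi.min(axis=0)   # negative ideal interval

        # Interval distance: weighted Euclidean distance over both endpoints.
        d_pos = np.sqrt((w * ((lo - pos_lo) ** 2 + (hi - pos_hi) ** 2)).sum(axis=1))
        d_neg = np.sqrt((w * ((lo - neg_lo) ** 2 + (hi - neg_hi) ** 2)).sum(axis=1))

        closeness = d_neg / (d_pos + d_neg)               # higher is better
        print("ranking, best first:", np.argsort(-closeness))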

  1. Time Interval to Initiation of Contraceptive Methods Following ...

    African Journals Online (AJOL)

    Objectives: The objectives of the study were to determine factors affecting the interval between a woman's last childbirth and the initiation of contraception. Materials and Methods: This was a retrospective study. Family planning clinic records of the Barau Dikko Teaching Hospital Kaduna from January 2000 to March 2014 ...

  2. Conditional prediction intervals of wind power generation

    DEFF Research Database (Denmark)

    Pinson, Pierre; Kariniotakis, Georges

    2010-01-01

    A generic method for providing prediction intervals of wind power generation is described. Prediction intervals complement the more common wind power point forecasts by giving a range of potential outcomes for a given probability, their so-called nominal coverage rate. Ideally they inform ... on the characteristics of prediction errors for providing conditional interval forecasts. By simultaneously generating prediction intervals with various nominal coverage rates, one obtains full predictive distributions of wind generation. Adapted resampling is applied here to the case of an onshore Danish wind farm ... to the case of a large number of wind farms in Europe and Australia, among others, is finally discussed.

  3. No Additional Benefits of Block- Over Evenly-Distributed High-Intensity Interval Training within a Polarized Microcycle.

    Science.gov (United States)

    McGawley, Kerry; Juudas, Elisabeth; Kazior, Zuzanna; Ström, Kristoffer; Blomstrand, Eva; Hansson, Ola; Holmberg, Hans-Christer

    2017-01-01

    Introduction: The current study aimed to investigate the responses to block- versus evenly-distributed high-intensity interval training (HIT) within a polarized microcycle. Methods: Twenty well-trained junior cross-country skiers (10 males, age 17.6 ± 1.5 and 10 females, age 17.3 ± 1.5) completed two, 3-week periods of training (EVEN and BLOCK) in a randomized, crossover-design study. In EVEN, 3 HIT sessions (5 × 4-min of diagonal-stride roller-skiing) were completed at a maximal sustainable intensity each week while low-intensity training (LIT) was distributed evenly around the HIT. In BLOCK, the same 9 HIT sessions were completed in the second week while only LIT was completed in the first and third weeks. Heart rate (HR), session ratings of perceived exertion (sRPE), and perceived recovery (pREC) were recorded for all HIT and LIT sessions, while distance covered was recorded for each HIT interval. The recovery-stress questionnaire for athletes (RESTQ-Sport) was completed weekly. Before and after EVEN and BLOCK, resting saliva and muscle samples were collected and an incremental test and 600-m time-trial (TT) were completed. Results: Pre- to post-testing revealed no significant differences between EVEN and BLOCK for changes in resting salivary cortisol, testosterone, or IgA, or for changes in muscle capillary density, fiber area, fiber composition, enzyme activity (CS, HAD, and PFK) or the protein content of VEGF or PGC-1α. Neither were any differences observed in the changes in skiing economy, [Formula: see text] or 600-m time-trial performance between interventions. These findings were coupled with no significant differences between EVEN and BLOCK for distance covered during HIT, summated HR zone scores, total sRPE training load, overall pREC or overall recovery-stress state. However, 600-m TT performance improved from pre- to post-training, irrespective of intervention ( P = 0.003), and a number of hormonal and muscle biopsy markers were also significantly

  4. Binomial Distribution Sample Confidence Intervals Estimation 7. Absolute Risk Reduction and ARR-like Expressions

    Directory of Open Access Journals (Sweden)

    Andrei ACHIMAŞ CADARIU

    2004-08-01

    Full Text Available Assessment of a controlled clinical trial involves interpreting key parameters such as the control event rate, experimental event rate, relative risk, absolute risk reduction, relative risk reduction, and number needed to treat when the effects of the treatment are dichotomous variables. Defined as the difference in the event rate between treatment and control groups, the absolute risk reduction is the parameter that allows computing the number needed to treat. The absolute risk reduction is computed when the experimental treatment reduces the risk for an undesirable outcome/event. In the medical literature, when the absolute risk reduction is reported with its confidence intervals, the method used is the asymptotic one, even though it is well known that it may be inadequate. The aim of this paper is to introduce and assess nine methods of computing confidence intervals for absolute risk reduction and absolute risk reduction-like functions. Computer implementations of the methods use the PHP language. Method comparison uses the experimental errors, the standard deviations, and the deviation relative to the imposed significance level for specified sample sizes. Six methods of computing confidence intervals for absolute risk reduction and absolute risk reduction-like functions were assessed using random binomial variables and random sample sizes. The experiments show that the ADAC and ADAC1 methods obtain the best overall performance of computing confidence intervals for absolute risk reduction.
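    For orientation, the asymptotic (Wald) interval that the paper takes as its baseline is simple to state; the Python sketch below uses hypothetical counts, and the ADAC-style corrections evaluated in the paper are not reproduced here.

        from math import sqrt

        def arr_wald_ci(events_c, n_c, events_t, n_t, z=1.96):
            """Asymptotic (Wald) confidence interval for the absolute risk reduction."""
            cer = events_c / n_c              # control event rate
            eer = events_t / n_t              # experimental event rate
            arr = cer - eer
            se = sqrt(cer * (1 - cer) / n_c + eer * (1 - eer) / n_t)
            return arr, arr - z * se, arr + z * se

        # Hypothetical trial: 30/100 events under control, 18/100 under treatment.
        arr, lo, hi = arr_wald_ci(30, 100, 18, 100)
        print(f"ARR = {arr:.3f}, 95% CI ({lo:.3f}, {hi:.3f}), NNT ~ {1 / arr:.1f}")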

  5. Reference intervals for serum total cholesterol, HDL cholesterol and ...

    African Journals Online (AJOL)

    Reference intervals of total cholesterol, HDL cholesterol and non-HDL cholesterol concentrations were determined on 309 blood donors from an urban and peri-urban population of Botswana. Using non-parametric methods to establish 2.5th and 97.5th percentiles of the distribution, the intervals were: total cholesterol 2.16 ...

  6. A comparison of confidence interval methods for the concordance correlation coefficient and intraclass correlation coefficient with small number of raters.

    Science.gov (United States)

    Feng, Dai; Svetnik, Vladimir; Coimbra, Alexandre; Baumgartner, Richard

    2014-01-01

    The intraclass correlation coefficient (ICC) with fixed raters or, equivalently, the concordance correlation coefficient (CCC) for continuous outcomes is a widely accepted aggregate index of agreement in settings with small number of raters. Quantifying the precision of the CCC by constructing its confidence interval (CI) is important in early drug development applications, in particular in qualification of biomarker platforms. In recent years, there have been several new methods proposed for construction of CIs for the CCC, but their comprehensive comparison has not been attempted. The methods consisted of the delta method and jackknifing with and without Fisher's Z-transformation, respectively, and Bayesian methods with vague priors. In this study, we carried out a simulation study, with data simulated from multivariate normal as well as heavier tailed distribution (t-distribution with 5 degrees of freedom), to compare the state-of-the-art methods for assigning CI to the CCC. When the data are normally distributed, the jackknifing with Fisher's Z-transformation (JZ) tended to provide superior coverage and the difference between it and the closest competitor, the Bayesian method with the Jeffreys prior was in general minimal. For the nonnormal data, the jackknife methods, especially the JZ method, provided the coverage probabilities closest to the nominal in contrast to the others which yielded overly liberal coverage. Approaches based upon the delta method and Bayesian method with conjugate prior generally provided slightly narrower intervals and larger lower bounds than others, though this was offset by their poor coverage. Finally, we illustrated the utility of the CIs for the CCC in an example of a wake after sleep onset (WASO) biomarker, which is frequently used in clinical sleep studies of drugs for treatment of insomnia.
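    The JZ approach singled out above can be sketched as follows, assuming paired continuous scores from two raters; this is an illustrative Python reimplementation, not the authors' code.

        import numpy as np

        def ccc(x, y):
            """Concordance correlation coefficient between two raters' scores."""
            sxy = np.cov(x, y, bias=True)[0, 1]
            return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

        def ccc_jz_ci(x, y, z=1.96):
            """Jackknife CI for the CCC computed on Fisher's Z scale (the JZ method)."""
            n = len(x)
            idx = np.arange(n)
            theta = np.arctanh(ccc(x, y))
            # Leave-one-out estimates on the Z scale, turned into pseudo-values.
            loo = np.array([np.arctanh(ccc(x[idx != i], y[idx != i])) for i in range(n)])
            pseudo = n * theta - (n - 1) * loo
            mean, se = pseudo.mean(), pseudo.std(ddof=1) / np.sqrt(n)
            return np.tanh(mean - z * se), np.tanh(mean + z * se)  # back-transform

        rng = np.random.default_rng(2)
        x = rng.normal(size=30)
        y = x + rng.normal(scale=0.5, size=30)   # second rater = first plus noise
        print(ccc(x, y), ccc_jz_ci(x, y))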

  7. Solutions of interval type-2 fuzzy polynomials using a new ranking method

    Science.gov (United States)

    Rahman, Nurhakimah Ab.; Abdullah, Lazim; Ghani, Ahmad Termimi Ab.; Ahmad, Noor'Ani

    2015-10-01

    A few years ago, a ranking method was introduced for fuzzy polynomial equations. The concept of the ranking method is to find the actual roots of fuzzy polynomials (if they exist). Fuzzy polynomials are transformed into a system of crisp polynomials using a ranking method based on three parameters, namely value, ambiguity and fuzziness. However, it was found that solutions based on these three parameters are quite inefficient at producing answers. Therefore, in this study a new ranking method has been developed with the aim of overcoming this inherent weakness. The new ranking method, which has four parameters, is then applied to the interval type-2 fuzzy polynomials, covering the interval type-2 fuzzy polynomial equation, dual fuzzy polynomial equations and systems of fuzzy polynomials. The efficiency of the new ranking method is then numerically examined for triangular fuzzy numbers and trapezoidal fuzzy numbers. Finally, the approximate solutions produced in the numerical examples indicate that the new ranking method successfully produces actual roots for the interval type-2 fuzzy polynomials.

  8. An interval fixed-mix stochastic programming method for greenhouse gas mitigation in energy systems under uncertainty

    International Nuclear Information System (INIS)

    Xie, Y.L.; Li, Y.P.; Huang, G.H.; Li, Y.F.

    2010-01-01

    In this study, an interval fixed-mix stochastic programming (IFSP) model is developed for greenhouse gas (GHG) emissions reduction management under uncertainties. In the IFSP model, methods of interval-parameter programming (IPP) and fixed-mix stochastic programming (FSP) are introduced into an integer programming framework, such that the developed model can tackle uncertainties described in terms of interval values and probability distributions over a multi-stage context. Moreover, it can reflect dynamic decisions for facility-capacity expansion during the planning horizon. The developed model is applied to a case of planning GHG-emission mitigation, demonstrating that IFSP is applicable to reflecting complexities of multi-uncertainty, dynamic and interactive energy management systems, and capable of addressing the problem of GHG-emission reduction. A number of scenarios corresponding to different GHG-emission mitigation levels are examined; the results suggest that reasonable solutions have been generated. They can be used for generating plans for energy resource/electricity allocation and capacity expansion and help decision makers identify desired GHG mitigation policies under various economic costs and environmental requirements.

  9. The Interval Slope Method for Long-Term Forecasting of Stock Price Trends

    Directory of Open Access Journals (Sweden)

    Chun-xue Nie

    2016-01-01

    Full Text Available A stock price is a typical but complex type of time series data. We used the effective prediction of long-term time series data to schedule an investment strategy and obtain higher profit. Due to economic, environmental, and other factors, it is very difficult to obtain a precise long-term stock price prediction. The exponentially segmented pattern (ESP is introduced here and used to predict the fluctuation of different stock data over five future prediction intervals. The new feature of stock pricing during the subinterval, named the interval slope, can characterize fluctuations in stock price over specific periods. The cumulative distribution function (CDF of MSE was compared to those of MMSE-BC and SVR. We concluded that the interval slope developed here can capture more complex dynamics of stock price trends. The mean stock price can then be predicted over specific time intervals relatively accurately, in which multiple mean values over time intervals are used to express the time series in the long term. In this way, the prediction of long-term stock price can be more precise and prevent the development of cumulative errors.
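    The interval-slope feature itself is straightforward to compute. A minimal Python sketch follows, with a synthetic random-walk price series and a hypothetical interval length; the paper's prediction machinery built on top of this feature is not reproduced.

        import numpy as np

        def interval_slopes(prices, interval=20):
            """Least-squares slope of the price within each consecutive subinterval."""
            t = np.arange(interval)
            slopes = []
            for start in range(0, len(prices) - interval + 1, interval):
                window = prices[start:start + interval]
                slopes.append(np.polyfit(t, window, deg=1)[0])  # slope of a linear fit
            return np.array(slopes)

        # Synthetic random-walk price series standing in for daily closing prices.
        prices = 100 + np.cumsum(np.random.default_rng(3).normal(0, 1, 200))
        print(interval_slopes(prices, interval=20))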

  10. Exponential operations and aggregation operators of interval neutrosophic sets and their decision making methods.

    Science.gov (United States)

    Ye, Jun

    2016-01-01

    An interval neutrosophic set (INS) is a subclass of a neutrosophic set and a generalization of an interval-valued intuitionistic fuzzy set, and then the characteristics of INS are independently described by the interval numbers of its truth-membership, indeterminacy-membership, and falsity-membership degrees. However, the exponential parameters (weights) of all the existing exponential operational laws of INSs and the corresponding exponential aggregation operators are crisp values in interval neutrosophic decision making problems. As a supplement, this paper firstly introduces new exponential operational laws of INSs, where the bases are crisp values or interval numbers and the exponents are interval neutrosophic numbers (INNs), which are basic elements in INSs. Then, we propose an interval neutrosophic weighted exponential aggregation (INWEA) operator and a dual interval neutrosophic weighted exponential aggregation (DINWEA) operator based on these exponential operational laws and introduce comparative methods based on cosine measure functions for INNs and dual INNs. Further, we develop decision-making methods based on the INWEA and DINWEA operators. Finally, a practical example on the selecting problem of global suppliers is provided to illustrate the applicability and rationality of the proposed methods.

  11. A Fourier transform method for the selection of a smoothing interval

    International Nuclear Information System (INIS)

    Kekre, H.B.; Madan, V.K.; Bairi, B.R.

    1989-01-01

    A novel method for the selection of a smoothing interval for the widely used Savitzky and Golay's smoothing filter is proposed. Complementary bandwidths for the nuclear spectral data and the smoothing filter are defined. The criterion for the selection of smoothing interval is based on matching the bandwidths of the spectral data to the filter. Using the above method five real observed spectral peaks of different full width at half maximum, viz. 23.5, 19.5, 17, 8.5 and 6.5 channels, were smoothed and the results are presented. (orig.)
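    A rough rendering of the idea in Python is shown below, with the bandwidth-matching criterion reduced to a crude spectral-power cutoff; the cutoff fraction and the synthetic peak are placeholders, and the paper's matching rule is more principled.

        import numpy as np
        from scipy.signal import savgol_filter

        def pick_window(signal, power_fraction=0.99):
            """Choose a smoothing window whose pass band covers the frequencies that
            hold a given fraction of the signal's spectral power (a crude stand-in
            for the bandwidth-matching criterion of the paper)."""
            spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
            cum = np.cumsum(spectrum) / spectrum.sum()
            f_cut = np.searchsorted(cum, power_fraction) / len(signal)  # cycles/sample
            window = int(round(1.0 / max(f_cut, 1.0 / len(signal))))
            window |= 1                                   # Savitzky-Golay needs an odd window
            return int(np.clip(window, 5, len(signal) // 2 * 2 - 1))

        x = np.linspace(-10, 10, 201)
        peak = np.exp(-0.5 * (x / 2.0) ** 2)              # synthetic spectral peak
        noisy = peak + np.random.default_rng(4).normal(0, 0.05, x.size)
        w = pick_window(noisy)
        smoothed = savgol_filter(noisy, window_length=w, polyorder=2)
        print(f"selected smoothing window: {w} channels")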

  12. Resampling Approach for Determination of the Method for Reference Interval Calculation in Clinical Laboratory Practice

    Science.gov (United States)

    Pavlov, Igor Y.; Wilson, Andrew R.; Delgado, Julio C.

    2010-01-01

    Reference intervals (RI) play a key role in clinical interpretation of laboratory test results. Numerous articles are devoted to analyzing and discussing various methods of RI determination. The two most widely used approaches are the parametric method, which assumes data normality, and a nonparametric, rank-based procedure. The decision about which method to use is usually made arbitrarily. The goal of this study was to demonstrate that using a resampling approach for the comparison of RI determination techniques could help researchers select the right procedure. Three methods of RI calculation—parametric, transformed parametric, and quantile-based bootstrapping—were applied to multiple random samples drawn from 81 values of complement factor B observations and from a computer-simulated normally distributed population. It was shown that differences in RI between legitimate methods could be up to 20% and even more. The transformed parametric method was found to be the best method for the calculation of RI of non-normally distributed factor B estimations, producing an unbiased RI and the lowest confidence limits and interquartile ranges. For a simulated Gaussian population, parametric calculations, as expected, were the best; quantile-based bootstrapping produced biased results at low sample sizes, and the transformed parametric method generated heavily biased RI. The resampling approach could help compare different RI calculation methods. An algorithm showing a resampling procedure for choosing the appropriate method for RI calculations is included. PMID:20554803
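    The quantile-based bootstrapping arm of the comparison can be condensed into a short Python sketch; the simulated analyte values and the bootstrap settings are illustrative only.

        import numpy as np

        def bootstrap_reference_interval(values, n_boot=5000, seed=0):
            """Quantile-based bootstrap estimate of the central 95% reference
            interval, with percentile CIs for each limit."""
            rng = np.random.default_rng(seed)
            n = len(values)
            lows, highs = np.empty(n_boot), np.empty(n_boot)
            for b in range(n_boot):
                sample = rng.choice(values, size=n, replace=True)
                lows[b], highs[b] = np.quantile(sample, [0.025, 0.975])
            ri = (lows.mean(), highs.mean())
            ci_lower_limit = np.quantile(lows, [0.05, 0.95])    # 90% CI of the lower limit
            ci_upper_limit = np.quantile(highs, [0.05, 0.95])   # 90% CI of the upper limit
            return ri, ci_lower_limit, ci_upper_limit

        # Simulated, mildly right-skewed analyte results (81 values, arbitrary units).
        data = np.random.default_rng(5).lognormal(mean=3.0, sigma=0.3, size=81)
        print(bootstrap_reference_interval(data))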

  13. Synchronization of Markovian jumping stochastic complex networks with distributed time delays and probabilistic interval discrete time-varying delays

    International Nuclear Information System (INIS)

    Li Hongjie; Yue Dong

    2010-01-01

    The paper investigates the synchronization stability problem for a class of complex dynamical networks with Markovian jumping parameters and mixed time delays. The complex networks consist of m modes and the networks switch from one mode to another according to a Markovian chain with known transition probability. The mixed time delays are composed of discrete and distributed delays, the discrete time delay is assumed to be random and its probability distribution is known a priori. In terms of the probability distribution of the delays, the new type of system model with probability-distribution-dependent parameter matrices is proposed. Based on the stochastic analysis techniques and the properties of the Kronecker product, delay-dependent synchronization stability criteria in the mean square are derived in the form of linear matrix inequalities which can be readily solved by using the LMI toolbox in MATLAB, the solvability of derived conditions depends on not only the size of the delay, but also the probability of the delay-taking values in some intervals. Finally, a numerical example is given to illustrate the feasibility and effectiveness of the proposed method.

  14. Robust Confidence Interval for a Ratio of Standard Deviations

    Science.gov (United States)

    Bonett, Douglas G.

    2006-01-01

    Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…

  15. Analysis of calculating methods for failure distribution function based on maximal entropy principle

    International Nuclear Information System (INIS)

    Guo Chunying; Lin Yuangen; Jiang Meng; Wu Changli

    2009-01-01

    The computation of failure distribution functions of electronic devices exposed to gamma rays is discussed here. First, the possible device failure distribution models are determined through tests of statistical hypotheses using the test data. The results show that the devices' failure distribution can be consistent with multiple distribution models when the test data are few. In order to decide the optimum failure distribution model, the maximal entropy principle is used and the elementary failure models are determined. Then, the Bootstrap estimation method is used to simulate the interval estimation of the mean and the standard deviation. On the basis of this, the maximal entropy principle is used again and the simulated annealing method is applied to find the optimum values of the mean and the standard deviation. Accordingly, the electronic devices' optimum failure distributions are finally determined and the survival probabilities are calculated. (authors)

  16. Confidence interval procedures for Monte Carlo transport simulations

    International Nuclear Information System (INIS)

    Pederson, S.P.

    1997-01-01

    The problem of obtaining valid confidence intervals based on estimates from sampled distributions using Monte Carlo particle transport simulation codes such as MCNP is examined. Such intervals can cover the true parameter of interest at a lower than nominal rate if the sampled distribution is extremely right-skewed by large tallies. Modifications to the standard theory of confidence intervals are discussed and compared with some existing heuristics, including batched means normality tests. Two new types of diagnostics are introduced to assess whether the conditions of central limit theorem-type results are satisfied: the relative variance of the variance determines whether the sample size is sufficiently large, and estimators of the slope of the right tail of the distribution are used to indicate the number of moments that exist. A simulation study is conducted to quantify the relationship between various diagnostics and coverage rates and to find sample-based quantities useful in indicating when intervals are expected to be valid. Simulated tally distributions are chosen to emulate behavior seen in difficult particle transport problems. Measures of variation in the sample variance s 2 are found to be much more effective than existing methods in predicting when coverage will be near nominal rates. Batched means tests are found to be overly conservative in this regard. A simple but pathological MCNP problem is presented as an example of false convergence using existing heuristics. The new methods readily detect the false convergence and show that the results of the problem, which are a factor of 4 too small, should not be used. Recommendations are made for applying these techniques in practice, using the statistical output currently produced by MCNP
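    The first of these diagnostics, the relative variance of the variance (VOV), has a standard sample form. The Python sketch below assumes equally weighted history scores; the rule-of-thumb threshold in the comment is a common convention rather than a value taken from this paper.

        import numpy as np

        def vov(x):
            """Relative variance of the (estimated) variance of the mean for equally
            weighted scores; a common rule of thumb flags VOV > 0.1 as unreliable."""
            d = x - x.mean()
            return (d ** 4).sum() / (d ** 2).sum() ** 2 - 1.0 / len(x)

        rng = np.random.default_rng(6)
        well_behaved = rng.normal(1.0, 0.1, 10_000)    # benign tally distribution
        heavy_tailed = rng.pareto(1.5, 10_000)         # right-skewed by rare large tallies
        print(f"normal tallies:       VOV = {vov(well_behaved):.5f}")
        print(f"heavy-tailed tallies: VOV = {vov(heavy_tailed):.5f}")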

  17. Magnetic Resonance Imaging in the measurement of whole body muscle mass: A comparison of interval gap methods

    International Nuclear Information System (INIS)

    Hellmanns, K.; McBean, K.; Thoirs, K.

    2015-01-01

    Purpose: Magnetic Resonance Imaging (MRI) is commonly used in body composition research to measure whole body skeletal muscle mass (SM). MRI calculation methods of SM can vary by analysing the images at different slice intervals (or interval gaps) along the length of the body. This study compared SM measurements made from MRI images of apparently healthy individuals using different interval gap methods to determine the error associated with each technique. It was anticipated that the results would inform researchers of optimum interval gap measurements to detect a predetermined minimum change in SM. Methods: A method comparison study was used to compare eight interval gap methods (interval gaps of 40, 50, 60, 70, 80, 100, 120 and 140 mm) against a reference 10 mm interval gap method for measuring SM from twenty MRI image sets acquired from apparently healthy participants. Pearson product-moment correlation analysis was used to determine the association between methods. Total error was calculated as the sum of the bias (systematic error) and the random error (limits of agreement) of the mean differences. Percentage error was used to demonstrate proportional error. Results: Pearson product-moment correlation analysis between the reference method and all interval gap methods demonstrated strong and significant associations (r > 0.99, p < 0.0001). The 40 mm interval gap method was comparable with the 10 mm interval reference method and had a low error (total error 0.95 kg, −3.4%). Analysis methods using wider interval gap techniques demonstrated larger errors than reported for dual-energy x-ray absorptiometry (DXA), a technique which is more available, less expensive, and less time consuming than MRI analysis of SM. Conclusions: Researchers using MRI to measure SM can be confident in using a 40 mm interval gap technique when analysing the images to detect minimum changes of less than 1 kg. The use of wider intervals will introduce error that is no better than that reported for DXA.

  18. The Interval-Valued Triangular Fuzzy Soft Set and Its Method of Dynamic Decision Making

    Directory of Open Access Journals (Sweden)

    Xiaoguo Chen

    2014-01-01

    Full Text Available A concept of interval-valued triangular fuzzy soft set is presented, and some operations of “AND,” “OR,” intersection, union and complement, and so forth are defined. Then some relative properties are discussed and several conclusions are drawn. A dynamic decision making model is built based on the definition of interval-valued triangular fuzzy soft set, in which period weight is determined by the exponential decay method. The arithmetic weighted average operator of interval-valued triangular fuzzy soft set is given by the aggregating thought, thereby aggregating interval-valued triangular fuzzy soft sets of different time-series into a collective interval-valued triangular fuzzy soft set. The formulas of selection and decision values of different objects are given; therefore the optimal decision making is achieved according to the decision values. Finally, the steps of this method are concluded, and one example is given to explain the application of the method.

  19. Bootstrap Prediction Intervals in Non-Parametric Regression with Applications to Anomaly Detection

    Science.gov (United States)

    Kumar, Sricharan; Srivastava, Ashok N.

    2012-01-01

    Prediction intervals provide a measure of the probable interval in which the outputs of a regression model can be expected to occur. Subsequently, these prediction intervals can be used to determine if the observed output is anomalous or not, conditioned on the input. In this paper, a procedure for determining prediction intervals for outputs of nonparametric regression models using bootstrap methods is proposed. Bootstrap methods allow for a non-parametric approach to computing prediction intervals with no specific assumptions about the sampling distribution of the noise or the data. The asymptotic fidelity of the proposed prediction intervals is theoretically proved. Subsequently, the validity of the bootstrap based prediction intervals is illustrated via simulations. Finally, the bootstrap prediction intervals are applied to the problem of anomaly detection on aviation data.
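    A condensed residual-bootstrap sketch for a kernel smoother is given below; this is one of several ways to realize the idea, and the Nadaraya-Watson smoother, bandwidth, and synthetic data are placeholder choices rather than the authors' setup.

        import numpy as np

        def kernel_smooth(x_train, y_train, x_eval, h=0.3):
            """Nadaraya-Watson regression estimate with a Gaussian kernel."""
            w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
            return (w * y_train).sum(axis=1) / w.sum(axis=1)

        def bootstrap_pi(x, y, x_new, n_boot=1000, alpha=0.05, seed=0):
            """Residual-bootstrap prediction intervals for the smoother's outputs."""
            rng = np.random.default_rng(seed)
            fit = kernel_smooth(x, y, x)
            resid = y - fit
            preds = np.empty((n_boot, len(x_new)))
            for b in range(n_boot):
                y_star = fit + rng.choice(resid, size=len(y), replace=True)
                preds[b] = (kernel_smooth(x, y_star, x_new)
                            + rng.choice(resid, size=len(x_new), replace=True))
            return np.quantile(preds, [alpha / 2, 1 - alpha / 2], axis=0)

        rng = np.random.default_rng(7)
        x = np.sort(rng.uniform(0, 2 * np.pi, 200))
        y = np.sin(x) + rng.normal(0, 0.2, x.size)
        lo, hi = bootstrap_pi(x, y, x_new=np.array([1.0, 3.0, 5.0]))
        # An observation falling outside [lo, hi] at its input is flagged as anomalous.
        print(lo, hi)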

  20. The role of retinopathy distribution and other lesion types for the definition of examination intervals during screening for diabetic retinopathy.

    Science.gov (United States)

    Ometto, Giovanni; Erlandsen, Mogens; Hunter, Andrew; Bek, Toke

    2017-06-01

    It has previously been shown that the intervals between screening examinations for diabetic retinopathy can be optimized by including individual risk factors for the development of the disease in the risk assessment. However, in some cases, the risk model calculating the screening interval may recommend a different interval than an experienced clinician. The purpose of this study was to evaluate the influence of factors unrelated to diabetic retinopathy and the distribution of lesions for discrepancies between decisions made by the clinician and the risk model. Therefore, fundus photographs from 90 screening examinations where the recommendations of the clinician and a risk model had been discrepant were evaluated. Forty features were defined to describe the type and location of the lesions, and classification and ranking techniques were used to assess whether the features could predict the discrepancy between the grader and the risk model. Suspicion of tumours, retinal degeneration and vascular diseases other than diabetic retinopathy could explain why the clinician recommended shorter examination intervals than the model. Additionally, the regional distribution of microaneurysms/dot haemorrhages was important for defining a photograph as belonging to the group where both the clinician and the risk model had recommended a short screening interval as opposed to the other decision alternatives. Features unrelated to diabetic retinopathy and the regional distribution of retinal lesions may affect the recommendation of the examination interval during screening for diabetic retinopathy. The development of automated computerized algorithms for extracting information about the type and location of retinal lesions could be expected to further optimize examination intervals during screening for diabetic retinopathy. © 2016 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  1. Comparison of the methods for determination of calibration and verification intervals of measuring devices

    Directory of Open Access Journals (Sweden)

    Toteva Pavlina

    2017-01-01

    Full Text Available The paper presents different determination and optimisation methods for verification intervals of technical devices for monitoring and measurement based on the requirements of some widely used international standards, e.g. ISO 9001, ISO/IEC 17020, ISO/IEC 17025 etc., maintained by various organizations implementing measuring devices in practice. Comparative analysis of the reviewed methods is conducted in terms of opportunities for assessing the adequacy of interval(s for calibration of measuring devices and their optimisation accepted by an organization – an extension or reduction depending on the obtained results. The advantages and disadvantages of the reviewed methods are discussed, and recommendations for their applicability are provided.

  2. Non-Gaussian distributions of melodic intervals in music: The Lévy-stable approximation

    Science.gov (United States)

    Niklasson, Gunnar A.; Niklasson, Maria H.

    2015-11-01

    The analysis of structural patterns in music is of interest in order to increase our fundamental understanding of music, as well as for devising algorithms for computer-generated music, so-called algorithmic composition. Musical melodies can be analyzed in terms of a “music walk” between the pitches of successive tones in a notescript, in analogy with the “random walk” model commonly used in physics. We find that the distribution of melodic intervals between tones can be approximated with a Lévy-stable distribution. Since music also exhibits self-affine scaling, we propose that the “music walk” should be modelled as a Lévy motion. We find that the Lévy motion model captures basic structural patterns in classical as well as in folk music.
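    Checking such a claim on one's own interval data could look like the following Python sketch; the data here are synthetic stand-ins for note-to-note pitch intervals, and scipy's generic maximum-likelihood fit for the stable family is numerically heavy, so it may run slowly.

        import numpy as np
        from scipy.stats import levy_stable

        # Synthetic "melodic intervals" in semitones: mostly small steps with
        # occasional large leaps, mimicking a heavy-tailed music walk.
        rng = np.random.default_rng(8)
        intervals = levy_stable.rvs(alpha=1.6, beta=0.0, scale=1.5,
                                    size=300, random_state=rng)

        # Maximum-likelihood fit of the stable family (slow: the stable pdf
        # has no closed form and is evaluated numerically).
        alpha, beta, loc, scale = levy_stable.fit(intervals)
        print(f"fitted stability index alpha = {alpha:.2f} (alpha = 2 is Gaussian)")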

  3. The Optimal Confidence Intervals for Agricultural Products’ Price Forecasts Based on Hierarchical Historical Errors

    Directory of Open Access Journals (Sweden)

    Yi Wang

    2016-12-01

    Full Text Available With the levels of confidence and system complexity, interval forecasts and entropy analysis can deliver more information than point forecasts. In this paper, we take receivers' demands as our starting point, use the trade-off model between accuracy and informativeness as the criterion to construct the optimal confidence interval, derive the theoretical formula of the optimal confidence interval, and propose a practical and efficient algorithm based on entropy theory and complexity theory. In order to improve the estimation precision of the error distribution, the point prediction errors are stratified according to prices and the complexity of the system; the corresponding prediction error samples are obtained by the price stratification; and the error distributions are estimated by the kernel function method and the stability of the system. In a stable and orderly environment for price forecasting, we obtain point prediction error samples by the weighted local region and RBF (radial basis function) neural network methods, forecast the intervals of the soybean meal and non-GMO (genetically modified organism) soybean continuous futures closing prices, and implement unconditional coverage, independence and conditional coverage tests for the simulation results. The empirical results are compared across various interval evaluation indicators, different levels of noise, several target confidence levels and different point prediction methods. The analysis shows that the optimal interval construction method is better than the equal probability method and the shortest interval method and has good anti-noise ability with the reduction of system entropy; the hierarchical estimation error method can obtain higher accuracy and better interval estimation than the non-hierarchical method in a stable system.

  4. An Extended TOPSIS Method for Multiple Attribute Decision Making based on Interval Neutrosophic Uncertain Linguistic Variables

    Directory of Open Access Journals (Sweden)

    Said Broumi

    2015-03-01

    Full Text Available Interval neutrosophic uncertain linguistic variables can easily express indeterminate and inconsistent information in the real world, and TOPSIS is a very effective decision-making method with more and more extensive applications. In this paper, we extend the TOPSIS method to deal with interval neutrosophic uncertain linguistic information and propose an extended TOPSIS method to solve multiple attribute decision-making problems in which the attribute values take the form of interval neutrosophic uncertain linguistic variables and the attribute weights are unknown. Firstly, the operational rules and properties for interval neutrosophic variables are introduced. Then the distance between two interval neutrosophic uncertain linguistic variables is proposed, the attribute weights are calculated by the maximizing deviation method, and the closeness coefficients to the ideal solution are computed for each alternative. Finally, an illustrative example is given to illustrate the decision-making steps and the effectiveness of the proposed method.

  5. An Interval Estimation Method of Patent Keyword Data for Sustainable Technology Forecasting

    Directory of Open Access Journals (Sweden)

    Daiho Uhm

    2017-11-01

    Full Text Available Technology forecasting (TF is forecasting the future state of a technology. It is exciting to know the future of technologies, because technology changes the way we live and enhances the quality of our lives. In particular, TF is an important area in the management of technology (MOT for R&D strategy and new product development. Consequently, there are many studies on TF. Patent analysis is one method of TF because patents contain substantial information regarding developed technology. The conventional methods of patent analysis are based on quantitative approaches such as statistics and machine learning. The most traditional TF methods based on patent analysis have a common problem. It is the sparsity of patent keyword data structured from collected patent documents. After preprocessing with text mining techniques, most frequencies of technological keywords in patent data have values of zero. This problem creates a disadvantage for the performance of TF, and we have trouble analyzing patent keyword data. To solve this problem, we propose an interval estimation method (IEM. Using an adjusted Wald confidence interval called the Agresti–Coull confidence interval, we construct our IEM for efficient TF. In addition, we apply the proposed method to forecast the technology of an innovative company. To show how our work can be applied in the real domain, we conduct a case study using Apple technology.
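    The Agresti-Coull adjustment itself is a two-line formula: add z^2/2 pseudo-successes and z^2 pseudo-trials, then apply the usual Wald interval. A Python sketch with made-up keyword counts follows; how the paper maps these proportion intervals onto technology forecasts is not reproduced.

        from math import sqrt

        def agresti_coull(successes, trials, z=1.96):
            """Adjusted Wald (Agresti-Coull) confidence interval for a proportion;
            it stays non-degenerate even when the raw count is 0, as happens for
            sparse patent keywords."""
            n_adj = trials + z ** 2                       # add z^2 pseudo-trials
            p_adj = (successes + z ** 2 / 2) / n_adj      # and z^2/2 pseudo-successes
            half = z * sqrt(p_adj * (1 - p_adj) / n_adj)
            return max(0.0, p_adj - half), min(1.0, p_adj + half)

        # Hypothetical counts: a keyword appearing in 0 of 40 patents still gets
        # a non-degenerate upper bound on its occurrence probability.
        print(agresti_coull(0, 40))
        print(agresti_coull(12, 40))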

  6. Comparing interval estimates for small sample ordinal CFA models.

    Science.gov (United States)

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.

  7. Effect of a data buffer on the recorded distribution of time intervals for random events

    Energy Technology Data Exchange (ETDEWEB)

    Barton, J C [Polytechnic of North London (UK)]

    1976-03-15

    The use of a data buffer enables the distribution of the time intervals between events to be studied for times less than the recording system dead-time but the usual negative exponential distribution for random events has to be modified. The theory for this effect is developed for an n-stage buffer followed by an asynchronous recorder. Results are evaluated for the values of n from 1 to 5. In the language of queueing theory the system studied is of type M/D/1/n+1, i.e. with constant service time and a finite number of places.
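
    The modified distribution is easy to reproduce by simulation; this sketch (not from the paper) assumes Poisson arrivals, an n-stage FIFO buffer feeding a recorder with constant dead-time (the M/D/1/n+1 queue), and that each accepted event is time-stamped on arrival:

        import numpy as np
        from collections import deque

        rng = np.random.default_rng(1)

        def recorded_intervals(rate=1.0, dead_time=0.5, n_buffer=2, n_events=100_000):
            # Events arriving when buffer and recorder are both full are lost.
            arrivals = np.cumsum(rng.exponential(1.0 / rate, n_events))
            in_system = deque()       # departure times of events not yet cleared
            last_departure = 0.0
            accepted = []
            for t in arrivals:
                while in_system and in_system[0] <= t:
                    in_system.popleft()           # recorder has cleared these
                if len(in_system) < n_buffer + 1:
                    last_departure = max(t, last_departure) + dead_time
                    in_system.append(last_departure)
                    accepted.append(t)            # time-stamped on arrival
            return np.diff(accepted)

        iv = recorded_intervals()
        # Intervals below the dead-time are now observable, but losses when
        # the buffer is full distort the pure negative exponential shape.
        print((iv < 0.5).mean(), iv.mean())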

  8. The Distribution of the Interval between Events of a Cox Process with Shot Noise Intensity

    Directory of Open Access Journals (Sweden)

    Angelos Dassios

    2008-01-01

    Full Text Available Applying the theory of piecewise deterministic Markov processes, the probability generating function of a Cox process, incorporating a shot noise process as the claim intensity, is obtained. We also derive the Laplace transform of the distribution of the shot noise process at claim jump times, using the stationarity assumption of the shot noise process at any time. Based on this Laplace transform and the probability generating function of a Cox process with shot noise intensity, we obtain the distribution of the interval between events of a Cox process with shot noise intensity for insurance claims and its moments, that is, mean and variance.

  9. Extraction and LOD control of colored interval volumes

    Science.gov (United States)

    Miyamura, Hiroko N.; Takeshima, Yuriko; Fujishiro, Issei; Saito, Takafumi

    2005-03-01

    Interval volume serves as a generalized isosurface and represents a three-dimensional subvolume for which the associated scalar field values lie within a user-specified closed interval. In general, it is not an easy task for novices to specify the scalar field interval corresponding to their ROIs. In order to extract interval volumes from which desirable geometric features can be mined effectively, we propose a suggestive technique which extracts interval volumes automatically based on a global examination of the field contrast structure. Also proposed here is a simplification scheme for decimating the resultant triangle patches to realize efficient transmission and rendering of large-scale interval volumes. Color distributions as well as geometric features are taken into account to select the best edges to be collapsed. In addition, when a user wants to selectively display and analyze the original dataset, the simplified dataset is restored to the original quality. Several simulated and acquired datasets are used to demonstrate the effectiveness of the present methods.

  10. Two sample Bayesian prediction intervals for order statistics based on the inverse exponential-type distributions using right censored sample

    Directory of Open Access Journals (Sweden)

    M.M. Mohie El-Din

    2011-10-01

    Full Text Available In this paper, two-sample Bayesian prediction intervals for order statistics (OS) are obtained. This prediction is based on a certain class of the inverse exponential-type distributions using a right censored sample. A general class of prior density functions is used and the predictive cumulative function is obtained in the two-sample case. The class of the inverse exponential-type distributions includes several important distributions such as the inverse Weibull distribution, the inverse Burr distribution, the loglogistic distribution, the inverse Pareto distribution and the inverse paralogistic distribution. Special cases of the inverse Weibull model such as the inverse exponential model and the inverse Rayleigh model are considered.

  11. Confidence intervals for distinguishing ordinal and disordinal interactions in multiple regression.

    Science.gov (United States)

    Lee, Sunbok; Lei, Man-Kit; Brody, Gene H

    2015-06-01

    Distinguishing between ordinal and disordinal interaction in multiple regression is useful in testing many interesting theoretical hypotheses. Because the distinction is made based on the location of the crossover point of 2 simple regression lines, confidence intervals of the crossover point can be used to distinguish ordinal and disordinal interactions. This study examined 2 factors that need to be considered in constructing confidence intervals of the crossover point: (a) the assumption about the sampling distribution of the crossover point, and (b) the possibility of abnormally wide confidence intervals for the crossover point. A Monte Carlo simulation study was conducted to compare 6 different methods for constructing confidence intervals of the crossover point in terms of the coverage rate, the proportion of true values that fall to the left or right of the confidence intervals, and the average width of the confidence intervals. The methods include the reparameterization, delta, Fieller, basic bootstrap, percentile bootstrap, and bias-corrected accelerated bootstrap methods. The results of our Monte Carlo simulation study suggest that statistical inference using confidence intervals to distinguish ordinal and disordinal interaction requires sample sizes of more than 500 to provide sufficiently narrow confidence intervals to identify the location of the crossover point. (c) 2015 APA, all rights reserved.
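
    Of the six methods, the percentile bootstrap is the simplest to sketch. Assuming the interaction model y = b0 + b1*x + b2*g + b3*x*g with a binary moderator g, the two simple regression lines cross at x0 = -b2/b3 (function name and defaults are illustrative):

        import numpy as np

        rng = np.random.default_rng(0)

        def crossover_ci(x, g, y, n_boot=5000, alpha=0.05):
            # Percentile-bootstrap confidence interval for x0 = -b2 / b3.
            X = np.column_stack([np.ones_like(x), x, g, x * g])
            n = len(y)
            points = []
            for _ in range(n_boot):
                idx = rng.integers(0, n, n)              # resample cases
                b, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
                points.append(-b[2] / b[3])
            return tuple(np.percentile(points,
                                       [100 * alpha / 2, 100 * (1 - alpha / 2)]))

    Resamples in which b3 is close to zero produce extreme crossover estimates, which is the "abnormally wide intervals" problem the abstract highlights.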

  12. Reference interval computation: which method (not) to choose?

    Science.gov (United States)

    Pavlov, Igor Y; Wilson, Andrew R; Delgado, Julio C

    2012-07-11

    When different methods are applied to reference interval (RI) calculation, the results can sometimes be substantially different, especially for small reference groups. If no reliable RI data are available, there is no way to confirm which method generates results closest to the true RI. We randomly drew samples from a public database for 33 markers. For each sample, RIs were calculated by bootstrapping, parametric, and Box-Cox-transformed parametric methods. Results were compared to the values of the population RI. For approximately half of the 33 markers, results of all 3 methods were within 3% of the true reference value. For the other markers, parametric results were either unavailable or deviated considerably from the true values. The transformed parametric method was more accurate than bootstrapping for a sample size of 60, very close to bootstrapping for a sample size of 120, but in some cases unavailable. We recommend against using untransformed parametric calculations to determine RIs. The transformed parametric method utilizing the Box-Cox transformation would be the preferable way of RI calculation, if it satisfies a normality test. If not, bootstrapping is always available, and is almost as accurate and precise as the transformed parametric method. Copyright © 2012 Elsevier B.V. All rights reserved.
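
    A sketch of the two recommended calculations, assuming positive-valued results (SciPy's boxcox/inv_boxcox pair handles the transformation; function names and defaults are illustrative):

        import numpy as np
        from scipy import stats, special

        def reference_interval_boxcox(values):
            # Parametric central 95% interval after Box-Cox transformation.
            t, lam = stats.boxcox(np.asarray(values, dtype=float))
            lo, hi = np.mean(t) + np.array([-1.96, 1.96]) * np.std(t, ddof=1)
            return special.inv_boxcox(lo, lam), special.inv_boxcox(hi, lam)

        def reference_interval_bootstrap(values, n_boot=2000, seed=0):
            # Bootstrap estimate of the 2.5th/97.5th percentiles.
            rng = np.random.default_rng(seed)
            values = np.asarray(values, dtype=float)
            qs = [np.percentile(rng.choice(values, len(values)), [2.5, 97.5])
                  for _ in range(n_boot)]
            return tuple(np.mean(qs, axis=0))

    Per the abstract, the transformed parametric result should only be used when the transformed values pass a normality test; otherwise fall back to the bootstrap.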

  13. RANDOM FUNCTIONS AND INTERVAL METHOD FOR PREDICTING THE RESIDUAL RESOURCE OF BUILDING STRUCTURES

    Directory of Open Access Journals (Sweden)

    Shmelev Gennadiy Dmitrievich

    2017-11-01

    Full Text Available Subject: the possibility of using random functions and the interval prediction method for estimating the residual life of building structures in buildings currently in use. Research objectives: coordination of the ranges of values used to develop predictions with the random functions that characterize the processes being predicted. Materials and methods: when performing this research, the method of random functions and the method of interval prediction were used. Results: in the course of this work, the basic properties of random functions, including the properties of families of random functions, were studied. The coordination of time-varying impacts and loads on building structures is considered from the viewpoint of their influence on structures and the representation of the structures’ behavior in the form of random functions. Several models of random functions are proposed for predicting individual parameters of structures. For each of the proposed models, its scope of application is defined. The article notes that the considered forecasting approach has been used many times at various sites. In addition, the available results allowed the authors to develop a methodology for assessing the technical condition and residual life of building structures for facilities currently in use. Conclusions: we studied the possibility of using random functions and processes for forecasting the residual service lives of structures in buildings and engineering constructions. We considered the possibility of using an interval forecasting approach to estimate changes in the defining parameters of building structures and their technical condition. A comprehensive technique for forecasting the residual life of building structures using the interval approach is proposed.

  14. Analysis of Low Frequency Oscillation Using the Multi-Interval Parameter Estimation Method on a Rolling Blackout in the KEPCO System

    Directory of Open Access Journals (Sweden)

    Kwan-Shik Shim

    2017-04-01

    Full Text Available This paper describes a multiple time interval (“multi-interval”) parameter estimation method. The multi-interval parameter estimation method estimates a parameter from a new multi-interval prediction error polynomial that can simultaneously consider multiple time intervals. The roots of the multi-interval prediction error polynomial include the effect of each time interval, and the important modes can be estimated by solving one polynomial for multiple time intervals or signals. The algorithm of the multi-interval parameter estimation method proposed in this paper is applied to a test function and to data measured from a PMU (phasor measurement unit) installed in the KEPCO (Korea Electric Power Corporation) system. The results confirm that the proposed multi-interval parameter estimation method accurately and reliably estimates important parameters.

  15. QT interval in healthy dogs: which method of correcting the QT interval in dogs is appropriate for use in small animal clinics?

    Directory of Open Access Journals (Sweden)

    Maira S. Oliveira

    2014-05-01

    Full Text Available The electrocardiography (ECG) QT interval is influenced by fluctuations in heart rate (HR), which may lead to misinterpretation of its length. Considering that alterations in QT interval length reflect abnormalities of the ventricular repolarisation which predispose to the occurrence of arrhythmias, this variable must be properly evaluated. The aim of this work is to determine which method of correcting the QT interval is the most appropriate for dogs regarding different ranges of normal HR (different breeds). Healthy adult dogs (n=130; German Shepherd, Boxer, Pit Bull Terrier, and Poodle) were submitted to ECG examination and QT intervals were determined in triplicate from the bipolar limb lead II and corrected for the effects of HR through the application of three published formulae involving quadratic, cubic or linear regression. The mean corrected QT values (QTc) obtained using the diverse formulae were significantly different (p<0.05), while those derived according to the equation QTcV = QT + 0.087(1 − RR) were the most consistent (linear regression). QTcV values were strongly correlated (r=0.83) with the QT interval and showed a coefficient of variation of 8.37% and a 95% confidence interval of 0.22-0.23 s. Owing to its simplicity and reliability, the QTcV was considered the most appropriate for the correction of the QT interval in dogs.
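
    The selected correction is a one-liner; a sketch, assuming QT and RR are expressed in seconds (the formula is the linear-regression correction the abstract denotes QTcV):

        def qtc_v(qt_s, rr_s):
            # Linear-regression correction QTcV = QT + 0.087 * (1 - RR),
            # with the QT and RR intervals in seconds.
            return qt_s + 0.087 * (1.0 - rr_s)

        # e.g. QT = 0.21 s at a heart rate of 120 bpm (RR = 0.5 s):
        print(round(qtc_v(0.21, 0.5), 3))   # 0.254 s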

  16. The Interval-Valued Triangular Fuzzy Soft Set and Its Method of Dynamic Decision Making

    OpenAIRE

    Xiaoguo Chen; Hong Du; Yue Yang

    2014-01-01

    A concept of interval-valued triangular fuzzy soft set is presented, and some operations of “AND,” “OR,” intersection, union and complement, and so forth are defined. Then some relative properties are discussed and several conclusions are drawn. A dynamic decision making model is built based on the definition of interval-valued triangular fuzzy soft set, in which period weight is determined by the exponential decay method. The arithmetic weighted average operator of interval-valued triangular...

  17. Coefficient Omega Bootstrap Confidence Intervals: Nonnormal Distributions

    Science.gov (United States)

    Padilla, Miguel A.; Divers, Jasmin

    2013-01-01

    The performance of the normal theory bootstrap (NTB), the percentile bootstrap (PB), and the bias-corrected and accelerated (BCa) bootstrap confidence intervals (CIs) for coefficient omega was assessed through a Monte Carlo simulation under conditions not previously investigated. Of particular interests were nonnormal Likert-type and binary items.…

  18. Computing interval-valued reliability measures: application of optimal control methods

    DEFF Research Database (Denmark)

    Kozin, Igor; Krymsky, Victor

    2017-01-01

    The paper describes an approach to deriving interval-valued reliability measures given partial statistical information on the occurrence of failures. We apply methods of optimal control theory, in particular Pontryagin’s principle of maximum, to solve the non-linear optimisation problem and derive the probabilistic interval-valued quantities of interest. It is proven that the optimisation problem can be translated into another problem statement that can be solved on the class of piecewise continuous probability density functions (pdfs). This class often consists of piecewise exponential pdfs, which appear as soon as the constraints include bounds on the failure rate of the component under consideration. Finding the number of switching points of the piecewise continuous pdfs and their values becomes the focus of the approach described in the paper. Examples are provided.

  19. A Network Reconfiguration Method Considering Data Uncertainties in Smart Distribution Networks

    Directory of Open Access Journals (Sweden)

    Ke-yan Liu

    2017-05-01

    Full Text Available This work presents a method for distribution network reconfiguration with simultaneous consideration of distributed generation (DG) allocation. The uncertainties of load fluctuation before the network reconfiguration are also considered. Three optimization objectives, minimal line loss cost, minimum Expected Energy Not Supplied, and minimum switch operation cost, are investigated. The multi-objective optimization problem is further transformed into a single-objective optimization problem by utilizing weighting factors. The proposed network reconfiguration method comprises two stages. The first stage creates a feasible network topology using binary particle swarm optimization (BPSO). Then the DG allocation problem is solved by utilizing sensitivity analysis and a Harmony Search algorithm (HSA). Meanwhile, interval analysis is applied to deal with the uncertainties of load and device parameters. Test cases are studied using the standard IEEE 33-bus and PG&E 69-bus systems. Different scenarios and comparisons are analyzed in the experiments. The results show the applicability of the proposed method, and its performance is also investigated. The computational results indicate that the proposed network reconfiguration algorithm is feasible.

  20. Detection of bursts in neuronal spike trains by the mean inter-spike interval method

    Institute of Scientific and Technical Information of China (English)

    Lin Chen; Yong Deng; Weihua Luo; Zhen Wang; Shaoqun Zeng

    2009-01-01

    Bursts are electrical spikes firing with a high frequency, and they are of central importance in synaptic plasticity and information processing in the central nervous system. However, bursts are difficult to identify because bursting activities or patterns vary with physiological conditions or external stimuli. In this paper, a simple method to automatically detect bursts in spike trains is described. This method auto-adaptively sets a parameter (the mean inter-spike interval) according to intrinsic properties of the detected burst spike trains, without any arbitrary choices or any operator judgment. When the mean value of several successive inter-spike intervals is not larger than the parameter, a burst is identified. By this method, bursts can be automatically extracted from different bursting patterns of cultured neurons on multi-electrode arrays, as accurately as by visual inspection. Furthermore, significant changes of burst variables caused by electrical stimuli have been found in the spontaneous activity of neuronal networks. These results suggest that the mean inter-spike interval method is robust for detecting changes in burst patterns and characteristics induced by environmental alterations.
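
    A sketch of the detection rule as described, assuming the threshold is the mean ISI of the whole train; the minimum burst size of three spikes is an assumption for illustration, not from the paper:

        import numpy as np

        def detect_bursts(spike_times, min_spikes=3):
            # Auto-adaptive threshold: mean inter-spike interval of the train.
            isi = np.diff(spike_times)
            threshold = isi.mean()
            bursts, start = [], None
            for i, gap in enumerate(isi):
                if gap <= threshold:
                    start = i if start is None else start
                else:
                    # Run of short ISIs ended: spikes start..i form a candidate.
                    if start is not None and i - start + 1 >= min_spikes:
                        bursts.append((spike_times[start], spike_times[i]))
                    start = None
            if start is not None and len(isi) - start + 1 >= min_spikes:
                bursts.append((spike_times[start], spike_times[-1]))
            return bursts   # list of (burst start time, burst end time)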

  1. A New Uncertain Analysis Method for the Prediction of Acoustic Field with Random and Interval Parameters

    Directory of Open Access Journals (Sweden)

    Mingjie Wang

    2016-01-01

    Full Text Available For the frequency response analysis of an acoustic field with random and interval parameters, a nonintrusive uncertainty analysis method named the Polynomial Chaos Response Surface (PCRS) method is proposed. In the proposed method, the polynomial chaos expansion method is employed to deal with the random parameters, and the response surface method is used to handle the interval parameters. The PCRS method does not require efforts to modify the model equations due to its nonintrusive characteristic. By means of the PCRS method combined with the existing interval analysis method, the lower and upper bounds of the expectation, variance, and probability density function of the frequency response can be efficiently evaluated. Two numerical examples are conducted to validate the accuracy and efficiency of the approach. The results show that the PCRS method is more efficient than the direct Monte Carlo simulation (MCS) method based on the original numerical model, without causing significant loss of accuracy.

  2. An Interval-Valued Intuitionistic Fuzzy TOPSIS Method Based on an Improved Score Function

    Directory of Open Access Journals (Sweden)

    Zhi-yong Bai

    2013-01-01

    Full Text Available This paper proposes an improved score function for the effective ranking order of interval-valued intuitionistic fuzzy sets (IVIFSs) and an interval-valued intuitionistic fuzzy TOPSIS method based on the score function to solve multicriteria decision-making problems in which all the preference information provided by decision-makers is expressed as interval-valued intuitionistic fuzzy decision matrices, where each of the elements is characterized by an IVIFS value, and the information about criterion weights is known. We apply the proposed score function to calculate the separation measures of each alternative from the positive and negative ideal solutions to determine the relative closeness coefficients. According to the values of the closeness coefficients, the alternatives can be ranked and the most desirable one(s) can be selected in the decision-making process. Finally, two illustrative examples for multicriteria fuzzy decision-making problems are used as a demonstration of the applications and the effectiveness of the proposed decision-making method.

  3. Treatment of uncertainty through the interval smart/swing weighting method: a case study

    Directory of Open Access Journals (Sweden)

    Luiz Flávio Autran Monteiro Gomes

    2011-12-01

    Full Text Available An increasingly competitive market means that many decisions must be taken, quickly and with precision, in complex, high risk scenarios. This combination of factors makes it necessary to use decision aiding methods which provide a means of dealing with uncertainty in the judgement of the alternatives. This work presents the use of the MAUT method, combined with the INTERVAL SMART/SWING WEIGHTING method. Although multicriteria decision aiding was not conceived specifically for tackling uncertainty, the combined use of MAUT and the INTERVAL SMART/SWING WEIGHTING method allows approaching decision problems under uncertainty. The main concepts which are involved in these two methods are described and their joint application to the case study concerning the selection of a printing service supplier is presented. The case study makes use of the WINPRE software as a support tool for the calculation of dominance. It is then concluded that the proposed approach can be applied to decision making problems under uncertainty.

  4. A global multicenter study on reference values: 1. Assessment of methods for derivation and comparison of reference intervals.

    Science.gov (United States)

    Ichihara, Kiyoshi; Ozarda, Yesim; Barth, Julian H; Klee, George; Qiu, Ling; Erasmus, Rajiv; Borai, Anwar; Evgina, Svetlana; Ashavaid, Tester; Khan, Dilshad; Schreier, Laura; Rolle, Reynan; Shimizu, Yoshihisa; Kimura, Shogo; Kawano, Reo; Armbruster, David; Mori, Kazuo; Yadav, Binod K

    2017-04-01

    The IFCC Committee on Reference Intervals and Decision Limits coordinated a global multicenter study on reference values (RVs) to explore rational and harmonizable procedures for derivation of reference intervals (RIs) and investigate the feasibility of sharing RIs through evaluation of sources of variation of RVs on a global scale. For the common protocol, rather lenient criteria for reference individuals were adopted to facilitate harmonized recruitment with planned use of the latent abnormal values exclusion (LAVE) method. As of July 2015, 12 countries had completed their study with total recruitment of 13,386 healthy adults. 25 analytes were measured chemically and 25 immunologically. A serum panel with assigned values was measured by all laboratories. RIs were derived by parametric and nonparametric methods. The effect of LAVE methods is prominent in analytes which reflect nutritional status, inflammation and muscular exertion, indicating that inappropriate results are frequent in any country. The validity of the parametric method was confirmed by the presence of analyte-specific distribution patterns and successful Gaussian transformation using the modified Box-Cox formula in all countries. After successful alignment of RVs based on the panel test results, nearly half the analytes showed variable degrees of between-country differences. This finding, however, requires confirmation after adjusting for BMI and other sources of variation. The results are reported in the second part of this paper. The collaborative study enabled us to evaluate rational methods for deriving RIs and comparing the RVs based on real-world datasets obtained in a harmonized manner. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  5. Point and interval forecasts of mortality rates and life expectancy: A comparison of ten principal component methods

    Directory of Open Access Journals (Sweden)

    Han Lin Shang

    2011-07-01

    Full Text Available Using the age- and sex-specific data of 14 developed countries, we compare the point and interval forecast accuracy and bias of ten principal component methods for forecasting mortality rates and life expectancy. The ten methods are variants and extensions of the Lee-Carter method. Based on one-step forecast errors, the weighted Hyndman-Ullah method provides the most accurate point forecasts of mortality rates and the Lee-Miller method is the least biased. For the accuracy and bias of life expectancy, the weighted Hyndman-Ullah method performs the best for female mortality and the Lee-Miller method for male mortality. While all methods underestimate variability in mortality rates, the more complex Hyndman-Ullah methods are more accurate than the simpler methods. The weighted Hyndman-Ullah method provides the most accurate interval forecasts for mortality rates, while the robust Hyndman-Ullah method provides the best interval forecast accuracy for life expectancy.

  6. Estimation of reference intervals from small samples: an example using canine plasma creatinine.

    Science.gov (United States)

    Geffré, A; Braun, J P; Trumel, C; Concordet, D

    2009-12-01

    According to international recommendations, reference intervals should be determined from at least 120 reference individuals, which often is impossible to achieve in veterinary clinical pathology, especially for wild animals. When only a small number of reference subjects is available, the possible bias cannot be known and the normality of the distribution cannot be evaluated. A comparison of reference intervals estimated by different methods could be helpful. The purpose of this study was to compare reference limits determined from a large set of canine plasma creatinine reference values, and large subsets of this data, with estimates obtained from small samples selected randomly. Twenty sets each of 120 and 27 samples were randomly selected from a set of 1439 plasma creatinine results obtained from healthy dogs in another study. Reference intervals for the whole sample and for the large samples were determined by a nonparametric method. The estimated reference limits for the small samples were minimum and maximum, mean +/- 2 SD of native and Box-Cox-transformed values, 2.5th and 97.5th percentiles by a robust method on native and Box-Cox-transformed values, and estimates from diagrams of cumulative distribution functions. The whole sample had a heavily skewed distribution, which approached Gaussian after Box-Cox transformation. The reference limits estimated from small samples were highly variable. The closest estimates to the 1439-result reference interval for 27-result subsamples were obtained by both parametric and robust methods after Box-Cox transformation, but were grossly erroneous in some cases. For small samples, it is recommended that all values be reported graphically in a dot plot or histogram and that estimates of the reference limits be compared using different methods.

  7. Application of the entropic coefficient for interval number optimization during interval assessment

    Directory of Open Access Journals (Sweden)

    Tynynyka A. N.

    2017-06-01

    Full Text Available In solving many statistical problems, a precise choice of the distribution law of the random variable from which the observed sample is drawn is required. This choice requires the construction of an interval series, so the problem arises of assigning an optimal number of intervals, and this study proposes a number of formulas for solving it. Which of these formulas solves the problem more accurately? In [9], this question is investigated using the Pearson criterion. This article describes the procedure and, on its basis, evaluates formulas available in the literature together with newly proposed formulas based on the entropy coefficient. A comparison is made with the previously published results of applying Pearson's goodness-of-fit criterion for these purposes. Differences in the estimates of the accuracy of the formulas are found; the proposed new formulas for calculating the number of intervals showed the best results. Calculations have been made to compare the performance of the same formulas for samples distributed according to the normal law and the Rayleigh law.
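
    The entropy-coefficient formulas themselves are not reproduced here, but the standard rules they are compared against (Sturges, Scott, Freedman–Diaconis) are built into NumPy, and the paper's normal-versus-Rayleigh comparison is easy to set up:

        import numpy as np

        rng = np.random.default_rng(0)
        samples = {"normal": rng.normal(0.0, 1.0, 500),
                   "Rayleigh": rng.rayleigh(1.0, 500)}

        for name, sample in samples.items():
            for rule in ("sturges", "scott", "fd"):   # fd = Freedman-Diaconis
                k = len(np.histogram_bin_edges(sample, bins=rule)) - 1
                print(f"{name:8s} {rule:8s} -> {k} intervals")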

  8. Effect of a High-intensity Interval Training method on maximum oxygen consumption in Chilean schoolchildren

    Directory of Open Access Journals (Sweden)

    Sergio Galdames-Maliqueo

    2017-12-01

    Full Text Available Introduction: The low levels of maximum oxygen consumption (VO2max) measured in Chilean schoolchildren suggest the need for training programs that improve aerobic capacity. Objective: To analyze the effect of a High-Intensity Interval Training method on maximum oxygen consumption in Chilean schoolchildren. Materials and methods: Thirty-two eighth-grade high school students took part in the study and were divided into two groups (experimental group = 16 students and control group = 16 students). The main variable analyzed was maximum oxygen consumption, measured through the Course Navette Test. A High-Intensity Interval Training method was applied based on the maximum aerobic speed obtained through the Test. A mixed ANOVA was used for statistical analysis. Results: The experimental group showed a significant increase in maximum oxygen consumption between the pretest and posttest when compared with the control group (p < 0.0001). Conclusion: The results of the study showed a positive effect of High-Intensity Interval Training on maximum oxygen consumption. It is concluded that High-Intensity Interval Training is a good stimulation methodology for Chilean schoolchildren.

  9. Valid analytical performance specifications for combined analytical bias and imprecision for the use of common reference intervals.

    Science.gov (United States)

    Hyltoft Petersen, Per; Lund, Flemming; Fraser, Callum G; Sandberg, Sverre; Sölétormos, György

    2018-01-01

    Background: Many clinical decisions are based on comparison of patient results with reference intervals. Therefore, an estimation of the analytical performance specifications for the quality that would be required to allow sharing common reference intervals is needed. The International Federation of Clinical Chemistry (IFCC) recommended a minimum of 120 reference individuals to establish reference intervals. This number implies a certain level of quality, which could then be used for defining analytical performance specifications as the maximum combination of analytical bias and imprecision required for sharing common reference intervals, the aim of this investigation. Methods: Two methods were investigated for defining the maximum combination of analytical bias and imprecision that would give the same quality of common reference intervals as the IFCC recommendation. Method 1 is based on a formula for the combination of analytical bias and imprecision, and Method 2 is based on the Microsoft Excel formula NORMINV, including the fractional probability of reference individuals outside each limit and the Gaussian variables of mean and standard deviation. The combinations of normalized bias and imprecision are illustrated for both methods. The formulae are identical for Gaussian and log-Gaussian distributions. Results: Method 2 gives the correct results with a constant percentage of 4.4% for all combinations of bias and imprecision. Conclusion: The Microsoft Excel formula NORMINV is useful for the estimation of analytical performance specifications for both Gaussian and log-Gaussian distributions of reference intervals.
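
    The NORMINV calculation is straightforward to reproduce outside Excel, since NORMINV corresponds to SciPy's norm.ppf. A sketch, assuming reference values normalised to N(0, 1) with limits at ±1.96, and made-up bias/imprecision pairs (the paper's 4.4% criterion itself is not re-derived here):

        import numpy as np
        from scipy.stats import norm

        # Excel's NORMINV(p, mean, sd) corresponds to norm.ppf:
        lower, upper = norm.ppf([0.025, 0.975])        # -1.96, +1.96

        def fraction_outside(bias, imprecision):
            # Fraction of reference individuals outside the common limits
            # when results carry analytical bias and added imprecision
            # (all quantities in units of the reference SD).
            total_sd = np.sqrt(1.0 + imprecision ** 2)
            return (norm.cdf((lower - bias) / total_sd)
                    + norm.sf((upper - bias) / total_sd))

        for bias, cv in [(0.0, 0.0), (0.1, 0.15), (0.25, 0.25)]:
            print(f"bias={bias:.2f} imprecision={cv:.2f} "
                  f"-> {100 * fraction_outside(bias, cv):.2f}% outside")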

  10. A novel non-probabilistic approach using interval analysis for robust design optimization

    International Nuclear Information System (INIS)

    Sun, Wei; Dong, Rongmei; Xu, Huanwei

    2009-01-01

    A technique for formulating the objective and constraint functions under uncertainty plays a crucial role in robust design optimization. This paper presents the first application of interval methods for reformulating the robust optimization problem. Based on interval mathematics, the original real-valued objective and constraint functions are replaced with interval-valued functions, which directly represent the upper and lower bounds of the new functions under uncertainty. The single objective function is converted into two objective functions for minimizing the mean value and the variation, and the constraint functions are reformulated with an acceptable robustness level, resulting in a bi-level mathematical model. Compared with other methods, this method is efficient and does not require a presumed probability distribution of the uncertain factors or gradient or continuity information about the constraints. Two numerical examples are used to illustrate the validity and feasibility of the presented method

  11. The unified method: III. Nonlinearizable problems on the interval

    International Nuclear Information System (INIS)

    Lenells, J; Fokas, A S

    2012-01-01

    Boundary value problems for integrable nonlinear evolution PDEs formulated on the finite interval can be analyzed by the unified method introduced by one of the authors and extensively used in the literature. The implementation of this general method to this particular class of problems yields the solution in terms of the unique solution of a matrix Riemann–Hilbert problem formulated in the complex k-plane (the Fourier plane), which has a jump matrix with explicit (x, t)-dependence involving six scalar functions of k, called the spectral functions. Two of these functions depend on the initial data, whereas the other four depend on all boundary values. The most difficult step of the new method is the characterization of the latter four spectral functions in terms of the given initial and boundary data, i.e. the elimination of the unknown boundary values. Here, we present an effective characterization of the spectral functions in terms of the given initial and boundary data. We present two different characterizations of this problem. One is based on the analysis of the so-called global relation, on the analysis of the equations obtained from the global relation via certain transformations leaving the dispersion relation of the associated linearized PDE invariant and on the computation of the large k asymptotics of the eigenfunctions defining the relevant spectral functions. The other is based on the analysis of the global relation and on the introduction of the so-called Gelfand–Levitan–Marchenko representations of the eigenfunctions defining the relevant spectral functions. We also show that these two different characterizations are equivalent and that in the limit when the length of the interval tends to infinity, the relevant formulas reduce to the analogous formulas obtained recently for the case of boundary value problems formulated on the half-line. (paper)

  12. Volatility return intervals analysis of the Japanese market

    Science.gov (United States)

    Jung, W.-S.; Wang, F. Z.; Havlin, S.; Kaizoji, T.; Moon, H.-T.; Stanley, H. E.

    2008-03-01

    We investigate scaling and memory effects in return intervals between price volatilities above a certain threshold q for the Japanese stock market using daily and intraday data sets. We find that the distribution of return intervals can be approximated by a scaling function that depends only on the ratio between the return interval τ and its mean ⟨τ⟩. We also find memory effects such that a large (or small) return interval follows a large (or small) interval, by investigating the conditional distribution and the mean return interval. The results are similar to previous studies of other markets and indicate that similar statistical features appear in different financial markets. We also compare our results for the periods before and after the big crash at the end of 1989. We find that the scaling and memory effects of the return intervals show similar features, although the statistical properties of the returns are different.

  13. Logarithmic Similarity Measure between Interval-Valued Fuzzy Sets and Its Fault Diagnosis Method

    Directory of Open Access Journals (Sweden)

    Zhikang Lu

    2018-02-01

    Full Text Available Fault diagnosis is an important task for the normal operation and maintenance of equipment. In many real situations, the diagnosis data cannot provide deterministic values and are usually imprecise or uncertain. Thus, interval-valued fuzzy sets (IVFSs) are very suitable for expressing imprecise or uncertain fault information in real problems. However, the existing literature scarcely deals with fault diagnosis problems, such as gasoline engines and steam turbines, by means of IVFSs, even though the similarity measure is one of the important tools in fault diagnosis. Therefore, this paper proposes a new similarity measure of IVFSs based on the logarithmic function, and a corresponding fault diagnosis method, for the first time. By the logarithmic similarity measure between the fault knowledge and some diagnosis-testing samples with interval-valued fuzzy information, and its relation indices, we can determine the fault type and the ranking order of faults corresponding to the relation indices. Then, the misfire fault diagnosis of a gasoline engine and the vibrational fault diagnosis of a turbine are presented to demonstrate the simplicity and effectiveness of the proposed diagnosis method. The fault diagnosis results for the gasoline engine and steam turbine show that the proposed method not only gives the main fault types but also provides useful information for multi-fault analyses and for predicting future fault trends. Hence, the logarithmic similarity measure and its fault diagnosis method are the main contributions of this study, and they provide a useful new way for fault diagnosis with interval-valued fuzzy information.

  14. Design of time interval generator based on hybrid counting method

    International Nuclear Information System (INIS)

    Yao, Yuan; Wang, Zhaoqi; Lu, Houbing; Chen, Lian; Jin, Ge

    2016-01-01

    Time Interval Generators (TIGs) are frequently used for the characterization or timing operations of instruments in particle physics experiments. Though some “off-the-shelf” TIGs can be employed, the necessity of a custom test system or control system makes TIGs implemented in a programmable device desirable. Nowadays, the feasibility of using Field Programmable Gate Arrays (FPGAs) to implement particle physics instrumentation has been validated in the design of Time-to-Digital Converters (TDCs) for precise time measurement. The FPGA-TDC technique is based on the architecture of the Tapped Delay Line (TDL), whose delay cells are down to a few tens of picoseconds. In this context, FPGA-based TIGs with fine delay steps are preferable, allowing the implementation of customized particle physics instrumentation and other utilities on the same FPGA device. A hybrid counting method for designing TIGs with both high resolution and wide range is presented in this paper. The combination of two different counting methods realizing an integratable TIG is described in detail. A specially designed multiplexer for tap selection is also introduced; its structure is devised to minimize the differing additional delays caused by the unpredictable routing from different taps to the output. A Kintex-7 FPGA is used for the hybrid counting-based implementation of a TIG, providing a resolution down to 11 ps and an interval range up to 8 s.

  15. Design of time interval generator based on hybrid counting method

    Energy Technology Data Exchange (ETDEWEB)

    Yao, Yuan [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Wang, Zhaoqi [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Lu, Houbing [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Hefei Electronic Engineering Institute, Hefei 230037 (China); Chen, Lian [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Jin, Ge, E-mail: goldjin@ustc.edu.cn [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China)

    2016-10-01

    Time Interval Generators (TIGs) are frequently used for the characterization or timing operations of instruments in particle physics experiments. Though some “off-the-shelf” TIGs can be employed, the necessity of a custom test system or control system makes TIGs implemented in a programmable device desirable. Nowadays, the feasibility of using Field Programmable Gate Arrays (FPGAs) to implement particle physics instrumentation has been validated in the design of Time-to-Digital Converters (TDCs) for precise time measurement. The FPGA-TDC technique is based on the architecture of the Tapped Delay Line (TDL), whose delay cells are down to a few tens of picoseconds. In this context, FPGA-based TIGs with fine delay steps are preferable, allowing the implementation of customized particle physics instrumentation and other utilities on the same FPGA device. A hybrid counting method for designing TIGs with both high resolution and wide range is presented in this paper. The combination of two different counting methods realizing an integratable TIG is described in detail. A specially designed multiplexer for tap selection is also introduced; its structure is devised to minimize the differing additional delays caused by the unpredictable routing from different taps to the output. A Kintex-7 FPGA is used for the hybrid counting-based implementation of a TIG, providing a resolution down to 11 ps and an interval range up to 8 s.

  16. A Bayesian nonparametric estimation of distributions and quantiles

    International Nuclear Information System (INIS)

    Poern, K.

    1988-11-01

    The report describes a Bayesian, nonparametric method for the estimation of a distribution function and its quantiles. The method, presupposing random sampling, is nonparametric, so the user has to specify a prior distribution on a space of distributions (and not on a parameter space). In the current application, where the method is used to estimate the uncertainty of a parametric calculational model, the Dirichlet prior distribution is to a large extent determined by the first batch of Monte Carlo realizations. In this case the results of the estimation technique are very similar to the conventional empirical distribution function. The resulting posterior distribution is also Dirichlet, and thus facilitates the determination of probability (confidence) intervals at any given point in the space of interest. Another advantage is that the posterior distribution of a specified quantile can also be derived and utilized to determine a probability interval for that quantile. The method was devised for use in the PROPER code package for uncertainty and sensitivity analysis. (orig.)
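
    A minimal sketch of the idea, assuming a flat Dirichlet prior so that posterior draws reduce to the Bayesian bootstrap (function name and defaults are illustrative, not from the PROPER package):

        import numpy as np

        rng = np.random.default_rng(0)

        def quantile_credible_interval(sample, q=0.95, cred=0.90, n_draws=4000):
            # Posterior over distribution functions is Dirichlet; each draw
            # assigns Dirichlet weights to the observed values, and the
            # weighted empirical quantile gives one posterior quantile draw.
            x = np.sort(np.asarray(sample, dtype=float))
            n = len(x)
            estimates = np.empty(n_draws)
            for i in range(n_draws):
                w = rng.dirichlet(np.ones(n))
                cdf = np.cumsum(w)
                estimates[i] = x[np.searchsorted(cdf, q)]
            return tuple(np.percentile(estimates,
                                       [50 * (1 - cred), 50 * (1 + cred)]))

        mc_batch = rng.lognormal(0.0, 0.5, 200)   # stand-in Monte Carlo output
        print(quantile_credible_interval(mc_batch))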

  17. A quick method to calculate QTL confidence interval

    Indian Academy of Sciences (India)

    2011-08-19

  18. TL glow ratios at different temperature intervals of integration in thermoluminescence method. Comparison of Japanese standard (MHLW notified) method with CEN standard methods

    International Nuclear Information System (INIS)

    Todoriki, Setsuko; Saito, Kimie; Tsujimoto, Yuka

    2008-01-01

    The effect of the integration temperature intervals of TL intensities on the TL glow ratio was examined by comparing the notified method of the Ministry of Health, Labour and Welfare (MHLW method) with EN 1788. Two kinds of un-irradiated geological standard rock and three kinds of spices (black pepper, turmeric, and oregano) irradiated at 0.3 kGy or 1.0 kGy were subjected to TL analysis. Although the TL glow ratio exceeded 0.1 in the andesite according to the calculation of the MHLW notified method (integration interval: 70-490°C), the maxima of the first glow were observed at 300°C or more, attributed to the influence of natural radioactivity and thus distinguishable from food irradiation. When the integration interval was set to 166-227°C according to EN 1788, the TL glow ratios became remarkably smaller than 0.1, and the evaluation of the un-irradiated samples became clearer. For spices, the TL glow ratios by the MHLW notified method fell below 0.1 in un-irradiated samples and exceeded 0.1 in irradiated ones. Moreover, Glow1 maximum temperatures of the irradiated samples were observed in the range of 168-196°C, and those of un-irradiated samples were 258°C or more. Therefore, all samples were correctly judged by the criteria of the MHLW method. However, based on the temperature range of integration defined by EN 1788, the TL glow ratio of un-irradiated samples became remarkably small compared with that of the MHLW method, and the discrimination of irradiated from un-irradiated samples became clearer. (author)

  19. Distribution network planning method considering distributed generation for peak cutting

    International Nuclear Information System (INIS)

    Ouyang Wu; Cheng Haozhong; Zhang Xiubin; Yao Liangzhong

    2010-01-01

    Conventional distribution planning methods based on peak load bring about large investment, high risk and low utilization efficiency. A distribution network planning method considering distributed generation (DG) for peak cutting is proposed in this paper. The new integrated distribution network planning method with DG implementation aims to minimize the sum of feeder investments, DG investments, energy loss cost and the additional cost of DG for peak cutting. Using solution techniques combining a genetic algorithm (GA) with a heuristic approach, the proposed model determines the optimal planning scheme, including the feeder network and the siting and sizing of DG. The strategy for the siting and sizing of DG, which is based on the radial structure characteristics of the distribution network, reduces the complexity of solving the optimization model and eases the computational burden substantially. Furthermore, the operation schedule of DG at different load levels is also provided.

  20. Calculation Methods for Wallenius’ Noncentral Hypergeometric Distribution

    DEFF Research Database (Denmark)

    Fog, Agner

    2008-01-01

    Two different probability distributions are both known in the literature as "the" noncentral hypergeometric distribution. Wallenius' noncentral hypergeometric distribution can be described by an urn model without replacement with bias. Fisher's noncentral hypergeometric distribution is the conditional distribution of independent binomial variates given their sum. No reliable calculation method for Wallenius' noncentral hypergeometric distribution has hitherto been described in the literature. Several new methods for calculating probabilities from Wallenius' noncentral hypergeometric distribution are derived. Range of applicability, numerical problems, and efficiency are discussed for each method. Approximations to the mean and variance are also discussed. This distribution has important applications in models of biased sampling and in models of evolutionary systems.
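
    This distribution has since been implemented in SciPy (scipy.stats.nchypergeom_wallenius, available from SciPy 1.6, citing Fog's methods), so probabilities from the biased-urn model can be computed directly; the urn composition and odds below are made-up illustration:

        import numpy as np
        from scipy.stats import nchypergeom_wallenius

        # Urn with 80 red and 60 white balls; red balls are drawn with odds
        # ratio 1.5, and 20 balls are taken without replacement.
        M, n, N, odds = 140, 80, 20, 1.5
        dist = nchypergeom_wallenius(M, n, N, odds)

        k = np.arange(max(0, N - (M - n)), min(n, N) + 1)
        print(dist.pmf(k).round(4))      # probability of drawing k red balls
        print(dist.mean(), dist.var())   # moments, computed numerically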

  1. Methods for Distributed Optimal Energy Management

    DEFF Research Database (Denmark)

    Brehm, Robert

    The presented research deals with the fundamental underlying methods and concepts of how the growing number of distributed generation units based on renewable energy resources and distributed storage devices can be most efficiently integrated into the existing utility grid. In contrast to conventional centralised optimal energy flow management systems, focus is set herein on how optimal energy management can be achieved in a decentralised distributed architecture such as a multi-agent system. Distributed optimisation methods are introduced, targeting optimisation of energy flow in virtual […]-consumption of renewable energy resources in low voltage grids. It can be shown that this method prevents mutual discharging of batteries and prevents peak loads; a supervisory control instance can dictate the level of autarchy from the utility grid. Further, it is shown that the problem of optimal energy flow management […]

  2. A New Method of Multiattribute Decision-Making Based on Interval-Valued Hesitant Fuzzy Soft Sets and Its Application

    Directory of Open Access Journals (Sweden)

    Yan Yang

    2017-01-01

    Full Text Available Combining interval-valued hesitant fuzzy soft sets (IVHFSSs) and a new comparative law, we propose a new method which can effectively solve multiattribute decision-making (MADM) problems. Firstly, a characteristic function of two interval values and a new comparative law of interval-valued hesitant fuzzy elements (IVHFEs) based on the possibility degree are proposed. Then, we give two important definitions for IVHFSSs, the interval-valued hesitant fuzzy soft quasi subset and soft quasi equality, based on the new comparative law. Finally, an algorithm is presented to solve MADM problems. We also use the method proposed in this paper to evaluate the importance of the major components of a well-drilling mud pump.

  3. Improved Accuracy of Nonlinear Parameter Estimation with LAV and Interval Arithmetic Methods

    Directory of Open Access Journals (Sweden)

    Humberto Muñoz

    2009-06-01

    Full Text Available The reliable solution of nonlinear parameter estimation problems is an important computational problem in many areas of science and engineering, including such applications as real time optimization. Its goal is to estimate accurate model parameters that provide the best fit to measured data, despite small-scale noise in the data or occasional large-scale measurement errors (outliers). In general, the estimation techniques are based on some kind of least squares or maximum likelihood criterion, and these require the solution of a nonlinear and non-convex optimization problem. Classical solution methods for these problems are local methods, and may not be reliable for finding the global optimum, with no guarantee the best model parameters have been found. Interval arithmetic can be used to compute completely and reliably the global optimum for the nonlinear parameter estimation problem. Finally, experimental results will compare the least squares, l2, and the least absolute value, l1, estimates using interval arithmetic in a chemical engineering application.
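
    A toy version of the l1-versus-l2 contrast, assuming an exponential model with a few gross outliers; the paper's interval-arithmetic global optimization step is not reproduced here, only the robustness comparison between the two criteria:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)

        # Model y = a * exp(b * x) with noise and a few large-scale errors.
        a_true, b_true = 2.0, 0.7
        x = np.linspace(0.0, 2.0, 40)
        y = a_true * np.exp(b_true * x) + rng.normal(0.0, 0.05, x.size)
        y[::13] += 3.0                      # occasional outliers

        model = lambda p: p[0] * np.exp(p[1] * x)
        l2 = minimize(lambda p: np.sum((y - model(p)) ** 2), x0=[1.0, 1.0])
        l1 = minimize(lambda p: np.sum(np.abs(y - model(p))), x0=[1.0, 1.0],
                      method="Nelder-Mead")   # handles the non-smooth objective

        print("least squares :", l2.x)      # pulled away by the outliers
        print("least absolute:", l1.x)      # stays near (2.0, 0.7)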

  4. Indication of multiscaling in the volatility return intervals of stock markets

    Science.gov (United States)

    Wang, Fengzhong; Yamasaki, Kazuko; Havlin, Shlomo; Stanley, H. Eugene

    2008-01-01

    The distribution of the return intervals τ between price volatilities above a threshold height q for financial records has been approximated by a scaling behavior. To explore how accurate the scaling is, and therefore to understand the underlying nonlinear mechanism, we investigate intraday data sets of 500 stocks which constitute the Standard & Poor’s 500 index. We show that the cumulative distribution of return intervals has systematic deviations from scaling. We support this finding by studying the m-th moment μ_m ≡ ⟨(τ/⟨τ⟩)^m⟩^(1/m), which shows a certain trend with the mean interval ⟨τ⟩. We generate surrogate records using the Schreiber method, and find that their cumulative distributions almost collapse to a single curve and their moments are almost constant for most ranges of ⟨τ⟩. Those substantial differences suggest that nonlinear correlations in the original volatility sequence account for the deviations from a single scaling law. We also find that the original and surrogate records exhibit slight tendencies for short and long ⟨τ⟩, due to the discreteness and finite size effects of the records, respectively. To avoid those effects as far as possible when testing the multiscaling behavior, we investigate the moments in the range 10 < ⟨τ⟩ ≤ 100, which indicates multiscaling in the volatility return intervals.
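
    The moment statistic is simple to compute; a sketch assuming the return intervals are given as an array (the exponential sample stands in for a single-scaling reference, not for the stock data):

        import numpy as np

        def normalized_moment(intervals, m):
            # mu_m = <(tau/<tau>)^m>^(1/m); under a single scaling law this is
            # independent of the mean interval <tau>, so a systematic trend of
            # mu_m with <tau> (across thresholds q) indicates multiscaling.
            r = np.asarray(intervals, dtype=float)
            r = r / r.mean()
            return np.mean(r ** m) ** (1.0 / m)

        rng = np.random.default_rng(0)
        tau = rng.exponential(5.0, 100_000)
        print([round(normalized_moment(tau, m), 3) for m in (0.5, 1, 2, 4)])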

  5. Inferring uncertainty from interval estimates: Effects of alpha level and numeracy

    Directory of Open Access Journals (Sweden)

    Luke F. Rinne

    2013-05-01

    Full Text Available Interval estimates are commonly used to descriptively communicate the degree of uncertainty in numerical values. Conventionally, low alpha levels (e.g., .05) ensure a high probability of capturing the target value between interval endpoints. Here, we test whether alpha levels and individual differences in numeracy influence distributional inferences. In the reported experiment, participants received prediction intervals for fictitious towns' annual rainfall totals (assuming approximately normal distributions). Then, participants estimated probabilities that future totals would be captured within varying margins about the mean, indicating the approximate shapes of their inferred probability distributions. Results showed that low alpha levels (vs. moderate levels, e.g., .25) more frequently led to inferences of over-dispersed approximately normal distributions or approximately uniform distributions, reducing estimate accuracy. Highly numerate participants made more accurate estimates overall, but were more prone to inferring approximately uniform distributions. These findings have important implications for presenting interval estimates to various audiences.
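
    The normatively correct inference the participants were asked to make can be computed directly: a sketch, assuming normality, that recovers the implied standard deviation from a reported (1 − alpha) interval and returns the capture probability for a margin about the mean (the rainfall numbers are illustrative):

        from scipy.stats import norm

        def capture_probability(interval, alpha, margin):
            # Infer mean and SD from the interval endpoints, then compute the
            # probability that a future value falls within +/-margin of the mean.
            lo, hi = interval
            sd = (hi - lo) / (2.0 * norm.ppf(1.0 - alpha / 2.0))
            return norm.cdf(margin / sd) - norm.cdf(-margin / sd)

        # A 95% interval of 30-50 inches of annual rainfall (alpha = .05):
        print(round(capture_probability((30.0, 50.0), 0.05, 5.0), 3))  # ~0.673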

  6. A note on a kind of character sums over the short interval

    Indian Academy of Sciences (India)

    Abstract. Let p be a prime, let χ denote the Dirichlet character modulo p, and let L(p) = {a ∈ Z⁺ : (a, p) = 1, aā ≡ 1 (mod p), |a − ā| ≤ H}, where ā is the multiplicative inverse of a modulo p. We study the distribution of the elements of the set L(p) in character sums over the short interval. In this paper, we use the analytic method and show the distribution property of the sum ∑_{n ≤ N, n ∈ L(p)} χ(n).

  7. An Improvement to Interval Estimation for Small Samples

    Directory of Open Access Journals (Sweden)

    SUN Hui-Ling

    2017-02-01

    Full Text Available Because it is difficult and complex to determine the probability distribution of small samples, it is improper to use traditional probability theory to process parameter estimation for small samples. The Bayes Bootstrap method is often used in engineering practice. Although the Bayes Bootstrap method has its own limitations, in this article an improvement to it is given. This method extends the number of samples by numerical simulation without changing the circumstances of the original small sample, and the new method can give accurate interval estimates for small samples. Finally, Monte Carlo simulation is used to model specific small-sample problems. The effectiveness and practicability of the improved Bootstrap method are thereby demonstrated.

  8. A comparison of confidence interval methods for the intraclass correlation coefficient in community-based cluster randomization trials with a binary outcome.

    Science.gov (United States)

    Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan

    2016-04-01

    Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. However, confidence intervals constructed using the new approach combined with Smith's large sample approximation provide improved coverage.

  9. [Establishing biological reference intervals of alanine transaminase for clinical laboratory stored database].

    Science.gov (United States)

    Guo, Wei; Song, Binbin; Shen, Junfei; Wu, Jiong; Zhang, Chunyan; Wang, Beili; Pan, Baishen

    2015-08-25

    To establish an indirect reference interval based on test results of alanine aminotransferase stored in a laboratory information system. All alanine aminotransferase results for outpatients and physical examinations stored in the laboratory information system of Zhongshan Hospital during 2014 were included. The original data were transformed using a Box-Cox transformation to obtain an approximately normal distribution. Outliers were identified and omitted using the Chauvenet and Tukey methods. The indirect reference intervals were obtained by applying the nonparametric and Hoffmann methods in parallel. The reference change value was used to determine the statistical significance of the observed differences between the calculated and published reference intervals. The indirect reference intervals for alanine aminotransferase were 12 to 41 U/L (male, outpatient), 12 to 48 U/L (male, physical examination), 9 to 32 U/L (female, outpatient), and 8 to 35 U/L (female, physical examination), respectively. The absolute differences when compared with the direct results were all smaller than the reference change value of alanine aminotransferase. The Box-Cox transformation combined with the Hoffmann and Tukey methods is a simple and reliable technique that should be promoted and used by clinical laboratories.
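
    The pipeline described (Box-Cox transformation, Tukey outlier exclusion, percentile estimation) can be sketched in a few lines of Python. The synthetic data below stand in for stored ALT results; the Hoffmann step is omitted, with plain nonparametric percentiles in its place, so this is a simplified reading of the method rather than the paper's exact procedure.

        import numpy as np
        from scipy import stats

        def indirect_reference_interval(results):
            """Box-Cox toward normality, Tukey fences on the transformed
            scale to drop outliers, then nonparametric 2.5th/97.5th
            percentiles of the retained values on the original scale."""
            x = np.asarray(results, dtype=float)
            x = x[x > 0]                          # Box-Cox needs positive values
            z, _lam = stats.boxcox(x)
            q1, q3 = np.percentile(z, [25, 75])
            fence = 1.5 * (q3 - q1)
            keep = (z >= q1 - fence) & (z <= q3 + fence)   # Tukey's rule
            return np.percentile(x[keep], [2.5, 97.5])

        rng = np.random.default_rng(1)
        alt = rng.lognormal(mean=3.0, sigma=0.4, size=5000)  # synthetic ALT, U/L
        print(indirect_reference_interval(alt))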

  10. Bootstrap confidence intervals for three-way methods

    NARCIS (Netherlands)

    Kiers, Henk A.L.

    Results from exploratory three-way analysis techniques such as CANDECOMP/PARAFAC and Tucker3 analysis are usually presented without giving insight into uncertainties due to sampling. Here a bootstrap procedure is proposed that produces percentile intervals for all output parameters. Special

  11. Influence of geometry of the discharge interval on distribution of ion and electron streams at surface of the Penning source cathode

    International Nuclear Information System (INIS)

    Egiazaryan, G.A.; Khachatrian, Zh.B.; Badalyan, E.S.; Ter-Gevorgyan, E.I.; Hovhannisyan, V.N.

    2006-01-01

    In a discharge with oscillating electrons, the mechanism of the processes that controls the distribution of the ion and electron streams over the cathode surface is investigated experimentally. The influence of the length of the discharge interval on the value and distribution of the ion and electron streams is analyzed. The distribution of both ion and electron streams at the cathode surface is determined for different discharge conditions. It is shown that for the given values of the anode diameter d_a = 31 mm and the gas pressure P = 5×10^-5 Torr, the intensive stream of positive ions falls entirely on the central area of the cathode over the whole range of anode lengths (l_a = 1-11 cm). At the cathode, the ion current reaches its maximum at a certain (optimal) value of the anode length that, in turn, depends on the anode voltage U_a. The intensive stream of longitudinal electrons forms in short anodes only (l_a = 2.5-3.5 cm) and, depending on the choice of the discharge regime, may fall on both the central and middle parts of the cathode.

  12. Methods for confidence interval estimation of a ratio parameter with application to location quotients

    Directory of Open Access Journals (Sweden)

    Beyene Joseph

    2005-10-01

    Full Text Available Abstract Background The location quotient (LQ) ratio, a measure designed to quantify and benchmark the degree of relative concentration of an activity in the analysis of area localization, has received considerable attention in the geographic and economics literature. This index can also naturally be applied in the context of population health to quantify and compare health outcomes across spatial domains. However, one commonly observed limitation of LQ is its widespread use as only a point estimate without an accompanying confidence interval. Methods In this paper we present statistical methods that can be used to construct confidence intervals for location quotients. The delta and Fieller's methods are generic approaches for a ratio parameter and the generalized linear modelling framework is a useful re-parameterization particularly helpful for generating profile-likelihood based confidence intervals for the location quotient. A simulation experiment is carried out to assess the performance of each of the analytic approaches and a health utilization data set is used for illustration. Results Both the simulation results as well as the findings from the empirical data show that the different analytical methods produce very similar confidence limits for location quotients. When incidence of outcome is not rare and sample sizes are large, the confidence limits are almost indistinguishable. The confidence limits from the generalized linear model approach might be preferable in small sample situations. Conclusion LQ is a useful measure which allows quantification and comparison of health and other outcomes across defined geographical regions. It is a very simple index to compute and has a straightforward interpretation. Reporting this estimate with appropriate confidence limits using methods presented in this paper will make the measure particularly attractive for policy and decision makers.
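
    As a concrete illustration of the delta method for a ratio parameter, the sketch below puts a log-scale interval around an LQ, treating the region and total counts as independent Poisson variates. That independence is a simplification of the paper's setup (which also covers Fieller and GLM intervals), and all numbers are invented.

        import math

        def lq_confidence_interval(events_region, pop_region,
                                   events_total, pop_total, z=1.96):
            """Location quotient with a delta-method CI on the log scale:
            Var(log LQ) is approximated by 1/x_region + 1/x_total under an
            independent-Poisson simplification."""
            lq = (events_region / pop_region) / (events_total / pop_total)
            se_log = math.sqrt(1.0 / events_region + 1.0 / events_total)
            return lq, lq * math.exp(-z * se_log), lq * math.exp(z * se_log)

        # hypothetical counts: 120 events among 50k people in the region,
        # 4300 events among 2M people overall
        print(lq_confidence_interval(120, 50_000, 4_300, 2_000_000))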

  13. The establishment of tocopherol reference intervals for Hungarian adult population using a validated HPLC method.

    Science.gov (United States)

    Veres, Gábor; Szpisjak, László; Bajtai, Attila; Siska, Andrea; Klivényi, Péter; Ilisz, István; Földesi, Imre; Vécsei, László; Zádori, Dénes

    2017-09-01

    Evidence suggests that a decreased α-tocopherol (the most biologically active substance in the vitamin E group) level can cause neurological symptoms, most notably ataxia. The aim of the current study was to first provide reference intervals for serum tocopherols in the adult Hungarian population with an appropriate sample size, recruiting healthy control subjects and neurological patients suffering from conditions without symptoms of ataxia, myopathy or cognitive deficiency. A validated HPLC method applying a diode array detector and rac-tocol as internal standard was utilized for that purpose. Furthermore, serum cholesterol levels were determined as well for data normalization. The calculated 2.5-97.5% reference intervals for α-, β/γ- and δ-tocopherols were 24.62-54.67, 0.81-3.69 and 0.29-1.07 μM, respectively, whereas the tocopherol/cholesterol ratios were 5.11-11.27, 0.14-0.72 and 0.06-0.22 μmol/mmol, respectively. The establishment of these reference intervals may improve the diagnostic accuracy of tocopherol measurements in certain neurological conditions with decreased tocopherol levels. Moreover, the current study draws special attention to possible pitfalls in the complex process of determining reference intervals, including the selection of the study population, the application of the internal standard, method validation, and the calculation of tocopherol/cholesterol ratios. Copyright © 2017 John Wiley & Sons, Ltd.

  14. Analyses of moments in pseudorapidity intervals at √s = 546 GeV by means of two probability distributions in pure-birth process

    International Nuclear Information System (INIS)

    Biyajima, M.; Shirane, K.; Suzuki, N.

    1988-01-01

    Moments in pseudorapidity intervals at the CERN Sp̄pS collider (√s = 546 GeV) are analyzed by means of two probability distributions in the pure-birth stochastic process. Our results show that a probability distribution obtained from the Poisson distribution as an initial condition is more useful than that obtained from the Kronecker δ function. Analyses of moments by Koba-Nielsen-Olesen scaling functions derived from solutions of the pure-birth stochastic process are also made. Moreover, analyses of preliminary data at √s = 200 and 900 GeV are added.

  15. Time interval measurement between two emissions: Kr + Au

    International Nuclear Information System (INIS)

    Aboufirassi, M; Bougault, R.; Brou, R.; Colin, J.; Durand, D.; Genoux-Lubain, A.; Horn, D.; Laville, J.L.; Le Brun, C.; Lecolley, J.F.; Lefebvres, F.; Lopez, O.; Louvel, M.; Mahi, M.; Steckmeyer, J.C.; Tamain, B.

    1998-01-01

    To illustrate the method allowing the determination of the emission time intervals, the results obtained with the Kr + Au system at 43 and 60 A.MeV are presented. The experiments were performed with the NAUTILUS exclusive detectors. Central collisions were selected by means of a relative velocity criterion to reject the events containing a forward-emitted fragment. For the two bombardment energies the data analysis shows the formation of a compound of mass around A = 200. By comparing the fragment dynamical variables with simulations, one can draw conclusions about the simultaneity of the compound deexcitation processes. It was found that an excitation energy of 5 MeV/A is able to reproduce the characteristics of the detected fragments. It was also found that, to reproduce the dynamical characteristics of the fragments issued from central collisions, it was not necessary to superimpose a radial collective energy upon the Coulomb and thermal motion. The distribution of the relative angles between detected fragments is used here as a chronometer: for simultaneous ruptures the small relative angles are forbidden by the Coulomb repulsion, while for sequential processes this restriction is lifted progressively as the interval between the two emissions grows. For the system discussed here, the comparison between simulation and data has been carried out for the extreme cases, i.e., for a vanishing and an infinite time interval between the two emissions, respectively. More sophisticated simulations to describe the angular distributions between the emitted fragments were also developed.

  16. A Combined Weighting Method Based on Hybrid of Interval Evidence Fusion and Random Sampling

    Directory of Open Access Journals (Sweden)

    Ying Yan

    2017-01-01

    Full Text Available Due to the complexity of the system and a lack of expertise, epistemic uncertainties may be present in the experts' judgment on the importance of certain indices during group decision-making. A novel combination weighting method is proposed to solve the index weighting problem when various uncertainties are present in expert comments. Based on the ideas of evidence theory, various types of uncertain evaluation information are uniformly expressed through interval evidence structures. A similarity matrix between interval evidences is constructed, and the experts' information is fused. Comment grades are quantified using interval numbers, and a cumulative probability function for evaluating the importance of indices is constructed based on the fused information. Finally, index weights are obtained by Monte Carlo random sampling. The method can process expert information with varying degrees of uncertainty, giving it good compatibility, and it avoids both the difficulty of effectively fusing high-conflict group decision-making information and the large information loss after fusion. Original expert judgments are retained rather objectively throughout the processing procedure. Constructing the cumulative probability function and the random sampling process require no human intervention or judgment, and the method can be implemented easily by computer programs, giving it an apparent advantage in evaluation practices for fairly large index systems.

  17. Bootstrap resampling: a powerful method of assessing confidence intervals for doses from experimental data

    International Nuclear Information System (INIS)

    Iwi, G.; Millard, R.K.; Palmer, A.M.; Preece, A.W.; Saunders, M.

    1999-01-01

    Bootstrap resampling provides a versatile and reliable statistical method for estimating the accuracy of quantities which are calculated from experimental data. It is an empirically based method, in which large numbers of simulated datasets are generated by computer from existing measurements, so that approximate confidence intervals of the derived quantities may be obtained by direct numerical evaluation. A simple introduction to the method is given via a detailed example of estimating 95% confidence intervals for cumulated activity in the thyroid following injection of 99mTc-sodium pertechnetate, using activity-time data from 23 subjects. The application of the approach to estimating confidence limits for the self-dose to the kidney following injection of the 99mTc-DTPA organ imaging agent, based on uptake data from 19 subjects, is also illustrated. Results are then given for estimates of doses to the foetus following administration of 99mTc-sodium pertechnetate for clinical reasons during pregnancy, averaged over 25 subjects. The bootstrap method is well suited for applications in radiation dosimetry including uncertainty, reliability and sensitivity analysis of dose coefficients in biokinetic models, but it can also be applied in a wide range of other biomedical situations. (author)

  18. Multiobjective Two-Stage Stochastic Programming Problems with Interval Discrete Random Variables

    Directory of Open Access Journals (Sweden)

    S. K. Barik

    2012-01-01

    Full Text Available Most real-life decision-making problems have more than one conflicting and incommensurable objective function. In this paper, we present a multiobjective two-stage stochastic linear programming problem, considering some parameters of the linear constraints as interval-type discrete random variables with known probability distribution. Randomness of the discrete intervals is considered for the model parameters. Further, the concepts of best optimum and worst optimum solution are analyzed in two-stage stochastic programming. To solve the stated problem, we first remove the randomness of the problem and formulate an equivalent deterministic linear programming model with multiobjective interval coefficients. Then the deterministic multiobjective model is solved using the weighting method, where we apply the solution procedure of the interval linear programming technique. We obtain the upper and lower bounds of the objective function as the best and the worst value, respectively. This highlights the possible risk involved in the decision-making tool. A numerical example is presented to demonstrate the proposed solution procedure.

  19. Fast Reliability Assessing Method for Distribution Network with Distributed Renewable Energy Generation

    Science.gov (United States)

    Chen, Fan; Huang, Shaoxiong; Ding, Jinjin; Ding, Jinjin; Gao, Bo; Xie, Yuguang; Wang, Xiaoming

    2018-01-01

    This paper proposes a fast reliability assessing method for distribution grids with distributed renewable energy generation. First, the Weibull distribution and the Beta distribution are used to describe the probability distribution characteristics of wind speed and solar irradiance, respectively, and models of the wind farm, solar park and local load are built for reliability assessment. Then, based on production cost simulation, probability discretization and linearized power flow, an optimal power flow problem with the objective of minimizing the cost of conventional generation is solved. Thus, a reliability assessment for the distribution grid is implemented quickly and accurately. The Loss Of Load Probability (LOLP) and Expected Energy Not Supplied (EENS) are selected as the reliability indices. A MATLAB simulation of the IEEE RBTS BUS6 system indicates that the fast reliability assessing method calculates the reliability indices much faster than the Monte Carlo method while maintaining accuracy.
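
    For orientation, a plain Monte Carlo baseline for the two indices is sketched below, with Weibull wind speeds and Beta-distributed solar output as in the abstract. The capacities, distribution parameters, and the simple power curve are invented for the example; the paper's faster production-cost method would replace this brute-force sampling.

        import numpy as np

        rng = np.random.default_rng(42)
        N = 100_000                                  # simulated hourly states

        # wind: Weibull speeds (k=2, scale 8 m/s) through a linear power
        # curve between cut-in (3 m/s) and rated (12 m/s); cut-out ignored
        v = rng.weibull(2.0, N) * 8.0
        wind_mw = np.clip((v - 3.0) / (12.0 - 3.0), 0.0, 1.0) * 20.0

        # solar: Beta-distributed output fraction of a 10 MW park
        solar_mw = rng.beta(2.0, 2.0, N) * 10.0

        load_mw = rng.normal(25.0, 3.0, N)           # local load

        shortfall = np.maximum(load_mw - (wind_mw + solar_mw), 0.0)
        lolp = np.mean(shortfall > 0)                # Loss Of Load Probability
        eens = shortfall.mean() * 8760               # Expected Energy Not Supplied, MWh/yr
        print(f"LOLP = {lolp:.3f}, EENS = {eens:.0f} MWh/yr")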

  20. Structural hybrid reliability index and its convergent solving method based on random–fuzzy–interval reliability model

    OpenAIRE

    Hai An; Ling Zhou; Hui Sun

    2016-01-01

    Aiming to resolve the problems of a variety of uncertainty variables that coexist in the engineering structure reliability analysis, a new hybrid reliability index to evaluate structural hybrid reliability, based on the random–fuzzy–interval model, is proposed in this article. The convergent solving method is also presented. First, the truncated probability reliability model, the fuzzy random reliability model, and the non-probabilistic interval reliability model are introduced. Then, the new hybrid reliability index definition is presented based on the random–fuzzy–interval model.

  1. Comparison of Bootstrap Confidence Intervals Using Monte Carlo Simulations

    Directory of Open Access Journals (Sweden)

    Roberto S. Flowers-Cano

    2018-02-01

    Full Text Available Design of hydraulic works requires the estimation of design hydrological events by statistical inference from a probability distribution. Using Monte Carlo simulations, we compared the coverage of confidence intervals constructed with four bootstrap techniques: percentile bootstrap (BP), bias-corrected bootstrap (BC), accelerated bias-corrected bootstrap (BCA) and a modified version of the standard bootstrap (MSB). Different simulation scenarios were analyzed. In some cases, the mother distribution function was fit to the random samples that were generated. In other cases, a distribution function different from the mother distribution was fit to the samples. When the fitted distribution had three parameters and was the same as the mother distribution, the intervals constructed with the four techniques had acceptable coverage. However, the bootstrap techniques failed in several of the cases in which the fitted distribution had two parameters.
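
    Of the four techniques compared, the percentile bootstrap (BP) is the simplest to state; the sketch below checks its empirical coverage for the mean of a skewed parent, in the spirit of the paper's Monte Carlo experiments. The parent distribution, sample size, and replication counts are our choices, not the study's scenarios.

        import numpy as np

        def percentile_ci(sample, stat=np.mean, n_boot=2000, level=0.95, rng=None):
            """Percentile bootstrap (BP) confidence interval for a statistic."""
            rng = rng or np.random.default_rng()
            boots = [stat(rng.choice(sample, size=len(sample), replace=True))
                     for _ in range(n_boot)]
            return np.quantile(boots, [(1 - level) / 2, (1 + level) / 2])

        # coverage check against the known mean of a standard Gumbel parent
        rng = np.random.default_rng(7)
        true_mean = np.euler_gamma            # mean of the standard Gumbel
        hits = 0
        for _ in range(200):
            s = rng.gumbel(0.0, 1.0, size=30)
            lo, hi = percentile_ci(s, rng=rng)
            hits += lo <= true_mean <= hi
        print("empirical coverage:", hits / 200)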

  2. Transmission line sag calculations using interval mathematics

    Energy Technology Data Exchange (ETDEWEB)

    Shaalan, H. [Institute of Electrical and Electronics Engineers, Washington, DC (United States)]|[US Merchant Marine Academy, Kings Point, NY (United States)

    2007-07-01

    Electric utilities are facing the need for additional generating capacity, new transmission systems and more efficient use of existing resources. As such, there are several uncertainties associated with utility decisions. These uncertainties include future load growth, construction times and costs, and performance of new resources. Regulatory and economic environments also present uncertainties. Uncertainty can be modeled based on a probabilistic approach where probability distributions for all of the uncertainties are assumed. Another approach to modeling uncertainty is referred to as unknown but bounded. In this approach, the upper and lower bounds on the uncertainties are assumed without probability distributions. Interval mathematics is a tool for the practical use and extension of the unknown but bounded concept. In this study, the calculation of transmission line sag was used as an example to demonstrate the use of interval mathematics. The objective was to determine the change in cable length, based on a fixed span and an interval of cable sag values for a range of temperatures. The resulting change in cable length was an interval corresponding to the interval of cable sag values. It was shown that there is a small change in conductor length due to variation in sag based on the temperature ranges used in this study. 8 refs.
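
    The unknown-but-bounded calculation described here amounts to propagating an interval of sag values through the standard parabolic length formula L = S + 8D²/(3S). Since L is monotone increasing in the sag D for a fixed span S, evaluating the endpoints suffices. A minimal sketch; the span and sag values are illustrative, not the study's.

        def interval_cable_length(span, sag_lo, sag_hi):
            """Cable length from the parabolic approximation
            L = S + 8*D^2/(3*S), propagated through the sag interval
            [sag_lo, sag_hi]; monotonicity makes endpoint evaluation exact."""
            length = lambda d: span + 8.0 * d * d / (3.0 * span)
            return length(sag_lo), length(sag_hi)

        # a 300 m span whose sag ranges over [6.0, 6.5] m with temperature
        print(interval_cable_length(300.0, 6.0, 6.5))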

  3. An Integrated Method for Interval Multi-Objective Planning of a Water Resource System in the Eastern Part of Handan

    Directory of Open Access Journals (Sweden)

    Meiqin Suo

    2017-07-01

    Full Text Available In this study, an integrated solving method is proposed for interval multi-objective planning. The proposed method is based on fuzzy linear programming and an interactive two-step method. It can not only provide objectively optimal values for multiple objectives at the same time, but also effectively offer a globally optimal interval solution. Meanwhile, the degree of satisfaction related to the different objective functions is obtained. Then, the integrated solving method for interval multi-objective planning is applied to a case study of planning multi-water-resources joint scheduling under uncertainty in the eastern part of Handan, China. The solutions obtained are useful for decision makers in easing the contradiction between the supply of multiple water resources and the demands of different water users. Moreover, it can provide the optimal comprehensive benefits of economy, society, and the environment.

  4. Statistics of return intervals between long heartbeat intervals and their usability for online prediction of disorders

    International Nuclear Information System (INIS)

    Bogachev, Mikhail I; Bunde, Armin; Kireenkov, Igor S; Nifontov, Eugene M

    2009-01-01

    We study the statistics of return intervals between large heartbeat intervals (above a certain threshold Q) in 24 h records obtained from healthy subjects. We find that both the linear and the nonlinear long-term memory inherent in the heartbeat intervals lead to power-laws in the probability density function P_Q(r) of the return intervals. As a consequence, the probability W_Q(t; Δt) that at least one large heartbeat interval will occur within the next Δt heartbeat intervals, with an increasing elapsed number of intervals t after the last large heartbeat interval, follows a power-law. Based on these results, we suggest a method of obtaining a priori information about the occurrence of the next large heartbeat interval, and thus to predict it. We show explicitly that the proposed method, which exploits long-term memory, is superior to the conventional precursory pattern recognition technique, which focuses solely on short-term memory. We believe that our results can be straightforwardly extended to obtain more reliable predictions in other physiological signals like blood pressure, as well as in other complex records exhibiting multifractal behaviour, e.g. turbulent flow, precipitation, river flows and network traffic.
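
    The conditional probability W_Q(t; Δt) has a direct empirical estimator: among return intervals that already exceed t, count the fraction that end within the next Δt steps. A Python sketch with a heavy-tailed toy record; the threshold quantile and parameters are illustrative assumptions.

        import numpy as np

        def exceedance_probability(intervals, t, dt):
            """Empirical W_Q(t; dt): probability that the next large event
            occurs within dt steps, given t steps have already elapsed."""
            r = np.asarray(intervals)
            at_risk = r[r > t]                  # intervals surviving past t
            return np.mean(at_risk <= t + dt) if at_risk.size else np.nan

        rng = np.random.default_rng(3)
        series = rng.pareto(1.5, 50_000)        # heavy-tailed toy record
        events = np.flatnonzero(series > np.quantile(series, 0.99))
        tau = np.diff(events)                   # return intervals
        print(exceedance_probability(tau, t=50, dt=20))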

  5. Neural Network Ensemble Based Approach for 2D-Interval Prediction of Solar Photovoltaic Power

    Directory of Open Access Journals (Sweden)

    Mashud Rana

    2016-10-01

    Full Text Available Solar energy generated from PhotoVoltaic (PV) systems is one of the most promising types of renewable energy. However, it is highly variable as it depends on the solar irradiance and other meteorological factors. This variability creates difficulties for the large-scale integration of PV power in the electricity grid and requires accurate forecasting of the electricity generated by PV systems. In this paper we consider 2D-interval forecasts, where the goal is to predict summary statistics for the distribution of the PV power values in a future time interval. 2D-interval forecasts have been recently introduced, and they are more suitable than point forecasts for applications where the predicted variable has a high variability. We propose a method called NNE2D that combines variable selection based on mutual information and an ensemble of neural networks, to compute 2D-interval forecasts, where the two interval boundaries are expressed in terms of percentiles. NNE2D was evaluated for univariate prediction of Australian solar PV power data for two years. The results show that it is a promising method, outperforming persistence baselines and other methods used for comparison in terms of accuracy and coverage probability.

  6. Empirical likelihood-based confidence intervals for the sensitivity of a continuous-scale diagnostic test at a fixed level of specificity.

    Science.gov (United States)

    Gengsheng Qin; Davis, Angela E; Jing, Bing-Yi

    2011-06-01

    For a continuous-scale diagnostic test, it is often of interest to find the range of the sensitivity of the test at the cut-off that yields a desired specificity. In this article, we first define a profile empirical likelihood ratio for the sensitivity of a continuous-scale diagnostic test and show that its limiting distribution is a scaled chi-square distribution. We then propose two new empirical likelihood-based confidence intervals for the sensitivity of the test at a fixed level of specificity by using the scaled chi-square distribution. Simulation studies are conducted to compare the finite sample performance of the newly proposed intervals with the existing intervals for the sensitivity in terms of coverage probability. A real example is used to illustrate the application of the recommended methods.

  7. Predicting fecal coliform using the interval-to-interval approach and SWAT in the Miyun watershed, China.

    Science.gov (United States)

    Bai, Jianwen; Shen, Zhenyao; Yan, Tiezhu; Qiu, Jiali; Li, Yangyang

    2017-06-01

    Pathogens in manure can cause waterborne-disease outbreaks, serious illness, and even death in humans. Therefore, information about the transformation and transport of bacteria is crucial for determining their source. In this study, the Soil and Water Assessment Tool (SWAT) was applied to simulate fecal coliform bacteria load in the Miyun Reservoir watershed, China. The data for the fecal coliform were obtained at three sampling sites, Chenying (CY), Gubeikou (GBK), and Xiahui (XH). The calibration processes of the fecal coliform were conducted using the CY and GBK sites, and validation was conducted at the XH site. An interval-to-interval approach was designed and incorporated into the processes of fecal coliform calibration and validation. The 95% confidence interval of the predicted values and the 95% confidence interval of measured values were considered during calibration and validation in the interval-to-interval approach. Compared with the traditional point-to-point comparison, this method can improve simulation accuracy. The results indicated that the simulation of fecal coliform using the interval-to-interval approach was reasonable for the watershed. This method could provide a new research direction for future model calibration and validation studies.

  8. NONLINEAR ASSIGNMENT-BASED METHODS FOR INTERVAL-VALUED INTUITIONISTIC FUZZY MULTI-CRITERIA DECISION ANALYSIS WITH INCOMPLETE PREFERENCE INFORMATION

    OpenAIRE

    TING-YU CHEN

    2012-01-01

    In the context of interval-valued intuitionistic fuzzy sets, this paper develops nonlinear assignment-based methods to manage imprecise and uncertain subjective ratings under incomplete preference structures and thereby determines the optimal ranking order of the alternatives for multiple criteria decision analysis. By comparing each interval-valued intuitionistic fuzzy number's score function, accuracy function, membership uncertainty index, and hesitation uncertainty index, a ranking proced...

  9. Semiparametric regression analysis of interval-censored competing risks data.

    Science.gov (United States)

    Mao, Lu; Lin, Dan-Yu; Zeng, Donglin

    2017-09-01

    Interval-censored competing risks data arise when each study subject may experience an event or failure from one of several causes and the failure time is not observed directly but rather is known to lie in an interval between two examinations. We formulate the effects of possibly time-varying (external) covariates on the cumulative incidence or sub-distribution function of competing risks (i.e., the marginal probability of failure from a specific cause) through a broad class of semiparametric regression models that captures both proportional and non-proportional hazards structures for the sub-distribution. We allow each subject to have an arbitrary number of examinations and accommodate missing information on the cause of failure. We consider nonparametric maximum likelihood estimation and devise a fast and stable EM-type algorithm for its computation. We then establish the consistency, asymptotic normality, and semiparametric efficiency of the resulting estimators for the regression parameters by appealing to modern empirical process theory. In addition, we show through extensive simulation studies that the proposed methods perform well in realistic situations. Finally, we provide an application to a study on HIV-1 infection with different viral subtypes. © 2017, The International Biometric Society.

  10. On Bayesian treatment of systematic uncertainties in confidence interval calculation

    CERN Document Server

    Tegenfeldt, Fredrik

    2005-01-01

    In high energy physics, a widely used method to treat systematic uncertainties in confidence interval calculations is based on combining a frequentist construction of confidence belts with a Bayesian treatment of systematic uncertainties. In this note we present a study of the coverage of this method for the standard Likelihood Ratio (aka Feldman & Cousins) construction for a Poisson process with known background and Gaussian or log-Normal distributed uncertainties in the background or signal efficiency. For uncertainties in the signal efficiency of up to 40% we find over-coverage on the level of 2 to 4% depending on the size of the uncertainties and the region in signal space. Uncertainties in the background generally have a smaller effect on the coverage. A considerable smoothing of the coverage curves is observed. A software package is presented which allows fast calculation of the confidence intervals for a variety of assumptions on the shape and size of systematic uncertainties for different nuisance parameters...

  11. Serum prolactin revisited: parametric reference intervals and cross platform evaluation of polyethylene glycol precipitation-based methods for discrimination between hyperprolactinemia and macroprolactinemia.

    Science.gov (United States)

    Overgaard, Martin; Pedersen, Susanne Møller

    2017-10-26

    Hyperprolactinemia diagnosis and treatment are often compromised by the presence of biologically inactive and clinically irrelevant higher-molecular-weight complexes of prolactin, macroprolactin. The objective of this study was to evaluate the performance of two macroprolactin screening regimes across commonly used automated immunoassay platforms. Parametric total and monomeric gender-specific reference intervals were determined for six immunoassay methods using female (n=96) and male sera (n=127) from healthy donors. The reference intervals were validated using 27 hyperprolactinemic and macroprolactinemic sera, in which the presence of monomeric and macroforms of prolactin was determined using gel filtration chromatography (GFC). Normative data for six prolactin assays included the range of values (2.5th-97.5th percentiles). Validation sera (hyperprolactinemic and macroprolactinemic; n=27) showed higher discordant classification [mean=2.8; 95% confidence interval (CI) 1.2-4.4] for the monomer reference interval method compared to the post-polyethylene glycol (PEG) recovery cutoff method (mean=1.8; 95% CI 0.8-2.8). The two monomer/macroprolactin discrimination methods did not differ significantly (p=0.089). Among macroprolactinemic sera evaluated by both discrimination methods, the Cobas and Architect/Kryptor prolactin assays showed the lowest and the highest number of misclassifications, respectively. Current automated immunoassays for prolactin testing require macroprolactin screening methods based on PEG precipitation in order to discriminate truly from falsely elevated serum prolactin. While the recovery cutoff and monomeric reference interval macroprolactin screening methods demonstrate similar discriminative ability, the latter method also provides the clinician with an easily interpretable monomeric prolactin concentration along with a monomeric reference interval.

  12. A Comparison of Distribution Free and Non-Distribution Free Factor Analysis Methods

    Science.gov (United States)

    Ritter, Nicola L.

    2012-01-01

    Many researchers recognize that factor analysis can be conducted on both correlation matrices and variance-covariance matrices. Although most researchers extract factors from non-distribution free or parametric methods, researchers can also extract factors from distribution free or non-parametric methods. The nature of the data dictates the method…

  13. Reviewing interval cancers: Time well spent?

    International Nuclear Information System (INIS)

    Gower-Thomas, Kate; Fielder, Hilary M.P.; Branston, Lucy; Greening, Sarah; Beer, Helen; Rogers, Cerilan

    2002-01-01

    OBJECTIVES: To categorize interval cancers, and thus identify false-negatives, following prevalent and incident screens in the Welsh breast screening programme. SETTING: Breast Test Wales (BTW) Llandudno, Cardiff and Swansea breast screening units. METHODS: Five hundred and sixty interval breast cancers identified following negative mammographic screening between 1989 and 1997 were reviewed by eight screening radiologists. The blind review was achieved by mixing the screening films of women who subsequently developed an interval cancer with screen-negative films of women who did not develop cancer, in a ratio of 4:1. Another radiologist used patients' symptomatic films to record a reference against which the reviewers' reports of the screening films were compared. Interval cancers were categorized as 'true', 'occult', 'false-negative' or 'unclassified' interval cancers or interval cancers with minimal signs, based on the National Health Service breast screening programme (NHSBSP) guidelines. RESULTS: Of the classifiable interval films, 32% were false-negatives, 55% were true intervals and 12% occult. The proportion of false-negatives following incident screens was half that following prevalent screens (P = 0.004). Forty percent of the seed films were recalled by the panel. CONCLUSIONS: Low false-negative interval cancer rates following incident screens (18%) versus prevalent screens (36%) suggest that lower cancer detection rates at incident screens may have resulted from fewer cancers than expected being present, rather than from a failure to detect tumours. The panel method for categorizing interval cancers has significant flaws, as the results vary markedly with different protocols and it is no more accurate than other, quicker and more timely methods. Gower-Thomas, K. et al. (2002)

  14. Assessing Mediational Models: Testing and Interval Estimation for Indirect Effects.

    Science.gov (United States)

    Biesanz, Jeremy C; Falk, Carl F; Savalei, Victoria

    2010-08-06

    Theoretical models specifying indirect or mediated effects are common in the social sciences. An indirect effect exists when an independent variable's influence on the dependent variable is mediated through an intervening variable. Classic approaches to assessing such mediational hypotheses (Baron & Kenny, 1986; Sobel, 1982) have in recent years been supplemented by computationally intensive methods such as bootstrapping, the distribution of the product method, and hierarchical Bayesian Markov chain Monte Carlo (MCMC) methods. These different approaches for assessing mediation are illustrated using data from Dunn, Biesanz, Human, and Finn (2007). However, little is known about how these methods perform relative to each other, particularly in more challenging situations, such as with data that are incomplete and/or nonnormal. This article presents an extensive Monte Carlo simulation evaluating a host of approaches for assessing mediation. We examine Type I error rates, power, and coverage. We study normal and nonnormal data as well as complete and incomplete data. In addition, we adapt a method, recently proposed in the statistical literature, that does not rely on confidence intervals (CIs) to test the null hypothesis of no indirect effect. The results suggest that the new inferential method, the partial posterior p value, slightly outperforms existing ones in terms of maintaining Type I error rates while maximizing power, especially with incomplete data. Among confidence interval approaches, the bias-corrected accelerated (BCa) bootstrapping approach often has inflated Type I error rates and inconsistent coverage and is not recommended; in contrast, the bootstrapped percentile confidence interval and the hierarchical Bayesian MCMC method perform best overall, maintaining Type I error rates, exhibiting reasonable power, and producing stable and accurate coverage rates.
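
    The bootstrapped percentile interval recommended above is easy to sketch for the simple x → m → y mediation model. The regression setup and the simulated data below are minimal stand-ins of our own, not the article's simulation design.

        import numpy as np

        def indirect_effect_ci(x, m, y, n_boot=5000, level=0.95, rng=None):
            """Percentile bootstrap interval for the indirect effect a*b:
            a is the slope of m on x, b the partial slope of y on m."""
            rng = rng or np.random.default_rng()
            n, ab = len(x), []
            for _ in range(n_boot):
                i = rng.integers(0, n, n)                  # resample cases
                xb, mb, yb = x[i], m[i], y[i]
                a = np.polyfit(xb, mb, 1)[0]               # slope of m on x
                b = np.linalg.lstsq(np.column_stack([np.ones(n), xb, mb]),
                                    yb, rcond=None)[0][2]  # slope of y on m
                ab.append(a * b)
            return np.quantile(ab, [(1 - level) / 2, (1 + level) / 2])

        rng = np.random.default_rng(11)
        x = rng.normal(size=200)
        m = 0.5 * x + rng.normal(size=200)
        y = 0.4 * m + rng.normal(size=200)
        print(indirect_effect_ci(x, m, y))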

  15. Closed-form confidence intervals for functions of the normal mean and standard deviation.

    Science.gov (United States)

    Donner, Allan; Zou, G Y

    2012-08-01

    Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.
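
    The recovered-variance idea can be made concrete for one such function, the upper Bland-Altman limit of agreement μ + 1.96σ: compute separate limits for the mean (t-based) and for the SD (chi-square-based), then combine them in closed form. This sketch follows the general MOVER recipe under those standard limits; the data are simulated, and details may differ from the paper's exact formulas.

        import numpy as np
        from scipy import stats

        def mover_upper_loa_ci(d, level=0.95, z=1.96):
            """Closed-form CI for theta = mu + z*sigma, recovering variance
            estimates from separate confidence limits for mu and sigma."""
            d = np.asarray(d, float)
            n, mean, sd = len(d), d.mean(), d.std(ddof=1)
            a = 1 - level
            lm, um = stats.t.interval(level, n - 1, loc=mean,
                                      scale=sd / np.sqrt(n))        # CI for mu
            ls = sd * np.sqrt((n - 1) / stats.chi2.ppf(1 - a / 2, n - 1))
            us = sd * np.sqrt((n - 1) / stats.chi2.ppf(a / 2, n - 1))  # CI for sigma
            theta = mean + z * sd
            low = theta - np.sqrt((mean - lm) ** 2 + (z * sd - z * ls) ** 2)
            upp = theta + np.sqrt((um - mean) ** 2 + (z * us - z * sd) ** 2)
            return low, theta, upp

        diffs = np.random.default_rng(5).normal(0.2, 1.0, 60)  # paired differences
        print(mover_upper_loa_ci(diffs))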

  16. Generalized Analysis of a Distribution Separation Method

    Directory of Open Access Journals (Sweden)

    Peng Zhang

    2016-04-01

    Full Text Available Separating two probability distributions from a mixture model that is made up of the combinations of the two is essential to a wide range of applications. For example, in information retrieval (IR), there often exists a mixture distribution consisting of a relevance distribution that we need to estimate and an irrelevance distribution that we hope to get rid of. Recently, a distribution separation method (DSM) was proposed to approximate the relevance distribution, by separating a seed irrelevance distribution from the mixture distribution. It was successfully applied to an IR task, namely pseudo-relevance feedback (PRF), where the query expansion model is often a mixture term distribution. Although initially developed in the context of IR, DSM is indeed a general mathematical formulation for probability distribution separation. Thus, it is important to further generalize its basic analysis and to explore its connections to other related methods. In this article, we first extend DSM’s theoretical analysis, which was originally based on the Pearson correlation coefficient, to entropy-related measures, including the KL-divergence (Kullback–Leibler divergence), the symmetrized KL-divergence and the JS-divergence (Jensen–Shannon divergence). Second, we investigate the distribution separation idea in a well-known method, namely the mixture model feedback (MMF) approach. We prove that MMF also complies with the linear combination assumption, and then, DSM’s linear separation algorithm can largely simplify the EM algorithm in MMF. These theoretical analyses, as well as further empirical evaluation results demonstrate the advantages of our DSM approach.
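
    Under DSM's linear combination assumption, the separation itself is one line of algebra: if mix = λ·seed + (1 − λ)·target, then target = (mix − λ·seed)/(1 − λ). A toy Python sketch; the distributions and λ are invented, and DSM's procedure for estimating λ from the data is not shown.

        import numpy as np

        def separate_distribution(mixture, seed, lam):
            """Linear separation: recover the target distribution from the
            mixture, assuming a known mixing weight lam for the seed."""
            target = (np.asarray(mixture) - lam * np.asarray(seed)) / (1.0 - lam)
            target = np.clip(target, 0.0, None)   # keep probabilities valid
            return target / target.sum()          # renormalize

        mix  = np.array([0.40, 0.30, 0.20, 0.10])
        seed = np.array([0.70, 0.20, 0.05, 0.05])  # irrelevance distribution
        print(separate_distribution(mix, seed, lam=0.5))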

  17. Time Interval to Initiation of Contraceptive Methods Following ...

    African Journals Online (AJOL)

    2018-01-30

    Jan 30, 2018 ... interval between a woman's last childbirth and the initiation of contraception. Materials and ... DF = degree of freedom; χ² = Chi-square test ... practice of modern contraception among single women in a rural and urban ...

  18. Method of forecasting power distribution

    International Nuclear Information System (INIS)

    Kaneto, Kunikazu.

    1981-01-01

    Purpose: To obtain highly accurate forecasting results by reflecting the signals from neutron detectors disposed in the reactor core in the forecast. Method: An on-line computer transfers to a simulator process data, such as coolant temperature and flow rate in each of the sections, and various measurement signals, such as control rod positions, from the nuclear reactor. The simulator calculates the present power distribution before the control operation. The signals from the neutron detectors at each of the positions in the reactor core are estimated from the power distribution, and errors are determined from the estimated and measured values to obtain a smooth error distribution in the axial direction. Then, input conditions for the time to be forecast are set by a data setter. The simulator calculates the forecast power distribution after the control operation based on the set conditions. The forecast power distribution is corrected using the error distribution. (Yoshino, Y.)

  19. Geometric computations with interval and new robust methods applications in computer graphics, GIS and computational geometry

    CERN Document Server

    Ratschek, H

    2003-01-01

    This undergraduate and postgraduate text will familiarise readers with interval arithmetic and related tools to gain reliable and validated results and logically correct decisions for a variety of geometric computations, plus the means for alleviating the effects of the errors. It also considers computations on geometric point-sets, which are neither robust nor reliable when processed with standard methods. The authors provide two effective tools for obtaining correct results: (a) interval arithmetic, and (b) ESSA, the new powerful algorithm which improves many geometric computations and makes th...

  20. Apparatus and method for data communication in an energy distribution network

    Science.gov (United States)

    Hussain, Mohsin; LaPorte, Brock; Uebel, Udo; Zia, Aftab

    2014-07-08

    A system for communicating information on an energy distribution network is disclosed. In one embodiment, the system includes a local supervisor on a communication network, wherein the local supervisor can collect data from one or more energy generation/monitoring devices. The system also includes a command center on the communication network, wherein the command center can generate one or more commands for controlling the one or more energy generation devices. The local supervisor can periodically transmit a data signal indicative of the data to the command center via a first channel of the communication network at a first interval. The local supervisor can also periodically transmit a request for a command to the command center via a second channel of the communication network at a second interval shorter than the first interval. This channel configuration provides effective data communication without a significant increase in the use of network resources.

  1. Mathematical Modeling for Water Quality Management under Interval and Fuzzy Uncertainties

    Directory of Open Access Journals (Sweden)

    J. Liu

    2013-01-01

    Full Text Available In this study, an interval fuzzy credibility-constrained programming (IFCP) method is developed for river water quality management. IFCP is derived from incorporating techniques of fuzzy credibility-constrained programming (FCP) and interval-parameter programming (IPP) within a general optimization framework. IFCP is capable of tackling uncertainties presented as interval numbers and possibility distributions as well as analyzing the reliability of satisfying (or the risk of violating) the system's constraints. A real-world case for water quality management planning of the Xiangxi River in the Three Gorges Reservoir Region (which faces severe water quality problems due to pollution from point and nonpoint sources) is then conducted to demonstrate the applicability of the developed method. The results demonstrate that high biological oxygen demand (BOD) discharge is observed at the Baishahe chemical plant and the Gufu wastewater treatment plant. For nonpoint sources, crop farming generates large amounts of total phosphorus (TP) and total nitrogen (TN). The results are helpful for managers not only in making decisions on effluent discharges from point and nonpoint sources but also in gaining insight into the tradeoff between system benefit and environmental requirements.

  2. Sampling Methods for Wallenius' and Fisher's Noncentral Hypergeometric Distributions

    DEFF Research Database (Denmark)

    Fog, Agner

    2008-01-01

    Several methods for generating variates with univariate and multivariate Wallenius' and Fisher's noncentral hypergeometric distributions are developed. Methods for the univariate distributions include: simulation of urn experiments, inversion by binary search, inversion by chop-down search from the mode, ratio-of-uniforms rejection method, and rejection by sampling in the tau domain. Methods for the multivariate distributions include: simulation of urn experiments, conditional method, Gibbs sampling, and Metropolis-Hastings sampling. These methods are useful for Monte Carlo simulation of models of biased sampling and models of evolution and for calculating moments and quantiles of the distributions.
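
    The first listed method, simulating the urn experiment, is the most transparent: balls are drawn one at a time, and at each step a red ball is taken with probability proportional to its weight ω. A minimal Python sketch of that process; the parameters are invented for the demo.

        import numpy as np

        def wallenius_urn(n_draws, m_red, m_white, omega, rng=None):
            """One variate from Wallenius' noncentral hypergeometric
            distribution via the sequential urn experiment."""
            rng = rng or np.random.default_rng()
            red = 0
            for _ in range(n_draws):
                p_red = omega * m_red / (omega * m_red + m_white)
                if rng.random() < p_red:
                    m_red -= 1
                    red += 1
                else:
                    m_white -= 1
            return red

        rng = np.random.default_rng(9)
        draws = [wallenius_urn(20, 50, 50, omega=2.0, rng=rng) for _ in range(10_000)]
        print(np.mean(draws))   # biased above the central value of 10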

  3. Integrated Power Flow and Short Circuit Calculation Method for Distribution Network with Inverter Based Distributed Generation

    Directory of Open Access Journals (Sweden)

    Shan Yang

    2016-01-01

    Full Text Available Power flow calculation and short circuit calculation are the basis of theoretical research on distribution networks with inverter-based distributed generation. The similarity of the equivalent models for inverter-based distributed generation during normal and fault conditions of the distribution network, and the differences between power flow and short circuit calculation, are analyzed in this paper. Then an integrated power flow and short circuit calculation method for distribution networks with inverter-based distributed generation is proposed. The proposed method makes the inverter-based distributed generation equivalent to an Iθ bus, which makes it suitable for calculating the power flow of a distribution network with current-limited inverter-based distributed generation. The low voltage ride-through capability of inverter-based distributed generation can be considered as well. Finally, tests of power flow and short circuit current calculation are performed on a 33-bus distribution network. The results calculated by the proposed method are contrasted with those of the traditional method and the simulation method, which verifies the effectiveness of the integrated method suggested in this paper.

  4. Review of Congestion Management Methods for Distribution Networks with High Penetration of Distributed Energy Resources

    DEFF Research Database (Denmark)

    Huang, Shaojun; Wu, Qiuwei; Liu, Zhaoxi

    2014-01-01

    This paper reviews the existing congestion management methods for distribution networks with high penetration of DERs documented in the recent research literature. The congestion management methods for distribution networks reviewed can be grouped into two categories – market methods and direct control methods. The market methods consist of dynamic tariff, distribution capacity market, shadow price and flexible service market. The direct control methods are comprised of network reconfiguration, reactive power control and active power control. Based on the review of the existing methods...

  5. Magnetic Resonance Fingerprinting with short relaxation intervals.

    Science.gov (United States)

    Amthor, Thomas; Doneva, Mariya; Koken, Peter; Sommer, Karsten; Meineke, Jakob; Börnert, Peter

    2017-09-01

    The aim of this study was to investigate a technique for improving the performance of Magnetic Resonance Fingerprinting (MRF) in repetitive sampling schemes, in particular for 3D MRF acquisition, by shortening relaxation intervals between MRF pulse train repetitions. A calculation method for MRF dictionaries adapted to short relaxation intervals and non-relaxed initial spin states is presented, based on the concept of stationary fingerprints. The method is applicable to many different k-space sampling schemes in 2D and 3D. For accuracy analysis, T1 and T2 values of a phantom are determined by single-slice Cartesian MRF for different relaxation intervals and are compared with quantitative reference measurements. The relevance of slice profile effects is also investigated in this case. To further illustrate the capabilities of the method, an application to in-vivo spiral 3D MRF measurements is demonstrated. The proposed computation method enables accurate parameter estimation even for the shortest relaxation intervals, as investigated for different sampling patterns in 2D and 3D. In 2D Cartesian measurements, we achieved a scan acceleration of more than a factor of two, while maintaining acceptable accuracy: The largest T1 values of a sample set deviated from their reference values by 0.3% (longest relaxation interval) and 2.4% (shortest relaxation interval). The largest T2 values showed systematic deviations of up to 10% for all relaxation intervals, which is discussed. The influence of slice profile effects for multislice acquisition is shown to become increasingly relevant for short relaxation intervals. In 3D spiral measurements, a scan time reduction of 36% was achieved, maintaining the quality of in-vivo T1 and T2 maps. Reducing the relaxation interval between MRF sequence repetitions using stationary fingerprint dictionaries is a feasible method to improve the scan efficiency of MRF sequences. The method enables fast implementations of 3D spatially...

  6. Reduction Method for Active Distribution Networks

    DEFF Research Database (Denmark)

    Raboni, Pietro; Chen, Zhe

    2013-01-01

    On-line security assessment is traditionally performed by Transmission System Operators at the transmission level, ignoring the effective response of distributed generators and small loads. On the other hand, the required computation time and amount of real-time data for including Distribution Networks would also be too large. In this paper an adaptive aggregation method for subsystems with power-electronic-interfaced generators and voltage-dependent loads is proposed. With this tool it may be relatively easier to include distribution networks in security assessment. The method is validated by comparing the results obtained in PSCAD® with the detailed network model and with the reduced one. Moreover, the control schemes of a wind turbine and a photovoltaic plant included in the detailed network model are described.

  7. Structural hybrid reliability index and its convergent solving method based on random–fuzzy–interval reliability model

    Directory of Open Access Journals (Sweden)

    Hai An

    2016-08-01

    Full Text Available Aiming to resolve the problems of a variety of uncertainty variables that coexist in the engineering structure reliability analysis, a new hybrid reliability index to evaluate structural hybrid reliability, based on the random–fuzzy–interval model, is proposed in this article. The convergent solving method is also presented. First, the truncated probability reliability model, the fuzzy random reliability model, and the non-probabilistic interval reliability model are introduced. Then, the new hybrid reliability index definition is presented based on the random–fuzzy–interval model. Furthermore, the calculation flowchart of the hybrid reliability index is presented, and the index is solved using the modified limit-step length iterative algorithm, which ensures convergence. The validity of the convergent algorithm for the hybrid reliability model is verified through calculation examples from the literature. In the end, a numerical example demonstrates that the hybrid reliability index is applicable for the wear reliability assessment of mechanisms, where truncated random variables, fuzzy random variables, and interval variables coexist. The demonstration also shows the good convergence of the iterative algorithm proposed in this article.

  8. Interval selection with machine-dependent intervals

    OpenAIRE

    Bohmova K.; Disser Y.; Mihalak M.; Widmayer P.

    2013-01-01

    We study an offline interval scheduling problem where every job has exactly one associated interval on every machine. To schedule a set of jobs, exactly one of the intervals associated with each job must be selected, and the intervals selected on the same machine must not intersect. We show that deciding whether all jobs can be scheduled is NP-complete already in various simple cases. In particular, by showing the NP-completeness for the case when all the intervals associated with the same job...

  9. Increasing accuracy in the interval analysis by the improved format of interval extension based on the first order Taylor series

    Science.gov (United States)

    Li, Yi; Xu, Yan Long

    2018-05-01

    When the dependence of the function on the uncertain variables is non-monotonic within the interval, the interval of the function obtained by the classic interval extension based on the first-order Taylor series exhibits significant errors. In order to reduce these errors, an improved format of the interval extension with the first-order Taylor series is developed here, taking the monotonicity of the function into account. Two typical mathematical examples are given to illustrate this methodology. The vibration of a beam with lumped masses is studied to demonstrate the usefulness of this method in practical applications; the only required input data are the function value at the central point of the interval, the sensitivity, and the deviation of the function. The results of the above examples show that the interval of the function given by the method developed in this paper is more accurate than the one obtained by the classic method.
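
    The contrast can be sketched in a few lines: a plain first-order Taylor extension about the midpoint versus an endpoint evaluation used when the derivative keeps one sign on the interval. This is our simplified reading of the idea, not the paper's exact algorithm; the endpoint sign test and the test function are illustrative.

        import numpy as np

        def taylor_interval(f, df, x_lo, x_hi):
            """Classic first-order Taylor extension about the midpoint:
            linearized bounds, not guaranteed to enclose the true range."""
            xc, r = 0.5 * (x_lo + x_hi), 0.5 * (x_hi - x_lo)
            slope = abs(df(xc))
            return f(xc) - slope * r, f(xc) + slope * r

        def monotone_interval(f, df, x_lo, x_hi):
            """Improved format: if f' keeps one sign on the interval, the
            exact range is attained at the endpoints (simplified check)."""
            if df(x_lo) * df(x_hi) > 0:                 # monotone heuristic
                return tuple(sorted((f(x_lo), f(x_hi))))
            return taylor_interval(f, df, x_lo, x_hi)   # fall back otherwise

        f, df = np.exp, np.exp
        print(taylor_interval(f, df, 0.0, 1.0))    # wider, linearized bounds
        print(monotone_interval(f, df, 0.0, 1.0))  # exact (1.0, e) for exp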

  10. Time interval measurement between two emissions: Ar + Au

    International Nuclear Information System (INIS)

    Bizard, G.; Bougault, R.; Brou, R.; Buta, A.; Durand, D.; Genoux-Lubain, A.; Hamdani, T.; Horn, D.; Laville, J.L.; Le Brun, C.; Lecolley, J.F.; Louvel, M.; Peter, J.; Regimbart, R.; Steckmeyer, J.C.; Tamain, B.

    1998-01-01

    The Ar + Au system was studied at two bombarding energies, 30 and 60 A.MeV. The comparison of the distributions of fragment emission angles in central collisions was carried out by means of a simulation allowing the emission time interval to vary. It was found that this interval depends on the bombarding energy (i.e., the deposited excitation energy). For 30 A.MeV this interval is 500 fm/c (0.33 × 10^-23 s), while for 60 A.MeV it is so short that the multifragmentation concept can be used.

  11. A methodology for more efficient tail area sampling with discrete probability distribution

    International Nuclear Information System (INIS)

    Park, Sang Ryeol; Lee, Byung Ho; Kim, Tae Woon

    1988-01-01

The Monte Carlo method is commonly used to observe an overall distribution and to determine lower or upper bound values in a statistical approach when direct analytical calculation is unavailable. However, this method is not efficient when the tail area of a distribution is of concern. A new method, 'Two Step Tail Area Sampling', is developed, which assumes a discrete probability distribution and samples only the tail area without distorting the overall distribution. The method uses a two-step sampling procedure: first, sampling at points separated by large intervals, and second, sampling at points separated by small intervals around check points determined in the first step. Comparison with the Monte Carlo method shows that, for the same number of calculations, the results of the new method converge to the analytic value faster than those of the Monte Carlo method. The new method is applied to the DNBR (Departure from Nucleate Boiling Ratio) prediction problem in the design of pressurized light water nuclear reactors
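A minimal sketch of the two-step idea as this abstract describes it, under the assumption that the goal is to locate a small tail probability of an output distribution. The distribution, target tail, and grid sizes are placeholders, and scipy's analytic survival function stands in for the sampled discrete distribution:

```python
import numpy as np
from scipy import stats

dist = stats.norm(loc=1.5, scale=0.2)   # stand-in output distribution (e.g. DNBR)
target_tail = 1e-3                      # tail probability of interest

# Step 1: scan at points separated by large intervals to find check points.
coarse = np.linspace(1.5, 3.0, 16)
i = max(np.searchsorted(-dist.sf(coarse), -target_tail), 1)
lo, hi = coarse[i - 1], coarse[i]       # check points bracketing the target

# Step 2: sample only between the check points, at small intervals.
fine = np.linspace(lo, hi, 200)
x = fine[np.searchsorted(-dist.sf(fine), -target_tail)]
print(f"x with tail prob {target_tail}: {x:.4f}  (exact: {dist.isf(target_tail):.4f})")
```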

  12. Distribution-independent hierarchical N-body methods

    International Nuclear Information System (INIS)

    Aluru, S.

    1994-01-01

The N-body problem is to simulate the motion of N particles under the influence of mutual force fields based on an inverse square law. The problem has applications in several domains including astrophysics, molecular dynamics, fluid dynamics, radiosity methods in computer graphics, and numerical complex analysis. Research efforts have focused on reducing the O(N²) time per iteration required by the naive algorithm of computing each pairwise interaction. Widely respected among these are the Barnes-Hut and Greengard methods. Greengard claims his algorithm reduces the complexity to O(N) time per iteration. Throughout this thesis, we concentrate on rigorous, distribution-independent, worst-case analysis of the N-body methods. We show that Greengard's algorithm is not O(N), as claimed. Both Barnes-Hut's and Greengard's methods depend on the same data structure, which we show is distribution-dependent. For the distribution that results in the smallest running time, we show that Greengard's algorithm is Ω(N log² N) in two dimensions and Ω(N log⁴ N) in three dimensions. We have designed a hierarchical data structure whose size depends entirely upon the number of particles and is independent of their distribution. We show that both Greengard's and Barnes-Hut's algorithms can be used in conjunction with this data structure to reduce their complexity. Apart from reducing the complexity of the Barnes-Hut algorithm, the data structure also permits more accurate error estimation. We present two- and three-dimensional algorithms for creating the data structure. The multipole method designed using this data structure has a complexity of O(N log N) in two dimensions and O(N log² N) in three dimensions

  13. Interval timing in genetically modified mice: a simple paradigm

    OpenAIRE

    Balci, F.; Papachristos, E. B.; Gallistel, C. R.; Brunner, D.; Gibson, J.; Shumyatsky, G. P.

    2007-01-01

    We describe a behavioral screen for the quantitative study of interval timing and interval memory in mice. Mice learn to switch from a short-latency feeding station to a long-latency station when the short latency has passed without a feeding. The psychometric function is the cumulative distribution of switch latencies. Its median measures timing accuracy and its interquartile interval measures timing precision. Next, using this behavioral paradigm, we have examined mice with a gene knockout ...

  14. Inference on the reliability of Weibull distribution with multiply Type-I censored data

    International Nuclear Information System (INIS)

    Jia, Xiang; Wang, Dong; Jiang, Ping; Guo, Bo

    2016-01-01

In this paper, we focus on the reliability of the Weibull distribution under multiply Type-I censoring, a general form of Type-I censoring in which all units in the life testing experiment are terminated at different times. Reliability estimation with the maximum likelihood estimates of the Weibull parameters is conducted. Using the delta method and Fisher information, we propose a confidence interval for reliability and compare it with the bias-corrected and accelerated (BCa) bootstrap confidence interval. Furthermore, a scenario involving a few expert judgments of reliability is considered. A method is developed to generate extended estimations of reliability from the original judgments and transform them into estimations of the Weibull parameters. With Bayes theory and the Markov chain Monte Carlo method, a posterior sample is obtained to compute the Bayes estimate and credible interval for reliability. Monte Carlo simulation demonstrates that the proposed confidence interval outperforms the bootstrap one. The Bayes estimate and credible interval for reliability are both satisfactory. Finally, a real example is analyzed to illustrate the application of the proposed methods. - Highlights: • We focus on the reliability of the Weibull distribution under multiply Type-I censoring. • The proposed confidence interval for the reliability is superior after comparison. • The Bayes estimates with a few expert judgements on reliability are satisfactory. • We specify the cases where the MLEs do not exist and present methods to remedy this. • The distribution of the reliability estimate should be used for accurate estimation.
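For the uncensored case, the Weibull reliability point estimate and a simple percentile-bootstrap interval (not the BCa variant compared in the paper, and without the multiply Type-I censoring handling) can be sketched as follows; the sample size and time point are arbitrary assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
t = stats.weibull_min.rvs(c=2.0, scale=100.0, size=60, random_state=rng)  # complete data

def reliability(sample, t0):
    # Weibull MLE (location fixed at 0), then R(t0) = exp(-(t0/eta)**beta)
    beta, _, eta = stats.weibull_min.fit(sample, floc=0)
    return np.exp(-(t0 / eta) ** beta)

t0 = 80.0
r_hat = reliability(t, t0)
boot = [reliability(rng.choice(t, size=t.size, replace=True), t0) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"R({t0}) = {r_hat:.3f}, 95% percentile-bootstrap CI = ({lo:.3f}, {hi:.3f})")
```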

  15. Extension of a chaos control method to unstable trajectories on infinite- or finite-time intervals: Experimental verification

    International Nuclear Information System (INIS)

    Yagasaki, Kazuyuki

    2007-01-01

In experiments on single and coupled pendula, we demonstrate the effectiveness of a new control method based on dynamical systems theory for stabilizing unstable aperiodic trajectories defined on infinite- or finite-time intervals. The basic idea of the method is similar to that of the OGY method, a well-known chaos control method. Extended concepts of the stable and unstable manifolds of hyperbolic trajectories are used here

  16. Confidence intervals for correlations when data are not normal.

    Science.gov (United States)

    Bishara, Anthony J; Hittner, James B

    2017-02-01

With nonnormal data, the typical confidence interval of the correlation (Fisher z') may be inaccurate. The literature has been unclear as to which of several alternative methods should be used instead, and how extreme a violation of normality is needed to justify an alternative. Through Monte Carlo simulation, 11 confidence interval methods were compared, including Fisher z', two Spearman rank-order methods, the Box-Cox transformation, the rank-based inverse normal (RIN) transformation, and various bootstrap methods. Nonnormality often distorted the Fisher z' confidence interval; for example, a nominal 95% confidence interval could have actual coverage as low as 68%. Increasing the sample size sometimes worsened this problem. Inaccurate Fisher z' intervals could be predicted by a sample kurtosis of at least 2, an absolute sample skewness of at least 1, or significant violations of normality hypothesis tests. Only the Spearman rank-order and RIN transformation methods were universally robust to nonnormality. Among the bootstrap methods, an observed imposed bootstrap came closest to accurate coverage, though it often resulted in an overly long interval. The results suggest that sample nonnormality can justify avoiding the Fisher z' interval in favor of a more robust alternative. R code for the relevant methods is provided in supplementary materials.
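A sketch of one robust alternative named above, the RIN transformation followed by the usual Fisher z' interval; the (rank - 0.5)/n offset is one common convention among several, the data are synthetic, and the resulting interval applies to the correlation of the transformed scores:

```python
import numpy as np
from scipy import stats

def rin(x):
    """Rank-based inverse normal transform with the (rank - 0.5)/n offset."""
    ranks = stats.rankdata(x)
    return stats.norm.ppf((ranks - 0.5) / len(x))

def fisher_ci(r, n, conf=0.95):
    """Standard Fisher z' confidence interval for a correlation."""
    z = np.arctanh(r)
    half = stats.norm.ppf(0.5 + conf / 2) / np.sqrt(n - 3)
    return np.tanh(z - half), np.tanh(z + half)

rng = np.random.default_rng(0)
x = rng.lognormal(size=50)             # heavily skewed marginals
y = x + rng.lognormal(size=50)
r = stats.pearsonr(rin(x), rin(y))[0]  # Pearson correlation after RIN transform
print(r, fisher_ci(r, n=50))
```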

  17. Confidence limits for parameters of Poisson and binomial distributions

    International Nuclear Information System (INIS)

    Arnett, L.M.

    1976-04-01

The confidence limits for the frequency in a Poisson process and for the proportion of successes in a binomial process were calculated and tabulated for situations in which the observed value of the frequency or proportion and an a priori distribution of these parameters are available. Methods are used that produce limits with exactly the stated confidence levels. The confidence interval [a,b] is calculated so that Pr[a ≤ λ ≤ b | c, μ] equals the stated confidence level, where c is the observed value of the parameter and μ is the a priori hypothesis of the distribution of this parameter. A Bayesian-type analysis is used. The intervals calculated are narrower than, and appreciably different from, the results often used in problems of this type, which are known to be conservative. Pearson and Hartley recognized the characteristics of their methods and contemplated that exact methods could someday be used. The calculation of the exact intervals requires involved numerical analyses readily implemented only on digital computers, which were not available to Pearson and Hartley. A Monte Carlo experiment was conducted to verify a selected interval from those calculated. This numerical experiment confirmed the results of the analytical methods and the prediction of Pearson and Hartley that their published tables give conservative results
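The flavor of such Bayesian-type limits can be illustrated with the conjugate case, a Gamma prior on a Poisson rate, which yields closed-form posterior quantiles; the prior parameters and observed count below are hypothetical, and the report's tabulated a priori distributions may differ:

```python
from scipy import stats

# With a Gamma(a0, rate b0) prior on the Poisson rate lambda and an observed
# count c from one unit of exposure, the posterior is Gamma(a0 + c, rate b0 + 1);
# an equal-tailed 95% interval [a, b] comes straight from its quantiles.
a0, b0, c = 2.0, 1.0, 7            # hypothetical prior and observed count
post = stats.gamma(a=a0 + c, scale=1.0 / (b0 + 1.0))
print(post.ppf([0.025, 0.975]))    # Bayesian-type limits [a, b]
```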

  18. Optimal Wind Power Uncertainty Intervals for Electricity Market Operation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Ying; Zhou, Zhi; Botterud, Audun; Zhang, Kaifeng

    2018-01-01

It is important to select an appropriate uncertainty level of the wind power forecast for power system scheduling and electricity market operation. Traditional methods hedge against a predefined level of wind power uncertainty, such as a specific confidence interval or uncertainty set, which leaves open the question of how best to select the appropriate uncertainty level. To bridge this gap, this paper proposes a model to optimize the forecast uncertainty intervals of wind power for power system scheduling problems, with the aim of achieving the best trade-off between economics and reliability. We then reformulate and linearize the model into a mixed integer linear program (MILP) without strong assumptions on the shape of the probability distribution. To investigate the impacts on cost, reliability, and prices in an electricity market, we apply the proposed model to a two-settlement electricity market based on a six-bus test system and to a power system representing the U.S. state of Illinois. The results show that the proposed method not only helps to balance the economics and reliability of power system scheduling, but also helps to stabilize energy prices in electricity market operation.

  19. Establishment of 25-OH-vitamin D reference interval by radioimmunoassay and its clinical significance

    International Nuclear Information System (INIS)

    Chen Jianbo; Huang Xianzhong; Hu Chaohui; Zu Yuli; Zhu Qingyi

    2009-01-01

Objective: To establish a reference interval for 25-OH-vitamin D (25-OH-Vit D) by radioimmunoassay and provide guidance for clinical applications. Methods: Specimens from 204 healthy persons were collected, and the reference interval of 25-OH-Vit D was validated and established through treatment of outlying observations, judgement of the data distribution, and analysis of the test results. Results: The reference interval of 25-OH-Vit D established in our laboratory is 16.0-39.7 ng/ml for adolescents and 11.7-40.0 ng/ml for adults. Conclusions: The level of 25-OH-Vit D in humans depends on age, sex, and lifestyle. Some people's vitamin D intake is insufficient, and doctors should pay attention to whether 25-OH-Vit D is sufficient when evaluating vitamin D intake. (authors)

  20. Interval and global progressivity of the income tax from wages in the Czech and Slovak Republics

    Directory of Open Access Journals (Sweden)

    Kubátová Květa

    2016-02-01

The article deals with the measurement of the progressivity of the personal income tax imposed on wages in the Czech Republic and Slovakia. It works with both methods known from the literature: the local method (interval progressivity) and global progressivity. The data source is the wage statistics of the Statistical Offices, and taxes are calculated fictitiously on the basis of the law, with certain assumptions adopted. The results for interval progressivity in both countries show that progressivity is higher for the lowest-income taxpayers and decreases with increasing gross income. The personal income tax in the Czech and Slovak Republics is observed to be progressive over the entire range, even though the statutory tax rate is linear. The Lorenz curve shows that the distributions of gross wages in the Czech Republic and Slovakia are of a similar nature. The values of the coefficient of interval progressivity and of the coefficient according to Musgrave and Thin (the Czech Republic has a coefficient of 1.024 and Slovakia of 1.037) show that the personal income tax is more progressive in Slovakia. Although the Slovak personal income tax imposed on wages is more progressive, post-tax incomes of employees are more equitably distributed in the Czech Republic.

  1. Distributed MIMO-ISAR Sub-image Fusion Method

    Directory of Open Access Journals (Sweden)

    Gu Wenkun

    2017-02-01

The fast fluctuation of a maneuvering target's radar cross-section often affects the imaging performance stability of traditional monostatic Inverse Synthetic Aperture Radar (ISAR). To address this problem, we propose an imaging method based on the fusion of sub-images from frequency-diversity-distributed Multiple-Input Multiple-Output Inverse Synthetic Aperture Radar (MIMO-ISAR). First, we establish the analytic expression of a two-dimensional ISAR sub-image acquired by different channels of distributed MIMO-ISAR. Then, we derive the distance and azimuth distortion factors of the images acquired by the different channels. By compensating for the distortion of the ISAR images, we ultimately realize distributed MIMO-ISAR fusion imaging. Simulations verify the validity of this imaging method.

  2. Reference Value Advisor: a new freeware set of macroinstructions to calculate reference intervals with Microsoft Excel.

    Science.gov (United States)

    Geffré, Anne; Concordet, Didier; Braun, Jean-Pierre; Trumel, Catherine

    2011-03-01

    International recommendations for determination of reference intervals have been recently updated, especially for small reference sample groups, and use of the robust method and Box-Cox transformation is now recommended. Unfortunately, these methods are not included in most software programs used for data analysis by clinical laboratories. We have created a set of macroinstructions, named Reference Value Advisor, for use in Microsoft Excel to calculate reference limits applying different methods. For any series of data, Reference Value Advisor calculates reference limits (with 90% confidence intervals [CI]) using a nonparametric method when n≥40 and by parametric and robust methods from native and Box-Cox transformed values; tests normality of distributions using the Anderson-Darling test and outliers using Tukey and Dixon-Reed tests; displays the distribution of values in dot plots and histograms and constructs Q-Q plots for visual inspection of normality; and provides minimal guidelines in the form of comments based on international recommendations. The critical steps in determination of reference intervals are correct selection of as many reference individuals as possible and analysis of specimens in controlled preanalytical and analytical conditions. Computing tools cannot compensate for flaws in selection and size of the reference sample group and handling and analysis of samples. However, if those steps are performed properly, Reference Value Advisor, available as freeware at http://www.biostat.envt.fr/spip/spip.php?article63, permits rapid assessment and comparison of results calculated using different methods, including currently unavailable methods. This allows for selection of the most appropriate method, especially as the program provides the CI of limits. It should be useful in veterinary clinical pathology when only small reference sample groups are available. ©2011 American Society for Veterinary Clinical Pathology.
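The core nonparametric computation such a tool performs can be sketched briefly; this is an illustrative stand-in (percentile limits with a simple bootstrap CI on each limit), not Reference Value Advisor's actual code, which follows the rank-based procedures of the international recommendations:

```python
import numpy as np

def reference_interval(values, n_boot=5000, seed=0):
    """Nonparametric 95% reference interval (2.5th/97.5th percentiles) with
    90% bootstrap confidence intervals on each limit (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(values, dtype=float)
    limits = np.percentile(x, [2.5, 97.5])
    boot = np.percentile(
        rng.choice(x, size=(n_boot, x.size), replace=True), [2.5, 97.5], axis=1)
    ci_low = np.percentile(boot[0], [5, 95])    # 90% CI of the lower limit
    ci_high = np.percentile(boot[1], [5, 95])   # 90% CI of the upper limit
    return limits, ci_low, ci_high

rng = np.random.default_rng(1)
print(reference_interval(rng.normal(25, 6, size=120)))
```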

  3. Confidence Intervals from Normalized Data: A correction to Cousineau (2005)

    Directory of Open Access Journals (Sweden)

    Richard D. Morey

    2008-09-01

Presenting confidence intervals around means is a common method of expressing uncertainty in data. Loftus and Masson (1994) describe confidence intervals for means in within-subjects designs. These confidence intervals are based on the ANOVA mean squared error. Cousineau (2005) presents an alternative to the Loftus and Masson method, but his method produces confidence intervals that are smaller than those of Loftus and Masson. I show why this is the case and offer a simple correction that makes the expected size of Cousineau confidence intervals the same as that of Loftus and Masson confidence intervals.
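A sketch of the corrected procedure as described: Cousineau-style normalization (subtract each subject's mean, add back the grand mean) followed by the correction factor sqrt(M/(M-1)) on the normalized standard errors; the data below are synthetic:

```python
import numpy as np
from scipy import stats

def morey_within_ci(data, conf=0.95):
    """Within-subject CIs: Cousineau (2005) normalisation plus the
    sqrt(M/(M-1)) correction. data: subjects x conditions array."""
    n, m = data.shape
    # remove each subject's mean, add back the grand mean
    norm = data - data.mean(axis=1, keepdims=True) + data.mean()
    correction = np.sqrt(m / (m - 1.0))
    sem = norm.std(axis=0, ddof=1) / np.sqrt(n) * correction
    half = sem * stats.t.ppf(0.5 + conf / 2, df=n - 1)
    return data.mean(axis=0), half

rng = np.random.default_rng(2)
subject = rng.normal(0, 5, size=(12, 1))    # large between-subject variability
scores = subject + rng.normal([3.0, 3.5, 4.2], 0.4, size=(12, 3))
means, half = morey_within_ci(scores)
print(np.c_[means - half, means + half])    # narrow within-subject intervals
```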

  4. Reclaimed mineland curve number response to temporal distribution of rainfall

    Science.gov (United States)

    Warner, R.C.; Agouridis, C.T.; Vingralek, P.T.; Fogle, A.W.

    2010-01-01

The curve number (CN) method is a common technique to estimate runoff volume, and it is widely used in coal mining operations such as those in the Appalachian region of Kentucky. However, very little CN data are available for watersheds disturbed by surface mining and then reclaimed using traditional techniques. Furthermore, as the CN method does not readily account for variations in infiltration rates due to varying rainfall distributions, selecting a single CN value to encompass all temporal rainfall distributions could lead engineers to substantially under- or over-size the water detention structures used in mining operations or other land uses such as development. Using rainfall and runoff data from a surface coal mine located in the Cumberland Plateau of eastern Kentucky, CNs were computed for conventionally reclaimed lands. The effects of temporal rainfall distributions on CNs were also examined by classifying storms as intense, steady, multi-interval intense, or multi-interval steady. Results indicate that CNs for such reclaimed lands ranged from 62 to 94 with a mean value of 85. Temporal rainfall distributions were also shown to significantly affect CN values, with intense storms having significantly higher CNs than multi-interval storms. These results indicate that a period of recovery between the rainfall bursts of a multi-interval storm allows depressional storage and infiltration rates to rebound.

  5. Extending the alias Monte Carlo sampling method to general distributions

    International Nuclear Information System (INIS)

    Edwards, A.L.; Rathkopf, J.A.; Smidt, R.K.

    1991-01-01

The alias method is a Monte Carlo sampling technique that offers significant advantages over more traditional methods: it equals the accuracy of table lookup and the speed of equal-probable bins. The original formulation of the method sampled from discrete distributions and was easily extended to histogram distributions. We have extended the method further to applications more germane to Monte Carlo particle transport codes: continuous distributions. This paper presents the alias method as originally derived and our extensions to simple continuous distributions represented by piecewise linear functions. We also present a method to interpolate accurately between distributions tabulated at points other than the point of interest. We present timing studies that demonstrate the method's increased efficiency over table lookup and show further speedup achieved through vectorization. 6 refs., 12 figs., 2 tabs
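For reference, the discrete case that the paper starts from can be sketched with Vose's O(n) table construction; the paper's extension to piecewise-linear continuous distributions is not shown here:

```python
import numpy as np

def build_alias(p):
    """Vose's O(n) alias-table construction for a discrete distribution p."""
    p = np.asarray(p, dtype=float)
    n = p.size
    scaled = p / p.sum() * n
    prob, alias = np.zeros(n), np.zeros(n, dtype=int)
    small = [i for i in range(n) if scaled[i] < 1.0]
    large = [i for i in range(n) if scaled[i] >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - scaled[s]
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:       # leftovers are exactly 1 up to rounding
        prob[i] = 1.0
    return prob, alias

def sample_alias(prob, alias, size, rng):
    """Each draw costs one table lookup plus one uniform comparison."""
    i = rng.integers(prob.size, size=size)
    return np.where(rng.random(size) < prob[i], i, alias[i])

rng = np.random.default_rng(3)
prob, alias = build_alias([0.1, 0.2, 0.3, 0.4])
print(np.bincount(sample_alias(prob, alias, 100_000, rng)) / 100_000)
```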

  6. Negative binomial distribution fits to multiplicity distributions in restricted δη intervals from central O+Cu collisions at 14.6A GeV/c and their implication for "Intermittency"

    International Nuclear Information System (INIS)

    Tannenbaum, M.J.

    1993-01-01

Experience in analyzing data from light and heavy ion collisions in terms of distributions rather than moments suggests that conventional fluctuations of multiplicity and transverse energy can be well described by Gamma or Negative Binomial Distributions (NBD). Multiplicity distributions were obtained for central ¹⁶O+Cu collisions in bins of δη = 0.1, 0.2, 0.3, ..., 0.5, 1.0, where the bin of 1.0 covers 1.2 < η < 2.2 in the laboratory. NBD fits were performed to these distributions with excellent results in all δη bins. The κ parameter of the NBD fit increases linearly with the δη interval, a totally unexpected and particularly striking result. Due to the well-known property of the NBD under convolution, this result indicates that the multiplicity distributions in adjacent bins of pseudorapidity δη ∼ 0.1 are largely statistically independent. The relationship to two-particle correlations and "intermittency" will be discussed
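The convolution property invoked here is easy to check numerically: for a common p parameter, the sum of independent NBD(k1, p) and NBD(k2, p) variables is NBD(k1+k2, p), so statistical independence of adjacent bins makes κ additive in the bin width. A quick check with illustrative parameters:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
k1, k2, p = 2.0, 3.0, 0.4
s = (stats.nbinom.rvs(k1, p, size=200_000, random_state=rng)
     + stats.nbinom.rvs(k2, p, size=200_000, random_state=rng))
empirical = np.bincount(s) / s.size
predicted = stats.nbinom.pmf(np.arange(empirical.size), k1 + k2, p)
print(np.abs(empirical - predicted).max())   # small: NBD(k1)+NBD(k2) = NBD(k1+k2)
```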

  7. Various methods for the estimation of the post mortem interval from Calliphoridae: A review

    Directory of Open Access Journals (Sweden)

    Ruchi Sharma

    2015-03-01

Forensic entomology is recognized in many countries as an important tool for legal investigations. Unfortunately, it has not received much attention in India as an important investigative tool. The maggots of flies crawling on dead bodies are widely considered just another disgusting element of decay and are not collected at the time of autopsy, yet they can aid in death investigations (time since death, manner of death, etc.). This paper reviews the various methods of post mortem interval estimation using Calliphoridae to make investigators, law personnel, and researchers aware of the importance of entomology in criminal investigations. The various problems confronted by forensic entomologists in estimating the time since death are also discussed; further research is needed in the field as well as in the laboratory. Correct estimation of the post mortem interval is one of the most important aspects of legal medicine.

  8. Neutron multiplicity measurements on 220 l waste drums containing Pu in the range 0.1-1 g ²⁴⁰Pu_eff with the time interval analysis method

    International Nuclear Information System (INIS)

    Baeten, P.; Bruggeman, M.; Carchon, R.; De Boeck, W.

    1998-01-01

Measurement results are presented for the assay of plutonium in 220 l waste drums containing Pu masses in the range 0.1-1 g ²⁴⁰Pu_eff, obtained with the time interval analysis (TIA) method. TIA is a neutron multiplicity method based on the concept of one- and two-dimensional Rossi-alpha distributions. The main source of measurement bias in neutron multiplicity measurements at low count rates is the unpredictable variation of the high-multiplicity neutron background of spallation neutrons induced by cosmic rays. The TIA method was therefore equipped with a special background filter, designed and optimized to reduce the influence of these spallation neutrons by rejecting high-multiplicity events. The measurement results, obtained with the background correction filter outlined in this paper, prove the repeatability and validity of the TIA method and show that multiplicity counting with the TIA technique is applicable to masses as low as 0.1 g ²⁴⁰Pu_eff, even at a detection efficiency of 12%. (orig.)

  9. Internal representations of temporal statistics and feedback calibrate motor-sensory interval timing.

    Directory of Open Access Journals (Sweden)

    Luigi Acerbi

Humans have been shown to adapt to the temporal statistics of timing tasks so as to optimize the accuracy of their responses, in agreement with the predictions of Bayesian integration. This suggests that they build an internal representation of both the experimentally imposed distribution of time intervals (the prior) and of the error (the loss function). The responses of a Bayesian ideal observer depend crucially on these internal representations, which have previously been studied only for simple distributions. To study the nature of these representations, we asked subjects to reproduce time intervals drawn from underlying temporal distributions of varying complexity, from uniform to highly skewed or bimodal, while also varying the error mapping that determined the performance feedback. Interval reproduction times were affected by both the distribution and the feedback, in good agreement with a performance-optimizing Bayesian observer-actor model. Bayesian model comparison highlighted that subjects integrated the provided feedback and represented the experimental distribution with a smoothed approximation. A nonparametric reconstruction of the subjective priors from the data shows that they are generally in agreement with the true distributions up to third-order moments, but with systematically heavier tails. In particular, higher-order statistical features (kurtosis, multimodality) seem much harder to acquire. Our findings suggest that humans have only minor constraints on learning lower-order statistical properties of unimodal (including peaked and skewed) distributions of time intervals under the guidance of corrective feedback, and that their behavior is well explained by Bayesian decision theory.

  10. Dynamic reliability assessment and prediction for repairable systems with interval-censored data

    International Nuclear Information System (INIS)

    Peng, Yizhen; Wang, Yu; Zi, YanYang; Tsui, Kwok-Leung; Zhang, Chuhua

    2017-01-01

The ‘Test, Analyze and Fix’ process is widely applied to improve the reliability of repairable systems, and dynamic reliability assessment for such systems has received a great deal of attention. Due to instrument malfunctions, staff omissions, and imperfect inspection strategies, field reliability data are often subject to interval censoring, making dynamic reliability assessment a difficult task. Most traditional methods treat this kind of data as multiple normally distributed variables or assume the missing mechanism is missing-at-random, which may cause a large bias in parameter estimation. This paper proposes a novel method to evaluate and predict the dynamic reliability of a repairable system subject to interval censoring. First, a multiple imputation strategy, based on the assumption that the reliability growth trend follows a nonhomogeneous Poisson process, is developed to derive the distributions of the missing data. Second, a new order statistic model that transforms the dependent variables into independent variables is developed to simplify the imputation procedure. The unknown parameters of the model are iteratively inferred by the Monte Carlo expectation maximization (MCEM) algorithm. Finally, to verify the effectiveness of the proposed method, a simulation and a real case study for a gas pipeline compressor system are presented. - Highlights: • A new multiple imputation strategy was developed to derive the PDF of missing data. • A new order statistic model was developed to simplify the imputation procedure. • The parameters of the order statistic model were iteratively inferred by MCEM. • A real case study was conducted to verify the effectiveness of the proposed method.

  11. Count-to-count time interval distribution analysis in a fast reactor

    International Nuclear Information System (INIS)

    Perez-Navarro Gomez, A.

    1973-01-01

The most important kinetic parameters of the zero-power fast reactor CORAL-I were measured by means of reactor noise analysis in the time domain, using measurements of the count-to-count time intervals. (Author) 69 refs

  12. Fuzzy interval Finite Element/Statistical Energy Analysis for mid-frequency analysis of built-up systems with mixed fuzzy and interval parameters

    Science.gov (United States)

    Yin, Hui; Yu, Dejie; Yin, Shengwen; Xia, Baizhan

    2016-10-01

This paper introduces mixed fuzzy and interval parametric uncertainties into the FE components of the hybrid Finite Element/Statistical Energy Analysis (FE/SEA) model for mid-frequency analysis of built-up systems, yielding an uncertain ensemble that combines non-parametric uncertainty with mixed fuzzy and interval parametric uncertainties. A fuzzy interval Finite Element/Statistical Energy Analysis (FIFE/SEA) framework is proposed to obtain the uncertain responses of built-up systems, which are described as intervals with fuzzy bounds, termed fuzzy-bounded intervals (FBIs) in this paper. Based on the level-cut technique, a first-order fuzzy interval perturbation FE/SEA (FFIPFE/SEA) method and a second-order fuzzy interval perturbation FE/SEA (SFIPFE/SEA) method are developed to handle the mixed parametric uncertainties efficiently. FFIPFE/SEA approximates the response functions by a first-order Taylor series, while SFIPFE/SEA improves the accuracy by retaining the second-order Taylor terms, with all the mixed second-order terms neglected. To further improve the accuracy, a Chebyshev fuzzy interval method (CFIM) is proposed, in which Chebyshev polynomials are used to approximate the response functions. The FBIs are eventually reconstructed by assembling the extrema solutions at all cut levels. Numerical results on two built-up systems verify the effectiveness of the proposed methods.

  13. A Combined Weighting Method Based on Hybrid of Interval Evidence Fusion and Random Sampling

    OpenAIRE

    Yan, Ying; Suo, Bin

    2017-01-01

Due to the complexity of systems and lack of expertise, epistemic uncertainties may be present in experts' judgments on the importance of certain indices during group decision-making. A novel combination weighting method is proposed to solve the index weighting problem when various uncertainties are present in expert comments. Based on the ideas of evidence theory, various types of uncertain evaluation information are uniformly expressed through interval evidence structures. Similarity matrix b...

  14. The Total Deviation Index estimated by Tolerance Intervals to evaluate the concordance of measurement devices

    Directory of Open Access Journals (Sweden)

    Ascaso Carlos

    2010-04-01

Background: In an agreement assay, it is of interest to evaluate the degree of agreement between the different methods (devices, instruments, or observers) used to measure the same characteristic. We propose in this study a technical simplification for inference about the total deviation index (TDI) estimate to assess agreement between two devices for normally distributed measurements, and describe its utility for evaluating inter- and intra-rater agreement if more than one reading per subject is available for each device. Methods: We propose to estimate the TDI by constructing a probability interval of the difference in paired measurements between devices and, from it, derive a tolerance interval (TI) procedure as a natural way to make inferences about probability limit estimates. We also describe how the proposed method can be used to compute bounds on the coverage probability. Results: The approach is illustrated with a real example in which the agreement between two instruments, a hand-held mercury sphygmomanometer and an OMRON 711 automatic device, is assessed in a sample of 384 subjects whose systolic blood pressure was measured twice by each device. A simulation study is implemented to evaluate and compare the accuracy of the approach against two established methods, showing that the TI approximation produces accurate empirical confidence levels reasonably close to the nominal confidence level. Conclusions: The method proposed is straightforward since the TDI estimate is derived directly from a probability interval of a normally distributed variable in its original scale, without further transformations. A natural way of making inferences about this estimate is then to derive the appropriate TI. Constructions of TIs based on normal populations are implemented in most standard statistical packages, making it simple for any practitioner to implement our proposal to assess agreement.
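The TDI point estimate for normally distributed differences reduces to solving one equation in the normal CDF, as sketched below; the tolerance-interval inference that is the paper's contribution is not reproduced, and the data are synthetic:

```python
import numpy as np
from scipy import stats, optimize

def tdi_normal(diffs, p=0.90):
    """Total deviation index for normally distributed paired differences D:
    the bound t with P(|D| <= t) = p, solved from the normal CDF."""
    mu, sd = np.mean(diffs), np.std(diffs, ddof=1)
    cover = lambda t: (stats.norm.cdf((t - mu) / sd)
                       - stats.norm.cdf((-t - mu) / sd) - p)
    return optimize.brentq(cover, 1e-9, abs(mu) + 10 * sd)

rng = np.random.default_rng(4)
d = rng.normal(1.0, 4.0, size=384)        # simulated paired device differences
print(f"TDI_0.90 = {tdi_normal(d):.2f}")  # 90% of differences lie within this bound
```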

  15. Risk Assessment for Distribution Systems Using an Improved PEM-Based Method Considering Wind and Photovoltaic Power Distribution

    Directory of Open Access Journals (Sweden)

    Qingwu Gong

    2017-03-01

The intermittency and variability of highly penetrated distributed generators (DGs) can pose critical security and economic risks to distribution systems. This paper applies suitable mathematical distributions to model the output variability and uncertainty of DGs. Four risk indices, EENS (expected energy not supplied), PLC (probability of load curtailment), EFLC (expected frequency of load curtailment), and SI (severity index), are established to reflect the system risk level of the distribution system. Given the mathematical distribution of the DGs' output power, an improved PEM (point estimate method)-based approach is proposed to calculate these four system risk indices. In this improved PEM-based method, an enumeration method lists the states of the distribution system, an improved PEM handles the uncertainties of the DGs, and the load curtailment in the distribution system is calculated by an optimal power flow algorithm. Finally, the effectiveness and advantages of the proposed PEM-based method for distribution system assessment are verified on a modified IEEE 30-bus system. Simulation results show that the proposed PEM-based method has high computational accuracy and greatly reduced computational cost compared with other risk assessment methods, and is very effective for risk assessment.

  16. The study of distribution and forms of uranium occurrences in Lake Baikal sediments by the SSNTD method

    International Nuclear Information System (INIS)

    Zhmodik, S.M.; Verkhovtseva, N.V.; Soloboeva, E.V.; Mironov, A.G.; Nemirovskaya, N.A.; Ilic, R.; Khlystov, O.M.; Titov, A.T.

    2005-01-01

Sediments from Lake Baikal drill core VER-96-1 St8 TW2 (53°32′15″N; 107°56′25″E), interval 181.8-235 cm below the sediment surface, were studied by means of SSNTD with the aim of defining the uranium occurrences in the sediments and the uranium concentration. The neutron-fission ((n,f)-autoradiographic) method allowed a detailed study of the uranium distribution in these Lake Baikal sediments within the Academicheskiy Ridge. Layered accumulations of uranium-bearing grained phosphorite, uranium-bearing particles of organic material, and anomalous uranium concentrations in diatomite of unknown origin were discovered

  17. Allocation of ESS by interval optimization method considering impact of ship swinging on hybrid PV/diesel ship power system

    International Nuclear Information System (INIS)

    Wen, Shuli; Lan, Hai; Hong, Ying-Yi; Yu, David C.; Zhang, Lijun; Cheng, Peng

    2016-01-01

Highlights: • An uncertainty model of PV generation on board is developed based on experiments. • The moving and swinging of the ship are considered in the optimal ESS sizing problem. • Optimal sizing of the ESS in a hybrid PV/diesel/ESS ship power system is obtained by the interval optimization method. • Different cases are studied to show the significance of considering the swinging effects on cost. - Abstract: Owing to the low efficiency of traditional ships and the serious environmental pollution they cause, the use of solar energy and energy storage systems (ESS) in ship power systems is attracting increasing attention. However, the swinging of a ship, together with the uncertainties in solar energy, raises crucial challenges in designing an optimal system for a large oil tanker. In this study, a series of experiments is performed to investigate the characteristics of a photovoltaic (PV) system on a moving ship. Based on the experimental results, an interval uncertainty model of on-board PV generation is established that considers the effect of the swinging of the ship. Through the power balance equations, the outputs of the diesel generator and the ESS on a large oil tanker are also modeled using interval variables. An interval optimization method is developed to determine the optimal size of the ESS in this hybrid ship power system so as to reduce the fuel cost, the capital cost of the ESS, and greenhouse gas emissions. Variations of the ship load are analyzed using a new method that takes five operating conditions into account. Several cases are compared in detail to demonstrate the effectiveness of the proposed algorithm.

  18. Growth Estimators and Confidence Intervals for the Mean of Negative Binomial Random Variables with Unknown Dispersion

    Directory of Open Access Journals (Sweden)

    David Shilane

    2013-01-01

The negative binomial distribution becomes highly skewed under extreme dispersion. Even at moderately large sample sizes, the sample mean exhibits a heavy right tail. The standard normal approximation often does not provide adequate inferences about the data's expected value in this setting. In previous work, we examined alternative methods of generating confidence intervals for the expected value, based on Gamma and Chi-Square approximations or tail probability bounds such as Bernstein's inequality. We now propose growth estimators of the negative binomial mean. Under high dispersion, zero values are likely to be overrepresented in the data. A growth estimator constructs a normal-style confidence interval by effectively removing a small, predetermined number of zeros from the data. We propose growth estimators based upon multiplicative adjustments of the sample mean and direct removal of zeros from the sample. These methods do not require estimating the nuisance dispersion parameter. We demonstrate that the growth estimators' confidence intervals provide improved coverage over a wide range of parameter values and converge asymptotically to the sample mean. Interestingly, the proposed methods succeed despite adding both bias and variance to the normal approximation.

  19. Musical training generalises across modalities and reveals efficient and adaptive mechanisms for reproducing temporal intervals.

    Science.gov (United States)

    Aagten-Murphy, David; Cappagli, Giulia; Burr, David

    2014-03-01

Expert musicians are able to time their actions accurately and consistently during a musical performance. We investigated how musical expertise influences the ability to reproduce auditory intervals and how this generalises across different techniques and sensory modalities. We first compared various reproduction strategies and interval lengths, to examine the effects in general and to optimise the experimental conditions for testing the effect of musical training, and found that the effects were robust and consistent across different paradigms. Focusing on a 'ready-set-go' paradigm, subjects reproduced time intervals drawn from distributions varying in total length (176, 352 or 704 ms) or in the number of discrete intervals within the total length (3, 5, 11 or 21 discrete intervals). Overall, Musicians performed more veridically than Non-Musicians, and all subjects reproduced auditory-defined intervals more accurately than visually-defined intervals. However, Non-Musicians, particularly with visual stimuli, consistently exhibited a substantial and systematic regression towards the mean interval. When subjects judged intervals from distributions of longer total length, they tended to regress more towards the mean, while the ability to discriminate between discrete intervals within the distribution had little influence on subject error. These results are consistent with a Bayesian model that minimises reproduction errors by incorporating a central-tendency prior weighted by the subject's own temporal precision relative to the current distribution of intervals. Finally, a strong correlation was observed between the duration of formal musical training and total reproduction errors in both modalities (accounting for 30% of the variance). Taken together, these results demonstrate that formal musical training improves temporal reproduction and that this improvement transfers from audition to vision. They further demonstrate the flexibility of sensorimotor mechanisms in adapting to
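The central-tendency account can be reduced to one line under Gaussian assumptions: the reproduced interval is a precision-weighted average of the observed sample and the prior mean, so noisier observers regress more. The parameters below are illustrative, not fitted values from the study:

```python
import numpy as np

def reproduce(sample_ms, prior_mean, prior_sd, sensory_sd):
    """Posterior-mean estimate: weighted average of the noisy measurement and
    the prior mean, weighted by their relative precisions (Gaussian case)."""
    w = prior_sd**2 / (prior_sd**2 + sensory_sd**2)   # weight on the measurement
    return w * sample_ms + (1.0 - w) * prior_mean

intervals = np.array([176.0, 352.0, 528.0, 704.0])  # sample intervals, ms
print(reproduce(intervals, 440.0, 180.0, 60.0))     # precise timer: little regression
print(reproduce(intervals, 440.0, 180.0, 160.0))    # noisy timer: strong pull to mean
```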

  20. Determination and identification of naturally occurring decay series using milli-second order pulse time interval analysis (TIA)

    International Nuclear Information System (INIS)

    Hashimoto, T.; Sanada, Y.; Uezu, Y.

    2003-01-01

A delayed coincidence method, called the time interval analysis (TIA) method, has been successfully applied to the selective determination of correlated α-α decay events with millisecond-order lifetimes. The main decay process amenable to TIA treatment is ²²⁰Rn → ²¹⁶Po (T₁/₂: 145 ms) → {Th series}. TIA is fundamentally based on the difference in time-interval distribution between correlated decay events and other events, such as background or random events, when the time-interval data are compiled within a fixed window (for example, a tenth of the half-life concerned). The sensitivity of the TIA analysis of correlated α-α decay events was subsequently improved with respect to background elimination by using the pulse shape discrimination technique (PSD with a PERALS counter) to reject β/γ pulses, purging the extractive scintillator with nitrogen gas, and applying solvent extraction of Ra. (author)

  1. Verified Interval Orbit Propagation in Satellite Collision Avoidance

    NARCIS (Netherlands)

    Römgens, B.A.; Mooij, E.; Naeije, M.C.

    2011-01-01

    Verified interval integration methods enclose a solution set corresponding to interval initial values and parameters, and bound integration and rounding errors. Verified methods suffer from overestimation of the solution, i.e., non-solutions are also included in the solution enclosure. Two verified

  2. Effect of insertion method and postinsertion time interval prior to force application on the removal torque of orthodontic miniscrews.

    Science.gov (United States)

    Sharifi, Maryam; Ghassemi, Amirreza; Bayani, Shahin

    2015-01-01

The success of orthodontic miniscrews in providing stable anchorage depends on their stability. The purpose of this study was to assess the effect of the insertion method and the postinsertion time interval on the removal torque of miniscrews as an indicator of their stability. Seventy-two miniscrews (Jeil Medical) were inserted into the femoral bones of three male German Shepherd dogs and assigned to nine groups of eight miniscrews. Three insertion methods were tested: hand-driven, motor-driven with 5.0-Ncm insertion torque, and motor-driven with 20.0-Ncm insertion torque. Three time intervals of 0, 2, and 6 weeks between miniscrew insertion and removal were tested as well. Removal torque values were measured in newton-centimeters by a removal torque tester (IMADA). Data were analyzed by one-way analysis of variance (ANOVA) followed by the Bonferroni post hoc test at a .05 level of significance. A miniscrew survival rate of 93% was observed in this study. The highest mean removal torque among the three postinsertion intervals (2.4 ± 0.59 Ncm) was obtained immediately after miniscrew insertion, with a statistically significant difference from the other two time intervals (P < .05). The highest removal torque values were obtained immediately after insertion.

  3. Optimization of Spacecraft Rendezvous and Docking using Interval Analysis

    NARCIS (Netherlands)

    Van Kampen, E.; Chu, Q.P.; Mulder, J.A.

    2010-01-01

This paper applies interval optimization to the fixed-time multiple-impulse rendezvous and docking problem. Current methods for solving this type of optimization problem include, for example, genetic algorithms and gradient-based optimization. Unlike these methods, interval methods can guarantee that

  4. The Multi-Attribute Group Decision-Making Method Based on Interval Grey Trapezoid Fuzzy Linguistic Variables.

    Science.gov (United States)

    Yin, Kedong; Wang, Pengyu; Li, Xuemei

    2017-12-13

    With respect to multi-attribute group decision-making (MAGDM) problems, where attribute values take the form of interval grey trapezoid fuzzy linguistic variables (IGTFLVs) and the weights (including expert and attribute weight) are unknown, improved grey relational MAGDM methods are proposed. First, the concept of IGTFLV, the operational rules, the distance between IGTFLVs, and the projection formula between the two IGTFLV vectors are defined. Second, the expert weights are determined by using the maximum proximity method based on the projection values between the IGTFLV vectors. The attribute weights are determined by the maximum deviation method and the priorities of alternatives are determined by improved grey relational analysis. Finally, an example is given to prove the effectiveness of the proposed method and the flexibility of IGTFLV.

  5. Power-law inter-spike interval distributions infer a conditional maximization of entropy in cortical neurons.

    Directory of Open Access Journals (Sweden)

    Yasuhiro Tsubo

The brain is considered to use a relatively small amount of energy for its efficient information processing. Under a severe restriction on energy consumption, the maximization of mutual information (MMI), which is adequate for designing artificial processing machines, may not suit the brain. MMI attempts to send information as accurately as possible, and this usually requires a sufficient energy supply for establishing clearly discretized communication bands. Here, we derive an alternative hypothesis for the neural code from neuronal activities recorded juxtacellularly in the sensorimotor cortex of behaving rats. Our hypothesis states that in vivo cortical neurons maximize the entropy of neuronal firing under two constraints, one limiting the energy consumption (as assumed previously) and one restricting the uncertainty in output spike sequences at a given firing rate. Thus, the conditional maximization of firing-rate entropy (CMFE) solves a tradeoff between the energy cost and noise in neuronal responses. In short, the CMFE sends a rich variety of information through broader communication bands (i.e., widely distributed firing rates) at the cost of accuracy. We demonstrate that the CMFE is reflected in the long-tailed, typically power-law, distributions of inter-spike intervals obtained for the majority of the recorded neurons. In other words, the power-law tails are more consistent with the CMFE than with the MMI. Thus, we propose a mathematical principle by which cortical neurons may represent information about synaptic input in their output spike trains.

  6. A model of interval timing by neural integration.

    Science.gov (United States)

    Simen, Patrick; Balci, Fuat; de Souza, Laura; Cohen, Jonathan D; Holmes, Philip

    2011-06-22

    We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes, that correlations among them can be largely cancelled by balancing excitation and inhibition, that neural populations can act as integrators, and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys, and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule's predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior.
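The scale-invariance claim can be checked with a small simulation of the model's core mechanism: a noisy linear ramp to threshold whose drift is set by the target duration and whose increment variance is proportional to the drift (Poisson-like). All parameters below are assumptions for illustration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(6)

def first_passage_times(T_target, z=1.0, c=0.15, dt=1e-3, n=500):
    """Noisy linear ramp to threshold z. Drift A = z/T_target encodes the
    learned duration; increment variance is proportional to A (Poisson-like)."""
    A = z / T_target
    times = np.empty(n)
    for i in range(n):
        x, t = 0.0, 0.0
        while x < z:
            x += A * dt + c * np.sqrt(A * dt) * rng.standard_normal()
            t += dt
        times[i] = t
    return times

for T in (0.5, 1.0, 2.0):
    ft = first_passage_times(T)
    # the mean tracks the target while the CV (scale invariance) stays constant
    print(f"T={T:.1f}s  mean={ft.mean():.3f}s  CV={ft.std() / ft.mean():.3f}")
```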

  7. Dynamic Subsidy Method for Congestion Management in Distribution Networks

    DEFF Research Database (Denmark)

    Huang, Shaojun; Wu, Qiuwei

    2016-01-01

Dynamic subsidy (DS) is a locational price paid by the distribution system operator (DSO) to its customers in order to shift energy consumption to designated hours and nodes. It is promising for demand side management and congestion management. This paper proposes a new DS method for congestion management in distribution networks, including the market mechanism, the mathematical formulation through a two-level optimization, and a solution method that tightens the constraints and linearizes the problem. Case studies were conducted with a one-node system and the Bus 4 distribution network of the Roy Billinton Test System (RBTS) with high penetration of electric vehicles (EVs) and heat pumps (HPs). The case studies demonstrate the efficacy of the DS method for congestion management in distribution networks and show that the DS method offers the customers a fair opportunity

  8. Constraining the Long-Term Average of Earthquake Recurrence Intervals From Paleo- and Historic Earthquakes by Assimilating Information From Instrumental Seismicity

    Science.gov (United States)

    Zoeller, G.

    2017-12-01

Paleo- and historic earthquakes are the most important source of information for the estimation of long-term recurrence intervals in fault zones, because sequences of paleoearthquakes cover more than one seismic cycle. On the other hand, these events are rare, dating uncertainties are enormous, and missing or misinterpreted events introduce additional problems. Given these shortcomings, long-term recurrence interval estimates are usually unstable unless additional information is included. In the present study, we assume that the time to the next major earthquake depends on the rate of small and intermediate events between the large ones, in terms of a "clock-change" model that leads to a Brownian Passage Time distribution for recurrence intervals. We take advantage of an earlier finding that the aperiodicity of this distribution can be related to the Gutenberg-Richter b-value, which is usually around one and can easily be estimated from instrumental seismicity in the region under consideration. This allows the uncertainties in the estimation of the mean recurrence interval to be reduced significantly, especially for short paleoearthquake sequences and high dating uncertainties. We present illustrative case studies from Southern California and compare the method with the commonly used approach of exponentially distributed recurrence times under a stationary Poisson process.
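Since the Brownian Passage Time distribution is the inverse Gaussian, recurrence-interval quantiles for a given mean and aperiodicity α can be sketched with scipy (in scipy's parameterisation, shape mu = α² and scale = mean/α² give mean recurrence `mean_ri` and coefficient of variation α); the numbers are hypothetical:

```python
from scipy import stats

mean_ri = 150.0                    # hypothetical mean recurrence interval, years
for alpha in (0.3, 0.7):           # aperiodicity, e.g. constrained via b ~ 1
    bpt = stats.invgauss(mu=alpha**2, scale=mean_ri / alpha**2)
    lo, hi = bpt.ppf([0.05, 0.95])
    print(f"alpha={alpha}: mean={bpt.mean():.0f} yr, "
          f"90% interval=({lo:.0f}, {hi:.0f}) yr")
```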

  9. Cathode power distribution system and method of using the same for power distribution

    Science.gov (United States)

    Williamson, Mark A; Wiedmeyer, Stanley G; Koehl, Eugene R; Bailey, James L; Willit, James L; Barnes, Laurel A; Blaskovitz, Robert J

    2014-11-11

    Embodiments include a cathode power distribution system and/or method of using the same for power distribution. The cathode power distribution system includes a plurality of cathode assemblies. Each cathode assembly of the plurality of cathode assemblies includes a plurality of cathode rods. The system also includes a plurality of bus bars configured to distribute current to each of the plurality of cathode assemblies. The plurality of bus bars include a first bus bar configured to distribute the current to first ends of the plurality of cathode assemblies and a second bus bar configured to distribute the current to second ends of the plurality of cathode assemblies.

  10. Chaos on the interval

    CERN Document Server

    Ruette, Sylvie

    2017-01-01

    The aim of this book is to survey the relations between the various kinds of chaos and related notions for continuous interval maps from a topological point of view. The papers on this topic are numerous and widely scattered in the literature; some of them are little known, difficult to find, or originally published in Russian, Ukrainian, or Chinese. Dynamical systems given by the iteration of a continuous map on an interval have been broadly studied because they are simple but nevertheless exhibit complex behaviors. They also allow numerical simulations, which enabled the discovery of some chaotic phenomena. Moreover, the "most interesting" part of some higher-dimensional systems can be of lower dimension, which allows, in some cases, boiling it down to systems in dimension one. Some of the more recent developments such as distributional chaos, the relation between entropy and Li-Yorke chaos, sequence entropy, and maps with infinitely many branches are presented in book form for the first time. The author gi...

  11. Examination of measurement and its method of compensation of the sensitivity distribution using phased array coil for body scan

    CERN Document Server

    Kimura, T; Iizuka, A; Taniguchi, Y; Ishikuro, A; Hongo, T; Inoue, H; Ogura, A

    2003-01-01

The influence of measured sensitivity distributions and of sensitivity compensation filters on image quality was considered using an opposite-type phased array coil and a volume-type phased array coil. With the opposite-type phased array coil, the relation between coil spacing and filter was investigated for the image intensity correction (IIC) filter, the surface coil intensity correction (SCIC) filter (GE), and the Normalize filter (SIEMENS). The SCIC and Normalize filters showed a dependence of signal-to-noise ratio (SNR) and uniformity on the coil spacing, suggesting the existence of an optimal coil spacing. With the IIC filter, the dependence on coil spacing was small, but the decrease in contrast with use was remarkable. For the volume-type phased array coil, the overlap of the array elements was investigated to determine its influence on the sensitivity distribution. Although the value stabilized in t...

  12. The Rational Third-Kind Chebyshev Pseudospectral Method for the Solution of the Thomas-Fermi Equation over Infinite Interval

    Directory of Open Access Journals (Sweden)

    Majid Tavassoli Kajani

    2013-01-01

We propose a pseudospectral method for solving the Thomas-Fermi equation, a nonlinear ordinary differential equation on a semi-infinite interval. The approach is based on the rational third-kind Chebyshev pseudospectral method, which is a combination of Tau and collocation methods. The method reduces the solution of this problem to the solution of a system of algebraic equations. Comparison with some numerical solutions shows that the present solution is highly accurate.

  13. Assessment of nuclear concentration uncertainties in isotope kinetics by the interval calculation method

    International Nuclear Information System (INIS)

    Kolesov, V.; Kamaev, D.; Hitrick, D.; Ukraitsev, V.

    2008-01-01

In principle, the dependence of fuel cycle characteristic uncertainties on group constant and decay parameter uncertainties can be assessed (to some extent) by sensitivity analysis. However, such a procedure is rather labor-consuming and does not give guaranteed estimates for the resulting parameters since, strictly speaking, it works only for small deviations, being based on a linearization of the mathematical problem. The suggested and implemented technique for estimating fuel cycle characteristic uncertainties is based on so-called interval analysis (interval calculations). The basic advantage of this technique is the opportunity to derive guaranteed estimates. In practical terms, the solution consists of introducing a new data type, interval data, into the codes and defining all arithmetic operations on it. Interval data types are already in practical use, and there are many implementations of interval arithmetic realized in different ways. (orig.)
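The data-type idea is compact enough to sketch: an interval type with the four arithmetic operations, giving guaranteed enclosures of the exact result (a production implementation would also round the bounds outward, which is omitted here; the sample values are hypothetical):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float
    def __add__(self, o):  return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):  return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval(min(p), max(p))
    def __truediv__(self, o):
        if o.lo <= 0.0 <= o.hi:
            raise ZeroDivisionError("divisor interval contains zero")
        return self * Interval(1.0 / o.hi, 1.0 / o.lo)

sigma = Interval(0.98, 1.02)      # e.g. a group constant with +/-2% uncertainty
n0 = Interval(9.9e20, 1.01e21)    # e.g. an initial nuclide concentration
print(n0 * sigma)                 # guaranteed enclosure of the product
```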

  14. The Multi-Attribute Group Decision-Making Method Based on Interval Grey Trapezoid Fuzzy Linguistic Variables

    Directory of Open Access Journals (Sweden)

    Kedong Yin

    2017-12-01

    Full Text Available With respect to multi-attribute group decision-making (MAGDM) problems, where attribute values take the form of interval grey trapezoid fuzzy linguistic variables (IGTFLVs) and the weights (including expert and attribute weights) are unknown, improved grey relational MAGDM methods are proposed. First, the concept of the IGTFLV, the operational rules, the distance between IGTFLVs, and the projection formula between two IGTFLV vectors are defined. Second, the expert weights are determined by using the maximum proximity method based on the projection values between the IGTFLV vectors. The attribute weights are determined by the maximum deviation method, and the priorities of alternatives are determined by improved grey relational analysis. Finally, an example is given to prove the effectiveness of the proposed method and the flexibility of the IGTFLV.

  15. Electrical power distribution control methods, electrical energy demand monitoring methods, and power management devices

    Science.gov (United States)

    Chassin, David P [Pasco, WA; Donnelly, Matthew K [Kennewick, WA; Dagle, Jeffery E [Richland, WA

    2011-12-06

    Electrical power distribution control methods, electrical energy demand monitoring methods, and power management devices are described. In one aspect, an electrical power distribution control method includes providing electrical energy from an electrical power distribution system, applying the electrical energy to a load, providing a plurality of different values for a threshold at a plurality of moments in time and corresponding to an electrical characteristic of the electrical energy, and adjusting an amount of the electrical energy applied to the load responsive to an electrical characteristic of the electrical energy triggering one of the values of the threshold at the respective moment in time.

  16. Approximation of the semi-infinite interval

    Directory of Open Access Journals (Sweden)

    A. McD. Mercer

    1980-01-01

    Full Text Available The approximation of a function f∈C[a,b] by Bernstein polynomials is well known. It is based on the binomial distribution. O. Szasz has shown that there are analogous approximations on the interval [0,∞) based on the Poisson distribution. Recently R. Mohapatra has generalized Szasz' result to the case in which the approximating function is $\alpha e^{-ux}\sum_{k=N}^{\infty}\frac{(ux)^{k\alpha+\beta-1}}{\Gamma(k\alpha+\beta)}\,f\!\left(\frac{k\alpha}{u}\right)$. The present note shows that these results are special cases of a Tauberian theorem for certain infinite series having positive coefficients.

  17. A method to measure depth distributions of implanted ions

    International Nuclear Information System (INIS)

    Arnesen, A.; Noreland, T.

    1977-04-01

    A new variant of the radiotracer method for depth distribution determinations has been tested. Depth distributions of radioactive implanted ions are determined by dissolving thin, uniform layers of evaporated material from the surface of a backing and by measuring the activity before and after the layer removal. The method has been used to determine depth distributions for 25 keV and 50 keV 57Co ions in aluminium and gold. (Auth.)

  18. Statistical variability and confidence intervals for planar dose QA pass rates

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, Daniel W.; Nelms, Benjamin E.; Attwood, Kristopher; Kumaraswamy, Lalith; Podgorsak, Matthew B. [Department of Physics, State University of New York at Buffalo, Buffalo, New York 14260 (United States) and Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Canis Lupus LLC, Merrimac, Wisconsin 53561 (United States); Department of Biostatistics, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Department of Molecular and Cellular Biophysics and Biochemistry, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States) and Department of Physiology and Biophysics, State University of New York at Buffalo, Buffalo, New York 14214 (United States)

    2011-11-15

    techniques. Results: For the prostate and head/neck cases studied, the pass rates obtained with gamma analysis of high density dose planes were 2%-5% higher than respective %/DTA composite analysis on average (ranging as high as 11%), depending on tolerances and normalization. Meanwhile, the pass rates obtained via local normalization were 2%-12% lower than with global maximum normalization on average (ranging as high as 27%), depending on tolerances and calculation method. Repositioning of simulated low-density sampled grids leads to a distribution of possible pass rates for each measured/calculated dose plane pair. These distributions can be predicted using a binomial distribution in order to establish confidence intervals that depend largely on the sampling density and the observed pass rate (i.e., the degree of difference between measured and calculated dose). These results can be extended to apply to 3D arrays of detectors, as well. Conclusions: Dose plane QA analysis can be greatly affected by choice of calculation metric and user-defined parameters, and so all pass rates should be reported with a complete description of calculation method. Pass rates for low-density arrays are subject to statistical uncertainty (vs. the high-density pass rate), but these sampling errors can be modeled using statistical confidence intervals derived from the sampled pass rate and detector density. Thus, pass rates for low-density array measurements should be accompanied by a confidence interval indicating the uncertainty of each pass rate.
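
    The statistical model described above treats each sampled detector as a Bernoulli trial, so a binomial confidence interval can accompany a low-density pass rate. A sketch using the Wilson score interval (the paper's exact interval construction may differ, and the counts below are hypothetical):

```python
import math

def wilson_interval(passed: int, n: int, z: float = 1.96):
    """Wilson score interval for a binomial proportion (95% by default)."""
    p = passed / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# hypothetical example: 874 of 920 sampled detectors pass a gamma criterion
lo, hi = wilson_interval(874, 920)
print(f"pass rate {874 / 920:.1%}, 95% CI [{lo:.1%}, {hi:.1%}]")
```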

  19. Comparison of estimation methods for fitting Weibull distribution to ...

    African Journals Online (AJOL)

    Comparison of estimation methods for fitting the Weibull distribution to the natural stand of Oluwa Forest Reserve, Ondo State, Nigeria. ... Journal of Research in Forestry, Wildlife and Environment ... The result revealed that the maximum likelihood method was more accurate in fitting the Weibull distribution to the natural stand.
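
    A sketch of the maximum likelihood fit the study found most accurate, using SciPy's built-in Weibull estimator; the diameter data below are synthetic stand-ins, since the reserve's actual measurements are not given in the record:

```python
import numpy as np
from scipy import stats

# synthetic stand-in for tree diameters (cm); assumed shape 2.3, scale 24 cm
rng = np.random.default_rng(1)
dbh = 24.0 * rng.weibull(2.3, 200)

# maximum likelihood fit of the two-parameter Weibull (location fixed at 0)
shape, loc, scale = stats.weibull_min.fit(dbh, floc=0)
print(f"shape = {shape:.2f}, scale = {scale:.1f} cm")
```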

  20. Harmonising Reference Intervals for Three Calculated Parameters used in Clinical Chemistry.

    Science.gov (United States)

    Hughes, David; Koerbin, Gus; Potter, Julia M; Glasgow, Nicholas; West, Nic; Abhayaratna, Walter P; Cavanaugh, Juleen; Armbruster, David; Hickman, Peter E

    2016-08-01

    For more than a decade there has been a global effort to harmonise all phases of the testing process, with particular emphasis on the most frequently utilised measurands. In addition, it is recognised that calculated parameters derived from these measurands should also be a target for harmonisation. Using data from the Aussie Normals study we report reference intervals for three calculated parameters: serum osmolality, serum anion gap and albumin-adjusted serum calcium. The Aussie Normals study was an a priori study that analysed samples from 1856 healthy volunteers. The nine analytes used for the calculations in this study were measured on Abbott Architect analysers. The data demonstrated normal (Gaussian) distributions for the albumin-adjusted serum calcium, the anion gap (using potassium in the calculation) and the calculated serum osmolality (using both the Bhagat et al. and Smithline and Gardner formulae). To assess the suitability of these reference intervals for use as harmonised reference intervals, we reviewed data from the Royal College of Pathologists of Australasia/Australasian Association of Clinical Biochemists (RCPA/AACB) bias survey. We conclude that the reference intervals for the calculated serum osmolality (using the Smithline and Gardner formulae) may be suitable for use as a common reference interval. Although a common reference interval for albumin-adjusted serum calcium may be possible, further investigations (including a greater range of albumin concentrations) are needed. This is due to the bias between the Bromocresol Green (BCG) and Bromocresol Purple (BCP) methods at lower serum albumin concentrations. Problems with the measurement of Total CO2 in the bias survey meant that we could not use the data for assessing the suitability of a common reference interval for the anion gap. Further study is required.
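
    For a Gaussian-distributed measurand such as those described above, a 95% reference interval is conventionally the central mean ± 1.96 SD range. A sketch on synthetic data; the albumin adjustment shown is the widely used Payne formula (Ca in mmol/L, albumin in g/L), which is an assumption and not necessarily the formula used in the study:

```python
import numpy as np

def reference_interval(values, z=1.96):
    """Central 95% reference interval for Gaussian-distributed results."""
    m, s = np.mean(values), np.std(values, ddof=1)
    return m - z * s, m + z * s

# hypothetical healthy-cohort measurements
rng = np.random.default_rng(0)
calcium = rng.normal(2.35, 0.08, 500)   # total calcium, mmol/L
albumin = rng.normal(42.0, 3.0, 500)    # albumin, g/L

# Payne adjustment (assumed formula, shown for illustration only)
adjusted = calcium + 0.02 * (40.0 - albumin)

lo, hi = reference_interval(adjusted)
print(f"adjusted calcium reference interval: {lo:.2f}-{hi:.2f} mmol/L")
```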

  1. Standard test method for distribution coefficients of inorganic species by the batch method

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This test method covers the determination of distribution coefficients of chemical species to quantify uptake onto solid materials by a batch sorption technique. It is a laboratory method primarily intended to assess sorption of dissolved ionic species subject to migration through pores and interstices of site specific geomedia. It may also be applied to other materials such as manufactured adsorption media and construction materials. Application of the results to long-term field behavior is not addressed in this method. Distribution coefficients for radionuclides in selected geomedia are commonly determined for the purpose of assessing potential migratory behavior of contaminants in the subsurface of contaminated sites and waste disposal facilities. This test method is also applicable to studies for parametric studies of the variables and mechanisms which contribute to the measured distribution coefficient. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement a...
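
    A batch distribution coefficient is conventionally computed from the initial and equilibrium solution concentrations together with the solution volume and sorbent mass. A sketch of that standard calculation (illustrative numbers only; the standard itself prescribes the exact procedure):

```python
def distribution_coefficient(c0, ce, volume_ml, mass_g):
    """Batch-sorption distribution coefficient
    Kd = ((C0 - Ce) / Ce) * (V / m), in mL/g."""
    return (c0 - ce) / ce * volume_ml / mass_g

# hypothetical batch test: initial and equilibrium concentrations (Bq/mL)
kd = distribution_coefficient(c0=100.0, ce=12.5, volume_ml=30.0, mass_g=1.5)
print(f"Kd = {kd:.0f} mL/g")   # 140 mL/g for these illustrative values
```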

  2. Correct Bayesian and frequentist intervals are similar

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1986-01-01

    This paper argues that Bayesians and frequentists will normally reach numerically similar conclusions when dealing with vague data or sparse data. It is shown that both statistical methodologies can deal reasonably with vague data. With sparse data, in many important practical cases Bayesian interval estimates and frequentist confidence intervals are approximately equal, although with discrete data the frequentist intervals are somewhat longer. This is not to say that the two methodologies are equally easy to use: the construction of a frequentist confidence interval may require new theoretical development, while Bayesian methods typically require numerical integration, perhaps over many variables. Also, Bayesians can easily fall into the trap of over-optimism about their amount of prior knowledge. But in cases where both intervals are found correctly, the two intervals are usually not very different. (orig.)
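
    The paper's point is easy to reproduce for sparse binomial data: an exact (Clopper-Pearson) confidence interval and a Jeffreys-prior credible interval nearly coincide, with the frequentist interval somewhat longer. A sketch on hypothetical data:

```python
from scipy import stats

k, n = 1, 30   # hypothetical sparse data: 1 failure in 30 demands

# frequentist: exact (Clopper-Pearson) 95% confidence interval via beta quantiles
cp_lo = stats.beta.ppf(0.025, k, n - k + 1)
cp_hi = stats.beta.ppf(0.975, k + 1, n - k)

# Bayesian: 95% credible interval under the Jeffreys prior Beta(1/2, 1/2)
j_lo, j_hi = stats.beta.ppf([0.025, 0.975], k + 0.5, n - k + 0.5)

print(f"Clopper-Pearson: [{cp_lo:.4f}, {cp_hi:.4f}]")
print(f"Jeffreys:        [{j_lo:.4f}, {j_hi:.4f}]")
# as the paper suggests: similar intervals, the frequentist one slightly longer
```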

  3. The intervals method: a new approach to analyse finite element outputs using multivariate statistics

    Directory of Open Access Journals (Sweden)

    Jordi Marcé-Nogué

    2017-10-01

    Full Text Available Background In this paper, we propose a new method, named the intervals’ method, to analyse data from finite element models in a comparative multivariate framework. As a case study, several armadillo mandibles are analysed, showing that the proposed method is useful to distinguish and characterise biomechanical differences related to diet/ecomorphology. Methods The intervals’ method consists of generating a set of variables, each one defined by an interval of stress values. Each variable is expressed as a percentage of the area of the mandible occupied by those stress values. Afterwards these newly generated variables can be analysed using multivariate methods. Results Applying this novel method to the biological case study of whether armadillo mandibles differ according to dietary groups, we show that the intervals’ method is a powerful tool to characterize biomechanical performance and how this relates to different diets. This allows us to positively discriminate between specialist and generalist species. Discussion We show that the proposed approach is a useful methodology not affected by the characteristics of the finite element mesh. Additionally, the positive discriminating results obtained when analysing a difficult case study suggest that the proposed method could be a very useful tool for comparative studies in finite element analysis using multivariate statistical approaches.

  4. The intervals method: a new approach to analyse finite element outputs using multivariate statistics

    Science.gov (United States)

    De Esteban-Trivigno, Soledad; Püschel, Thomas A.; Fortuny, Josep

    2017-01-01

    Background In this paper, we propose a new method, named the intervals’ method, to analyse data from finite element models in a comparative multivariate framework. As a case study, several armadillo mandibles are analysed, showing that the proposed method is useful to distinguish and characterise biomechanical differences related to diet/ecomorphology. Methods The intervals’ method consists of generating a set of variables, each one defined by an interval of stress values. Each variable is expressed as a percentage of the area of the mandible occupied by those stress values. Afterwards these newly generated variables can be analysed using multivariate methods. Results Applying this novel method to the biological case study of whether armadillo mandibles differ according to dietary groups, we show that the intervals’ method is a powerful tool to characterize biomechanical performance and how this relates to different diets. This allows us to positively discriminate between specialist and generalist species. Discussion We show that the proposed approach is a useful methodology not affected by the characteristics of the finite element mesh. Additionally, the positive discriminating results obtained when analysing a difficult case study suggest that the proposed method could be a very useful tool for comparative studies in finite element analysis using multivariate statistical approaches. PMID:29043107
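
    A sketch of the variable-generation step described in the two records above: per-element stress values and areas are binned into stress intervals, and each model becomes a row of area percentages ready for any multivariate method. Data, bin count, and upper bound are all assumptions:

```python
import numpy as np

def interval_variables(stress, area, n_intervals=10, upper=None):
    """Percentage of total area falling in each stress interval (one FE model)."""
    stress, area = np.asarray(stress), np.asarray(area)
    upper = upper if upper is not None else stress.max()
    edges = np.linspace(0.0, upper, n_intervals + 1)
    idx = np.clip(np.digitize(stress, edges) - 1, 0, n_intervals - 1)
    frac = np.bincount(idx, weights=area, minlength=n_intervals)
    return 100.0 * frac / area.sum()

# hypothetical von Mises stress and element areas for two mandible FE models
rng = np.random.default_rng(3)
models = [(rng.gamma(2.0, 1.5, 4000), rng.uniform(0.5, 1.5, 4000))
          for _ in range(2)]
X = np.vstack([interval_variables(s, a, upper=15.0) for s, a in models])
# rows of X are now observations for multivariate analysis (e.g., PCA)
print(X.round(1))
```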

  5. Interpregnancy intervals: impact of postpartum contraceptive effectiveness and coverage.

    Science.gov (United States)

    Thiel de Bocanegra, Heike; Chang, Richard; Howell, Mike; Darney, Philip

    2014-04-01

    The purpose of this study was to determine the use of contraceptive methods, which was defined by effectiveness, length of coverage, and their association with short interpregnancy intervals, when controlling for provider type and client demographics. We identified a cohort of 117,644 women from the 2008 California Birth Statistical Master file with second or higher order birth and at least 1 Medicaid (Family Planning, Access, Care, and Treatment [Family PACT] program or Medi-Cal) claim within 18 months after index birth. We explored the effect of contraceptive method provision on the odds of having an optimal interpregnancy interval and controlled for covariates. The average length of contraceptive coverage was 3.81 months (SD = 4.84). Most women received user-dependent hormonal contraceptives as their most effective contraceptive method (55%; n = 65,103 women) and one-third (33%; n = 39,090 women) had no contraceptive claim. Women who used long-acting reversible contraceptive methods had 3.89 times the odds and women who used user-dependent hormonal methods had 1.89 times the odds of achieving an optimal birth interval compared with women who used barrier methods only; women with no method had 0.66 times the odds. When user-dependent methods are considered, the odds of having an optimal birth interval increased for each additional month of contraceptive coverage by 8% (odds ratio, 1.08; 95% confidence interval, 1.08-1.09). Women who were seen by Family PACT or by both Family PACT and Medi-Cal providers had significantly higher odds of optimal birth intervals compared with women who were served by Medi-Cal only. To achieve optimal birth spacing and ultimately to improve birth outcomes, attention should be given to contraceptive counseling and access to contraceptive methods in the postpartum period. Copyright © 2014 Mosby, Inc. All rights reserved.

  6. Theoretical method for determining particle distribution functions of classical systems

    International Nuclear Information System (INIS)

    Johnson, E.

    1980-01-01

    An equation which involves the triplet distribution function and the three-particle direct correlation function is obtained. This equation was derived using an analogue of the Ornstein--Zernike equation. The new equation is used to develop a variational method for obtaining the triplet distribution function of uniform one-component atomic fluids from the pair distribution function. The variational method may be used with the first and second equations in the YBG hierarchy to obtain pair and triplet distribution functions. It should be easy to generalize the results to the n-particle distribution function

  7. Distribution functions of probabilistic automata

    Science.gov (United States)

    Vatan, F.

    2001-01-01

    Each probabilistic automaton M over an alphabet A defines a probability measure Prob_M on the set of all finite and infinite words over A. We can identify a k-letter alphabet A with the set {0, 1,..., k-1}, and, hence, we can consider every finite or infinite word w over A as a radix-k expansion of a real number X(w) in the interval [0, 1]. This makes X(w) a random variable, and the distribution function of M is defined as usual: F(x) := Prob_M{w: X(w) ≤ x}. We study the distribution functions of probabilistic automata in detail. Automata with continuous distribution functions are characterized. By a new and much easier method, it is shown that the distribution function F(x) is an analytic function if it is a polynomial. Finally, answering a question posed by D. Knuth and A. Yao, we show that a polynomial distribution function F(x) on [0, 1] can be generated by a probabilistic automaton iff all the roots of F'(x) = 0 in this interval, if any, are rational numbers. For this, we define two dynamical systems on the set of polynomial distributions and study attracting fixed points of random compositions of these two systems.

  8. Comparison of force fields and calculation methods for vibration intervals of isotopic H3+ molecules

    International Nuclear Information System (INIS)

    Carney, G.D.; Adler-Golden, S.M.; Lesseski, D.C.

    1986-01-01

    This paper reports (a) improved values for low-lying vibration intervals of H3+, H2D+, D2H+, and D3+ calculated using the variational method and Simons-Parr-Finlan representations of the Carney-Porter and Dykstra-Swope ab initio H3+ potential energy surfaces, (b) quartic normal coordinate force fields for isotopic H3+ molecules, (c) comparisons of variational and second-order perturbation theory, and (d) convergence properties of the Lai-Hagstrom internal coordinate vibrational Hamiltonian. Standard deviations between experimental and ab initio fundamental vibration intervals of H3+, H2D+, D2H+, and D3+ for these potential surfaces are 6.9 (Carney-Porter) and 1.2 cm⁻¹ (Dykstra-Swope). The standard deviations between perturbation theory and exact variational fundamentals are 5 and 10 cm⁻¹ for the respective surfaces. The internal coordinate Hamiltonian is found to be less efficient than the previously employed "t" coordinate Hamiltonian for these molecules, except in the case of H2D+

  9. Time-variant random interval natural frequency analysis of structures

    Science.gov (United States)

    Wu, Binhua; Wu, Di; Gao, Wei; Song, Chongmin

    2018-02-01

    This paper presents a new robust method, namely the unified interval Chebyshev-based random perturbation method, to tackle the hybrid random interval structural natural frequency problem. In the proposed approach, the random perturbation method is implemented to furnish the statistical features (i.e., mean and standard deviation), and a Chebyshev surrogate model strategy is incorporated to formulate the statistical information of the natural frequency with regard to the interval inputs. The comprehensive analysis framework combines the strengths of both methods in a way that dramatically reduces the computational cost. The presented method is thus capable of investigating the day-to-day time-variant natural frequency of structures accurately and efficiently under the intrinsic creep effect of concrete with probabilistic and interval uncertain variables. The extreme bounds of the mean and standard deviation of the natural frequency are captured through the optimization strategy embedded within the analysis procedure. Three numerical examples with a progressive relationship in terms of both structure type and uncertainty variables are presented to demonstrate the applicability, accuracy and efficiency of the proposed method.

  10. Time interval measurement between two emissions: a systematics

    International Nuclear Information System (INIS)

    Bizard, G.; Bougault, R.; Brou, R.; Colin, J.; Durand, D.; Genoux-Lubain, A.; Horn, D.; Kerambrun, A.; Laville, J.L.; Le Brun, C.; Lecolley, J.F.; Lopez, O.; Louvel, M.; Mahi, M.; Meslin, C.; Steckmeyer, J.C.; Tamain, B.; Wieloch, A.

    1998-01-01

    A systematic study of the evolution of fragment emission time intervals as a function of the energy deposited in the compound system was performed. Several measurements are presented: Ne at 60 MeV/u, Ar at 30 and 60 MeV/u, and two measurements for Kr at 60 MeV/u (central and semi-peripheral collisions). In all the experiments the target was Au and the mass of the compound system was around A = 200. The excitation energies per nucleon reached with these heavy systems cover the range of 3 to 5.5 MeV/u. The method used to determine the emission time intervals is based on the correlation functions associated with the relative angle distributions; the gaps between the data and simulations allow the emission times to be evaluated. A rapid decrease of these time intervals was observed as the excitation energy increased. This variation starts at 500 fm/c, which corresponds to a sequential emission. This relatively long time, which indicates a weak interaction between fragments, corresponds practically to the measurement threshold. The shortest intervals (about 50 fm/c) are associated with spontaneous multifragmentation and were observed for central Ar+Au and Kr+Au collisions at 60 MeV/u. Two interpretations are possible: the multifragmentation process might be viewed as a sequential process with very short time separations, or else one can separate two regimes, bearing in mind that multifragmentation is predominant from 4.5 MeV/u excitation energy upwards. This question is still open and its study is under way at LPC. An answer could come from the study of the rupture process of an excited nucleus, notably through the determination of its lifetime

  11. Three-Phase Harmonic Analysis Method for Unbalanced Distribution Systems

    Directory of Open Access Journals (Sweden)

    Jen-Hao Teng

    2014-01-01

    Full Text Available Due to the unbalanced features of distribution systems, a three-phase harmonic analysis method is essential to accurately analyze the harmonic impact on distribution systems. Moreover, harmonic analysis is the basic tool for harmonic filter design and harmonic resonance mitigation; therefore, the computational performance should also be efficient. An accurate and efficient three-phase harmonic analysis method for unbalanced distribution systems is proposed in this paper. The variations of bus voltages, bus current injections and branch currents affected by harmonic current injections can be analyzed by two relationship matrices developed from the topological characteristics of distribution systems. Some useful formulas are then derived to solve the three-phase harmonic propagation problem. After the harmonic propagation for each harmonic order is calculated, the total harmonic distortion (THD) for bus voltages can be calculated accordingly. The proposed method has better computational performance, since the time-consuming full admittance matrix inverse employed by the commonly-used harmonic analysis methods is not necessary in the solution procedure. In addition, the proposed method can provide novel viewpoints in calculating the branch currents and bus voltages under harmonic pollution which are vital for harmonic filter design. Test results demonstrate the effectiveness and efficiency of the proposed method.
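
    Once the harmonic propagation has been solved for each order, the THD of a bus voltage follows directly from the harmonic magnitudes. A sketch of that final step (the per-unit values are hypothetical):

```python
import numpy as np

def thd_percent(v_harmonics):
    """Total harmonic distortion of a bus voltage, given RMS magnitudes
    [V1, V2, ..., Vh] where V1 is the fundamental."""
    v = np.asarray(v_harmonics, dtype=float)
    return 100.0 * np.sqrt(np.sum(v[1:] ** 2)) / v[0]

# hypothetical harmonic voltages (p.u.) from a harmonic power flow solution
print(f"THD = {thd_percent([1.0, 0.031, 0.022, 0.011]):.2f}%")
```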

  12. Interval Cancers in a Population-Based Screening Program for Colorectal Cancer in Catalonia, Spain

    Directory of Open Access Journals (Sweden)

    M. Garcia

    2015-01-01

    Full Text Available Objective. To analyze interval cancers among participants in a screening program for colorectal cancer (CRC) during four screening rounds. Methods. The study population consisted of participants of a fecal occult blood test-based screening program from February 2000 to September 2010, with a 30-month follow-up (n = 30,480). We used hospital administration data to identify CRC. An interval cancer was defined as an invasive cancer diagnosed within 30 months of a negative screening result and before the next recommended examination. Gender, age, stage, and site distribution of interval cancers were compared with those in the screen-detected group. Results. Within the study period, 97 tumors were screen-detected and 74 tumors were diagnosed after a negative screening. In addition, 17 CRC (18.3%) were found after an inconclusive result and 2 cases were diagnosed within the surveillance interval (2.1%). There was an increase of interval cancers over the four rounds (from 32.4% to 46.0%). When compared with screen-detected cancers, interval cancers were found predominantly in the rectum (OR: 3.66; 95% CI: 1.51–8.88) and at more advanced stages (P = 0.025). Conclusion. There are large numbers of cancers that are not detected through fecal occult blood test-based screening. The low sensitivity should be emphasized to ensure that individuals with symptoms are not falsely reassured.

  13. A new view to uncertainty in Electre III method by introducing interval numbers

    Directory of Open Access Journals (Sweden)

    Mohammad Kazem Sayyadi

    2012-07-01

    Full Text Available The Electre III is a widely accepted multi-attribute decision making model which takes uncertainty and vagueness into account. The concept of uncertainty in Electre III is introduced through indifference, preference and veto thresholds, but determining their accurate values can sometimes be very hard. In this paper we represent the values of the performance matrix as interval numbers and define the links between interval numbers and the concordance matrix. Without changing the concept of concordance, in our proposed approach Electre III becomes usable in decision making problems with interval numbers.

  14. A parallel optimization method for product configuration and supplier selection based on interval

    Science.gov (United States)

    Zheng, Jian; Zhang, Meng; Li, Guoxi

    2017-06-01

    In the process of design and manufacturing, product configuration is an important way of product development, and supplier selection is an essential component of supply chain management. To reduce the risk of procurement and maximize the profits of enterprises, this study proposes to combine the product configuration and supplier selection, and express the multiple uncertainties as interval numbers. An integrated optimization model of interval product configuration and supplier selection was established, and NSGA-II was put forward to locate the Pareto-optimal solutions to the interval multiobjective optimization model.

  15. An extension of compromise ranking method with interval numbers for the evaluation of renewable energy sources

    Directory of Open Access Journals (Sweden)

    M. Mousavi

    2014-06-01

    Full Text Available Evaluating and prioritizing appropriate renewable energy sources is inevitably a complex decision process in which various information and conflicting attributes should be taken into account. For this purpose, multi-attribute decision making (MADM) methods can assist managers or decision makers in formulating renewable energy source priorities by considering the important objectives and attributes. In this paper, a new extension of the compromise ranking method with interval numbers, based on the performance similarity of alternatives to ideal solutions, is presented for the prioritization of renewable energy sources. To demonstrate the applicability of the proposed decision method, an application example is provided and the computational results are analyzed. Results illustrate that the presented method is viable in solving the evaluation and prioritization problem of renewable energy sources.

  16. The Emergent Capabilities of Distributed Satellites and Methods for Selecting Distributed Satellite Science Missions

    Science.gov (United States)

    Corbin, B. A.; Seager, S.; Ross, A.; Hoffman, J.

    2017-12-01

    Distributed satellite systems (DSS) have emerged as an effective and cheap way to conduct space science, thanks to advances in the small satellite industry. However, relatively few space science missions have utilized multiple assets to achieve their primary scientific goals. Previous research on methods for evaluating mission concept designs has shown that distributed systems are rarely competitive with monolithic systems, partially because it is difficult to quantify the added value of DSSs over monolithic systems. Comparatively little research has focused on how DSSs can be used to achieve new, fundamental space science goals that cannot be achieved with monolithic systems, or on how to choose a design from a larger tradespace of options. There are seven emergent capabilities of distributed satellites: shared sampling, simultaneous sampling, self-sampling, census sampling, stacked sampling, staged sampling, and sacrifice sampling. These capabilities are either fundamentally, analytically, or operationally unique in their application to distributed science missions, and they can be leveraged to achieve science goals that are either impossible or difficult and costly to achieve with monolithic systems. The Responsive Systems Comparison (RSC) method combines Multi-Attribute Tradespace Exploration with Epoch-Era Analysis to examine benefits, costs, and flexible options in complex systems over the mission lifecycle. Modifications to the RSC method as it exists in previously published literature were made in order to more accurately characterize how value is derived from space science missions. New metrics help rank designs by the value derived over their entire mission lifecycle and show more accurate cumulative value distributions. The RSC method was applied to four case study science missions that leveraged the emergent capabilities of distributed satellites to achieve their primary science goals. In all four case studies, RSC showed how scientific value was

  17. Semi-Markov models for interval censored transient cognitive states with back transitions and a competing risk.

    Science.gov (United States)

    Wei, Shaoceng; Kryscio, Richard J

    2016-12-01

    Continuous-time multi-state stochastic processes are useful for modeling the flow of subjects from intact cognition to dementia with mild cognitive impairment and global impairment as intervening transient cognitive states and death as a competing risk. Each subject's cognition is assessed periodically resulting in interval censoring for the cognitive states while death without dementia is not interval censored. Since back transitions among the transient states are possible, Markov chains are often applied to this type of panel data. In this manuscript, we apply a semi-Markov process in which we assume that the waiting times are Weibull distributed except for transitions from the baseline state, which are exponentially distributed and in which we assume no additional changes in cognition occur between two assessments. We implement a quasi-Monte Carlo (QMC) method to calculate the higher order integration needed for likelihood estimation. We apply our model to a real dataset, the Nun Study, a cohort of 461 participants. © The Author(s) 2014.

  18. Delay-Dependent Guaranteed Cost Control of an Interval System with Interval Time-Varying Delay

    Directory of Open Access Journals (Sweden)

    Xiao Min

    2009-01-01

    Full Text Available This paper concerns the problem of delay-dependent robust stability and guaranteed cost control for an interval system with time-varying delay. The interval system with matrix factorization is provided and leads to less conservative conclusions than solving a square root. The time-varying delay is assumed to belong to an interval, and the derivative of the interval time-varying delay is not restricted, which allows a fast time-varying delay and broad applicability. Based on the Lyapunov-Krasovskii approach, a delay-dependent criterion for the existence of a state feedback controller, which guarantees the closed-loop system stability, an upper bound on the cost function, and a disturbance attenuation level for all admissible uncertainties as well as external perturbations, is proposed in terms of linear matrix inequalities (LMIs). The criterion is derived using free weighting matrices that can reduce the conservatism. The effectiveness has been verified in a numerical example, and the computed results are presented to validate the proposed design method.

  19. Interval Forecast for Smooth Transition Autoregressive Model ...

    African Journals Online (AJOL)

    In this paper, we propose a simple method for constructing interval forecasts for smooth transition autoregressive (STAR) models. The interval forecast is based on bootstrapping the residual error of the estimated STAR model for each forecast horizon and computing various Akaike information criterion (AIC) functions. This new ...
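
    A sketch of the residual-bootstrap idea, using a plain AR(1) as a stand-in for the estimated STAR model (model, data, and interval level are all assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(7)
y = np.zeros(200)
for t in range(1, 200):                  # simulate an AR(1) series
    y[t] = 0.6 * y[t - 1] + rng.normal()

phi = np.polyfit(y[:-1], y[1:], 1)[0]    # crude AR(1) slope estimate
resid = y[1:] - phi * y[:-1]             # model residuals to resample

H, B = 5, 2000                           # forecast horizons, bootstrap paths
paths = np.empty((B, H))
for b in range(B):
    level = y[-1]
    for h in range(H):                   # propagate resampled residuals forward
        level = phi * level + rng.choice(resid)
        paths[b, h] = level

lo, hi = np.percentile(paths, [2.5, 97.5], axis=0)
print("95% interval forecasts per horizon:",
      list(zip(lo.round(2), hi.round(2))))
```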

  20. A Comparative Study of Distribution System Parameter Estimation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup

    2016-07-17

    In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems. Therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.

  1. Analytical method for optimization of maintenance policy based on available system failure data

    International Nuclear Information System (INIS)

    Coria, V.H.; Maximov, S.; Rivas-Dávalos, F.; Melchor, C.L.; Guardado, J.L.

    2015-01-01

    An analytical optimization method for a preventive maintenance (PM) policy with minimal repair at failure, periodic maintenance, and replacement is proposed for systems with historical failure time data influenced by a current PM policy. The method includes a new imperfect PM model based on the Weibull distribution and incorporates the current maintenance interval T0 and the optimal maintenance interval T to be found. The Weibull parameters are analytically estimated using maximum likelihood estimation. Based on this model, the optimal number of PM actions and the optimal maintenance interval for minimizing the expected cost over an infinite time horizon are also analytically determined. A number of examples are presented involving different failure time data and current maintenance intervals to analyze how the proposed analytical optimization method for the periodic PM policy performs in response to changes in the distribution of the failure data and the current maintenance interval. - Highlights: • An analytical optimization method for preventive maintenance (PM) policy is proposed. • A new imperfect PM model is developed. • The Weibull parameters are analytically estimated using maximum likelihood. • The optimal maintenance interval and number of PM are also analytically determined. • The model is validated by several numerical examples
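
    The flavour of the closed-form optimization can be seen in the classical periodic-replacement-with-minimal-repair result under a Weibull failure intensity — a simpler relative of the paper's imperfect-PM model, shown here with hypothetical parameter values:

```python
def optimal_replacement_interval(beta, eta, c_replace, c_repair):
    """Barlow-Hunter optimum for periodic replacement with minimal repair:
    minimizing C(T) = (c_replace + c_repair * (T/eta)**beta) / T gives
    T* = eta * (c_replace / (c_repair * (beta - 1)))**(1/beta), beta > 1."""
    assert beta > 1.0, "wear-out (beta > 1) is needed for a finite optimum"
    return eta * (c_replace / (c_repair * (beta - 1.0))) ** (1.0 / beta)

# hypothetical Weibull shape/scale and cost ratio
T_star = optimal_replacement_interval(beta=2.5, eta=1000.0,
                                      c_replace=500.0, c_repair=200.0)
print(f"optimal interval: {T_star:.0f} h")   # about 1227 h for these values
```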

  2. Comparing four methods to estimate usual intake distributions

    NARCIS (Netherlands)

    Souverein, O.W.; Dekkers, A.L.; Geelen, A.; Haubrock, J.; Vries, de J.H.M.; Ocke, M.C.; Harttig, U.; Boeing, H.; Veer, van 't P.

    2011-01-01

    Background/Objectives: The aim of this paper was to compare methods to estimate usual intake distributions of nutrients and foods. As ‘true’ usual intake distributions are not known in practice, the comparison was carried out through a simulation study, as well as empirically, by application to data

  3. Applications of interval computations

    CERN Document Server

    Kreinovich, Vladik

    1996-01-01

    Primary Audience for the Book • Specialists in numerical computations who are interested in algorithms with automatic result verification. • Engineers, scientists, and practitioners who desire results with automatic verification and who would therefore benefit from the experience of successful applications. • Students in applied mathematics and computer science who want to learn these methods. Goal of the Book This book contains surveys of applications of interval computations, i.e., applications of numerical methods with automatic result verification, that were presented at an international workshop on the subject in El Paso, Texas, February 23-25, 1995. The purpose of this book is to disseminate detailed and surveyed information about existing and potential applications of this new growing field. Brief Description of the Papers At the most fundamental level, interval arithmetic operations work with sets: The result of a single arithmetic operation is the set of all possible results as the o...

  4. Methods and Tools for Profiling and Control of Distributed Systems

    Directory of Open Access Journals (Sweden)

    Sukharev Roman

    2017-01-01

    Full Text Available The article analyzes and standardizes methods for profiling distributed systems that focus on simulation to conduct experiments and build a graph model of the system. The theory of queueing networks is used for simulation modeling of distributed systems that receive and process user requests. To automate the above method of profiling distributed systems, a software application with a modular structure, similar to a SCADA system, was developed.

  5. On Some Nonclassical Algebraic Properties of Interval-Valued Fuzzy Soft Sets

    Science.gov (United States)

    2014-01-01

    Interval-valued fuzzy soft sets realize a hybrid soft computing model in a general framework. Both Molodtsov's soft sets and interval-valued fuzzy sets can be seen as special cases of interval-valued fuzzy soft sets. In this study, we first compare four different types of interval-valued fuzzy soft subsets and reveal the relations among them. Then we concentrate on investigating some nonclassical algebraic properties of interval-valued fuzzy soft sets under the soft product operations. We show that some fundamental algebraic properties including the commutative and associative laws do not hold in the conventional sense, but hold in weaker forms characterized in terms of the relation =L. We obtain a number of algebraic inequalities of interval-valued fuzzy soft sets characterized by interval-valued fuzzy soft inclusions. We also establish the weak idempotent law and the weak absorptive law of interval-valued fuzzy soft sets using interval-valued fuzzy soft J-equal relations. It is revealed that the soft product operations ∧ and ∨ of interval-valued fuzzy soft sets do not always have similar algebraic properties. Moreover, we find that only distributive inequalities described by the interval-valued fuzzy soft L-inclusions hold for interval-valued fuzzy soft sets. PMID:25143964

  6. On Some Nonclassical Algebraic Properties of Interval-Valued Fuzzy Soft Sets

    Directory of Open Access Journals (Sweden)

    Xiaoyan Liu

    2014-01-01

    Full Text Available Interval-valued fuzzy soft sets realize a hybrid soft computing model in a general framework. Both Molodtsov’s soft sets and interval-valued fuzzy sets can be seen as special cases of interval-valued fuzzy soft sets. In this study, we first compare four different types of interval-valued fuzzy soft subsets and reveal the relations among them. Then we concentrate on investigating some nonclassical algebraic properties of interval-valued fuzzy soft sets under the soft product operations. We show that some fundamental algebraic properties including the commutative and associative laws do not hold in the conventional sense, but hold in weaker forms characterized in terms of the relation =L. We obtain a number of algebraic inequalities of interval-valued fuzzy soft sets characterized by interval-valued fuzzy soft inclusions. We also establish the weak idempotent law and the weak absorptive law of interval-valued fuzzy soft sets using interval-valued fuzzy soft J-equal relations. It is revealed that the soft product operations ∧ and ∨ of interval-valued fuzzy soft sets do not always have similar algebraic properties. Moreover, we find that only distributive inequalities described by the interval-valued fuzzy soft L-inclusions hold for interval-valued fuzzy soft sets.

  7. On some nonclassical algebraic properties of interval-valued fuzzy soft sets.

    Science.gov (United States)

    Liu, Xiaoyan; Feng, Feng; Zhang, Hui

    2014-01-01

    Interval-valued fuzzy soft sets realize a hybrid soft computing model in a general framework. Both Molodtsov's soft sets and interval-valued fuzzy sets can be seen as special cases of interval-valued fuzzy soft sets. In this study, we first compare four different types of interval-valued fuzzy soft subsets and reveal the relations among them. Then we concentrate on investigating some nonclassical algebraic properties of interval-valued fuzzy soft sets under the soft product operations. We show that some fundamental algebraic properties including the commutative and associative laws do not hold in the conventional sense, but hold in weaker forms characterized in terms of the relation =L. We obtain a number of algebraic inequalities of interval-valued fuzzy soft sets characterized by interval-valued fuzzy soft inclusions. We also establish the weak idempotent law and the weak absorptive law of interval-valued fuzzy soft sets using interval-valued fuzzy soft J-equal relations. It is revealed that the soft product operations ∧ and ∨ of interval-valued fuzzy soft sets do not always have similar algebraic properties. Moreover, we find that only distributive inequalities described by the interval-valued fuzzy soft L-inclusions hold for interval-valued fuzzy soft sets.

  8. Probability intervals for the top event unavailability of fault trees

    International Nuclear Information System (INIS)

    Lee, Y.T.; Apostolakis, G.E.

    1976-06-01

    The evaluation of probabilities of rare events is of major importance in the quantitative assessment of the risk from large technological systems. In particular, for nuclear power plants the complexity of the systems, their high reliability and the lack of significant statistical records have led to the extensive use of logic diagrams in the estimation of low probabilities. The estimation of probability intervals for the probability of existence of the top event of a fault tree is examined. Given the uncertainties of the primary input data, a method is described for the evaluation of the first four moments of the top event occurrence probability. These moments are then used to estimate confidence bounds by several approaches which are based on standard inequalities (e.g., Tchebycheff, Cantelli, etc.) or on empirical distributions (the Johnson family). Several examples indicate that the Johnson family of distributions yields results which are in good agreement with those produced by Monte Carlo simulation
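
    A sketch of the moment-based approach on a toy fault tree: Monte Carlo samples of lognormally uncertain basic-event probabilities give the first moments of the top-event probability, from which a distribution-free Cantelli bound follows. The tree structure and all numbers are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 100_000

def lognormal(median, error_factor, size):
    """Lognormal samples parameterized by median and error factor,
    where EF = (95th percentile) / median, so sigma = ln(EF) / 1.645."""
    sigma = np.log(error_factor) / 1.645
    return median * np.exp(rng.normal(0.0, sigma, size))

# hypothetical fault tree: TOP = A OR (B AND C)
pA = lognormal(1e-4, 3.0, n)
pB = lognormal(2e-3, 5.0, n)
pC = lognormal(5e-3, 3.0, n)
top = 1 - (1 - pA) * (1 - pB * pC)

moments = [np.mean(top**r) for r in (1, 2, 3, 4)]   # first four raw moments
mu = moments[0]
sigma = np.sqrt(moments[1] - mu**2)

# Cantelli's one-sided inequality: P(X >= mu + k*sigma) <= 1 / (1 + k^2)
k = 3.0
print(f"mean = {mu:.3e}, sd = {sigma:.3e}, "
      f"P(X >= mu + 3*sd) <= {1 / (1 + k**2):.3f}")
print("empirical 95% interval:", np.percentile(top, [2.5, 97.5]))
```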

  9. Rigorous Verification for the Solution of Nonlinear Interval System ...

    African Journals Online (AJOL)

    We survey a general method for solving nonlinear interval systems of equations. In particular, we pay special attention to the computational aspects of linear interval systems, since the bulk of the computation is done at the stage of computing outer estimates of the enclosing linear interval systems. The height of our ...

  10. Pinning Synchronization for Complex Networks with Interval Coupling Delay by Variable Subintervals Method and Finsler’s Lemma

    Directory of Open Access Journals (Sweden)

    Dawei Gong

    2017-01-01

    Full Text Available The pinning synchronization problem for complex networks with interval delays is studied in this paper. First, by using an inequality derived from the Newton-Leibniz formula, a new synchronization criterion is obtained. Second, combining Finsler's Lemma with homogeneous matrices, convergent linear matrix inequality (LMI) relaxations for synchronization analysis are proposed with matrix-valued coefficients. Third, a new variable subintervals method is applied to extend the obtained results. Different from previous results, the interval delays are divided into subdelays, which introduces more free weighting matrices. Fourth, the results are expressed as LMIs, which can be easily analyzed or tested. Finally, the stability of the networks is proved via Lyapunov's stability theorem, and the simulation of the trajectory demonstrates the practicality of the proposed pinning control.

  11. Integrated Power Flow and Short Circuit Calculation Method for Distribution Network with Inverter Based Distributed Generation

    OpenAIRE

    Yang, Shan; Tong, Xiangqian

    2016-01-01

    Power flow calculation and short circuit calculation are the basis of theoretical research on distribution networks with inverter-based distributed generation. The similarity of the equivalent model for inverter-based distributed generation during normal and fault conditions of the distribution network, and the differences between power flow and short circuit calculation, are analyzed in this paper. Then an integrated power flow and short circuit calculation method for distribution network with inverte...

  12. Graphing within-subjects confidence intervals using SPSS and S-Plus.

    Science.gov (United States)

    Wright, Daniel B

    2007-02-01

    Within-subjects confidence intervals are often appropriate to report and to display. Loftus and Masson (1994) have reported methods to calculate these, and their use is becoming common. In the present article, procedures for calculating within-subjects confidence intervals in SPSS and S-Plus are presented (an R version is on the accompanying Web site). The procedure in S-Plus allows the user to report the bias corrected and adjusted bootstrap confidence intervals as well as the standard confidence intervals based on traditional methods. The presented code can be easily altered to fit the individual user's needs.
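
    The SPSS and S-Plus code itself is not reproduced in the record; as an illustration, here is one common way to compute within-subjects confidence intervals in Python — the Cousineau-Morey variant of the Loftus-Masson approach, on synthetic data:

```python
import numpy as np
from scipy import stats

def within_subject_ci(data, conf=0.95):
    """Cousineau-Morey within-subjects CIs; data is subjects x conditions."""
    n, c = data.shape
    # remove between-subject variability, keep the grand mean
    norm = data - data.mean(axis=1, keepdims=True) + data.mean()
    sem = norm.std(axis=0, ddof=1) / np.sqrt(n)
    sem *= np.sqrt(c / (c - 1))                    # Morey (2008) correction
    half = stats.t.ppf(0.5 + conf / 2, n - 1) * sem
    means = data.mean(axis=0)
    return means - half, means + half

# hypothetical 20 subjects x 3 conditions with large between-subject spread
rng = np.random.default_rng(5)
scores = rng.normal([500, 520, 560], 40, (20, 3)) + rng.normal(0, 80, (20, 1))
print(within_subject_ci(scores))
```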

  13. Multi-level methods and approximating distribution functions

    International Nuclear Information System (INIS)

    Wilson, D.; Baker, R. E.

    2016-01-01

    Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie’s direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie’s direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.

  14. Multi-level methods and approximating distribution functions

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, D., E-mail: daniel.wilson@dtc.ox.ac.uk; Baker, R. E. [Mathematical Institute, University of Oxford, Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG (United Kingdom)

    2016-07-15

    Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie’s direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie’s direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.

  15. A scoping review of the psychological responses to interval exercise: is interval exercise a viable alternative to traditional exercise?

    Science.gov (United States)

    Stork, Matthew J; Banfield, Laura E; Gibala, Martin J; Martin Ginis, Kathleen A

    2017-12-01

    While considerable evidence suggests that interval exercise confers numerous physiological adaptations linked to improved health, its psychological consequences and behavioural implications are less clear and the subject of intense debate. The purpose of this scoping review was to catalogue studies investigating the psychological responses to interval exercise in order to identify what psychological outcomes have been assessed, the research methods used, and the results. A secondary objective was to identify research issues and gaps. Forty-two published articles met the review inclusion/exclusion criteria. These studies involved 1258 participants drawn from various active/inactive and healthy/unhealthy populations, and 55 interval exercise protocols (69% high-intensity interval training [HIIT], 27% sprint interval training [SIT], and 4% body-weight interval training [BWIT]). Affect and enjoyment were the most frequently studied psychological outcomes. Post-exercise assessments indicate that overall, enjoyment of, and preferences for interval exercise are equal or greater than for continuous exercise, and participants can hold relatively positive social cognitions regarding interval exercise. Although several methodological issues (e.g., inconsistent use of terminology, measures and protocols) and gaps (e.g., data on adherence and real-world protocols) require attention, from a psychological perspective, the emerging data support the viability of interval exercise as an alternative to continuous exercise.

  16. Delay-Dependent Guaranteed Cost H∞ Control of an Interval System with Interval Time-Varying Delay

    Directory of Open Access Journals (Sweden)

    Zhongke Shi

    2009-01-01

    Full Text Available This paper concerns the problem of delay-dependent robust stability and guaranteed cost H∞ control for an interval system with time-varying delay. The interval system with matrix factorization is provided and leads to less conservative conclusions than solving a square root. The time-varying delay is assumed to belong to an interval, and the derivative of the interval time-varying delay is not restricted, which allows a fast time-varying delay and broad applicability. Based on the Lyapunov-Krasovskii approach, a delay-dependent criterion for the existence of a state feedback controller, which guarantees the closed-loop system stability, an upper bound on the cost function, and a disturbance attenuation level for all admissible uncertainties as well as external perturbations, is proposed in terms of linear matrix inequalities (LMIs). The criterion is derived using free weighting matrices that can reduce the conservatism. The effectiveness has been verified in a numerical example, and the computed results are presented to validate the proposed design method.

  17. PROGRAMMING OF METHODS FOR THE NEEDS OF LOGISTICS DISTRIBUTION SOLVING PROBLEMS

    Directory of Open Access Journals (Sweden)

    Andrea Štangová

    2014-06-01

    Full Text Available Logistics has become one of the dominant factors affecting successful management, competitiveness and the mentality of the global economy. Distribution logistics realizes the connection between production and the consumer market. It uses various methodologies and methods of multicriteria evaluation and allocation. This thesis addresses the problem of the costs of securing product distribution. It was therefore relevant to design a software product that would be helpful in solving problems related to distribution logistics. Elodis, an electronic distribution logistics program, was designed on the basis of a theoretical analysis of distribution logistics and an analysis of the software products market. The program uses multicriteria evaluation methods to determine the appropriate type, and mathematical and geometrical methods to determine an appropriate location, of the distribution center or warehouse for a company.

  18. DEVELOPMENT MANAGEMENT TRANSFER PRICING BY APPLICATION OF THE INTERVAL ESTIMATES

    Directory of Open Access Journals (Sweden)

    Elena B. Shuvalova

    2013-01-01

    Full Text Available The article discusses the application of interval estimation to assessing the conformity of a transaction price with the market price. A comparative analysis of interval and point estimates is given, and the positive and negative effects of using interval estimation are identified.

  19. Identification of atrial fibrillation using electrocardiographic RR-interval difference

    Science.gov (United States)

    Eliana, M.; Nuryani, N.

    2017-11-01

    Automated detection of atrial fibrillation (AF) is an interesting topic. AF is dangerous not only as a trigger of embolic stroke, but also through its relation to several chronic diseases. In this study, we analyse the presence of AF by determining irregularities in the RR interval. We utilize interval comparison to measure the degree of irregularity of the RR intervals in a defined segment. The series of RR intervals is segmented with a length of 10 intervals per segment. In this study, interval comparison is used as the method: all intervals in a segment are compared with each other, and a threshold is set to define low and high differences (δ). A segment is defined as AF or normal sinus rhythm by the number of high δ, so a tolerance (β) on the number of high δ is applied. We have used this method to test data from 23 patients from the MIT-BIH database. Using this approach and the clinical data we find accuracy, sensitivity, and specificity of 84.98%, 91.99%, and 77.85%, respectively.
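
    A sketch of the segment classification just described: all pairwise RR-interval differences in a 10-beat segment are compared against a difference threshold δ, and the segment is flagged as AF when the fraction of high differences exceeds a tolerance β. Both threshold values here are assumed, not the study's tuned values:

```python
import numpy as np

def is_af_segment(rr, delta=0.08, beta=0.6):
    """Flag a segment of RR intervals (seconds) as AF when the fraction of
    pairwise differences exceeding delta is above the tolerance beta."""
    rr = np.asarray(rr)
    diffs = np.abs(rr[:, None] - rr[None, :])
    pairs = diffs[np.triu_indices(len(rr), k=1)]   # each pair counted once
    return np.mean(pairs > delta) > beta

nsr = [0.80, 0.82, 0.81, 0.79, 0.80, 0.83, 0.81, 0.80, 0.82, 0.81]
af  = [0.62, 0.95, 0.71, 1.10, 0.55, 0.88, 0.67, 1.02, 0.74, 0.91]
print(is_af_segment(nsr), is_af_segment(af))   # expect: False True
```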

  20. Health-Care Waste Treatment Technology Selection Using the Interval 2-Tuple Induced TOPSIS Method

    Directory of Open Access Journals (Sweden)

    Chao Lu

    2016-06-01

    Full Text Available Health-care waste (HCW management is a major challenge for municipalities, particularly in the cities of developing nations. Selecting the best treatment technology for HCW can be regarded as a complex multi-criteria decision making (MCDM issue involving a number of alternatives and multiple evaluation criteria. In addition, decision makers tend to express their personal assessments via multi-granularity linguistic term sets because of different backgrounds and knowledge, some of which may be imprecise, uncertain and incomplete. Therefore, the main objective of this study is to propose a new hybrid decision making approach combining interval 2-tuple induced distance operators with the technique for order preference by similarity to an ideal solution (TOPSIS for tackling HCW treatment technology selection problems with linguistic information. The proposed interval 2-tuple induced TOPSIS (ITI-TOPSIS can not only model the uncertainty and diversity of the assessment information given by decision makers, but also reflect the complex attitudinal characters of decision makers and provide much more complete information for the selection of the optimum disposal alternative. Finally, an empirical example in Shanghai, China is provided to illustrate the proposed decision making method, and results show that the ITI-TOPSIS proposed in this paper can solve the problem of HCW treatment technology selection effectively.

  1. Multipath interference test method for distributed amplifiers

    Science.gov (United States)

    Okada, Takahiro; Aida, Kazuo

    2005-12-01

    A method for testing distributed amplifiers is presented; the multipath interference (MPI) is detected as a beat spectrum between the multipath signal and the direct signal using a binary frequency shifted keying (FSK) test signal. The lightwave source is composed of a DFB-LD that is directly modulated by a pulse stream passing through an equalizer, and emits an FSK signal with a frequency deviation of about 430 MHz at a repetition rate of 80-100 kHz. The receiver consists of a photo-diode and an electrical spectrum analyzer (ESA). The base-band power spectrum peak appearing at the FSK frequency deviation can be converted to an amount of MPI using a calibration chart. The test method has improved the minimum detectable MPI to as low as -70 dB, compared with -50 dB for the conventional test method. The detailed design and performance of the proposed method are discussed, including the MPI simulator for the calibration procedure, computer simulations for evaluating the error caused by the FSK repetition rate and the fiber length under test, and experiments on single-mode fibers and a distributed Raman amplifier.

  2. Parametric change point estimation, testing and confidence interval ...

    African Journals Online (AJOL)

    In many applications like finance, industry and medicine, it is important to consider that the model parameters may undergo changes at unknown moment in time. This paper deals with estimation, testing and confidence interval of a change point for a univariate variable which is assumed to be normally distributed. To detect ...

  3. A Method of Visualizing Three-Dimensional Distribution of Yeast in Bread Dough

    Science.gov (United States)

    Maeda, Tatsurou; Do, Gab-Soo; Sugiyama, Junichi; Oguchi, Kosei; Shiraga, Seizaburou; Ueda, Mitsuyoshi; Takeya, Koji; Endo, Shigeru

    A novel technique was developed to monitor the change in the three-dimensional (3D) distribution of yeast in frozen bread dough samples in accordance with the progress of the mixing process. Application of a surface engineering technology allowed the identification of yeast in bread dough by bonding EGFP (Enhanced Green Fluorescent Protein) to the surface of yeast cells. The fluorescent yeast (a biomarker) was recognized as bright spots at the wavelength of 520 nm. A Micro-Slicer Image Processing System (MSIPS) with a fluorescence microscope was utilized to acquire cross-sectional images of frozen dough samples sliced at intervals of 1 μm. Sets of successive two-dimensional images were reconstructed to analyze the 3D distribution of yeast. Samples were taken from each of four normal mixing stages (i.e., pick-up, clean-up, development, and final stages) and also from the over-mixing stage. In the pick-up stage, the yeast distribution was uneven, with local areas of dense yeast. As mixing progressed from the clean-up to the final stage, the yeast became more evenly distributed throughout the dough sample. However, the uniformity in yeast distribution was lost in the over-mixing stage, possibly due to the breakdown of the gluten structure within the dough sample.

  4. Supplier evaluation in manufacturing environment using compromise ranking method with grey interval numbers

    Directory of Open Access Journals (Sweden)

    Prasenjit Chatterjee

    2012-04-01

    Full Text Available Evaluation of a proper supplier for manufacturing organizations is one of the most challenging problems in a real-time manufacturing environment due to the wide variety of customer demands. Meeting the challenges of international competitiveness has become more and more complicated, as decision makers need to assess a wide range of alternative suppliers based on a set of conflicting criteria. Thus, the main objective of supplier selection is to select a highly potential supplier through which all the set goals regarding the purchasing and manufacturing activity can be achieved. For these reasons, supplier selection has received considerable attention from academicians and researchers. This paper presents a combined multi-criteria decision making methodology for supplier evaluation for given industrial applications. The proposed methodology is based on a compromise ranking method combined with grey interval numbers, considering different cardinal and ordinal criteria and their relative importance. A ‘supplier selection index’ is also proposed to help evaluate and rank the alternative suppliers. Two examples are illustrated to demonstrate the potentiality and applicability of the proposed method.

  5. [Confidence interval or p-value--similarities and differences between two important methods of statistical inference of quantitative studies].

    Science.gov (United States)

    Harari, Gil

    2014-01-01

    Statistical significance, also known as the p-value, and the CI (confidence interval) are common statistical measures and are essential for the statistical analysis of studies in medicine and the life sciences. These measures provide complementary information about the statistical probability and conclusions regarding the clinical significance of study findings. This article is intended to describe the methodologies, compare the methods, assess their suitability for the different needs of study results analysis, and explain situations in which each method should be used.

  6. A general method for enclosing solutions of interval linear equations

    Czech Academy of Sciences Publication Activity Database

    Rohn, Jiří

    2012-01-01

    Roč. 6, č. 4 (2012), s. 709-717 ISSN 1862-4472 R&D Projects: GA ČR GA201/09/1957; GA ČR GC201/08/J020 Institutional research plan: CEZ:AV0Z10300504 Keywords : interval linear equations * solution set * enclosure * absolute value inequality Subject RIV: BA - General Mathematics Impact factor: 1.654, year: 2012

  7. Power cables thermal protection by interval simulation of imprecise dynamical systems

    Energy Technology Data Exchange (ETDEWEB)

    Bontempi, G. [Universite Libre de Brussels (Belgium). Dept. d' Informatique; Vaccaro, A.; Villacci, D. [Universita del Sannio Benevento (Italy). Dept. of Engineering

    2004-11-01

    The embedding of advanced simulation techniques in power cables enables improved thermal protection because of higher accuracy, adaptiveness and flexibility. In particular, they make possible (i) the accurate solution of differential equations describing the cables' thermal dynamics and (ii) the adoption of the resulting solution in the accomplishment of dedicated protective functions. However, the use of model-based protective systems is exposed to the uncertainty affecting some model components (e.g. weather along the line route, thermophysical properties of the soil, cable parameters). When uncertainty can be described in terms of a probability distribution, well-known techniques, such as Monte Carlo, are used to simulate the system behaviour. On the other hand, when the description of uncertainty in probabilistic terms is unfeasible or problematic, nonprobabilistic alternatives should be taken into consideration. This paper will discuss and compare three interval-based techniques as alternatives to probabilistic methods in the simulation of power cable dynamics. The experimental session will assess the interval-based approaches by simulating the thermal behaviour of medium voltage power cables. (author)

  8. Convex Interval Games

    NARCIS (Netherlands)

    Alparslan-Gok, S.Z.; Brânzei, R.; Tijs, S.H.

    2008-01-01

    In this paper, convex interval games are introduced and some characterizations are given. Some economic situations leading to convex interval games are discussed. The Weber set and the Shapley value are defined for a suitable class of interval games and their relations with the interval core for

  9. An Extended TOPSIS Method for the Multiple Attribute Decision Making Problems Based on Interval Neutrosophic Set

    Directory of Open Access Journals (Sweden)

    Pingping Chi

    2013-03-01

    Full Text Available The interval neutrosophic set (INS) can more easily express incomplete, indeterminate and inconsistent information, and TOPSIS is one of the most commonly used and effective methods for multiple attribute decision making; in general, however, it can only process attribute values given as crisp numbers. In this paper, we extend TOPSIS to INSs and, with respect to multiple attribute decision making problems in which the attribute weights are unknown and the attribute values take the form of INSs, propose an expanded TOPSIS method. Firstly, the definition of an INS and the operational laws are given, and the distance between INSs is defined. Then, the attribute weights are determined based on the maximizing deviation method, and an extended TOPSIS method is developed to rank the alternatives. Finally, an illustrative example is given to verify the developed approach and to demonstrate its practicality and effectiveness.
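
    The interval neutrosophic operations are elaborate, so the sketch below shows only the crisp-number TOPSIS skeleton with maximizing-deviation attribute weights, the two ingredients the paper extends to INSs. The decision matrix is invented for illustration.

```python
import numpy as np

# Rows = alternatives, columns = benefit-type attributes (invented numbers).
X = np.array([[0.7, 0.5, 0.8],
              [0.6, 0.9, 0.4],
              [0.8, 0.6, 0.6]])

# Maximizing-deviation weights: an attribute that separates the alternatives
# more strongly (larger total pairwise deviation) gets a larger weight.
dev = np.abs(X[:, None, :] - X[None, :, :]).sum(axis=(0, 1))
w = dev / dev.sum()

# TOPSIS core: weighted matrix, ideal/anti-ideal points, relative closeness.
V = X * w
ideal, anti = V.max(axis=0), V.min(axis=0)
d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)
print("ranking (best first):", np.argsort(-closeness))
```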

  10. A new framework of statistical inferences based on the valid joint sampling distribution of the observed counts in an incomplete contingency table.

    Science.gov (United States)

    Tian, Guo-Liang; Li, Hui-Qiong

    2017-08-01

    Some existing confidence interval methods and hypothesis testing methods in the analysis of a contingency table with incomplete observations in both margins entirely depend on an underlying assumption that the sampling distribution of the observed counts is a product of independent multinomial/binomial distributions for complete and incomplete counts. However, it can be shown that this independence assumption is incorrect and can result in unreliable conclusions because of the under-estimation of the uncertainty. Therefore, the first objective of this paper is to derive the valid joint sampling distribution of the observed counts in a contingency table with incomplete observations in both margins. The second objective is to provide a new framework for analyzing incomplete contingency tables based on the derived joint sampling distribution of the observed counts, by developing a Fisher scoring algorithm to calculate maximum likelihood estimates of the parameters of interest, bootstrap confidence interval methods, and bootstrap hypothesis testing methods. We compare the differences between the valid sampling distribution and the sampling distribution under the independence assumption. Simulation studies showed that the average/expected confidence-interval widths of parameters based on the sampling distribution under the independence assumption are shorter than those based on the new sampling distribution, yielding unrealistic results. A real data set is analyzed to illustrate the application of the new sampling distribution for incomplete contingency tables, and the analysis results again confirm the conclusions obtained from the simulation studies.
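
    The paper's Fisher scoring algorithm and valid joint sampling distribution are specific to incomplete tables; the sketch below shows only the generic percentile-bootstrap confidence interval machinery that the proposed methods build on, applied to a proportion estimated from synthetic complete counts.

```python
import numpy as np

rng = np.random.default_rng(1)
counts = rng.binomial(1, 0.3, size=200)   # synthetic complete observations

def bootstrap_ci(data, stat, n_boot=5000, alpha=0.05, rng=rng):
    """Percentile-bootstrap confidence interval for stat(data)."""
    boots = np.array([stat(rng.choice(data, size=len(data), replace=True))
                      for _ in range(n_boot)])
    return np.quantile(boots, [alpha / 2, 1 - alpha / 2])

print("95% bootstrap CI for the proportion:", bootstrap_ci(counts, np.mean))
```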

  11. INTERVAL STATE ESTIMATION FOR SINGULAR DIFFERENTIAL EQUATION SYSTEMS WITH DELAYS

    Directory of Open Access Journals (Sweden)

    T. A. Kharkovskaia

    2016-07-01

    Full Text Available The paper deals with linear differential equation systems with algebraic restrictions (singular systems) and a method of interval observer design for this kind of system. The systems contain constant time delay, measurement noise and disturbances. Interval observer synthesis is based on the technique of monotone and cooperative systems, linear matrix inequalities, Lyapunov function theory and interval arithmetic. A set of conditions that enables interval observer synthesis is proposed. Results of the synthesized observer's operation are shown on the example of a dynamical interindustry balance model. The advantage of the proposed method is that it is adapted to observer design for uncertain systems where the intervals of admissible values for the uncertain parameters are given. The designed observer is capable of providing asymptotically definite limits on the estimation accuracy, since the interval of admissible values for the object state is defined at every instant. The obtained result provides an opportunity to develop the interval estimation theory for complex systems that contain parametric uncertainty, varying delay and nonlinear elements. Interval observers increasingly find applications in economics, electrical engineering, and mechanical systems with constraints and optimal flow control.

  12. Modified stochastic fragmentation of an interval as an ageing process

    Science.gov (United States)

    Fortin, Jean-Yves

    2018-02-01

    We study a stochastic model based on modified fragmentation of a finite interval. The mechanism consists of cutting the interval at a random location and substituting a unique fragment on the right of the cut to regenerate and preserve the interval length. This leads to a set of segments of random sizes, with the accumulation of small fragments near the origin. This model is an example of record dynamics, with the presence of ‘quakes’ and slow dynamics. The fragment size distribution is a universal inverse power law with logarithmic corrections. The exact distribution for the fragment number as a function of time is simply related to the unsigned Stirling numbers of the first kind. Two-time correlation functions are defined and computed exactly. They satisfy scaling relations and exhibit aging phenomena. In particular, the probability that the same number of fragments is found at two different times t > s is asymptotically equal to [4π log(s)]^{-1/2} when s ≫ 1 and the ratio t/s is fixed, in agreement with the numerical simulations. The same process with a reset impedes the aging phenomenon beyond a typical time scale defined by the reset parameter.
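
    The fragmentation rule lends itself to a direct simulation, assuming a unit interval: each new uniform cut erases every cut to its right (the right part regenerates as one fragment), so the surviving cuts form the record-like structure responsible for the slow, logarithmic growth of the fragment number. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

def fragment_counts(t_max, rng):
    """Simulate the modified fragmentation of [0, 1].

    At each step a cut is drawn uniformly; every existing cut to its right is
    erased (the right part is regenerated as a single fragment), and the new
    cut is kept. Returns the number of fragments after each step.
    """
    cuts, counts = [], []
    for _ in range(t_max):
        u = rng.random()
        cuts = [c for c in cuts if c < u] + [u]   # right-hand cuts are merged away
        counts.append(len(cuts) + 1)              # k cuts -> k + 1 fragments
    return np.array(counts)

# Small fragments pile up near the origin; the fragment number grows ~ log t.
print(fragment_counts(1000, rng)[[9, 99, 999]])
```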

  13. Interval timing under a behavioral microscope: Dissociating motivational and timing processes in fixed-interval performance.

    Science.gov (United States)

    Daniels, Carter W; Sanabria, Federico

    2017-03-01

    The distribution of latencies and interresponse times (IRTs) of rats was compared between two fixed-interval (FI) schedules of food reinforcement (FI 30 s and FI 90 s), and between two levels of food deprivation. Computational modeling revealed that latencies and IRTs were well described by mixture probability distributions embodying two-state Markov chains. Analysis of these models revealed that only a subset of latencies is sensitive to the periodicity of reinforcement, and prefeeding only reduces the size of this subset. The distribution of IRTs suggests that behavior in FI schedules is organized in bouts that lengthen and ramp up in frequency with proximity to reinforcement. Prefeeding slowed down the lengthening of bouts and increased the time between bouts. When concatenated, latency and IRT models adequately reproduced sigmoidal FI response functions. These findings suggest that behavior in FI schedules fluctuates in and out of schedule control; an account of such fluctuation suggests that timing and motivation are dissociable components of FI performance. These mixture-distribution models also provide novel insights on the motivational, associative, and timing processes expressed in FI performance. These processes may be obscured, however, when performance in timing tasks is analyzed in terms of mean response rates.

  14. An approach to solve group-decision-making problems with ordinal interval numbers.

    Science.gov (United States)

    Fan, Zhi-Ping; Liu, Yang

    2010-10-01

    The ordinal interval number is a form of uncertain preference information in group decision making (GDM), yet it is seldom discussed in the existing research. This paper investigates how the ranking order of alternatives is determined based on preference information of ordinal interval numbers in GDM problems. When ranking a large quantity of ordinal interval numbers, the efficiency and accuracy of the ranking process are critical. A new approach is proposed to rank alternatives using ordinal interval numbers when every ranking ordinal in an ordinal interval number is assumed to be uniformly and independently distributed in its interval. First, we give the definition of the possibility degree for comparing two ordinal interval numbers and the related theoretical analysis. Then, to rank alternatives by comparing multiple ordinal interval numbers, a collective expectation possibility degree matrix on pairwise comparisons of alternatives is built, and an optimization model based on this matrix is constructed. Furthermore, an algorithm is presented to rank alternatives by solving the model. Finally, two examples are used to illustrate the use of the proposed approach.
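
    Under the paper's assumption that each ranking ordinal is uniformly and independently distributed on its interval, the possibility degree of one ordinal interval number being at least as large as another can be estimated by straightforward Monte Carlo; the closed-form expression derived in the paper is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def possibility_degree(a, b, n_samples=200_000, rng=rng):
    """Monte Carlo estimate of P(A >= B) for two ordinal interval numbers.

    A and B are tuples (low, high) of ranking ordinals; each ordinal is taken
    to be uniformly and independently distributed over the integers of its
    interval, as assumed in the paper.
    """
    A = rng.integers(a[0], a[1] + 1, n_samples)
    B = rng.integers(b[0], b[1] + 1, n_samples)
    return (A >= B).mean()

# Possibility that an alternative ranked somewhere in [2, 5] has a rank
# number at least as large as one ranked in [3, 6].
print(possibility_degree((2, 5), (3, 6)))
```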

  15. Stress-strength reliability for general bivariate distributions

    Directory of Open Access Journals (Sweden)

    Alaa H. Abdel-Hamid

    2016-10-01

    Full Text Available An expression for the stress-strength reliability R = P(X1 < X2) is obtained when (X1, X2) follows a general bivariate distribution. Such distributions include the bivariate compound Weibull, bivariate compound Gompertz and bivariate compound Pareto, among others. In the parametric case, the maximum likelihood estimates of the parameters and of the reliability function R are obtained. In the non-parametric case, point and interval estimates of R are developed using Govindarajulu's asymptotic distribution-free method when X1 and X2 are dependent. An example is given when the population distribution is bivariate compound Weibull. Simulation is performed, based on different sample sizes, to study the performance of the estimates.
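
    As a point of reference for the non-parametric route, the plug-in estimate of R from paired (stress, strength) observations is just the fraction of pairs with X1 < X2; the bivariate data below are synthetic, and the paper's interval estimates are built on top of this quantity.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic dependent (stress, strength) pairs standing in for bivariate data.
z = rng.gamma(2.0, 1.0, 1000)
x1 = z + rng.weibull(1.5, 1000)          # stress
x2 = 1.2 * z + rng.weibull(1.5, 1000)    # strength, correlated with stress

def reliability_estimate(x1, x2):
    """Plug-in nonparametric estimate of R = P(X1 < X2) from paired data.

    This is only the empirical counterpart of R; the paper develops interval
    estimates (Govindarajulu's distribution-free method) on top of it.
    """
    return np.mean(x1 < x2)

print("R-hat =", reliability_estimate(x1, x2))
```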

  16. Analysis of the structure of events by the method of rapidity intervals in K-p interactions at 32 GeV/c and pp interactions at 69 GeV/c

    International Nuclear Information System (INIS)

    Babintsev, V.V.; Bumazhnov, V.A.; Kruglov, N.A.; Moiseev, A.M.; Proskuryakov, A.S.; Smirnova, L.N.; Ukhanov, M.N.

    1981-01-01

    We present an analysis of the structure of distributions in the magnitude r^n_m of rapidity intervals containing m charged particles in events with n charged particles in K⁻p interactions at 32 GeV/c and pp interactions at 69 GeV/c. It is found that all distributions correspond to a smooth curve with a single maximum. A comparison is made between the shape of the experimental distributions for K⁻p interactions and the shape of the distributions for generated events corresponding to the multi-Regge model.

  17. Restricted Interval Valued Neutrosophic Sets and Restricted Interval Valued Neutrosophic Topological Spaces

    Directory of Open Access Journals (Sweden)

    Anjan Mukherjee

    2016-08-01

    Full Text Available In this paper we introduce the concept of restricted interval valued neutrosophic sets (RIVNS in short). Some basic operations and properties of RIVNS are discussed. The concept of restricted interval valued neutrosophic topology is also introduced, together with restricted interval valued neutrosophic finer and restricted interval valued neutrosophic coarser topology. We also define the restricted interval valued neutrosophic interior and closure of a restricted interval valued neutrosophic set. Some theorems and examples are cited. Restricted interval valued neutrosophic subspace topology is also studied.

  18. Methods of assessing grain-size distribution during grain growth

    DEFF Research Database (Denmark)

    Tweed, Cherry J.; Hansen, Niels; Ralph, Brian

    1985-01-01

    This paper considers methods of obtaining grain-size distributions and ways of describing them. In order to collect statistically useful amounts of data, an automatic image analyzer is used, and the resulting data are subjected to a series of tests that evaluate the differences between two related...... distributions (before and after grain growth). The distributions are measured from two-dimensional sections, and both the data and the corresponding true three-dimensional grain-size distributions (obtained by stereological analysis) are collected. The techniques described here are illustrated by reference...

  19. An appraisal of statistical procedures used in derivation of reference intervals.

    Science.gov (United States)

    Ichihara, Kiyoshi; Boyd, James C

    2010-11-01

    When conducting studies to derive reference intervals (RIs), various statistical procedures are commonly applied at each step, from the planning stages to the final computation of RIs. Determination of the necessary sample size is an important consideration, and evaluation of at least 400 individuals in each subgroup has been recommended to establish reliable common RIs in multicenter studies. Multiple regression analysis allows identification of the most important factors contributing to variation in test results, while accounting for possible confounding relationships among these factors. Of the various approaches proposed for judging the necessity of partitioning reference values, nested analysis of variance (ANOVA) is the likely method of choice owing to its ability to handle multiple groups and to adjust for multiple factors. The Box-Cox power transformation has often been used to transform data to a Gaussian distribution for parametric computation of RIs. However, this transformation occasionally fails. Therefore, the non-parametric method, based on determination of the 2.5th and 97.5th percentiles following sorting of the data, has been recommended for general use. The performance of the Box-Cox transformation can be improved by introducing an additional parameter representing the origin of transformation. In simulations, the confidence intervals (CIs) of reference limits (RLs) calculated by the parametric method were narrower than those calculated by the non-parametric approach. However, the margin of difference was rather small owing to additional variability in parametrically-determined RLs introduced by estimation of the parameters for the Box-Cox transformation. The parametric calculation method may have an advantage over the non-parametric method in allowing identification and exclusion of extreme values during RI computation.
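
    A minimal sketch of the two computations discussed above, on synthetic data standing in for 400 reference subjects: the recommended non-parametric RI from the 2.5th and 97.5th percentiles, and the parametric alternative via a Box-Cox transformation (without the extra origin parameter mentioned in the review).

```python
import numpy as np
from scipy import stats, special

rng = np.random.default_rng(5)
values = rng.lognormal(mean=1.0, sigma=0.3, size=400)  # >= 400 subjects recommended

# Non-parametric RI: 2.5th and 97.5th percentiles of the sorted data.
lo_np, hi_np = np.percentile(values, [2.5, 97.5])

# Parametric RI: Box-Cox transform toward a Gaussian, take mean +/- 1.96 SD
# on the transformed scale, then back-transform the limits.
y, lam = stats.boxcox(values)
lims = y.mean() + np.array([-1.96, 1.96]) * y.std(ddof=1)
lo_p, hi_p = special.inv_boxcox(lims, lam)

print(f"non-parametric RI: ({lo_np:.2f}, {hi_np:.2f})")
print(f"parametric RI:     ({lo_p:.2f}, {hi_p:.2f})")
```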

  20. Analyzed method for calculating the distribution of electrostatic field

    International Nuclear Information System (INIS)

    Lai, W.

    1981-01-01

    An analyzed method for calculating the distribution of electrostatic field under any given axial gradient in tandem accelerators is described. This method possesses satisfactory accuracy compared with the results of numerical calculation

  1. Reference Intervals of Common Clinical Chemistry Analytes for Adults in Hong Kong.

    Science.gov (United States)

    Lo, Y C; Armbruster, David A

    2012-04-01

    Defining reference intervals is a major challenge because of the difficulty in recruiting volunteers to participate and testing samples from a significant number of healthy reference individuals. Intervals cited from the historical literature are often suboptimal because they may be based on obsolete methods and/or only a small number of poorly defined reference samples. Blood donors in Hong Kong gave permission for additional blood to be collected for reference interval testing. The samples were tested for twenty-five routine analytes on the Abbott ARCHITECT clinical chemistry system. Results were analyzed using the Rhoads EP Evaluator software program, which is based on the CLSI/IFCC C28-A guideline and defines the reference interval as the 95% central range. Method-specific reference intervals were established for twenty-five common clinical chemistry analytes for a Chinese ethnic population. The intervals were defined for each gender separately and for the genders combined, and gender-specific or combined-gender intervals were adopted as appropriate for each analyte. A large number of healthy, apparently normal blood donors from a local ethnic population were tested to provide current reference intervals for a new clinical chemistry system. Intervals were determined following an accepted international guideline. Laboratories using the same or similar methodologies may adopt these intervals if validated and deemed suitable for their patient populations. Laboratories using different methodologies may be able to successfully adapt the intervals for their facilities using the reference interval transference technique based on a method comparison study.

  2. Localization of a Vehicle: A Dynamic Interval Constraint Satisfaction Problem-Based Approach

    Directory of Open Access Journals (Sweden)

    Kangni Kueviakoe

    2018-01-01

    Full Text Available This paper introduces a new interval constraint propagation (ICP) approach dealing with the real-time vehicle localization problem. Bayesian methods like the extended Kalman filter (EKF) are classically used to achieve vehicle localization. ICP is an alternative which provides guaranteed localization results rather than probabilities. Our approach assumes that all model and measurement errors are bounded within known limits, without any other hypotheses on the probability distribution. The proposed algorithm uses a low-level consistency algorithm and has been validated with an outdoor vehicle equipped with a GPS receiver, a gyro, and odometers. Results have been compared to the EKF and other ICP methods such as the hull consistency (HC4) and 3-bound (3B) algorithms. The consistency of both the EKF and our algorithm has been experimentally studied.

  3. An accurate calibration method for high pressure vibrating tube densimeters in the density interval (700 to 1600) kg·m⁻³

    International Nuclear Information System (INIS)

    Sanmamed, Yolanda A.; Dopazo-Paz, Ana; Gonzalez-Salgado, Diego; Troncoso, Jacobo; Romani, Luis

    2009-01-01

    A calibration procedure of vibrating tube densimeters for density measurement of liquids in the intervals (700 to 1600) kg·m⁻³, (283.15 to 323.15) K, and (0.1 to 60) MPa is presented. It is based on the modelization of the vibrating tube as a thick tube clamped at one end (cantilever) whose stress and thermal behaviour follows the ideas proposed in the Forced Path Mechanical Calibration (FPMC) model. Model parameters are determined using two calibration fluids with densities certified at atmospheric pressure (dodecane and tetrachloroethylene) and a third one with densities known as a function of pressure (water). It is applied to the Anton Paar 512P densimeter, obtaining density measurements with an expanded uncertainty of less than 0.2 kg·m⁻³ in the working intervals. This accuracy comes from the combination of several factors: the densimeter behaves linearly in the working density interval, the densities of both calibration fluids cover that interval and have a very low uncertainty, and the mechanical behaviour of the tube is well characterized by the considered model. The main application of this method is the precise measurement of high density fluids, for which most calibration procedures are inaccurate.
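
    The FPMC model involves a mechanical model of the tube; the sketch below shows only the baseline two-fluid idea it refines, namely that at fixed temperature and pressure the density is linear in the squared oscillation period, rho = A*tau^2 + B. The period values are invented for illustration.

```python
import numpy as np

def calibrate_two_fluids(tau1, rho1, tau2, rho2):
    """Classical two-fluid calibration of a vibrating tube densimeter.

    Assumes the linear relation rho = A * tau**2 + B at fixed temperature and
    pressure. The FPMC model of the paper refines A and B with a mechanical
    model of the tube; this sketch shows only the baseline idea.
    """
    A = (rho1 - rho2) / (tau1**2 - tau2**2)
    B = rho1 - A * tau1**2
    return A, B

# Invented oscillation periods (microseconds) for the two reference fluids.
A, B = calibrate_two_fluids(2610.0, 749.5, 2795.0, 1622.0)  # dodecane, tetrachloroethylene
rho_unknown = A * 2700.0**2 + B
print(f"density of unknown sample: {rho_unknown:.1f} kg/m^3")
```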

  4. Distributed optimization system and method

    Science.gov (United States)

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2003-06-10

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agents can be one or more physical agents, such as robots, or software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, or a multi-processor computer.

  5. Do Longer Intervals between Challenges Reduce the Risk of Adverse Reactions in Oral Wheat Challenges?

    Directory of Open Access Journals (Sweden)

    Noriyuki Yanagida

    Full Text Available The use of oral food challenges (OFCs) in clinics is limited because they are complicated and associated with anaphylactic symptoms. To increase their use, it is necessary to develop novel, effective, and safe methods; however, the effectiveness of different OFC protocols has not been compared. Objective: to investigate the effect of ingestion methods on wheat allergy symptoms and treatment during OFCs. Without changing the total challenge dose, we changed the administration method from a 5-installment dose titration every 15 min (15-min interval method) to 3 installments every 30 min (30-min interval method). We retrospectively reviewed and compared the results of 65 positive 15-min interval wheat challenge tests conducted between July 2005 and February 2008 and 87 positive 30-min interval tests conducted between March 2008 and December 2009. A history of immediate symptoms was more common for the 30-min interval method; however, no difference between methods was observed in other background parameters. Switching from the 15-min to the 30-min interval method did not increase symptoms or required treatment. The rates of cardiovascular symptoms (p = 0.032) and adrenaline use (p = 0.017) were significantly lower with the 30-min interval method. The results did not change after adjusting for the effect of a history of immediate symptoms in multivariate analysis. This study suggests that the 30-min interval method reduces the risk of adverse events compared to the 15-min interval method.

  6. Solvability of linear Interval System of Equations via Oettli-Prager ...

    African Journals Online (AJOL)

    In general, numerical results computed by interval methods tend to grow in diameters as a result of data dependencies and cluster effects which may be traced to error from one source that can affect every other source and thereby drastically lower the efficiency of the interval inclusion methods. We describe in this paper ...

  7. Comparing distribution models for small samples of overdispersed counts of freshwater fish

    Science.gov (United States)

    Vaudor, Lise; Lamouroux, Nicolas; Olivier, Jean-Michel

    2011-05-01

    The study of species abundance often relies on repeated abundance counts whose number is limited by logistic or financial constraints. The distribution of abundance counts is generally right-skewed (i.e. with many zeros and few high values) and needs to be modelled for statistical inference. We used an extensive dataset involving about 100,000 fish individuals of 12 freshwater fish species collected in electrofishing points (7 m²) during 350 field surveys made in 25 stream sites, in order to compare the performance and the generality of four distribution models of counts (Poisson, negative binomial and their zero-inflated counterparts). The negative binomial distribution was the best model (Bayesian Information Criterion) for 58% of the samples (species-survey combinations) and was suitable for a variety of life histories, habitats, and sample characteristics. The performance of the models was closely related to sample statistics such as total abundance and variance. Finally, we illustrated the consequences of a distribution assumption by calculating confidence intervals around the mean abundance, either based on the most suitable distribution assumption or on an asymptotic, distribution-free (Student's) method. Student's method generally corresponded to narrower confidence intervals, especially when there were few (≤3) non-null counts in the samples.
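
    A minimal sketch of the model-comparison step, assuming synthetic overdispersed counts: the Poisson rate has a closed-form MLE, the negative binomial likelihood is maximized numerically, and BIC decides between them. The zero-inflated variants are omitted for brevity.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(6)
counts = rng.negative_binomial(n=1.2, p=0.25, size=30)  # overdispersed counts (synthetic)

def bic(loglik, k, n):
    return k * np.log(n) - 2 * loglik

# Poisson: the MLE of the rate is the sample mean.
lam = counts.mean()
bic_pois = bic(stats.poisson.logpmf(counts, lam).sum(), 1, len(counts))

# Negative binomial: maximize the log-likelihood over (r, p) numerically.
def nb_negloglik(theta):
    r, p = np.exp(theta[0]), 1 / (1 + np.exp(-theta[1]))  # keep r > 0, 0 < p < 1
    return -stats.nbinom.logpmf(counts, r, p).sum()

res = optimize.minimize(nb_negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
bic_nb = bic(-res.fun, 2, len(counts))

print(f"BIC Poisson = {bic_pois:.1f}, BIC negative binomial = {bic_nb:.1f}")
```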

  8. Limited Rationality and Its Quantification Through the Interval Number Judgments With Permutations.

    Science.gov (United States)

    Liu, Fang; Pedrycz, Witold; Zhang, Wei-Guo

    2017-12-01

    The relative importance of alternatives expressed in terms of interval numbers in the fuzzy analytic hierarchy process aims to capture the uncertainty experienced by decision makers (DMs) when making a series of comparisons. Under the assumption of full rationality, the judgements of DMs in the typical analytic hierarchy process could be consistent. However, since the uncertainty in articulating the opinions of DMs is unavoidable, the interval number judgements are associated with the limited rationality. In this paper, we investigate the concept of limited rationality by introducing interval multiplicative reciprocal comparison matrices. By analyzing the consistency of interval multiplicative reciprocal comparison matrices, it is observed that the interval number judgements are inconsistent. By considering the permutations of alternatives, the concepts of approximation-consistency and acceptable approximation-consistency of interval multiplicative reciprocal comparison matrices are proposed. The exchange method is designed to generate all the permutations. A novel method of determining the interval weight vector is proposed under the consideration of randomness in comparing alternatives, and a vector of interval weights is determined. A new algorithm of solving decision making problems with interval multiplicative reciprocal preference relations is provided. Two numerical examples are carried out to illustrate the proposed approach and offer a comparison with the methods available in the literature.

  9. 2-D tiles declustering method based on virtual devices

    Science.gov (United States)

    Li, Zhongmin; Gao, Lu

    2009-10-01

    Generally, 2-D spatial data are divided into a series of tiles according to a plane grid. To satisfy visual requirements, the tiles in the query window containing the view point should be displayed quickly on the screen. Considering the performance differences of real storage devices, we propose a 2-D tile declustering method based on virtual devices. Firstly, we construct a group of virtual devices which have the same storage performance and unlimited capacity, then distribute the tiles over M virtual devices according to the query window of the 2-D tiles. Secondly, we equably map the tiles in the M virtual devices into M equidistant intervals in [0, 1) using a pseudo-random number generator. Finally, we divide [0, 1) into intervals according to the tile distribution percentage of every real storage device, and assign the tiles in each interval to the corresponding real storage device. We have designed and realized a prototype, GlobeSIGht, and give some related test results. The results show that the average response time for each tile in the query window containing the view point is lower with the 2-D tile declustering method based on virtual devices than with other methods.
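
    A compact sketch of the three steps, with simplifications: the query-window-driven assignment to virtual devices is reduced to round-robin, and the function name and share values are invented for illustration.

```python
import random

def decluster(tiles, m_virtual, device_shares):
    """Assign tiles to real storage devices through virtual devices.

    tiles: list of tile ids; m_virtual: number of identical virtual devices;
    device_shares: fraction of tiles each real device should receive,
    proportional to its performance, summing to 1.
    """
    # Step 1: distribute the tiles over M identical virtual devices
    # (the paper drives this with the query window; round-robin here).
    virtual = [tiles[i::m_virtual] for i in range(m_virtual)]

    # Step 2: map the tiles of virtual device j into the equidistant
    # sub-interval [j/M, (j+1)/M) of [0, 1) with a seeded PRNG.
    positions = {}
    for j, group in enumerate(virtual):
        prng = random.Random(j)          # reproducible pseudo-random mapping
        for t in group:
            positions[t] = (j + prng.random()) / m_virtual

    # Step 3: split [0, 1) into intervals proportional to the real devices'
    # shares and assign each tile to the device whose interval it falls in.
    bounds, acc = [], 0.0
    for share in device_shares:
        acc += share
        bounds.append(acc)
    bounds[-1] = 1.0                     # guard against floating-point round-off
    return {t: next(d for d, b in enumerate(bounds) if pos < b)
            for t, pos in positions.items()}

print(decluster(list(range(12)), m_virtual=4, device_shares=[0.5, 0.3, 0.2]))
```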

  10. System and Method for Monitoring Distributed Asset Data

    Science.gov (United States)

    Gorinevsky, Dimitry (Inventor)

    2015-01-01

    A computer-based monitoring system, and monitoring method implemented in computer software, for detecting, estimating, and reporting the condition states, their changes, and anomalies for many assets. The assets are of the same type, are operated over a period of time, and are outfitted with data collection systems. The proposed monitoring method accounts for the variability of working conditions for each asset by using a regression model that characterizes asset performance. The assets are of the same type but not identical. The proposed monitoring method accounts for asset-to-asset variability; it also accounts for drifts and trends in the asset condition and data. The proposed monitoring system can perform distributed processing of massive amounts of historical data without discarding any useful information, where moving all the asset data into one central computing system might be infeasible. The overall processing includes distributed preprocessing of data records from each asset to produce compressed data.

  11. The frequency-independent control method for distributed generation systems

    DEFF Research Database (Denmark)

    Naderi, Siamak; Pouresmaeil, Edris; Gao, Wenzhong David

    2012-01-01

    In this paper a novel frequency-independent control method suitable for distributed generation (DG) is presented. This strategy is derived based on the abc/αβ and abc/dq transformations of the ac system variables. The active and reactive currents injected by the DG are controlled...

  12. Size distributions of micro-bubbles generated by a pressurized dissolution method

    Science.gov (United States)

    Taya, C.; Maeda, Y.; Hosokawa, S.; Tomiyama, A.; Ito, Y.

    2012-03-01

    Sizes of micro-bubbles are widely distributed in the range of one to several hundred micrometers and depend on the generation method, flow conditions and elapsed time after bubble generation. Although the size distribution of micro-bubbles should be taken into account to improve accuracy in numerical simulations of flows with micro-bubbles, the variety of size distributions makes it difficult to introduce them into simulations. On the other hand, several models such as the Rosin-Rammler equation and the Nukiyama-Tanasawa equation have been proposed to represent the size distributions of particles or droplets; the applicability of these models to the size distribution of micro-bubbles has not been examined yet. In this study, we therefore measure the size distribution of micro-bubbles generated by a pressurized dissolution method by using phase Doppler anemometry (PDA), and investigate the applicability of the available models to the size distributions of micro-bubbles. The experimental apparatus consists of a pressurized tank in which air is dissolved in liquid under high pressure, a decompression nozzle in which micro-bubbles are generated due to the pressure reduction, a rectangular duct and an upper tank. Experiments are conducted for several liquid volumetric fluxes in the decompression nozzle. Measurements are carried out at the downstream region of the decompression nozzle and in the upper tank. The experimental results indicate that (1) the Nukiyama-Tanasawa equation represents well the size distribution of micro-bubbles generated by the pressurized dissolution method, whereas the Rosin-Rammler equation fails in the representation, (2) the size distribution of micro-bubbles can be evaluated by using the Nukiyama-Tanasawa equation without individual bubble diameters, when the mean bubble diameter and the skewness of the bubble distribution are given, and (3) an evaluation method of visibility based on the bubble size distribution and bubble...
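
    The Rosin-Rammler model is easy to fit to measured diameters, e.g. by matching its CDF to the empirical CDF; the sketch below shows that procedure on synthetic data. It does not reproduce the paper's finding that the model fails for micro-bubbles, and the Nukiyama-Tanasawa equation is omitted because its parameterization varies between sources.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(7)
diameters = rng.weibull(1.8, 2000) * 40.0   # synthetic bubble diameters (micrometers)

def rosin_rammler_cdf(d, d0, n):
    """Rosin-Rammler cumulative distribution: F(d) = 1 - exp(-(d/d0)**n)."""
    return 1.0 - np.exp(-(d / d0) ** n)

# Fit the model to the empirical CDF of the measured diameters.
d_sorted = np.sort(diameters)
ecdf = np.arange(1, len(d_sorted) + 1) / len(d_sorted)
(d0, n), _ = curve_fit(rosin_rammler_cdf, d_sorted, ecdf, p0=[30.0, 2.0])
print(f"fitted Rosin-Rammler parameters: d0 = {d0:.1f} um, n = {n:.2f}")
```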

  13. Distribution of conductive minerals as associated with uranium minerals at Dendang Arai sector by induced polarization method

    International Nuclear Information System (INIS)

    Nurdin, M.; Nikijuluw, N.; Subardjo; Sudarto, S.

    2000-01-01

    Based on previous investigation results, a favourable zone 20-80 meters wide and 80-240 meters long, trending East-West to Northwest-Southeast, was found. The favourable zone is conductive and associated with sulfides. The induced polarization method has been applied to find the vertical and horizontal sulfide distribution. The measurements were conducted perpendicular to the lateral direction of the conductive zone at intervals of 20 meters. The properties measured are apparent resistivity and chargeability. The measurement results indicated the presence of sulfide zones whose position and dip are sub-vertical. Sulfide zones were found at fault cross-points, where faults trending East-West to East-Southeast-West-Northwest are crossed by a North-South fault. These anomalies were then represented in a three-dimensional tomographic model. (author)

  14. Socioeconomic position and the primary care interval

    DEFF Research Database (Denmark)

    Vedsted, Anders

    2018-01-01

    Introduction. Diagnostic delays affect cancer survival negatively. Thus, the time interval from symptomatic presentation to a GP until referral to secondary care (i.e. the primary care interval (PCI)) should be as short as possible. Lower socioeconomic position seems associated with poorer cancer... related to the easiness of interpreting the symptoms of the underlying cancer. Methods. We conducted a population-based cohort study using survey data on time intervals, linked at an individual level to routinely collected demographic data from Danish registries. Using logistic regression we estimated the odds... Results. Patients younger than 45 years of age and older than 54 years of age had a longer primary care interval than patients aged 45-54 years. No other associations for SEP characteristics were observed. The findings may imply that GPs are referring patients regardless of SEP, although some room for improvement prevails...

  15. A Note on Confidence Interval for the Power of the One Sample Test

    Directory of Open Access Journals (Sweden)

    A. Wong

    2010-01-01

    Full Text Available In introductory statistics texts, the power of the test of a one-sample mean when the variance is known is widely discussed. However, when the variance is unknown, the power of the Student's t-test is seldom mentioned. In this note, a general methodology for obtaining inference concerning a scalar parameter of interest in any exponential family model is proposed. The method is then applied to the one-sample mean problem with unknown variance to obtain a (1−α)100% confidence interval for the power of the Student's t-test that detects the difference (μ − μ0). The calculations require only the density and the cumulative distribution functions of the standard normal distribution. In addition, the methodology presented can also be applied to determine the required sample size when the effect size and the power of a size-α test of the mean are given.
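
    For the known-variance case mentioned first, the power of the two-sided test is available in closed form from the standard normal cdf alone; a small sketch, with function name and defaults of our own choosing:

```python
import numpy as np
from scipy.stats import norm

def power_one_sample_z(mu, mu0, sigma, n, alpha=0.05):
    """Power of the two-sided one-sample z-test (variance known).

    Only the standard normal cdf is needed. The unknown-variance case replaces
    sigma with an estimate, which makes the power itself uncertain and is what
    motivates a confidence interval for the power in the note above.
    """
    z = norm.ppf(1 - alpha / 2)
    shift = np.sqrt(n) * (mu - mu0) / sigma   # noncentrality of the test statistic
    return norm.cdf(-z - shift) + 1 - norm.cdf(z - shift)

print(f"power = {power_one_sample_z(mu=0.5, mu0=0.0, sigma=1.0, n=25):.3f}")
```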

  16. Weighted profile likelihood-based confidence interval for the difference between two proportions with paired binomial data.

    Science.gov (United States)

    Pradhan, Vivek; Saha, Krishna K; Banerjee, Tathagata; Evans, John C

    2014-07-30

    Inference on the difference between two binomial proportions in the paired binomial setting is often an important problem in many biomedical investigations. Tang et al. (2010, Statistics in Medicine) discussed six methods to construct confidence intervals (henceforth abbreviated as CIs) for the difference between two proportions in the paired binomial setting using the method of variance estimates recovery. In this article, we propose weighted profile likelihood-based CIs for the difference between proportions of a paired binomial distribution. However, instead of the usual likelihood, we use a weighted likelihood that essentially makes adjustments to the cell frequencies of a 2 × 2 table in the spirit of Agresti and Min (2005, Statistics in Medicine). We then conduct numerical studies to compare the performance of the proposed CIs with those of Tang et al. and Agresti and Min in terms of coverage probabilities and expected lengths. Our numerical study clearly indicates that the weighted profile likelihood-based intervals and the Jeffreys interval (cf. Tang et al.) are superior in terms of achieving the nominal level, and in terms of expected lengths they are competitive. Finally, we illustrate the use of the proposed CIs with real-life examples. Copyright © 2014 John Wiley & Sons, Ltd.

  17. Data Qualification Report: Flowing Interval Data for Use on the Yucca Mountain Project

    Energy Technology Data Exchange (ETDEWEB)

    C.R. Wilson; T.A. Grant

    2000-08-03

    This data qualification report uses technical assessment and corroborating data methods according to Attachment 2 of AP-SIII.2Q, Rev. 0, ICN 2, "Qualification of Unqualified Data and the Documentation of Rationale for Accepted Data", to qualify flowing interval data. This report was prepared in accordance with Data Qualification Plan TDP-NBS-GS-000035, Revision 1. Flowing interval location data from borehole tracejector surveys and fracture dip data from acoustic televiewer logging are evaluated in this report. These data were collected under the direction of the U.S. Geological Survey (USGS). The data qualification team considers the data collection protocols of the USGS during the time period of data acquisition to be state-of-the-art. The acoustic televiewer continues to be an important tool for fracture logging, and the early methodology for that technique has evolved with little change into the current fracture logging procedure that supports the Yucca Mountain Project (YMP)-approved USGS Quality Assurance Program Plan (QAPP). Consequently, the data collection methods, documentation, and results are reasonable and appropriate in view of standard practice at the time the data were collected. Extensive independent corroborative evidence is available from other borehole survey and logging techniques as well as from independent hydrological pump and injection testing. This evidence supports the conclusion that the tracejector surveys adequately detected the more significant flowing intervals in the system and that the acoustic televiewer data are representative of the dips of major fracture sets at Yucca Mountain. The parallels in time, location, purpose, personnel and technique in testing the boreholes that were sources for the flowing interval data support the application of corroborative evidence to all boreholes in the series. The data qualification team has concluded that the tracejector and acoustic televiewer data are adequate for

  18. Method for calculating the variance and prediction intervals for biomass estimates obtained from allometric equations

    CSIR Research Space (South Africa)

    Kirton, A

    2010-08-01

    Full Text Available This report describes a method for calculating the variance and prediction intervals for biomass estimates obtained from allometric equations. It shows how prediction intervals (confidence intervals for predicted values) for allometric estimates can be obtained, using an example of estimating tree biomass from stem diameter. It explains how to deal with relationships which are in the power function form - a common form...
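
    A sketch of one standard way prediction intervals are computed for power-function allometry, on synthetic diameter-biomass data: fit on the log-log scale, then exponentiate the t-based prediction limits. The report's own derivation may differ in detail (e.g. back-transformation corrections).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
diam = rng.uniform(5, 50, 60)                                # stem diameters (cm)
biomass = 0.08 * diam**2.4 * rng.lognormal(0.0, 0.25, 60)    # synthetic power-law data

# Fit the power function on the log-log scale: ln(B) = a + b ln(D) + eps.
x, y = np.log(diam), np.log(biomass)
n = len(x)
b, a = np.polyfit(x, y, 1)
resid = y - (a + b * x)
s2 = resid @ resid / (n - 2)                                 # residual variance

def prediction_interval(d_new, alpha=0.05):
    """95% prediction interval for the biomass of a new tree of diameter d_new.

    Computed on the log scale, then exponentiated; the exponentiation is what
    makes allometric intervals asymmetric on the original scale.
    """
    x0 = np.log(d_new)
    sxx = (x - x.mean()) @ (x - x.mean())
    se = np.sqrt(s2 * (1 + 1 / n + (x0 - x.mean())**2 / sxx))
    t = stats.t.ppf(1 - alpha / 2, n - 2)
    yhat = a + b * x0
    return np.exp(yhat - t * se), np.exp(yhat + t * se)

print(prediction_interval(30.0))
```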

  19. Effect of nonuniform fuel distribution

    International Nuclear Information System (INIS)

    Katakura, Jun-ichi

    1987-01-01

    In order to ensure the subcriticality of nuclear fuel, either the method of controlling the mass, form or dimensions below limit values or the method of confirming subcriticality by calculation is taken; in doing so, it is often assumed that the concentration of fuel is constant in a fuel region, or that fuel rods are arranged at constant intervals. However, in the extraction process in fuel reprocessing or in fuel storage vessels, a concentration distribution may arise in fuel regions, even if only temporarily. Even if subcriticality is expected in a uniform system, criticality may occur when a concentration distribution arises and the system becomes uneven. Therefore, it is important to grasp the effect of uneven fuel distribution for ensuring safety against criticality. In this paper, the effect of uneven fuel distribution is discussed, centering on the critical mass. Examples from the literature and calculated examples of uneven fuel distribution are shown. As the result of calculations at the Japan Atomic Energy Research Institute, in a high-enrichment U-235-water system the critical mass decreased by about 7% due to uneven distribution, which nearly agreed with Clark's result of about 6%. As for low-enrichment systems, no conspicuous decrease of the critical mass was observed. (Kako, I.)

  20. Computerized method for X-ray angular distribution simulation in radiological systems

    International Nuclear Information System (INIS)

    Marques, Marcio A.; Oliveira, Henrique J.Q. de; Frere, Annie F.; Schiabel, Homero; Marques, Paulo M.A.

    1996-01-01

    A method to simulate the changes in the X-ray angular distribution (the Heel effect) for radiologic imaging systems is presented. The simulation method is designed to predict images for any exposure technique, considering that this distribution is the cause of the intensity variation along the radiation field.

  1. Advanced airflow distribution methods for reduction of personal exposure to indoor pollutants

    DEFF Research Database (Denmark)

    Cao, Guangyu; Kosonen, Risto; Melikov, Arsen

    2016-01-01

    The main objective of this study is to recognize possible airflow distribution methods to protect the occupants from exposure to various indoor pollutants. The fact of the increasing exposure of occupants to various indoor pollutants shows that there is an urgent need to develop advanced airflow distribution methods to reduce indoor exposure to various indoor pollutants. This article presents some of the latest developments in advanced airflow distribution methods to reduce indoor exposure in various types of buildings.

  2. A method for statistically comparing spatial distribution maps

    Directory of Open Access Journals (Sweden)

    Reynolds Mary G

    2009-01-01

    Full Text Available Abstract Background Ecological niche modeling is a method for estimation of species distributions based on certain ecological parameters. Thus far, empirical determination of significant differences between independently generated distribution maps for a single species (maps which are created through equivalent processes, but with different ecological input parameters) has been challenging. Results We describe a method for comparing model outcomes, which allows a statistical evaluation of whether the strength of prediction and breadth of predicted areas is measurably different between projected distributions. To create ecological niche models for statistical comparison, we utilized GARP (Genetic Algorithm for Rule-Set Production) software to generate ecological niche models of human monkeypox in Africa. We created several models, keeping constant the case location input records for each model but varying the ecological input data. In order to assess the relative importance of each ecological parameter included in the development of the individual predicted distributions, we performed pixel-to-pixel comparisons between model outcomes and calculated the mean difference in pixel scores. We used a two-sample Student's t-test (assuming as null hypothesis that both maps were identical to each other regardless of which input parameters were used) to examine whether the mean difference in corresponding pixel scores from one map to another was greater than would be expected by chance alone. We also utilized weighted kappa statistics, frequency distributions, and percent difference to look at the disparities in pixel scores. Multiple independent statistical tests indicated precipitation as the single most important independent ecological parameter in the niche model for human monkeypox disease. Conclusion In addition to improving our understanding of the natural factors influencing the distribution of human monkeypox disease, such pixel-to-pixel comparison...
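
    A minimal sketch of the pixel-to-pixel comparison on two invented score rasters: mean pixel difference, a two-sample Student's t-test of the identity null hypothesis, and a percent-difference summary, mirroring three of the statistics used above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
# Two niche-model outputs over the same grid: pixel scores (invented data).
map_a = rng.normal(0.55, 0.15, size=(200, 300)).clip(0, 1)
map_b = rng.normal(0.50, 0.15, size=(200, 300)).clip(0, 1)

a, b = map_a.ravel(), map_b.ravel()
mean_diff = (a - b).mean()
# Two-sample Student's t-test of H0: the maps are identical (as in the paper).
t_stat, p_val = stats.ttest_ind(a, b)
pct_diff = 100 * np.abs(a - b).mean() / ((a.mean() + b.mean()) / 2)

print(f"mean pixel difference = {mean_diff:.4f}, t = {t_stat:.2f}, p = {p_val:.3g}")
print(f"percent difference    = {pct_diff:.1f}%")
```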

  3. An Introduction to Confidence Intervals for Both Statistical Estimates and Effect Sizes.

    Science.gov (United States)

    Capraro, Mary Margaret

    This paper summarizes methods of estimating confidence intervals, including classical intervals and intervals for effect sizes. The recent American Psychological Association (APA) Task Force on Statistical Inference report suggested that confidence intervals should always be reported, and the fifth edition of the APA "Publication Manual"…

  4. Mathematical methods linear algebra normed spaces distributions integration

    CERN Document Server

    Korevaar, Jacob

    1968-01-01

    Mathematical Methods, Volume I: Linear Algebra, Normed Spaces, Distributions, Integration focuses on advanced mathematical tools used in applications and the basic concepts of algebra, normed spaces, integration, and distributions.The publication first offers information on algebraic theory of vector spaces and introduction to functional analysis. Discussions focus on linear transformations and functionals, rectangular matrices, systems of linear equations, eigenvalue problems, use of eigenvectors and generalized eigenvectors in the representation of linear operators, metric and normed vector

  5. Distributed optimization for systems design : an augmented Lagrangian coordination method

    NARCIS (Netherlands)

    Tosserams, S.

    2008-01-01

    This thesis presents a coordination method for the distributed design optimization of engineering systems. The design of advanced engineering systems such as aircrafts, automated distribution centers, and microelectromechanical systems (MEMS) involves multiple components that together realize the

  6. Confidence Limits for the Indirect Effect: Distribution of the Product and Resampling Methods

    Science.gov (United States)

    MacKinnon, David P.; Lockwood, Chondra M.; Williams, Jason

    2010-01-01

    The most commonly used method to test an indirect effect is to divide the estimate of the indirect effect by its standard error and compare the resulting z statistic with a critical value from the standard normal distribution. Confidence limits for the indirect effect are also typically based on critical values from the standard normal distribution. This article uses a simulation study to demonstrate that confidence limits are imbalanced because the distribution of the indirect effect is normal only in special cases. Two alternatives for improving the performance of confidence limits for the indirect effect are evaluated: (a) a method based on the distribution of the product of two normal random variables, and (b) resampling methods. In Study 1, confidence limits based on the distribution of the product are more accurate than methods based on an assumed normal distribution but confidence limits are still imbalanced. Study 2 demonstrates that more accurate confidence limits are obtained using resampling methods, with the bias-corrected bootstrap the best method overall. PMID:20157642
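
    A sketch of the resampling approach for the simple X -> M -> Y mediation model, using percentile bootstrap limits; the bias-corrected variant favored in the article adjusts these percentiles and is omitted here. The data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 200
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)            # mediator
y = 0.4 * m + 0.1 * x + rng.normal(size=n)  # outcome

def indirect_effect(x, m, y):
    """Indirect effect a*b in the simple mediation model X -> M -> Y."""
    a = np.polyfit(x, m, 1)[0]   # slope of M on X (the a path)
    # b path: coefficient of M in the regression of Y on [1, X, M].
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]
    return a * b

# Percentile bootstrap: resample cases, recompute a*b, take the 2.5/97.5 quantiles.
boots = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boots.append(indirect_effect(x[idx], m[idx], y[idx]))
print("95% bootstrap CI for a*b:", np.quantile(boots, [0.025, 0.975]))
```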

  7. Modified Taylor series method for solving nonlinear differential equations with mixed boundary conditions defined on finite intervals.

    Science.gov (United States)

    Vazquez-Leal, Hector; Benhammouda, Brahim; Filobello-Nino, Uriel Antonio; Sarmiento-Reyes, Arturo; Jimenez-Fernandez, Victor Manuel; Marin-Hernandez, Antonio; Herrera-May, Agustin Leobardo; Diaz-Sanchez, Alejandro; Huerta-Chua, Jesus

    2014-01-01

    In this article, we propose the application of a modified Taylor series method (MTSM) for the approximation of nonlinear problems described on finite intervals. The issue of the Taylor series method with mixed boundary conditions is circumvented using shooting constants and extra derivatives of the problem. In order to show the benefits of this proposal, three different kinds of problems are solved: a three-point boundary value problem (BVP) of third order with a hyperbolic sine nonlinearity, a two-point BVP for a second-order nonlinear differential equation with an exponential nonlinearity, and a two-point BVP for a third-order nonlinear differential equation with a radical nonlinearity. The results show that the MTSM is capable of generating easily computable and highly accurate approximations for nonlinear equations. MSC: 34L30.

  8. Semiorders, Intervals Orders and Pseudo Orders Preference Structures in Multiple Criteria Decision Aid Methods

    Directory of Open Access Journals (Sweden)

    Fernández Barberis, Gabriela

    2013-06-01

    Full Text Available During the last decades, an important number of Multicriteria Decision Aid (MCDA) methods have been proposed to help the decision maker select the best compromise alternative. Meanwhile, the PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluations) family of outranking methods and their applications have attracted much attention from academics and practitioners. In this paper, an extension of these methods is presented, consisting of an analysis of their functioning under New Preference Structures (NPS). The preference structures taken into account are semiorders, interval orders and pseudo orders. These structures markedly improve the modelling, as they give more flexibility, amplitude and certainty to the formulation of preferences, since they abandon the Complete Transitive Comparability Axiom of Preferences and substitute it with the Partial Comparability Axiom of Preferences. Noteworthy are the introduction of incomparability relations into the analysis and the consideration of preference structures that accept intransitivity of indifference. The NPS are incorporated in the three phases of the PROMETHEE methodology: preference structure enrichment, dominance relation enrichment and outranking relation exploitation for decision aid, in order finally to solve the alternative-ranking problem through PROMETHEE I or PROMETHEE II, according to whether a partial ranking or a complete one is required under the NPS

  9. Method of controlling power distribution in FBR type reactors

    International Nuclear Information System (INIS)

    Sawada, Shusaku; Kaneto, Kunikazu.

    1982-01-01

    Purpose: To attain power distribution flattening with ease by obtaining a radial power distribution of substantially constant shape that does not depend on the burn-up cycle. Method: As fuel burning proceeds, the radial power distribution is affected by the accumulation of fission products in the inner blanket fuel assemblies, which varies their effect as neutron absorbing substances. Taking notice of this fact, the power distribution is controlled in a heterogeneous FBR type reactor by varying the core residence period of the inner blanket assemblies in accordance with their charging density in the reactor core. (Kawakami, Y.)

  10. Accelerated life testing design using geometric process for pareto distribution

    OpenAIRE

    Mustafa Kamal; Shazia Zarrin; Arif Ul Islam

    2013-01-01

    In this paper the geometric process is used for the analysis of accelerated life testing under constant stress for the Pareto distribution. Assuming that the lifetimes under increasing stress levels form a geometric process, estimates of the parameters are obtained by using the maximum likelihood method for complete data. In addition, asymptotic interval estimates of the parameters of the distribution using the Fisher information matrix are also obtained. The statistical properties of the parameters ...
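
    As a point of reference for the estimation machinery, the following Python sketch shows the maximum likelihood estimate of the Pareto shape parameter from a complete sample with known scale, together with the asymptotic interval implied by the Fisher information I(α) = n/α². The geometric-process and stress-level structure of the paper is not modeled, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated complete sample from a Pareto distribution with known
# scale x_m and true shape alpha = 2.5 (illustrative values).
x_m, alpha_true, n = 1.0, 2.5, 100
x = x_m * (1.0 - rng.random(n)) ** (-1.0 / alpha_true)  # inverse-CDF sampling

# Maximum likelihood estimate of the shape for complete data
alpha_hat = n / np.sum(np.log(x / x_m))

# Asymptotic 95% interval from the Fisher information I(alpha) = n/alpha^2
se = alpha_hat / np.sqrt(n)
print(f"alpha_hat = {alpha_hat:.3f}, 95% CI = "
      f"({alpha_hat - 1.96*se:.3f}, {alpha_hat + 1.96*se:.3f})")
```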

  11. Interval forecasting of cyber-attacks on industrial control systems

    Science.gov (United States)

    Ivanyo, Y. M.; Krakovsky, Y. M.; Luzgin, A. N.

    2018-03-01

    At present, cyber-security issues of industrial control systems occupy one of the key niches in a state system of planning and management. Functional disruption of these systems via cyber-attacks may lead to emergencies related to loss of life, environmental disasters, major financial and economic damage, or disrupted activities of cities and settlements. There is thus an urgent need to develop protection methods against cyber-attacks. This paper studied the results of cyber-attack interval forecasting with a pre-set intensity level of cyber-attacks. Interval forecasting is the forecasting of one interval from two predetermined ones in which a future value of the indicator will be obtained. For this, probability estimates of these events were used. For interval forecasting, a probabilistic neural network with a dynamically updated value of the smoothing parameter was used. A dividing bound of these intervals was determined by a calculation method based on statistical characteristics of the indicator. The number of cyber-attacks per hour received through a honeypot from March to September 2013 for the group ‘zeppo-norcal’ was selected as the indicator.

  12. New Interval-Valued Intuitionistic Fuzzy Behavioral MADM Method and Its Application in the Selection of Photovoltaic Cells

    Directory of Open Access Journals (Sweden)

    Xiaolu Zhang

    2016-10-01

    Full Text Available As one of the emerging renewable resources, the use of photovoltaic cells has become a promising option for offering clean and plentiful energy. The selection of the best photovoltaic cell for a promoter plays a significant role in maximizing income, minimizing costs and conferring high maturity and reliability, which is a typical multiple attribute decision making (MADM) problem. Although many prominent MADM techniques have been developed, most of them are designed to select the optimal alternative under the hypothesis that the decision maker or expert is completely rational and the decision data are represented by crisp values. However, in the selection of photovoltaic cells the decision maker is usually boundedly rational and the ratings of alternatives are usually imprecise and vague. To address these kinds of complex and common issues, in this paper we develop a new interval-valued intuitionistic fuzzy behavioral MADM method. We employ interval-valued intuitionistic fuzzy numbers (IVIFNs) to express the imprecise ratings of alternatives, and we construct LINMAP-based nonlinear programming models to identify the reference points under IVIFN contexts, which avoids the subjective randomness of selecting the reference points. Finally we develop a prospect theory-based ranking method to identify the optimal alternative, which takes fully into account the decision maker's behavioral characteristics such as reference dependence, diminishing sensitivity and loss aversion in the decision making process.

  13. Programming with Intervals

    Science.gov (United States)

    Matsakis, Nicholas D.; Gross, Thomas R.

    Intervals are a new, higher-level primitive for parallel programming with which programmers directly construct the program schedule. Programs using intervals can be statically analyzed to ensure that they do not deadlock or contain data races. In this paper, we demonstrate the flexibility of intervals by showing how to use them to emulate common parallel control-flow constructs like barriers and signals, as well as higher-level patterns such as bounded-buffer producer-consumer. We have implemented intervals as a publicly available library for Java and Scala.

  14. Forecasting overhaul or replacement intervals based on estimated system failure intensity

    Science.gov (United States)

    Gannon, James M.

    1994-12-01

    System reliability can be expressed in terms of the pattern of failure events over time. Assuming a nonhomogeneous Poisson process and a Weibull intensity function for complex repairable system failures, the degree of system deterioration can be approximated. Maximum likelihood estimators (MLEs) for the system Rate of Occurrence of Failure (ROCOF) function are presented. Evaluating the integral of the ROCOF over annual usage intervals yields the expected number of annual system failures. By associating a cost of failure with the expected number of failures, budget and program policy decisions can be made based on expected future maintenance costs. Monte Carlo simulation is used to estimate the range and the distribution of the net present value and internal rate of return of alternative cash flows based on the distributions of the cost inputs and confidence intervals of the MLEs.
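
    A minimal version of this calculation can be sketched for the common power-law ROCOF u(t) = λβt^(β−1) (the Crow-AMSAA form of the NHPP with Weibull intensity). The Python fragment below uses the standard time-truncated MLEs β̂ = n/Σ ln(T/tᵢ) and λ̂ = n/T^β̂ and integrates the ROCOF over a usage interval; the failure times are invented for illustration, and the Monte Carlo cash-flow analysis of the paper is not reproduced.

```python
import numpy as np

# Illustrative failure times (operating hours) of one repairable
# system, observed up to time T; values are made up for the sketch.
t = np.array([120., 340., 610., 790., 1020., 1180., 1350., 1490.])
T = 1500.0
n = len(t)

# MLEs for the power-law ROCOF u(t) = lam * beta * t**(beta - 1)
beta_hat = n / np.sum(np.log(T / t))
lam_hat = n / T**beta_hat

# Expected failures over the next annual usage interval [T, T + 800]:
# the integral of the ROCOF is lam * (b**beta - a**beta).
a, b = T, T + 800.0
expected = lam_hat * (b**beta_hat - a**beta_hat)
print(f"beta = {beta_hat:.2f}, "
      f"expected failures in next interval = {expected:.1f}")
```

    A fitted β̂ greater than 1 indicates a deteriorating system, which is what makes the integral over future intervals useful for overhaul and replacement planning.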

  15. Cellular Neural Network-Based Methods for Distributed Network Intrusion Detection

    Directory of Open Access Journals (Sweden)

    Kang Xie

    2015-01-01

    Full Text Available To address the problems of current distributed intrusion detection system (DIDS) architectures, a new online distributed intrusion detection model based on cellular neural networks (CNN) was proposed, in which a discrete-time CNN (DTCNN) was used as a weak classifier in each local node and a state-controlled CNN (SCCNN) was used as the global detection method. We further propose a new method for designing the template parameters of the SCCNN by solving a Linear Matrix Inequality. Experimental results based on the KDD CUP 99 dataset show its feasibility and effectiveness. Emerging evidence indicates that this new approach is amenable to parallelism and analog very large scale integration (VLSI) implementation, which allows the distributed intrusion detection to be performed better.

  16. Complexity of a kind of interval continuous self-map of finite type

    International Nuclear Information System (INIS)

    Wang Lidong; Chu Zhenyan; Liao Gongfu

    2011-01-01

    Highlights: → We find that the Hausdorff dimension of the non-wandering set of an interval continuous self-map f of finite type can be any s ∈ (0,1). → f|_Ω(f) has positive topological entropy. → f|_Ω(f) is chaotic in several senses, including Devaney chaos, Kato chaos and two-point distributional chaos. - Abstract: An interval map is called finitely typal if the restriction of the map to its non-wandering set is topologically conjugate with a subshift of finite type. In this paper, we prove that there exists an interval continuous self-map of finite type whose Hausdorff dimension is an arbitrary number in the interval (0, 1), and we discuss various chaotic properties of the map and the relations between the chaotic set and the set of recurrent points.

  17. Complexity of a kind of interval continuous self-map of finite type

    Energy Technology Data Exchange (ETDEWEB)

    Wang Lidong, E-mail: wld@dlnu.edu.cn [Institute of Mathematics, Dalian Nationalities University, Dalian 116600 (China); Institute of Mathematics, Jilin Normal University, Siping 136000 (China); Chu Zhenyan, E-mail: chuzhenyan8@163.com [Institute of Mathematics, Dalian Nationalities University, Dalian 116600 (China) and Institute of Mathematics, Jilin University, Changchun 130023 (China); Liao Gongfu, E-mail: liaogf@email.jlu.edu.cn [Institute of Mathematics, Jilin University, Changchun 130023 (China)

    2011-10-15

    Highlights: → We find that the Hausdorff dimension of the non-wandering set of an interval continuous self-map f of finite type can be any s ∈ (0,1). → f|_Ω(f) has positive topological entropy. → f|_Ω(f) is chaotic in several senses, including Devaney chaos, Kato chaos and two-point distributional chaos. - Abstract: An interval map is called finitely typal if the restriction of the map to its non-wandering set is topologically conjugate with a subshift of finite type. In this paper, we prove that there exists an interval continuous self-map of finite type whose Hausdorff dimension is an arbitrary number in the interval (0, 1), and we discuss various chaotic properties of the map and the relations between the chaotic set and the set of recurrent points.

  18. Reference intervals for thyreotropin and thyroid hormones for healthy adults based on the NOBIDA material and determined using a Modular E170

    DEFF Research Database (Denmark)

    Friis-Hansen, Lennart; Hilsted, Linda

    2008-01-01

    BACKGROUND: The aim of the present study was to establish Nordic reference intervals for thyreotropin (TSH) and the thyroid hormones in heparinized plasma. METHODS: We used 489 heparinized blood samples, collected in the morning, from the Nordic NOBIDA reference material, from healthy adults...... for the thyroid hormones, but not TSH, followed a Gaussian distribution. There were more TPO-ab and Tg-ab positive women than men. After exclusion of the TPO-ab and Tg-ab positive individuals, the reference interval for TSH was 0.64 (0.61-0.72) to 4.7 (4.4-5.0) mIU/L. The exclusion of these ab-positive samples...... also minimized the differences in TSH concentrations between the sexes and the different Nordic countries. For the thyroid hormones, there were only minor differences in the reference intervals between the Nordic populations and between men and women. These reference intervals were unaffected...

  19. Non-probabilistic defect assessment for structures with cracks based on interval model

    International Nuclear Information System (INIS)

    Dai, Qiao; Zhou, Changyu; Peng, Jian; Chen, Xiangwei; He, Xiaohua

    2013-01-01

    Highlights: • Non-probabilistic approach is introduced to defect assessment. • Definition and establishment of IFAC are put forward. • Determination of assessment rectangle is proposed. • Solution of non-probabilistic reliability index is presented. -- Abstract: Traditional defect assessment methods conservatively treat uncertainty of parameters as safety factors, while the probabilistic method is based on the clear understanding of detailed statistical information of parameters. In this paper, the non-probabilistic approach is introduced to the failure assessment diagram (FAD) to propose a non-probabilistic defect assessment method for structures with cracks. This novel defect assessment method contains three critical processes: establishment of the interval failure assessment curve (IFAC), determination of the assessment rectangle, and solution of the non-probabilistic reliability degree. Based on the interval theory, uncertain parameters such as crack sizes, material properties and loads are considered as interval variables. As a result, the failure assessment curve (FAC) will vary in a certain range, which is defined as IFAC. And the assessment point will vary within a rectangle zone which is defined as an assessment rectangle. Based on the interval model, the establishment of IFAC and the determination of the assessment rectangle are presented. Then according to the interval possibility degree method, the non-probabilistic reliability degree of IFAC can be determined. Meanwhile, in order to clearly introduce the non-probabilistic defect assessment method, a numerical example for the assessment of a pipe with crack is given. In addition, the assessment result of the proposed method is compared with that of the traditional probabilistic method, which confirms that this non-probabilistic defect assessment can reasonably resolve the practical problem with interval variables

  20. Non-probabilistic defect assessment for structures with cracks based on interval model

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Qiao; Zhou, Changyu, E-mail: changyu_zhou@163.com; Peng, Jian; Chen, Xiangwei; He, Xiaohua

    2013-09-15

    Highlights: • Non-probabilistic approach is introduced to defect assessment. • Definition and establishment of IFAC are put forward. • Determination of assessment rectangle is proposed. • Solution of non-probabilistic reliability index is presented. -- Abstract: Traditional defect assessment methods conservatively treat uncertainty of parameters as safety factors, while the probabilistic method is based on the clear understanding of detailed statistical information of parameters. In this paper, the non-probabilistic approach is introduced to the failure assessment diagram (FAD) to propose a non-probabilistic defect assessment method for structures with cracks. This novel defect assessment method contains three critical processes: establishment of the interval failure assessment curve (IFAC), determination of the assessment rectangle, and solution of the non-probabilistic reliability degree. Based on the interval theory, uncertain parameters such as crack sizes, material properties and loads are considered as interval variables. As a result, the failure assessment curve (FAC) will vary in a certain range, which is defined as IFAC. And the assessment point will vary within a rectangle zone which is defined as an assessment rectangle. Based on the interval model, the establishment of IFAC and the determination of the assessment rectangle are presented. Then according to the interval possibility degree method, the non-probabilistic reliability degree of IFAC can be determined. Meanwhile, in order to clearly introduce the non-probabilistic defect assessment method, a numerical example for the assessment of a pipe with crack is given. In addition, the assessment result of the proposed method is compared with that of the traditional probabilistic method, which confirms that this non-probabilistic defect assessment can reasonably resolve the practical problem with interval variables.
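
    The IFAC construction in the two records above is not reproduced here, but its first step — an assessment point becoming an assessment rectangle when the inputs are intervals — can be shown with elementary interval arithmetic. In the Python sketch below, the FAD coordinates Lr and Kr are formed from hypothetical interval-valued stresses and toughnesses; all quantities, units and the positivity assumption are purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float
    def __truediv__(self, other: "Interval"):
        # Valid for positive intervals only, as assumed here
        return Interval(self.lo / other.hi, self.hi / other.lo)

# Hypothetical interval inputs: applied stress, yield stress, stress
# intensity factor and fracture toughness (units arbitrary).
sigma   = Interval(180.0, 220.0)
sigma_y = Interval(300.0, 330.0)
K_I     = Interval(40.0, 55.0)
K_mat   = Interval(90.0, 110.0)

# The assessment point (Lr, Kr) of a FAD becomes a rectangle when the
# inputs are intervals: each coordinate is itself an interval.
Lr = sigma / sigma_y
Kr = K_I / K_mat
print(f"Lr in [{Lr.lo:.3f}, {Lr.hi:.3f}], Kr in [{Kr.lo:.3f}, {Kr.hi:.3f}]")
```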

  1. Interval timing in genetically modified mice: a simple paradigm.

    Science.gov (United States)

    Balci, F; Papachristos, E B; Gallistel, C R; Brunner, D; Gibson, J; Shumyatsky, G P

    2008-04-01

    We describe a behavioral screen for the quantitative study of interval timing and interval memory in mice. Mice learn to switch from a short-latency feeding station to a long-latency station when the short latency has passed without a feeding. The psychometric function is the cumulative distribution of switch latencies. Its median measures timing accuracy and its interquartile interval measures timing precision. Next, using this behavioral paradigm, we have examined mice with a gene knockout of the receptor for gastrin-releasing peptide that show enhanced (i.e. prolonged) freezing in fear conditioning. We have tested the hypothesis that the mutants freeze longer because they are more uncertain than wild types about when to expect the electric shock. The knockouts however show normal accuracy and precision in timing, so we have rejected this alternative hypothesis. Last, we conduct the pharmacological validation of our behavioral screen using d-amphetamine and methamphetamine. We suggest including the analysis of interval timing and temporal memory in tests of genetically modified mice for learning and memory and argue that our paradigm allows this to be done simply and efficiently.

  2. Methods for reconstruction of the density distribution of nuclear power

    International Nuclear Information System (INIS)

    Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.

    2015-01-01

    Highlights: • Two methods for reconstruction of the pin power distribution are presented. • The ARM method uses an analytical solution of the 2D diffusion equation. • The PRM method uses a polynomial solution without boundary conditions. • The maximum errors in pin power reconstruction occur in the peripheral water region. • The errors are significantly smaller in the inner area of the core. - Abstract: In the analytical reconstruction method (ARM), the two-dimensional (2D) neutron diffusion equation is analytically solved for two energy groups (2G) and homogeneous nodes with the dimensions of a fuel assembly (FA). The solution employs a 2D fourth-order expansion for the axial leakage term. The Nodal Expansion Method (NEM) provides the solution average values as the four average partial currents on the surfaces of the node, the average flux in the node and the multiplying factor of the problem. The expansion coefficients for the axial leakage are determined directly from the NEM method or can be determined in the reconstruction method. A new polynomial reconstruction method (PRM) is implemented based on the 2D expansion for the axial leakage term. The ARM uses the four average currents on the surfaces of the node and the four average fluxes at the corners of the node as boundary conditions, and the average flux in the node as a consistency condition. To determine the average fluxes at the corners of the node an analytical solution is employed. This analytical solution uses the average fluxes on the surfaces of the node as boundary conditions, and discontinuities at the corners are incorporated. The polynomial and analytical solutions of the PRM and ARM methods, respectively, represent the homogeneous flux distributions. The detailed distributions inside a FA are estimated by the product of the homogeneous distribution and a local heterogeneous form function. Moreover, the form functions of power are used. The results show that the methods have good accuracy when compared with reference values and

  3. An Interval Bound Algorithm of optimizing reactor core loading pattern by using reactivity interval schema

    International Nuclear Information System (INIS)

    Gong Zhaohu; Wang Kan; Yao Dong

    2011-01-01

    Highlights: → We present a new Loading Pattern Optimization method - the Interval Bound Algorithm (IBA). → IBA directly uses the reactivity of fuel assemblies and burnable poison. → IBA can optimize fuel assembly orientation in a coupled way. → Numerical experiments show that IBA outperforms a genetic algorithm and engineers. → We devise the DDWF technique to deal with multiple objectives and constraints. - Abstract: In order to optimize the core loading pattern in Nuclear Power Plants, this paper presents a new optimization method - the Interval Bound Algorithm (IBA). Similar to typical population-based algorithms, e.g. the genetic algorithm, IBA maintains a population of solutions and evolves them during the optimization process. IBA acquires the solution by statistically learning and sampling the control variable intervals of the population in each iteration. The control variables are the transforms of the reactivity of fuel assemblies or the worth of burnable poisons, which are the crucial heuristic information for loading pattern optimization problems. IBA can deal with the relationship between the dependent variables by defining the control variables. Based on the IBA algorithm, a parallel Loading Pattern Optimization code, named IBALPO, has been developed. To deal with multiple objectives and constraints, Dynamic Discontinuous Weight Factors (DDWF) for the fitness function have been used in IBALPO. Finally, the code system has been used to solve a realistic reloading problem, and a better pattern has been obtained compared with those found by engineers and a genetic algorithm, thus proving the performance of the code.

  4. Identification of reactor failure states using noise methods, and spatial power distribution

    International Nuclear Information System (INIS)

    Vavrin, J.; Blazek, J.

    1981-01-01

    A survey is given of the results achieved. Methodical means and programs were developed for the control computer which may be used in noise diagnostics and in the control of reactor power distribution. Statistical methods of processing the noise components of the signals of measured variables were used for identifying failures of reactors. The method of the synthesis of the neutron flux was used for modelling and evaluating the reactor power distribution. For monitoring and controlling the power distribution a mathematical model of the reactor was constructed suitable for control computers. The uses of noise analysis methods are recommended and directions of further development shown. (J.P.)

  5. Systematic review of serum steroid reference intervals developed using mass spectrometry.

    Science.gov (United States)

    Tavita, Nevada; Greaves, Ronda F

    2017-12-01

    The aim of this study was to perform a systematic review of the published literature to determine the available serum/plasma steroid reference intervals generated by mass spectrometry (MS) methods across all age groups in healthy subjects, and to suggest recommendations to achieve common MS-based reference intervals for serum steroids. The MEDLINE, EMBASE and PubMed databases were used to conduct a comprehensive search for English language, MS-based reference interval studies for serum/plasma steroids. Selection of steroids to include was based on those listed in the Royal College of Pathologists of Australasia Quality Assurance Programs, Chemical Pathology, Endocrine Program. This methodology has been registered in the PROSPERO International prospective register of systematic reviews (ID number: CRD42015029637). After accounting for duplicates, a total of 60 manuscripts were identified through the search strategy. Following critical evaluation, a total of 16 studies were selected. Of the 16 studies, 12 reported reference intervals for testosterone, 11 for 17-hydroxyprogesterone, nine for androstenedione, six for cortisol, three for progesterone, two for dihydrotestosterone and only one for aldosterone and dehydroepiandrosterone sulphate. No studies established MS-based reference intervals for oestradiol. As far as we are aware, this report provides the first comparison of the peer-reviewed literature on serum/plasma steroid reference intervals generated by MS-based methods. The reference intervals based on these published studies can be used to inform the process to develop common reference intervals, and agreed reporting units, for mass spectrometry based steroid methods. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  6. INTERVAL OBSERVER FOR A BIOLOGICAL REACTOR MODEL

    Directory of Open Access Journals (Sweden)

    T. A. Kharkovskaia

    2014-05-01

    Full Text Available The method of interval observer design for nonlinear systems with parametric uncertainties is considered. The interval observer synthesis problem for systems with varying parameters consists in the following: given an uncertainty constraint on the state values of the system, limiting the initial conditions of the system and the set of admissible values for the vector of unknown parameters and inputs, the interval existence condition for the estimates of the system state variables, containing the actual state at a given time, must remain valid over the whole considered time segment. Conditions for the design of interval observers for the considered class of systems are shown. They are: boundedness of the input and state, the existence of a majorizing function defining the uncertainty vector for the system, Lipschitz continuity or finiteness of this function, and the existence of an observer gain with a suitable Lyapunov matrix. The main condition for the design of such a device is cooperativity of the interval estimation error dynamics. The problem of selecting an individual observer gain matrix is considered. In order to ensure the property of cooperativity for the interval estimation error dynamics, a static transformation of coordinates is proposed. The proposed algorithm is demonstrated by computer modeling of a biological reactor. Possible applications of these interval estimation systems are the spheres of robust control, where the presence of various types of uncertainties in the system dynamics is assumed, biotechnology and environmental systems and processes, mechatronics and robotics, etc.

  7. Sensitivity Analysis of Dynamic Tariff Method for Congestion Management in Distribution Networks

    DEFF Research Database (Denmark)

    Huang, Shaojun; Wu, Qiuwei; Liu, Zhaoxi

    2015-01-01

    The dynamic tariff (DT) method is designed for the distribution system operator (DSO) to alleviate the congestions that might occur in a distribution network with high penetration of distribute energy resources (DERs). Sensitivity analysis of the DT method is crucial because of its decentralized...... control manner. The sensitivity analysis can obtain the changes of the optimal energy planning and thereby the line loading profiles over the infinitely small changes of parameters by differentiating the KKT conditions of the convex quadratic programming, over which the DT method is formed. Three case...

  8. Test of methods for retrospective activity size distribution determination from filter samples

    International Nuclear Information System (INIS)

    Meisenberg, Oliver; Tschiersch, Jochen

    2015-01-01

    Determining the activity size distribution of radioactive aerosol particles requires sophisticated and heavy equipment, which makes measurements at a large number of sites difficult and expensive. Therefore three methods for a retrospective determination of size distributions from aerosol filter samples in the laboratory were tested for their applicability. Extraction into a carrier liquid with subsequent nebulisation showed size distributions with a slight but correctable bias towards larger diameters compared with the original size distribution. Yields on the order of magnitude of 1% could be achieved. Sonication-assisted extraction into a carrier liquid caused a coagulation mode to appear in the size distribution. Sonication-assisted extraction into the air did not show acceptable results due to small yields. The method of extraction into a carrier liquid without sonication was applied to aerosol samples from Chernobyl in order to calculate inhalation dose coefficients for 137Cs based on the individual size distribution. The effective dose coefficient is about half of that calculated with a default reference size distribution. - Highlights: • Activity size distributions can be recovered after aerosol sampling on filters. • Extraction into a carrier liquid and subsequent nebulisation is appropriate. • This facilitates the determination of activity size distributions for individuals. • Size distributions from this method can be used for individual dose coefficients. • Dose coefficients were calculated for the workers at the new Chernobyl shelter

  9. Interval neutrosophic sets and their application in multicriteria decision making problems.

    Science.gov (United States)

    Zhang, Hong-yu; Wang, Jian-qiang; Chen, Xiao-hong

    2014-01-01

    As a generalization of fuzzy sets and intuitionistic fuzzy sets, neutrosophic sets have been developed to represent uncertain, imprecise, incomplete, and inconsistent information existing in the real world. And interval neutrosophic sets (INSs) have been proposed exactly to address issues with a set of numbers in the real unit interval, not just a specific number. However, there are fewer reliable operations for INSs, as well as the INS aggregation operators and decision making method. For this purpose, the operations for INSs are defined and a comparison approach is put forward based on the related research of interval valued intuitionistic fuzzy sets (IVIFSs) in this paper. On the basis of the operations and comparison approach, two interval neutrosophic number aggregation operators are developed. Then, a method for multicriteria decision making problems is explored applying the aggregation operators. In addition, an example is provided to illustrate the application of the proposed method.

  10. METHODS OF MEASURING THE EFFECTS OF LIGHTNING BY SIMULATING ITS STRIKES WITH THE INTERVAL ASSESSMENT OF THE RESULTS OF MEASUREMENTS

    Directory of Open Access Journals (Sweden)

    P. V. Kriksin

    2017-01-01

    Full Text Available The article presents the results of the development of new methods aimed at a more accurate interval estimate of the experimental values of voltages on grounding devices of substations, and of circuits in the control cables, that occur when lightning strikes lightning rods; this estimate made it possible to increase the accuracy of the results of the study of lightning noise by 28 %. The more accurate interval estimate was achieved by developing a measurement model that takes into account, along with the measured values, different measurement errors, and that includes special processing of the measurement results. As a result, the interval containing the true value of the sought voltage is determined with a confidence of 95 %. The methods can be applied to the IK-1 and IKP-1 measurement complexes, consisting of an aperiodic pulse generator, a generator of high-frequency pulses, and selective voltmeters, respectively. To evaluate the effectiveness of the developed methods, series of experimental voltage assessments of the grounding devices of ten active high-voltage substations were carried out in accordance with both the developed methods and traditional techniques. The evaluation results confirmed the possibility of finding the true values of voltage over a wide range, which ought to be considered in the technical diagnostics of lightning protection of substations when the measurement results are analysed and measures to reduce the effects of lightning are developed. Also, a comparative analysis of the measurements made in accordance with the developed methods and with traditional techniques demonstrated that the true value of the sought voltage may exceed the measured value by 28 % on average, which ought to be considered in the further analysis of the parameters of lightning protection at the facility and in the development of corrective actions. The developed methods have been

  11. Coupled double-distribution-function lattice Boltzmann method for the compressible Navier-Stokes equations.

    Science.gov (United States)

    Li, Q; He, Y L; Wang, Y; Tao, W Q

    2007-11-01

    A coupled double-distribution-function lattice Boltzmann method is developed for the compressible Navier-Stokes equations. Different from existing thermal lattice Boltzmann methods, this method can recover the compressible Navier-Stokes equations with a flexible specific-heat ratio and Prandtl number. In the method, a density distribution function based on a multispeed lattice is used to recover the compressible continuity and momentum equations, while the compressible energy equation is recovered by an energy distribution function. The energy distribution function is then coupled to the density distribution function via the thermal equation of state. In order to obtain an adjustable specific-heat ratio, a constant related to the specific-heat ratio is introduced into the equilibrium energy distribution function. Two different coupled double-distribution-function lattice Boltzmann models are also proposed in the paper. Numerical simulations are performed for the Riemann problem, the double-Mach-reflection problem, and the Couette flow with a range of specific-heat ratios and Prandtl numbers. The numerical results are found to be in excellent agreement with analytical and/or other solutions.

  12. Uncertainty analysis of the radiological characteristics of radioactive waste using a method based on log-normal distributions

    International Nuclear Information System (INIS)

    Gigase, Yves

    2007-01-01

    Available in abstract form only. Full text of publication follows: The uncertainty on the characteristics of radioactive LILW waste packages is difficult to determine and often very large. This results from a lack of knowledge of the constitution of the waste package and of the composition of the radioactive sources inside. To calculate a quantitative estimate of the uncertainty on a characteristic of a waste package, one has to combine these various uncertainties. This paper discusses an approach to this problem, based on the use of the log-normal distribution, which is both elegant and easy to use. It can provide, for example, quantitative estimates of uncertainty intervals that 'make sense'. The purpose is to develop a pragmatic approach that can be integrated into existing characterization methods. In this paper we show how our method can be applied to the scaling factor method. We also explain how it can be used when estimating other more complex characteristics, such as the total uncertainty of a collection of waste packages. This method could have applications in radioactive waste management, in particular in those decision processes where the uncertainty on the amount of activity is considered important, such as probabilistic risk assessment or the definition of criteria for acceptance or categorization. (author)
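
    One reason the log-normal assumption is convenient: the product of independent log-normal quantities is again log-normal, with the log-means and log-variances simply adding. The Python sketch below applies this to a scaling-factor-style estimate (activity = key-nuclide measurement × scaling factor); the parameters are invented for illustration, and the paper's full treatment of collections of packages is not reproduced.

```python
import numpy as np

# Activity estimated as (measured key-nuclide activity) x (scaling
# factor), both treated as lognormal. Parameters are illustrative only.
mu_key, gsd_key = np.log(5.0), 1.3     # geometric mean 5 Bq/g, GSD 1.3
mu_sf,  gsd_sf  = np.log(0.02), 2.5    # scaling factor: geo. mean 0.02

# The product of independent lognormals is lognormal: the log-means
# add and the log-variances add.
mu = mu_key + mu_sf
sigma = np.sqrt(np.log(gsd_key)**2 + np.log(gsd_sf)**2)

# 95% uncertainty interval of the inferred activity
lo, hi = np.exp(mu - 1.96 * sigma), np.exp(mu + 1.96 * sigma)
print(f"median = {np.exp(mu):.3f} Bq/g, "
      f"95% interval = ({lo:.3f}, {hi:.3f}) Bq/g")
```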

  13. The best confidence interval of the failure rate and unavailability per demand when few experimental data are available

    International Nuclear Information System (INIS)

    Goodman, J.

    1985-01-01

    Using the few available data, the likelihood functions for the failure rate and unavailability per demand are constructed. These likelihood functions are used to obtain likelihood density functions for the failure rate and unavailability per demand. The best (or shortest) confidence intervals for these functions are provided. The failure rate and unavailability per demand are important characteristics needed for reliability and availability analysis. The methods for estimating these characteristics when plenty of observed data are available are well known. However, on many occasions, when we deal with rare failure modes or with new equipment or components for which sufficient experience has not accumulated, we have scarce data where few or zero failures have occurred. In these cases, a technique which reflects exactly our state of knowledge is required. This technique is based on likelihood density functions or Bayesian methods, depending on the available prior distribution. To extract the maximum amount of information from the data, the best confidence interval is determined

  14. Statistical methods in regression and calibration analysis of chromosome aberration data

    International Nuclear Information System (INIS)

    Merkle, W.

    1983-01-01

    The method of iteratively reweighted least squares for the regression analysis of Poisson distributed chromosome aberration data is reviewed in the context of other fit procedures used in the cytogenetic literature. As an application of the resulting regression curves, methods for calculating confidence intervals on dose from aberration yield are described and compared, and, for the linear-quadratic model, a confidence interval is given. Emphasis is placed on the rational interpretation and the limitations of the various methods from a statistical point of view. (orig./MG)
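
    As an illustration of the fitting step, the following Python sketch runs iteratively reweighted least squares for a Poisson model with identity link and the linear-quadratic yield curve y = c + αD + βD²; for the identity link the working response is simply the observed count and the weights are 1/μ. The dose-response data are invented, and this is a generic IRLS sketch rather than the review's specific procedure.

```python
import numpy as np

# Doses (Gy) and observed aberration counts per fixed number of cells;
# the counts are illustrative.
D = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 8.0, 18.0, 55.0, 115.0, 190.0])

X = np.column_stack([np.ones_like(D), D, D**2])   # linear-quadratic model
beta = np.array([1.0, 1.0, 1.0])                  # positive starting values

# IRLS for a Poisson model with identity link: weights are 1/mu and
# the working response reduces to the observed counts y.
for _ in range(50):
    mu = np.clip(X @ beta, 1e-8, None)            # keep the mean positive
    W = 1.0 / mu
    XtWX = X.T @ (W[:, None] * X)
    beta_new = np.linalg.solve(XtWX, X.T @ (W * y))
    if np.max(np.abs(beta_new - beta)) < 1e-10:
        beta = beta_new
        break
    beta = beta_new

print("c, alpha, beta =", np.round(beta, 3))
```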

  15. Comparison of ISO-GUM and Monte Carlo Method for Evaluation of Measurement Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Young-Cheol; Her, Jae-Young; Lee, Seung-Jun; Lee, Kang-Jin [Korea Gas Corporation, Daegu (Korea, Republic of)

    2014-07-15

    To supplement the ISO-GUM method for the evaluation of measurement uncertainty, a simulation program using the Monte Carlo method (MCM) was developed, and the MCM and GUM methods were compared. The results are as follows: (1) Even under a non-normal probability distribution of the measurement, MCM provides an accurate coverage interval; (2) Even if a probability distribution that emerged from combining a few non-normal distributions looks normal, there are cases in which the actual distribution is not normal, and the non-normality can be determined from the probability distribution of the combined variance; and (3) If type-A standard uncertainties are involved in the evaluation of measurement uncertainty, GUM generally offers an undervalued coverage interval. However, this problem can be solved by the Bayesian evaluation of type-A standard uncertainty. In this case, the effective degrees of freedom for the combined variance are not required in the evaluation of expanded uncertainty, and the appropriate coverage factor for the 95% level of confidence was determined to be 1.96.
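
    A minimal sketch of the comparison, assuming a hypothetical two-input measurement model Y = X1/X2 with one normal and one rectangular input: the MCM coverage interval comes from percentiles of the simulated output, while the GUM-style interval uses linearized propagation with coverage factor k = 1.96.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 10**6

# Hypothetical measurement model Y = X1 / X2 with one normal and one
# rectangular (uniform) input, in the spirit of GUM Supplement 1.
x1 = rng.normal(10.0, 0.2, M)            # type-A-like normal input
x2 = rng.uniform(4.9, 5.1, M)            # type-B rectangular input
y = x1 / x2

# MCM coverage interval: percentiles of the simulated output distribution
lo, hi = np.percentile(y, [2.5, 97.5])

# GUM-style interval: linear propagation with coverage factor k = 1.96
u1, u2 = 0.2, 0.1 / np.sqrt(3.0)         # standard uncertainties of inputs
ybar = 10.0 / 5.0
uc = ybar * np.sqrt((u1 / 10.0)**2 + (u2 / 5.0)**2)
print(f"MCM 95%: ({lo:.4f}, {hi:.4f});  GUM 95%: "
      f"({ybar - 1.96*uc:.4f}, {ybar + 1.96*uc:.4f})")
```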

  16. Establishment of reference intervals for serum thyroid-stimulating hormone, free and total thyroxine, and free and total triiodothyronine for the Beckman Coulter DxI-800 analyzers by indirect method using data obtained from Chinese population in Zhejiang Province, China.

    Science.gov (United States)

    Wang, Yan; Zhang, Yu-Xia; Zhou, Yong-Lie; Xia, Jun

    2017-07-01

    In order to establish suitable reference intervals for thyroid-stimulating hormone (TSH), free (unbound) T4 (FT4), free triiodothyronine (FT3), total thyroxine (T4), and total triiodothyronine (T3) for patients in Zhejiang, China, an indirect method was developed using data from people presenting for routine health check-ups. Fifteen thousand nine hundred and fifty-six persons' results were reviewed. Box-Cox or Case Rank transformation was used to bring the data to a normal distribution. Tukey and Box-Plot methods were used to exclude the outliers. A nonparametric method was used to establish the reference intervals following the EP28-A3c guideline. Pearson correlation was used to evaluate the correlation between hormone levels and age, while the Mann-Whitney U test was employed to quantify concentration differences between people younger and older than 50 years. Reference intervals were 0.66-4.95 mIU/L (TSH), 8.97-14.71 pmol/L (FT4), 3.75-5.81 pmol/L (FT3), 73.45-138.93 nmol/L (total T4), and 1.24-2.18 nmol/L (total T3) in males; the corresponding reference intervals for females were 0.72-5.84 mIU/L (TSH), 8.62-14.35 pmol/L (FT4), 3.59-5.56 pmol/L (FT3), 73.45-138.93 nmol/L (total T4), and 1.20-2.10 nmol/L (total T3). FT4, FT3, and total T3 levels in males and the FT4 level in females were inversely correlated with age. Total T4 and TSH levels in females were directly correlated with age. Significant differences in these hormones were also found between those younger and older than 50 years, except for FT3 in females. The indirect method can be applied for the establishment of reference intervals for TSH, FT4, FT3, total T4, and total T3. The resulting reference intervals are narrower than those previously established. The age factor should also be considered. © 2016 Wiley Periodicals, Inc.
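
    The indirect workflow described here (transform, trim outliers, take nonparametric percentiles) can be sketched in a few lines of Python. The example below uses simulated TSH-like data, SciPy's Box-Cox transform and Tukey fences; it is a schematic of the general indirect approach, not the authors' exact pipeline (which also used Case Rank transformation and the EP28-A3c details).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Stand-in for routine laboratory TSH results (mIU/L): a skewed bulk
# population plus a small admixture of pathological outliers.
tsh = np.concatenate([rng.lognormal(mean=0.6, sigma=0.45, size=5000),
                      rng.uniform(15, 60, size=50)])

# 1) Transform toward normality with Box-Cox
z, lam = stats.boxcox(tsh)

# 2) Exclude outliers with Tukey fences on the transformed scale
q1, q3 = np.percentile(z, [25, 75])
iqr = q3 - q1
keep = (z > q1 - 1.5 * iqr) & (z < q3 + 1.5 * iqr)

# 3) Nonparametric reference limits on the original scale
lo, hi = np.percentile(tsh[keep], [2.5, 97.5])
print(f"lambda = {lam:.2f}, reference interval = {lo:.2f} to {hi:.2f} mIU/L")
```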

  17. Confidence bounds for normal and lognormal distribution coefficients of variation

    Science.gov (United States)

    Steve Verrill

    2003-01-01

    This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...

  18. [Heart rate variability study based on a novel RdR RR Intervals Scatter Plot].

    Science.gov (United States)

    Lu, Hongwei; Lu, Xiuyun; Wang, Chunfang; Hua, Youyuan; Tian, Jiajia; Liu, Shihai

    2014-08-01

    On the basis of the Poincaré scatter plot and the first order difference scatter plot, a novel heart rate variability (HRV) analysis method based on scatter plots of RR intervals and first order differences of RR intervals (namely, RdR) was proposed. The abscissa of the RdR scatter plot (the x-axis) is the RR interval and the ordinate (the y-axis) is the difference between successive RR intervals. The RdR scatter plot includes the information of the RR intervals and of the differences between successive RR intervals, and so captures more HRV information. By RdR scatter plot analysis of some records of the MIT-BIH arrhythmia database, we found that the scatter plots of uncoupled premature ventricular contraction (PVC), coupled ventricular bigeminy and ventricular trigeminy PVC had specific graphic characteristics. The RdR scatter plot method has higher detecting performance than the Poincaré scatter plot method, and is simpler and more intuitive than the first order difference method.
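
    Constructing the RdR scatter plot is straightforward once an RR-interval series is available: plot each RR interval against the difference to the next one. A minimal Python sketch with synthetic RR data follows; real use would take RR intervals from annotated ECG records such as MIT-BIH.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)

# Synthetic RR interval series (ms) standing in for an ECG tachogram
rr = 800 + np.cumsum(rng.normal(0, 8, 500))
rr += rng.normal(0, 15, rr.size)

# RdR scatter plot: x is the RR interval, y is the difference between
# successive RR intervals
x = rr[:-1]
y = np.diff(rr)

plt.scatter(x, y, s=4)
plt.xlabel("RR interval (ms)")
plt.ylabel("Successive RR difference (ms)")
plt.title("RdR scatter plot")
plt.show()
```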

  19. Quantification of transuranic elements by time interval correlation spectroscopy of the detected neutrons

    Science.gov (United States)

    Baeten; Bruggeman; Paepen; Carchon

    2000-03-01

    The non-destructive quantification of transuranic elements in nuclear waste management or in safeguards verifications is commonly performed by passive neutron assay techniques. To minimise the number of unknown sample-dependent parameters, Neutron Multiplicity Counting (NMC) is applied. We developed a new NMC-technique, called Time Interval Correlation Spectroscopy (TICS), which is based on the measurement of Rossi-alpha time interval distributions. Compared to other NMC-techniques, TICS offers several advantages.
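
    Although the TICS estimator itself is not given in the abstract, the underlying Rossi-alpha time-interval distribution is easy to sketch: for each detected event, one histograms the delays to all subsequent events within a fixed window. The Python fragment below does this for synthetic, uncorrelated time stamps, which produce a flat distribution; a multiplying sample would add a correlated excess at short delays. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in detection time stamps (s): uncorrelated background here;
# fission chains in a multiplying sample would add correlated pairs
# at short delays.
times = np.sort(rng.uniform(0.0, 10.0, 20000))

window, nbins = 1e-3, 50
edges = np.linspace(0.0, window, nbins + 1)
counts = np.zeros(nbins)

# Rossi-alpha: for every event, histogram the delays to all later
# events that fall inside the time window.
for i, t0 in enumerate(times):
    j = np.searchsorted(times, t0 + window, side="right")
    counts += np.histogram(times[i+1:j] - t0, bins=edges)[0]

rate = len(times) / 10.0
print("mean counts per bin:", counts.mean())
print("expected for pure background:", rate * len(times) * window / nbins)
```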

  20. Testing independence of bivariate interval-censored data using modified Kendall's tau statistic.

    Science.gov (United States)

    Kim, Yuneung; Lim, Johan; Park, DoHwan

    2015-11-01

    In this paper, we study a nonparametric procedure to test independence of bivariate interval-censored data, for both current status data (case 1 interval-censored data) and case 2 interval-censored data. To do this, we propose a score-based modification of Kendall's tau statistic for bivariate interval-censored data. Our modification defines the Kendall's tau statistic with expected numbers of concordant and discordant pairs of data. The performance of the modified approach is illustrated by simulation studies and an application to the AIDS study. We compare our method to alternative approaches such as the two-stage estimation method by Sun et al. (Scandinavian Journal of Statistics, 2006) and the multiple imputation method by Betensky and Finkelstein (Statistics in Medicine, 1999b). © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Interval Neutrosophic Sets and Their Application in Multicriteria Decision Making Problems

    Directory of Open Access Journals (Sweden)

    Hong-yu Zhang

    2014-01-01

    Full Text Available As a generalization of fuzzy sets and intuitionistic fuzzy sets, neutrosophic sets have been developed to represent uncertain, imprecise, incomplete, and inconsistent information existing in the real world. And interval neutrosophic sets (INSs) have been proposed exactly to address issues with a set of numbers in the real unit interval, not just a specific number. However, there are fewer reliable operations for INSs, as well as the INS aggregation operators and decision making method. For this purpose, the operations for INSs are defined and a comparison approach is put forward based on the related research of interval valued intuitionistic fuzzy sets (IVIFSs) in this paper. On the basis of the operations and comparison approach, two interval neutrosophic number aggregation operators are developed. Then, a method for multicriteria decision making problems is explored applying the aggregation operators. In addition, an example is provided to illustrate the application of the proposed method.

  2. Computing Statistics under Interval and Fuzzy Uncertainty Applications to Computer Science and Engineering

    CERN Document Server

    Nguyen, Hung T; Wu, Berlin; Xiang, Gang

    2012-01-01

    In many practical situations, we are interested in statistics characterizing a population of objects: e.g. the mean height of people from a certain area. Most algorithms for estimating such statistics assume that the sample values are exact. In practice, sample values come from measurements, and measurements are never absolutely accurate. Sometimes, we know the exact probability distribution of the measurement inaccuracy, but often, we only know the upper bound on this inaccuracy. In this case, we have interval uncertainty: e.g. if the measured value is 1.0, and inaccuracy is bounded by 0.1, then the actual (unknown) value of the quantity can be anywhere between 1.0 - 0.1 = 0.9 and 1.0 + 0.1 = 1.1. In other cases, the values are expert estimates, and we only have fuzzy information about the estimation inaccuracy. This book shows how to compute statistics under such interval and fuzzy uncertainty. The resulting methods are applied to computer science (optimal scheduling of different processors), to in...
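
    The simplest instance of the book's problem can be shown directly: if each measured value x_i is only known to lie in [x_i − Δ, x_i + Δ], then the sample mean, being monotone in every input, ranges over an interval obtained from the endpoint configurations. A small Python sketch with invented numbers follows; harder statistics such as the variance are exactly where such methods need special algorithms, since maximizing the sample variance over interval data is NP-hard in general.

```python
import numpy as np

# Measured values and a common measurement accuracy bound delta:
# each true value lies in [x_i - delta, x_i + delta].
x = np.array([1.0, 1.2, 0.9, 1.1, 1.05])
delta = 0.1

# The sample mean is monotone in every x_i, so its interval bounds
# are obtained by shifting all measurements to their endpoints.
mean_lo = np.mean(x - delta)
mean_hi = np.mean(x + delta)
print(f"mean lies in [{mean_lo:.3f}, {mean_hi:.3f}]")

# Note: bounds for the variance are not this simple; maximizing the
# sample variance over interval data is NP-hard in general, which is
# why books on interval statistics develop special-case algorithms.
```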

  3. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution.

    Science.gov (United States)

    Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr

    2012-01-01

    Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via the efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency gain estimates, and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  4. Reinterpretation of the results of a pooled analysis of dietary carotenoid intake and breast cancer risk by using the interval collapsing method

    Directory of Open Access Journals (Sweden)

    Jong-Myon Bae

    2016-06-01

    Full Text Available OBJECTIVES: A pooled analysis of 18 prospective cohort studies, reported in 2012, evaluated carotenoid intakes and breast cancer risk defined by estrogen receptor (ER) and progesterone receptor (PR) status using the "highest versus lowest intake" method (HLM). By applying the interval collapsing method (ICM) to maximize the use of the estimated information, we reevaluated the results of the previous analysis in order to reinterpret the inferences made. METHODS: In order to estimate the summary effect size (sES) and its 95% confidence interval (CI), meta-analyses with the random-effects model were conducted for adjusted relative risks and their 95% CIs from the second to the fifth interval according to five kinds of carotenoids and ER/PR status. RESULTS: The following new findings were identified: α-Carotene and β-cryptoxanthin have protective effects on overall breast cancer. All five kinds of carotenoids showed protective effects on ER− breast cancer. β-Carotene increased the risk of ER+ or ER+/PR+ breast cancer. α-Carotene, β-carotene, lutein/zeaxanthin, and lycopene showed a protective effect on ER−/PR+ or ER−/PR− breast cancer. CONCLUSIONS: The new findings support the hypothesis that carotenoids showing anticancer effects through antioxidant activity might reduce the risk of ER− breast cancer. Based on these findings, the modification of the effects of α-carotene, β-carotene, and β-cryptoxanthin should be evaluated according to PR and ER status.

  5. Environmental DNA method for estimating salamander distribution in headwater streams, and a comparison of water sampling methods.

    Science.gov (United States)

    Katano, Izumi; Harada, Ken; Doi, Hideyuki; Souma, Rio; Minamoto, Toshifumi

    2017-01-01

    Environmental DNA (eDNA) has recently been used for detecting the distribution of macroorganisms in various aquatic habitats. In this study, we applied an eDNA method to estimate the distribution of the Japanese clawed salamander, Onychodactylus japonicus, in headwater streams. Additionally, we compared the detection of eDNA and hand-capturing methods used for determining the distribution of O. japonicus. For eDNA detection, we designed a qPCR primer/probe set for O. japonicus using the 12S rRNA region. We detected the eDNA of O. japonicus at all sites (with the exception of one), where we also observed them by hand-capturing. Additionally, we detected eDNA at two sites where we were unable to observe individuals using the hand-capturing method. Moreover, we found that eDNA concentrations and detection rates of the two water sampling areas (stream surface and under stones) were not significantly different, although the eDNA concentration in the water under stones was more varied than that on the surface. We, therefore, conclude that eDNA methods could be used to determine the distribution of macroorganisms inhabiting headwater systems by using samples collected from the surface of the water.

  6. Comparison of Two Methods Used to Model Shape Parameters of Pareto Distributions

    Science.gov (United States)

    Liu, C.; Charpentier, R.R.; Su, J.

    2011-01-01

    Two methods are compared for estimating the shape parameters of Pareto field-size (or pool-size) distributions for petroleum resource assessment. Both methods assume mature exploration in which most of the larger fields have been discovered. Both methods use the sizes of larger discovered fields to estimate the numbers and sizes of smaller fields: (1) the tail-truncated method uses a plot of field size versus size rank, and (2) the log-geometric method uses data binned in field-size classes and the ratios of adjacent bin counts. Simulation experiments were conducted using discovered oil and gas pool-size distributions from four petroleum systems in Alberta, Canada and using Pareto distributions generated by Monte Carlo simulation. The estimates of the shape parameters of the Pareto distributions, calculated by both the tail-truncated and log-geometric methods, generally stabilize where discovered pool numbers are greater than 100. However, with fewer than 100 discoveries, these estimates can vary greatly with each new discovery. The estimated shape parameters of the tail-truncated method are more stable and larger than those of the log-geometric method where the number of discovered pools is more than 100. Both methods, however, tend to underestimate the shape parameter. Monte Carlo simulation was also used to create sequences of discovered pool sizes by sampling from a Pareto distribution with a discovery process model using a defined exploration efficiency (in order to show how biased the sampling was in favor of larger fields being discovered first). A higher (more biased) exploration efficiency gives better estimates of the Pareto shape parameters. © 2011 International Association for Mathematical Geosciences.
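
    The rank-size idea behind the tail-truncated method can be sketched generically: for a Pareto tail, log size against log rank is roughly linear with slope ≈ −1/shape, so a regression on the larger (reliably discovered) fields yields a shape estimate. The Python fragment below does this on simulated field sizes; it is a schematic of the plot-based approach, not the authors' exact estimator.

```python
import numpy as np

rng = np.random.default_rng(11)

# Simulated "discovered field" sizes from a Pareto distribution with
# true shape 0.8 (illustrative of petroleum field-size data).
shape_true, x_min, n = 0.8, 1.0, 300
sizes = (rng.pareto(shape_true, n) + 1.0) * x_min  # classical Pareto I

# Rank-size regression on the larger discoveries only (mature
# exploration assumption: big fields known, small ones undersampled).
s = np.sort(sizes)[::-1][:100]            # top 100 by size
rank = np.arange(1, len(s) + 1)
slope, intercept = np.polyfit(np.log(rank), np.log(s), 1)

# For a Pareto tail, log(size) vs log(rank) has slope ~ -1/shape.
print(f"estimated shape = {-1.0 / slope:.2f} (true {shape_true})")
```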

  7. Distributed Interior-point Method for Loosely Coupled Problems

    DEFF Research Database (Denmark)

    Pakazad, Sina Khoshfetrat; Hansson, Anders; Andersen, Martin Skovgaard

    2014-01-01

    In this paper, we put forth distributed algorithms for solving loosely coupled unconstrained and constrained optimization problems. Such problems are usually solved using algorithms that are based on a combination of decomposition and first order methods. These algorithms are commonly very slow a...

  8. Pediatric Reference Intervals for Free Thyroxine and Free Triiodothyronine

    Science.gov (United States)

    Jang, Megan; Guo, Tiedong; Soldin, Steven J.

    2009-01-01

    Background The clinical value of free thyroxine (FT4) and free triiodothyronine (FT3) analysis depends on the reference intervals with which they are compared. We determined age- and sex-specific reference intervals for neonates, infants, and children 0–18 years of age for FT4 and FT3 using tandem mass spectrometry. Methods Reference intervals were calculated for serum FT4 (n = 1426) and FT3 (n = 1107) obtained from healthy children between January 1, 2008, and June 30, 2008, from Children's National Medical Center and Georgetown University Medical Center Bioanalytical Core Laboratory, Washington, DC. Serum samples were analyzed using isotope dilution liquid chromatography tandem mass spectrometry (LC/MS/MS) with deuterium-labeled internal standards. Results FT4 reference intervals were very similar for males and females of all ages and ranged between 1.3 and 2.4 ng/dL for children 1 to 18 years old. FT4 reference intervals for 1- to 12-month-old infants were 1.3–2.8 ng/dL. These 2.5 to 97.5 percentile intervals were much tighter than reference intervals obtained using immunoassay platforms 0.48–2.78 ng/dL for males and 0.85–2.09 ng/dL for females. Similarly, FT3 intervals were consistent and similar for males and females and for all ages, ranging between 1.5 pg/mL and approximately 6.0 pg/mL for children 1 month of age to 18 years old. Conclusions This is the first study to provide pediatric reference intervals of FT4 and FT3 for children from birth to 18 years of age using LC/MS/MS. Analysis using LC/MS/MS provides more specific quantification of thyroid hormones. A comparison of the ultrafiltration tandem mass spectrometric method with equilibrium dialysis showed very good correlation. PMID:19583487

  9. Uptake of recommended common reference intervals for chemical pathology in Australia.

    Science.gov (United States)

    Jones, Graham Rd; Koetsier, Sabrina

    2017-05-01

    Background Reference intervals are a vital part of reporting numerical pathology results. It is known, however, that variation in reference intervals between laboratories is common, even when analytical methods support common reference intervals. In response to this, in Australia, the Australasian Association of Clinical Biochemists together with the Royal College of Pathologists of Australasia published in 2014 a set of recommended common reference intervals for 11 common serum analytes (sodium, potassium, chloride, bicarbonate, creatinine male, creatinine female, calcium, calcium adjusted for albumin, phosphate, magnesium, lactate dehydrogenase, alkaline phosphatase and total protein). Methods Uptake of recommended common reference intervals in Australian laboratories was assessed using data from four annual cycles of the RCPAQAP reference intervals external quality assurance programme. Results Over three years, from 2013 to 2016, the use of the recommended upper and lower reference limits has increased from 40% to 83%. Nearly half of the intervals in use by enrolled laboratories in 2016 have been changed in this time period, indicating an active response to the guidelines. Conclusions These data support the activities of the Australasian Association of Clinical Biochemists and Royal College of Pathologists of Australasia in demonstrating a change in laboratory behaviour to reduce unnecessary variation in reference intervals and thus provide a consistent message to doctors and patients irrespective of the laboratory used.

  10. Optimal reactive power and voltage control in distribution networks with distributed generators by fuzzy adaptive hybrid particle swarm optimisation method

    DEFF Research Database (Denmark)

    Chen, Shuheng; Hu, Weihao; Su, Chi

    2015-01-01

    A new and efficient methodology for optimal reactive power and voltage control of distribution networks with distributed generators based on fuzzy adaptive hybrid PSO (FAHPSO) is proposed. The objective is to minimize comprehensive cost, consisting of power loss and operation cost of transformers… The algorithm is implemented in VC++ 6.0 program language and the corresponding numerical experiments are finished on the modified version of the IEEE 33-node distribution system with two newly installed distributed generators and eight newly installed capacitor banks. The numerical results prove that the proposed method can search a more promising control schedule of all transformers, all capacitors and all distributed generators with less time consumption, compared with other listed artificial intelligent methods.

  11. Generalized Bootstrap Method for Assessment of Uncertainty in Semivariogram Inference

    Science.gov (United States)

    Olea, R.A.; Pardo-Iguzquiza, E.

    2011-01-01

    The semivariogram and its related function, the covariance, play a central role in classical geostatistics for modeling the average continuity of spatially correlated attributes. Whereas all methods are formulated in terms of the true semivariogram, in practice what can be used are estimated semivariograms and models based on samples. A generalized form of the bootstrap method to properly model spatially correlated data is used to advance knowledge about the reliability of empirical semivariograms and semivariogram models based on a single sample. Among several methods available to generate spatially correlated resamples, we selected a method based on the LU decomposition and used several examples to illustrate the approach. The first example is a synthetic, isotropic, exhaustive sample following a normal distribution, the second is also synthetic but follows a non-Gaussian random field, and the third is an empirical sample consisting of actual rain-gauge measurements. Results show wider confidence intervals than those found previously by others with inadequate application of the bootstrap. Also, even for the Gaussian example, distributions for estimated semivariogram values and model parameters are positively skewed. In this sense, bootstrap percentile confidence intervals, which are not centered around the empirical semivariogram and do not require distributional assumptions for their construction, provide an achieved coverage similar to the nominal coverage. The latter cannot be achieved by symmetrical confidence intervals based on the standard error, regardless of whether the standard error is estimated from a parametric equation or from the bootstrap. © 2010 International Association for Mathematical Geosciences.
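
    The LU-decomposition simulation step that underlies this generalized bootstrap can be sketched briefly. The snippet below is a minimal illustration, assuming an exponential covariance model and Gaussian marginals; the covariance parameters and the rescaling by the sample mean and standard deviation are placeholders, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(7)

def correlated_resample(values, coords, corr_range=2.0, sill=1.0, n_draws=3):
    """LU (Cholesky) simulation: factor the covariance C = L L^T implied by an
    assumed exponential covariance model, then map i.i.d. normals through L so
    each resample honors the assumed spatial correlation."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cov = sill * np.exp(-d / corr_range)            # exponential covariance
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(values)))
    mu, sd = values.mean(), values.std(ddof=1)
    z = rng.standard_normal((len(values), n_draws))
    return mu + sd * (L @ z)

coords = rng.uniform(0, 10, size=(50, 2))
field = rng.standard_normal(50)                     # stand-in for observed data
print(correlated_resample(field, coords).shape)     # (50, 3)
```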

  12. Multifactor analysis of multiscaling in volatility return intervals.

    Science.gov (United States)

    Wang, Fengzhong; Yamasaki, Kazuko; Havlin, Shlomo; Stanley, H Eugene

    2009-01-01

    We study the volatility time series of the 1137 most traded stocks in the U.S. stock markets for the two-year period 2001-2002 and analyze their return intervals τ, which are time intervals between volatilities above a given threshold q. We explore the probability density function of τ, P_q(τ), assuming a stretched exponential function, P_q(τ) ∼ e^(−τ^γ). We find that the exponent γ depends on the threshold in the range between q=1 and 6 standard deviations of the volatility. This finding supports the multiscaling nature of the return interval distribution. To better understand the multiscaling origin, we study how γ depends on four essential factors, capitalization, risk, number of trades, and return. We show that γ depends on the capitalization, risk, and return but almost does not depend on the number of trades. This suggests that γ relates to portfolio selection but not to market activity. To further characterize the multiscaling of individual stocks, we fit the moments of τ, μ_m ≡ ⟨(τ/⟨τ⟩)^m⟩^(1/m), in the range of 10 … portfolio optimization.

  13. Multifactor analysis of multiscaling in volatility return intervals

    Science.gov (United States)

    Wang, Fengzhong; Yamasaki, Kazuko; Havlin, Shlomo; Stanley, H. Eugene

    2009-01-01

    We study the volatility time series of the 1137 most traded stocks in the U.S. stock markets for the two-year period 2001-2002 and analyze their return intervals τ, which are time intervals between volatilities above a given threshold q. We explore the probability density function of τ, P_q(τ), assuming a stretched exponential function, P_q(τ) ∼ e^(−τ^γ). We find that the exponent γ depends on the threshold in the range between q=1 and 6 standard deviations of the volatility. This finding supports the multiscaling nature of the return interval distribution. To better understand the multiscaling origin, we study how γ depends on four essential factors, capitalization, risk, number of trades, and return. We show that γ depends on the capitalization, risk, and return but almost does not depend on the number of trades. This suggests that γ relates to portfolio selection but not to market activity. To further characterize the multiscaling of individual stocks, we fit the moments of τ, μ_m ≡ ⟨(τ/⟨τ⟩)^m⟩^(1/m), in the range of 10 … portfolio optimization.
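
    For readers who want to reproduce the central fit: if the stretched exponential is taken to describe the survival function, S(τ) = exp(−(τ/τ0)^γ), then log(−log S) is linear in log τ with slope γ. A minimal sketch on synthetic intervals (not the authors' stock data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic intervals with survival S(t) = exp(-(t/t0)^gamma), standing in
# for intervals between volatilities above a threshold q.
t0, gamma_true = 5.0, 0.4
tau = t0 * (-np.log(rng.random(20_000))) ** (1.0 / gamma_true)

t = np.sort(tau)
surv = 1.0 - np.arange(1, t.size + 1) / (t.size + 1)   # empirical survival

# log(-log S) = gamma * log t - gamma * log t0: gamma is the slope.
mask = (surv > 1e-3) & (surv < 1 - 1e-3)
gamma_hat, _ = np.polyfit(np.log(t[mask]), np.log(-np.log(surv[mask])), 1)
print(f"estimated gamma: {gamma_hat:.3f} (true {gamma_true})")
```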

  14. Oscillatory dynamics of an intravenous glucose tolerance test model with delay interval

    Science.gov (United States)

    Shi, Xiangyun; Kuang, Yang; Makroglou, Athena; Mokshagundam, Sriprakash; Li, Jiaxu

    2017-11-01

    Type 2 diabetes mellitus (T2DM) has become a prevalent pandemic disease in view of the modern life style. Both the diabetic population and health expenses are growing rapidly, according to the American Diabetes Association. Detecting the potential onset of T2DM is an essential focal point in the research of diabetes mellitus. The intravenous glucose tolerance test (IVGTT) is an effective protocol to determine the insulin sensitivity, glucose effectiveness, and pancreatic β-cell functionality, through the analysis and parameter estimation of a proper differential equation model. Delay differential equations have been used to study the complex physiological phenomena including the glucose and insulin regulations. In this paper, we propose a novel approach to model the time delay in IVGTT modeling. This novel approach uses two parameters to simulate not only both discrete time delay and distributed time delay in the past interval, but also the time delay distributed in a past sub-interval. Normally, a larger time delay, either discrete or distributed, will destabilize the system. However, we find that a time delay over a sub-interval might not. We present analytically some basic model properties, which are desirable biologically and mathematically. We show that this relatively simple model provides good fit to fluctuating patient data sets and reveals some intriguing dynamics. Moreover, our numerical simulation results indicate that our model may remove the defect in the well-known Minimal Model, which often overestimates the glucose effectiveness index.
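
    The distributed-delay idea, insulin secretion driven by glucose averaged over a past sub-interval [t−τ2, t−τ1], can be integrated numerically with a simple history buffer. The toy model and all parameter values below are illustrative assumptions, not the paper's fitted IVGTT model:

```python
import numpy as np

dt, T_end = 0.01, 200.0          # step and horizon (minutes)
tau1, tau2 = 5.0, 15.0           # delay sub-interval bounds (min)
k1, k2, k3 = 0.05, 0.5, 0.1      # clearance / secretion / degradation rates
n = int(T_end / dt)
lag1, lag2 = int(tau1 / dt), int(tau2 / dt)

G = np.full(n, 120.0)            # glucose (mg/dL), constant initial history
I = np.full(n, 10.0)             # insulin (uU/mL)

for i in range(lag2, n - 1):
    # Distributed delay: glucose averaged over the past sub-interval.
    G_delayed = G[i - lag2 : i - lag1 + 1].mean()
    dG = -k1 * G[i] - 0.01 * I[i] * G[i] + 1.0         # uptake + basal input
    dI = k2 * max(G_delayed - 100.0, 0.0) - k3 * I[i]  # delayed secretion
    G[i + 1] = G[i] + dt * dG
    I[i + 1] = I[i] + dt * dI

print(f"final glucose {G[-1]:.1f} mg/dL, insulin {I[-1]:.1f} uU/mL")
```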

  15. Experimental uncertainty estimation and statistics for data having interval uncertainty.

    Energy Technology Data Exchange (ETDEWEB)

    Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)

    2007-05-01

    This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
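
    The flavor of interval descriptive statistics is easy to convey: the sample mean of interval data is itself an interval, and variance bounds come from optimizing over the box of possible true values. A small sketch (endpoint enumeration is exact for the maximum variance because variance is convex; for the minimum it is only a rough heuristic, used here to keep the example short):

```python
import numpy as np
from itertools import product

# Interval-valued sample: each row is [lo, hi] for one measurement.
intervals = np.array([[1.0, 1.4], [2.1, 2.3], [0.9, 1.8], [3.0, 3.2]])

# The mean of interval data is itself an interval.
print(f"interval mean: [{intervals[:, 0].mean():.3f}, {intervals[:, 1].mean():.3f}]")

# Variance bounds by enumerating endpoint combinations (2^n of them).
var_min, var_max = np.inf, -np.inf
for corner in product(*intervals):
    v = np.var(corner, ddof=1)
    var_min, var_max = min(var_min, v), max(var_max, v)
print(f"variance bounds: [{var_min:.3f}, {var_max:.3f}]")
```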

  16. Three-dimensional visualization and measurement of water distributions in PEFC by dynamic CT method on neutron radiography

    International Nuclear Information System (INIS)

    Hashimoto, Michinori; Murakawa, Hideki; Sugimoto, Katsumi; Asano, Hitoshi; Takenaka, Nobuyuki; Mochiki, Koh-ichi

    2011-01-01

    Visualization of dynamic three-dimensional water behavior in a PEFC stack was carried out by neutron CT for clarifying water effects on performances of a Polymer Electrolyte Fuel Cell (PEFC) stack. Neutron radiography system at JRR-3 in Japan Atomic Energy Agency was used. An operating stack with three cells based on Japan Automobile Research Institute standard was visualized. A consecutive CT reconstruction method by rotating the fuel stack continuously was developed by using a neutron image intensifier and a C-MOS high speed video camera. The dynamic water behavior in channels in the operating PEFC stack was clearly visualized 15 sec in interval by the developed dynamic neutron CT system. From the CT reconstructed images, evaluation of water amount in each cell was carried out. It was shown that the water distribution in each cell was correlated well with power generation characteristics in each cell. (author)

  17. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

    Science.gov (United States)

    Terry, Leann; Kelley, Ken

    2012-11-01

    Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.

  18. Multidrug-resistant tuberculosis treatment failure detection depends on monitoring interval and microbiological method

    Science.gov (United States)

    White, Richard A.; Lu, Chunling; Rodriguez, Carly A.; Bayona, Jaime; Becerra, Mercedes C.; Burgos, Marcos; Centis, Rosella; Cohen, Theodore; Cox, Helen; D'Ambrosio, Lia; Danilovitz, Manfred; Falzon, Dennis; Gelmanova, Irina Y.; Gler, Maria T.; Grinsdale, Jennifer A.; Holtz, Timothy H.; Keshavjee, Salmaan; Leimane, Vaira; Menzies, Dick; Milstein, Meredith B.; Mishustin, Sergey P.; Pagano, Marcello; Quelapio, Maria I.; Shean, Karen; Shin, Sonya S.; Tolman, Arielle W.; van der Walt, Martha L.; Van Deun, Armand; Viiklepp, Piret

    2016-01-01

    Debate persists about monitoring method (culture or smear) and interval (monthly or less frequently) during treatment for multidrug-resistant tuberculosis (MDR-TB). We analysed existing data and estimated the effect of monitoring strategies on timing of failure detection. We identified studies reporting microbiological response to MDR-TB treatment and solicited individual patient data from authors. Frailty survival models were used to estimate pooled relative risk of failure detection in the last 12 months of treatment; hazard of failure using monthly culture was the reference. Data were obtained for 5410 patients across 12 observational studies. During the last 12 months of treatment, failure detection occurred in a median of 3 months by monthly culture; failure detection was delayed by 2, 7, and 9 months relying on bimonthly culture, monthly smear and bimonthly smear, respectively. Risk (95% CI) of failure detection delay resulting from monthly smear relative to culture is 0.38 (0.34–0.42) for all patients and 0.33 (0.25–0.42) for HIV-co-infected patients. Failure detection is delayed by reducing the sensitivity and frequency of the monitoring method. Monthly monitoring of sputum cultures from patients receiving MDR-TB treatment is recommended. Expanded laboratory capacity is needed for high-quality culture, and for smear microscopy and rapid molecular tests. PMID:27587552

  19. Frequency interval balanced truncation of discrete-time bilinear systems

    DEFF Research Database (Denmark)

    Jazlan, Ahmad; Sreeram, Victor; Shaker, Hamid Reza

    2016-01-01

    This paper presents the development of a new model reduction method for discrete-time bilinear systems based on the balanced truncation framework. In many model reduction applications, it is advantageous to analyze the characteristics of the system with emphasis on particular frequency intervals. The generalized frequency interval controllability and observability gramians used here are the solution to a pair of new generalized Lyapunov equations. The conditions for solvability of these new generalized Lyapunov equations are derived and a numerical solution method for solving these generalized Lyapunov equations is presented. Numerical examples which illustrate the usage of the new generalized frequency interval controllability and observability gramians as part of the balanced truncation framework are provided to demonstrate the performance of the proposed method.

  20. [Influence of gender, age and season on thyroid hormone reference interval].

    Science.gov (United States)

    Qiu, L; Wang, D C; Xu, T; Cheng, X Q; Sun, Q; Hu, Y Y; Liu, H C; Lu, S Y; Yang, G H; Wang, Z J

    2018-05-29

    Objective: Using clinical "big data", to investigate the factors that affect the levels of thyroid hormones, and to explore the partitioning criteria for reference intervals (RI) of these hormones. Methods: An observational study was conducted. Information on 107 107 individuals undergoing routine physical examination in Peking Union Medical College Hospital from September 1st, 2013 to August 31st, 2016 was collected, and the thyroid hormones of these subjects were measured. The distributions of and differences in TSH, FT4 and FT3 results were explored by gender and age; according to the seasonal division standard of the China Meteorological Administration, the study period was divided into four seasons, and the seasonal fluctuation of TSH was analyzed. Appropriate partitions by gender, age and season were defined according to significant-difference analysis. Results: In males and females, the distributions of TSH were 1.779 (0.578-4.758) and 2.023 (0.420-5.343) mU/L, respectively, and the level of TSH in females was higher than in males (Z = -37.600, P …). Males were divided into age groups at 65 years and females at 50 years, respectively; the distributions of TSH in males and females of the older group were 1.818 (0.528-5.240) and 2.111 (0.348-5.735) mU/L, and in the younger group 1.778 (0.582-4.696) and 1.991 (0.427-5.316) mU/L. The level of TSH in the older group was significantly higher than in the younger group (Z = -2.269, -10.400, all P …), and the distribution in the older group was much wider than in the younger. The overall distribution in spring, summer and autumn was 1.869 (0.510-5.042) mU/L, and in winter 1.978 (0.527-5.250) mU/L, and the difference between them was statistically significant (Z = -15.000, P …). Conclusions: Gender and age significantly affect the serum levels of TSH, FT4, and FT3; the distribution of TSH in the female and older groups is wider than in males, and that of FT4 and FT3 is lower. Season significantly affects the serum TSH level, with the peak value observed in winter. There are obvious differences between the "rough" RIs and the manufacturer-recommended RIs. Each …
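
    The partitioning exercise itself, non-parametric 2.5th/97.5th percentile reference limits computed per sex-by-age group, is straightforward to sketch. The data, distribution and cut-offs below are synthetic stand-ins, not the study's measurements:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

n = 10_000
df = pd.DataFrame({
    "sex": rng.choice(["M", "F"], n),
    "age": rng.integers(18, 90, n),
    "tsh": np.exp(rng.normal(0.6, 0.55, n)),   # lognormal TSH-like values
})
# Age partitions analogous to the study: 65 y for males, 50 y for females.
df["group"] = np.where(((df.sex == "M") & (df.age >= 65)) |
                       ((df.sex == "F") & (df.age >= 50)), "older", "younger")

# Non-parametric reference limits: 2.5th, 50th and 97.5th percentiles.
ri = (df.groupby(["sex", "group"])["tsh"]
        .quantile([0.025, 0.5, 0.975])
        .unstack())
print(ri.round(3))
```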

  1. Automatic, time-interval traffic counts for recreation area management planning

    Science.gov (United States)

    D. L. Erickson; C. J. Liu; H. K. Cordell

    1980-01-01

    Automatic, time-interval recorders were used to count directional vehicular traffic on a multiple entry/exit road network in the Red River Gorge Geological Area, Daniel Boone National Forest. Hourly counts of entering and exiting traffic differed according to recorder location, but an aggregated distribution showed a delayed peak in exiting traffic thought to be...

  2. Recommended Nordic paediatric reference intervals for 21 common biochemical properties

    DEFF Research Database (Denmark)

    Hilsted, Linda; Rustad, Pål; Aksglæde, Lise

    2013-01-01

    Samples from healthy Danish children were collected for establishing reference intervals for 21 common biochemical properties (Alanine transaminase, Albumin, Alkaline phosphatase, Aspartate transaminase, Bilirubin, Calcium, Cholesterol, Creatinine, Creatine kinase, HDL-Cholesterol, Iron, Lactate dehydrogenase, LDL-Cholesterol, …). Values of X for the properties and statistical calculations were carried out as performed in the NORIP study. Thus commutable (regarding analytical method) reference intervals for 20 properties were established, and for LDL-Cholesterol reference intervals were reported for the specific analytical method employed. The data were compared to previous studies and to those obtained from the youngest age group in the NORIP study. Marked age differences were observed for most of the properties. Several properties also showed gender-related differences, mainly at the onset of puberty. Data are presented…

  3. Fault detection for discrete-time LPV systems using interval observers

    Science.gov (United States)

    Zhang, Zhi-Hui; Yang, Guang-Hong

    2017-10-01

    This paper is concerned with the fault detection (FD) problem for discrete-time linear parameter-varying systems subject to bounded disturbances. A parameter-dependent FD interval observer is designed based on parameter-dependent Lyapunov and slack matrices. The design method is presented by translating the parameter-dependent linear matrix inequalities (LMIs) into finite ones. In contrast to the existing results based on parameter-independent and diagonal Lyapunov matrices, the derived disturbance attenuation, fault sensitivity and nonnegative conditions lead to less conservative LMI characterisations. Furthermore, without the need to design the residual evaluation functions and thresholds, the residual intervals generated by the interval observers are used directly for FD decision. Finally, simulation results are presented for showing the effectiveness and superiority of the proposed method.
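
    The core interval-observer mechanics can be illustrated on a simpler, parameter-free system: splitting the state matrix into positive and negative parts yields guaranteed upper and lower state bounds under bounded disturbances, and a residual leaving those bounds would flag a fault. This is only a sketch of the underlying bounding step, with an assumed stable A; the paper's contribution (parameter-dependent Lyapunov-based design for LPV systems) is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(3)

# x_{k+1} = A x_k + w_k with |w_k| <= w_bar (elementwise).
A = np.array([[0.7, 0.1], [-0.2, 0.8]])
Ap, An = np.clip(A, 0, None), np.clip(-A, 0, None)   # split A = Ap - An
w_bar = np.array([0.05, 0.05])

x = np.array([1.0, -0.5])
x_lo, x_hi = x - 0.1, x + 0.1                        # initial enclosure

for k in range(50):
    w = rng.uniform(-w_bar, w_bar)
    x = A @ x + w
    # Guaranteed bound propagation: max/min of A x over the box [x_lo, x_hi].
    x_lo, x_hi = (Ap @ x_lo - An @ x_hi - w_bar,
                  Ap @ x_hi - An @ x_lo + w_bar)
    assert np.all(x_lo <= x) and np.all(x <= x_hi)   # enclosure holds

print("final bounds:", x_lo.round(3), x_hi.round(3))
```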

  4. Optimal Planning Method of On-load Capacity Regulating Distribution Transformers in Urban Distribution Networks after Electric Energy Replacement Considering Uncertainties

    Directory of Open Access Journals (Sweden)

    Yu Su

    2018-06-01

    Full Text Available Electric energy replacement is the umbrella term for the use of electric energy to replace oil (e.g., electric automobiles), coal (e.g., electric heating), and gas (e.g., electric cooking appliances), which increases the electrical load peak, causing greater valley/peak differences. On-load capacity regulating distribution transformers have been used to deal with loads with great valley/peak differences, so reasonably replacing conventional distribution transformers with on-load capacity regulating distribution transformers can effectively cope with load changes after electric energy replacement and reduce the no-load losses of distribution transformers. Before planning for on-load capacity regulating distribution transformers, the nodal effective load considering uncertainties within the life cycle after electric energy replacement was obtained by a Monte Carlo method. Then, according to the loss relation between on-load capacity regulating distribution transformers and conventional distribution transformers, three characteristic indexes of the annual continuous apparent power curve and replacement criteria for on-load capacity regulating distribution transformers were put forward in this paper, and a set of distribution transformer replaceable points was obtained. Next, based on cost benefit analysis, a planning model of on-load capacity regulating distribution transformers which consists of an investment profitability index within the life cycle, an investment cost recouping index and a capacity regulating cost index was put forward. The branch and bound method was used to solve the planning model within the replaceable point set to obtain an upgrading and reconstruction scheme of distribution transformers under a certain investment. Finally, planning analysis of on-load capacity regulating distribution transformers was carried out for electric energy replacement points in one urban distribution network under three scenarios: certain load, uncertain load and nodal …

  5. Posterior white matter disease distribution as a predictor of amyloid angiopathy

    Science.gov (United States)

    Thanprasertsuk, Sekh; Martinez-Ramirez, Sergi; Pontes-Neto, Octavio Marques; Ni, Jun; Ayres, Alison; Reed, Anne; Swords, Kyleen; Gurol, M. Edip; Greenberg, Steven M.

    2014-01-01

    Objectives: We sought to examine whether a posterior distribution of white matter hyperintensities (WMH) is an independent predictor of pathologically confirmed cerebral amyloid angiopathy (CAA) and whether it is associated with MRI markers of CAA, in patients without lobar intracerebral hemorrhage. Methods: We developed a quantitative method to measure anteroposterior (AP) distribution of WMH. A retrospective cohort of patients without intracerebral hemorrhage and with pathologic evaluation of CAA was examined to determine whether posterior WMH distribution was an independent predictor of CAA (n = 59). The relationship of AP distributions of WMH to strictly lobar microbleeds (MBs) (n = 259) and location of dilated perivascular spaces (DPVS) (n = 85) was examined in a separate cohort of patients evaluated in a memory clinic. Results: A more posterior WMH distribution was found to be an independent predictor of pathologic evidence of CAA (p = 0.001, odds ratio [95% confidence interval] = 1.19 [1.07–1.32]), even in the subgroup without lobar MBs (p = 0.016, odds ratio [95% confidence interval] = 1.18 [1.03–1.36]). In the memory clinic cohort, strictly lobar MBs were independently associated with more posterior WMH distribution (p = 0.009). AP distribution of WMH was also associated with location of DPVS (p = 0.001), in that patients with predominant DPVS in the white matter over the basal ganglia harbored a more posterior WMH distribution. Conclusions: Our results suggest that AP distribution of WMH may represent an additional marker of CAA, irrespective of the presence of lobar hemorrhages. Classification of evidence: This study provides Class III evidence that there is a significant association between the AP distribution of WMH on MRI with the presence of pathologically confirmed CAA pathology. PMID:25063759

  6. Serial binary interval ratios improve rhythm reproduction

    Directory of Open Access Journals (Sweden)

    Xiang Wu

    2013-08-01

    Full Text Available Musical rhythm perception is a natural human ability that involves complex cognitive processes. Rhythm refers to the organization of events in time, and musical rhythms have an underlying hierarchical metrical structure. The metrical structure induces the feeling of a beat and the extent to which a rhythm induces the feeling of a beat is referred to as its metrical strength. Binary ratios are the most frequent interval ratio in musical rhythms. Rhythms with hierarchical binary ratios are better discriminated and reproduced than rhythms with hierarchical non-binary ratios. However, it remains unclear whether a superiority of serial binary over non-binary ratios in rhythm perception and reproduction exists. In addition, how different types of serial ratios influence the metrical strength of rhythms remains to be elucidated. The present study investigated serial binary vs. non-binary ratios in a reproduction task. Rhythms formed with exclusively binary (1:2:4:8), non-binary integer (1:3:5:6), and non-integer (1:2.3:5.3:6.4) ratios were examined within a constant meter. The results showed that the 1:2:4:8 rhythm type was more accurately reproduced than the 1:3:5:6 and 1:2.3:5.3:6.4 rhythm types, and the 1:2.3:5.3:6.4 rhythm type was more accurately reproduced than the 1:3:5:6 rhythm type. Further analyses showed that reproduction performance was better predicted by the distribution pattern of event occurrences within an inter-beat interval, than by the coincidence of events with beats, or the magnitude and complexity of interval ratios. Whereas rhythm theories and empirical data emphasize the role of the coincidence of events with beats in determining metrical strength and predicting rhythm performance, the present results suggest that rhythm processing may be better understood when the distribution pattern of event occurrences is taken into account. These results provide new insights into the mechanisms underlining musical rhythm perception.

  7. Serial binary interval ratios improve rhythm reproduction.

    Science.gov (United States)

    Wu, Xiang; Westanmo, Anders; Zhou, Liang; Pan, Junhao

    2013-01-01

    Musical rhythm perception is a natural human ability that involves complex cognitive processes. Rhythm refers to the organization of events in time, and musical rhythms have an underlying hierarchical metrical structure. The metrical structure induces the feeling of a beat and the extent to which a rhythm induces the feeling of a beat is referred to as its metrical strength. Binary ratios are the most frequent interval ratio in musical rhythms. Rhythms with hierarchical binary ratios are better discriminated and reproduced than rhythms with hierarchical non-binary ratios. However, it remains unclear whether a superiority of serial binary over non-binary ratios in rhythm perception and reproduction exists. In addition, how different types of serial ratios influence the metrical strength of rhythms remains to be elucidated. The present study investigated serial binary vs. non-binary ratios in a reproduction task. Rhythms formed with exclusively binary (1:2:4:8), non-binary integer (1:3:5:6), and non-integer (1:2.3:5.3:6.4) ratios were examined within a constant meter. The results showed that the 1:2:4:8 rhythm type was more accurately reproduced than the 1:3:5:6 and 1:2.3:5.3:6.4 rhythm types, and the 1:2.3:5.3:6.4 rhythm type was more accurately reproduced than the 1:3:5:6 rhythm type. Further analyses showed that reproduction performance was better predicted by the distribution pattern of event occurrences within an inter-beat interval, than by the coincidence of events with beats, or the magnitude and complexity of interval ratios. Whereas rhythm theories and empirical data emphasize the role of the coincidence of events with beats in determining metrical strength and predicting rhythm performance, the present results suggest that rhythm processing may be better understood when the distribution pattern of event occurrences is taken into account. These results provide new insights into the mechanisms underlining musical rhythm perception.

  8. Construction of prediction intervals for Palmer Drought Severity Index using bootstrap

    Science.gov (United States)

    Beyaztas, Ufuk; Bickici Arikan, Bugrayhan; Beyaztas, Beste Hamiye; Kahya, Ercan

    2018-04-01

    In this study, we propose an approach based on the residual-based bootstrap method to obtain valid prediction intervals using monthly, short-term (three-months) and mid-term (six-months) drought observations. The effects of North Atlantic and Arctic Oscillation indexes on the constructed prediction intervals are also examined. Performance of the proposed approach is evaluated for the Palmer Drought Severity Index (PDSI) obtained from Konya closed basin located in Central Anatolia, Turkey. The finite sample properties of the proposed method are further illustrated by an extensive simulation study. Our results revealed that the proposed approach is capable of producing valid prediction intervals for future PDSI values.
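
    A residual-based bootstrap prediction interval can be sketched on a simple AR(1) surrogate: fit the model, center the residuals, and resample them to propagate forecast uncertainty. The series and model below are illustrative stand-ins for the drought-index models in the study:

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic AR(1) series standing in for a drought index.
x = [0.0]
for _ in range(299):
    x.append(0.7 * x[-1] + rng.normal(0, 0.5))
x = np.asarray(x)

phi = np.polyfit(x[:-1], x[1:], 1)[0]        # AR(1) slope via least squares
resid = x[1:] - phi * x[:-1]
resid = resid - resid.mean()                 # center the residuals

# One-step-ahead bootstrap predictive distribution.
B = 2000
preds = phi * x[-1] + rng.choice(resid, size=B, replace=True)
lo, hi = np.percentile(preds, [2.5, 97.5])
print(f"95% prediction interval for x[t+1]: [{lo:.2f}, {hi:.2f}]")
```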

  9. Estimation and interpretation of keff confidence intervals in MCNP

    International Nuclear Information System (INIS)

    Urbatsch, T.J.

    1995-11-01

    MCNP's criticality methodology and some basic statistics are reviewed. Confidence intervals are discussed, as well as how to build them and their importance in the presentation of a Monte Carlo result. The combination of MCNP's three keff estimators is shown, theoretically and empirically, by statistical studies and examples, to be the best keff estimator. The method of combining estimators is based on a solid theoretical foundation, namely, the Gauss-Markov Theorem in regard to the least squares method. The confidence intervals of the combined estimator are also shown to have correct coverage rates for the examples considered
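
    The uncorrelated special case of the least-squares combination is easy to state: weight each estimator by its inverse variance. The numbers below are invented for illustration, and MCNP's actual combination also accounts for the correlations among its three estimators:

```python
import numpy as np

k = np.array([0.9952, 0.9967, 0.9943])   # illustrative keff estimates
s = np.array([0.0012, 0.0009, 0.0015])   # their standard deviations

w = 1.0 / s**2                            # inverse-variance weights
k_comb = np.sum(w * k) / np.sum(w)
s_comb = np.sqrt(1.0 / np.sum(w))
print(f"combined keff = {k_comb:.5f} +/- {s_comb:.5f}")
print(f"95% CI: [{k_comb - 1.96 * s_comb:.5f}, {k_comb + 1.96 * s_comb:.5f}]")
```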

  10. Evaluating FOMC forecast ranges: an interval data approach

    OpenAIRE

    Henning Fischer; Marta García-Bárzana; Peter Tillmann; Peter Winker

    2012-01-01

    The Federal Open Market Committee (FOMC) of the U.S. Federal Reserve publishes the range of members' forecasts for key macroeconomic variables, but not the distribution of forecasts within this range. To evaluate these projections, previous papers compare the midpoint of the ranges with the realized outcome. This paper proposes a new approach to forecast evaluation that takes account of the interval nature of projections. It is shown that using the conventional Mincer-Zarnowitz approach to ev...

  11. Notes on testing equality and interval estimation in Poisson frequency data under a three-treatment three-period crossover trial.

    Science.gov (United States)

    Lui, Kung-Jong; Chang, Kuang-Chao

    2016-10-01

    When the frequency of event occurrences follows a Poisson distribution, we develop procedures for testing equality of treatments and interval estimators for the ratio of mean frequencies between treatments under a three-treatment three-period crossover design. Using Monte Carlo simulations, we evaluate the performance of these test procedures and interval estimators in various situations. We note that all test procedures developed here can perform well with respect to Type I error even when the number of patients per group is moderate. We further note that the two weighted-least-squares (WLS) test procedures derived here are generally preferable to the other two commonly used test procedures in the contingency table analysis. We also demonstrate that both interval estimators based on the WLS method and interval estimators based on Mantel-Haenszel (MH) approach can perform well, and are essentially of equal precision with respect to the average length. We use a double-blind randomized three-treatment three-period crossover trial comparing salbutamol and salmeterol with a placebo with respect to the number of exacerbations of asthma to illustrate the use of these test procedures and estimators. © The Author(s) 2014.
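
    As background, the basic large-sample interval for a ratio of two Poisson rates (the estimand discussed above) follows from a delta-method standard error on the log scale; the paper's WLS and Mantel-Haenszel estimators refine this for the crossover structure. The counts below are illustrative:

```python
import numpy as np

x1, t1 = 30, 100.0   # event count and exposure under treatment A
x2, t2 = 45, 100.0   # event count and exposure under treatment B

ratio = (x1 / t1) / (x2 / t2)
se_log = np.sqrt(1.0 / x1 + 1.0 / x2)   # delta-method SE of log rate ratio
lo, hi = ratio * np.exp(-1.96 * se_log), ratio * np.exp(1.96 * se_log)
print(f"rate ratio {ratio:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```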

  12. On the Distribution of Zeros and Poles of Rational Approximants on Intervals

    Directory of Open Access Journals (Sweden)

    V. V. Andrievskii

    2012-01-01

    Full Text Available The distribution of zeros and poles of best rational approximants is well understood for the functions f(x) = |x|^α, α > 0. If f ∈ C[−1,1] is not holomorphic on [−1,1], the distribution of the zeros of best rational approximants is governed by the equilibrium measure of [−1,1] under the additional assumption that the rational approximants are restricted to a bounded degree of the denominator. This phenomenon was discovered first for polynomial approximation. In this paper, we investigate the asymptotic distribution of zeros, respectively, a-values, and poles of best real rational approximants of degree at most n to a function f ∈ C[−1,1] that is real-valued, but not holomorphic on [−1,1]. Generalizations to the lower half of the Walsh table are indicated.

  13. A novel approach based on preference-based index for interval bilevel linear programming problem.

    Science.gov (United States)

    Ren, Aihong; Wang, Yuping; Xue, Xingsi

    2017-01-01

    This paper proposes a new methodology for solving the interval bilevel linear programming problem in which all coefficients of both objective functions and constraints are considered as interval numbers. In order to keep as much uncertainty of the original constraint region as possible, the original problem is first converted into an interval bilevel programming problem with interval coefficients in both objective functions only through normal variation of interval number and chance-constrained programming. With the consideration of different preferences of different decision makers, the concept of the preference level that the interval objective function is preferred to a target interval is defined based on the preference-based index. Then a preference-based deterministic bilevel programming problem is constructed in terms of the preference level and the order relation ⪯_mw. Furthermore, the concept of a preference δ-optimal solution is given. Subsequently, the constructed deterministic nonlinear bilevel problem is solved with the help of estimation of distribution algorithm. Finally, several numerical examples are provided to demonstrate the effectiveness of the proposed approach.

  14. A novel approach based on preference-based index for interval bilevel linear programming problem

    Directory of Open Access Journals (Sweden)

    Aihong Ren

    2017-05-01

    Full Text Available This paper proposes a new methodology for solving the interval bilevel linear programming problem in which all coefficients of both objective functions and constraints are considered as interval numbers. In order to keep as much uncertainty of the original constraint region as possible, the original problem is first converted into an interval bilevel programming problem with interval coefficients in both objective functions only through normal variation of interval number and chance-constrained programming. With the consideration of different preferences of different decision makers, the concept of the preference level that the interval objective function is preferred to a target interval is defined based on the preference-based index. Then a preference-based deterministic bilevel programming problem is constructed in terms of the preference level and the order relation ⪯_mw. Furthermore, the concept of a preference δ-optimal solution is given. Subsequently, the constructed deterministic nonlinear bilevel problem is solved with the help of estimation of distribution algorithm. Finally, several numerical examples are provided to demonstrate the effectiveness of the proposed approach.

  15. Analytical method for reconstruction pin to pin of the nuclear power density distribution

    International Nuclear Information System (INIS)

    Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.

    2013-01-01

    An accurate and efficient method for pin-by-pin reconstruction of the nuclear power density distribution, involving the analytical solution of the diffusion equation for two-dimensional neutron energy groups in homogeneous nodes, is presented. The boundary conditions used for the analytic solution are the four currents or fluxes on the surface of the node, which are obtained by the Nodal Expansion Method (known as NEM), and four fluxes at the vertices of a node calculated using the finite difference method. The analytical solution found is the homogeneous distribution of neutron flux. Detailed pin-by-pin distributions inside a fuel assembly are estimated by the product of the homogeneous flux distribution and a local heterogeneous form function. Furthermore, the form functions of flux and power are used. The results obtained with this method have good accuracy when compared with reference values. (author)

  16. Analytical method for reconstruction pin to pin of the nuclear power density distribution

    Energy Technology Data Exchange (ETDEWEB)

    Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S., E-mail: ppessoa@con.ufrj.br, E-mail: fernando@con.ufrj.br, E-mail: aquilino@imp.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil)

    2013-07-01

    An accurate and efficient method for pin-by-pin reconstruction of the nuclear power density distribution, involving the analytical solution of the diffusion equation for two-dimensional neutron energy groups in homogeneous nodes, is presented. The boundary conditions used for the analytic solution are the four currents or fluxes on the surface of the node, which are obtained by the Nodal Expansion Method (known as NEM), and four fluxes at the vertices of a node calculated using the finite difference method. The analytical solution found is the homogeneous distribution of neutron flux. Detailed pin-by-pin distributions inside a fuel assembly are estimated by the product of the homogeneous flux distribution and a local heterogeneous form function. Furthermore, the form functions of flux and power are used. The results obtained with this method have good accuracy when compared with reference values. (author)

  17. Uncertainty Management of Dynamic Tariff Method for Congestion Management in Distribution Networks

    DEFF Research Database (Denmark)

    Huang, Shaojun; Wu, Qiuwei; Cheng, Lin

    2016-01-01

    The dynamic tariff (DT) method is designed for the distribution system operator (DSO) to alleviate congestions that might occur in a distribution network with high penetration of distributed energy resources (DERs). Uncertainty management is required for the decentralized DT method because the DT is determined based on optimal day-ahead energy planning with forecasted parameters such as day-ahead energy prices and energy needs, which might be different from the parameters used by aggregators. The uncertainty management is to quantify and mitigate the risk of the congestion when employing…

  18. Calculation of depletion with optimal distribution of initial control poison

    International Nuclear Information System (INIS)

    Castro Lobo, P.D. de.

    1978-03-01

    The spatial depletion equations are linearized within the time intervals and their solution is obtained by modal analysis. At the beginning of life an optimal poison distribution that maximizes neutron economy, and the corresponding flux, is determined. At the start of the subsequent time steps the flux distributions are obtained by a perturbation method in relation to the start of the previous time steps. The problem was also studied with a constant poison distribution in order to evaluate the influence of the poison at the beginning of life. The results obtained by the modal expansion techniques are satisfactory. However, the optimization of the initial distribution of the control poison does not indicate any significant effect on the core life.

  19. Community Based Distribution of Child Spacing Methods at ...

    African Journals Online (AJOL)

    The programme uses volunteer CBD agents (Mrs. E.F. Pelekamoyo, Service Delivery Officer, National Family Welfare Council of Malawi, Private Bag 308, Lilongwe 3, Malawi). … than us at the Hospital; male motivators, by talking to their male counterparts, help them to accept that their …

  20. Information-theoretic methods for estimating of complicated probability distributions

    CERN Document Server

    Zong, Zhi

    2006-01-01

    Mixing up various disciplines frequently produces something profound and far-reaching. Cybernetics is such an often-quoted example. The mix of information theory, statistics and computing technology has proved very useful, leading to the recent development of information-theory-based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is the fundamental task for quite some fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur…

  1. An Interactive Signed Distance Approach for Multiple Criteria Group Decision-Making Based on Simple Additive Weighting Method with Incomplete Preference Information Defined by Interval Type-2 Fuzzy Sets

    OpenAIRE

    Ting-Yu Chen

    2014-01-01

    Interval type-2 fuzzy sets (T2FSs) with interval membership grades are suitable for dealing with imprecision or uncertainties in many real-world problems. In the interval type-2 fuzzy context, the aim of this paper is to develop an interactive signed distance-based simple additive weighting (SAW) method for solving multiple criteria group decision-making problems with linguistic ratings and incomplete preference information. This paper first formulates a group decision-making problem with unc...

  2. A simple nodal force distribution method in refined finite element meshes

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jai Hak [Chungbuk National University, Chungju (Korea, Republic of); Shin, Kyu In [Gentec Co., Daejeon (Korea, Republic of); Lee, Dong Won [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Cho, Seungyon [National Fusion Research Institute, Daejeon (Korea, Republic of)

    2017-05-15

    In finite element analyses, mesh refinement is frequently performed to obtain accurate stress or strain values or to accurately define the geometry. After mesh refinement, equivalent nodal forces should be calculated at the nodes in the refined mesh. If field variables and material properties are available at the integration points in each element, then accurate equivalent nodal forces can be calculated using an adequate numerical integration. However, in certain circumstances, equivalent nodal forces cannot be calculated because field variable data are not available. In this study, a very simple nodal force distribution method was proposed. Nodal forces of the original finite element mesh are distributed to the nodes of refined meshes to satisfy the equilibrium conditions. The effect of element size should also be considered in determining the magnitude of the distributed nodal forces. A program was developed based on the proposed method, and several example problems were solved to verify its accuracy and effectiveness. The results show that an accurate stress field can be obtained from refined meshes using the proposed nodal force distribution method. In the example problems, the difference between the obtained maximum stress and the target stress value was less than 6% in models with 8-node hexahedral elements and less than 1% in models with 20-node hexahedral elements or 10-node tetrahedral elements.

  3. A swarm-trained k-nearest prototypes adaptive classifier with automatic feature selection for interval data.

    Science.gov (United States)

    Silva Filho, Telmo M; Souza, Renata M C R; Prudêncio, Ricardo B C

    2016-08-01

    Some complex data types are capable of modeling data variability and imprecision. These data types are studied in the symbolic data analysis field. One such data type is interval data, which represents ranges of values and is more versatile than classic point data for many domains. This paper proposes a new prototype-based classifier for interval data, trained by a swarm optimization method. Our work has two main contributions: a swarm method which is capable of performing both automatic selection of features and pruning of unused prototypes and a generalized weighted squared Euclidean distance for interval data. By discarding unnecessary features and prototypes, the proposed algorithm deals with typical limitations of prototype-based methods, such as the problem of prototype initialization. The proposed distance is useful for learning classes in interval datasets with different shapes, sizes and structures. When compared to other prototype-based methods, the proposed method achieves lower error rates in both synthetic and real interval datasets. Copyright © 2016 Elsevier Ltd. All rights reserved.
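
    A plain (unweighted) version of the interval distance and the resulting nearest-prototype rule can be sketched directly; the paper's method additionally learns feature weights and prunes prototypes with a swarm optimizer, which is not reproduced here:

```python
import numpy as np

def interval_sqdist(a, b):
    """Squared Euclidean distance between interval patterns: a, b have shape
    (n_features, 2), one [lo, hi] pair per feature; both endpoints contribute."""
    return float(np.sum((a - b) ** 2))

def nearest_prototype(x, prototypes, labels):
    d = [interval_sqdist(x, p) for p in prototypes]
    return labels[int(np.argmin(d))]

protos = np.array([[[0.0, 1.0], [2.0, 3.0]],     # prototype of class A
                   [[5.0, 6.0], [7.0, 9.0]]])    # prototype of class B
x = np.array([[0.2, 1.1], [2.5, 3.1]])           # query pattern
print(nearest_prototype(x, protos, ["class A", "class B"]))  # -> class A
```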

  4. A simulation-based interval two-stage stochastic model for agricultural nonpoint source pollution control through land retirement

    International Nuclear Information System (INIS)

    Luo, B.; Li, J.B.; Huang, G.H.; Li, H.L.

    2006-01-01

    This study presents a simulation-based interval two-stage stochastic programming (SITSP) model for agricultural nonpoint source (NPS) pollution control through land retirement under uncertain conditions. The modeling framework was established by the development of an interval two-stage stochastic program, with its random parameters being provided by the statistical analysis of the simulation outcomes of a distributed water quality approach. The developed model can deal with the tradeoff between agricultural revenue and 'off-site' water quality concern under random effluent discharge for a land retirement scheme through minimizing the expected value of long-term total economic and environmental cost. In addition, the uncertainties presented as interval numbers in the agriculture-water system can be effectively quantified with the interval programming. By subdividing the whole agricultural watershed into different zones, the most pollution-related sensitive cropland can be identified and an optimal land retirement scheme can be obtained through the modeling approach. The developed method was applied to the Swift Current Creek watershed in Canada for soil erosion control through land retirement. The Hydrological Simulation Program-FORTRAN (HSPF) was used to simulate the sediment information for this case study. Obtained results indicate that the total economic and environmental cost of the entire agriculture-water system can be limited within an interval value for the optimal land retirement schemes. Meanwhile, a best and worst land retirement scheme was obtained for the study watershed under various uncertainties

  5. An analyzer for pulse-interval times to study high-order effects in the processing of nuclear detector signals

    International Nuclear Information System (INIS)

    Denecke, B.; Jonge, S. de

    1998-01-01

    An electronic device to measure interval time density distributions of subsequent pulses in nuclear detectors and their electronics is described. The device has a pair-pulse resolution of 10 ns and 25 ns for 3 subsequent input signals. The conversion range is 4096 channels and the lowest channel width is 10 ns. Counter dead times, single and in series, were studied and compared with the statistical model. True count rates were obtained from an exponential fit through the interval-time distribution.
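
    The closing idea, recovering the true count rate from an exponential fit to the interval-time distribution, can be illustrated for a non-paralyzable dead time, where recorded intervals are the dead time plus an exponential gap. Parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

rate, t_d = 5e4, 1e-6                    # true rate (1/s), dead time (s)
gaps = t_d + rng.exponential(1.0 / rate, 100_000)   # recorded intervals

counts, edges = np.histogram(gaps, bins=200)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = counts > 10                       # keep well-populated bins
slope, _ = np.polyfit(centers[mask], np.log(counts[mask]), 1)
print(f"fitted rate {-slope:.0f} 1/s (true {rate:.0f})")
print(f"naive 1/mean = {1.0 / gaps.mean():.0f} 1/s (biased by dead time)")
```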

  6. A new variable interval schedule with constant hazard rate and finite time range.

    Science.gov (United States)

    Bugallo, Mehdi; Machado, Armando; Vasconcelos, Marco

    2018-05-27

    We propose a new variable interval (VI) schedule that achieves constant probability of reinforcement in time while using a bounded range of intervals. By sampling each trial duration from a uniform distribution ranging from 0 to 2T seconds, and then applying a reinforcement rule that depends linearly on trial duration, the schedule alternates reinforced and unreinforced trials, each less than 2T seconds, while preserving a constant hazard function. © 2018 Society for the Experimental Analysis of Behavior.
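
    The constant-hazard claim can be checked empirically by simulating a schedule and estimating the hazard of the resulting inter-reinforcement intervals. The linear rule assumed below (reinforcement probability d/2T for a trial of duration d) is only an illustrative reading of the abstract, not necessarily the authors' exact rule; the hazard-estimation step is the point of the sketch:

```python
import numpy as np

rng = np.random.default_rng(9)

T = 30.0
iris = []                                # inter-reinforcement intervals
for _ in range(20_000):
    elapsed = 0.0
    while True:
        d = rng.uniform(0.0, 2 * T)      # trial duration ~ U(0, 2T)
        elapsed += d
        if rng.random() < d / (2 * T):   # assumed linear reinforcement rule
            iris.append(elapsed)
            break

iris = np.sort(np.asarray(iris))
edges = np.linspace(0.0, np.percentile(iris, 95), 9)
for lo, hi in zip(edges[:-1], edges[1:]):
    at_risk = np.sum(iris >= lo)                     # still unreinforced at lo
    events = np.sum((iris >= lo) & (iris < hi))
    print(f"[{lo:6.1f}, {hi:6.1f}) hazard ~ {events / (at_risk * (hi - lo)):.4f}")
```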

  7. Rock sampling. [method for controlling particle size distribution

    Science.gov (United States)

    Blum, P. (Inventor)

    1971-01-01

    A method for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The method involves cutting grooves in the rock surface to provide a grouping of parallel ridges and subsequently machining the ridges to provide a powder specimen. The machining step may comprise milling, drilling, lathe cutting or the like; but a planing step is advantageous. Control of the particle size distribution is effected primarily by changing the height and width of these ridges. This control exceeds that obtainable by conventional grinding.

  8. Finite difference applied to the reconstruction method of the nuclear power density distribution

    International Nuclear Information System (INIS)

    Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.

    2016-01-01

    Highlights: • A method for reconstruction of the power density distribution is presented. • The method uses discretization by finite differences of the 2D neutron diffusion equation. • The discretization is performed on homogeneous meshes with dimensions of a fuel cell. • The discretization is combined with flux distributions on the four node surfaces. • The maximum errors in reconstruction occur in the peripheral water region. - Abstract: In this reconstruction method the two-dimensional (2D) neutron diffusion equation is discretized by finite differences, applied to two energy groups (2G) and to meshes with fuel-pin cell dimensions. The Nodal Expansion Method (NEM) makes use of surface discontinuity factors of the node and provides for the reconstruction method the effective multiplication factor of the problem and the four surface average fluxes in homogeneous nodes with the size of a fuel assembly (FA). The reconstruction process combines the discretized 2D diffusion equation by finite differences with flux distributions on the four surfaces of the nodes. These distributions are obtained for each surface from a fourth order one-dimensional (1D) polynomial expansion with five coefficients to be determined. The conditions necessary for coefficient determination are three average fluxes on consecutive surfaces of the three nodes and two fluxes in corners between these three surface fluxes. Corner fluxes of the node are determined using a third order 1D polynomial expansion with four coefficients. This reconstruction method uses heterogeneous nuclear parameters directly, providing the heterogeneous neutron flux distribution and the detailed nuclear power density distribution within the FAs. The results obtained with this method have good accuracy and efficiency when compared with reference values.

  9. Estimating the Confidence Interval of Composite Reliability of a Multidimensional Test With the Delta Method

    Institute of Scientific and Technical Information of China (English)

    Ye, Baojuan; Wen, Zhonglin

    2012-01-01

    Reliability is very important in evaluating the quality of a test. Based on the confirmatory factor analysis, composite reliability is a good index to estimate the test reliability for general applications. As is well known, a point estimate contains limited information about a population parameter and cannot indicate how far it can be from the population parameter. The confidence interval of the parameter can provide more information. In evaluating the quality of a test, the confidence interval of composite reliability has received attention in recent years. There are three approaches to estimating the confidence interval of composite reliability of a unidimensional test: the Bootstrap method, the Delta method, and the direct use of the standard error of a software output (e.g., LISREL). The Bootstrap method provides empirical results of the standard error, and is the most credible method. But it needs data simulation techniques, and its computation process is rather complex. The Delta method computes the standard error of composite reliability by approximate calculation. It is simpler than the Bootstrap method. The LISREL software can directly prompt the standard error, and it is the easiest among the three methods. By simulation study, it had been found that the interval estimates obtained by the Delta method and the Bootstrap method were almost identical, whereas the results obtained by LISREL and by the Bootstrap method were substantially different (Ye & Wen, 2011). The Delta method is recommended when the confidence interval of composite reliability of a unidimensional test is estimated, because the Delta method is simpler than the Bootstrap method. There was little research about how to compute the confidence interval of composite reliability of a multidimensional test. We deduced a formula by using the Delta method for computing the standard error of composite reliability of a multidimensional test. Based on the standard error, the …
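
    For the unidimensional case that the article builds on, the Delta-method standard error of composite reliability ω = S²/(S² + E), with S the sum of loadings and E the sum of error variances, takes one gradient computation. The estimates and covariance matrix below are invented for illustration; the article's multidimensional formula is not reproduced here:

```python
import numpy as np

lam = np.array([0.7, 0.8, 0.6, 0.75])        # loading estimates (invented)
theta = np.array([0.51, 0.36, 0.64, 0.44])   # error-variance estimates
cov = np.diag(np.r_[np.full(4, 0.003), np.full(4, 0.004)])  # est. covariance

S, E = lam.sum(), theta.sum()
omega = S**2 / (S**2 + E)                    # composite reliability

# Gradient of omega w.r.t. (lam_1..4, theta_1..4) for the delta method.
d_lam = np.full(4, 2 * S * E / (S**2 + E) ** 2)
d_theta = np.full(4, -(S**2) / (S**2 + E) ** 2)
g = np.r_[d_lam, d_theta]

se = np.sqrt(g @ cov @ g)
print(f"omega = {omega:.3f}, 95% CI "
      f"[{omega - 1.96 * se:.3f}, {omega + 1.96 * se:.3f}]")
```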

  10. On interval and cyclic interval edge colorings of (3,5)-biregular graphs

    DEFF Research Database (Denmark)

    Casselgren, Carl Johan; Petrosyan, Petros; Toft, Bjarne

    2017-01-01

    A proper edge coloring f of a graph G with colors 1,2,3,…,t is called an interval coloring if the colors on the edges incident to every vertex of G form an interval of integers. The coloring f is cyclic interval if for every vertex v of G, the colors on the edges incident to v either form an inte...

  11. Statistical inferences with jointly type-II censored samples from two Pareto distributions

    Science.gov (United States)

    Abu-Zinadah, Hanaa H.

    2017-08-01

    In several industrial fields the product comes from more than one production line, which calls for comparative life tests. This requires sampling from the different production lines, giving rise to a joint censoring scheme. In this article we consider the Pareto lifetime distribution under a joint type-II censoring scheme. The maximum likelihood estimators (MLE) and the corresponding approximate confidence intervals, as well as bootstrap confidence intervals, of the model parameters are obtained. Bayesian point estimates and credible intervals of the model parameters are also presented. A lifetime data set is analyzed for illustrative purposes. Monte Carlo results from simulation studies are presented to assess the performance of our proposed method.

  12. An interval-possibilistic basic-flexible programming method for air quality management of municipal energy system through introducing electric vehicles.

    Science.gov (United States)

    Yu, L; Li, Y P; Huang, G H; Shan, B G

    2017-09-01

    Contradictions between sustainable transportation development and environmental issues have been aggravated significantly and have become one of the major concerns for energy systems planning and management. A heavy emphasis is placed on stimulation of electric vehicles (EVs) to handle these problems, which are associated with various complexities and uncertainties in a municipal energy system (MES). In this study, an interval-possibilistic basic-flexible programming (IPBFP) method is proposed for planning the MES of Qingdao, where uncertainties expressed as interval-flexible variables and interval-possibilistic parameters can be effectively reflected. Support vector regression (SVR) is used for predicting the electricity demand of the city under various scenarios. Solutions of EV stimulation levels and satisfaction levels, in association with flexible constraints and predetermined necessity degrees, are analyzed, which can help identify optimized energy-supply patterns that promote improved air quality and hedge against violation of soft constraints. Results disclose that largely developing EVs can help facilitate the city's energy system in an environmentally effective way. However, compared to the rapid growth of transportation, the EVs' contribution to improving the city's air quality is limited. To achieve an environmentally sustainable MES, more attention should be focused on integrating more renewable energy resources, stimulating EVs, and improving energy transmission, transport and storage. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Distributed Cooperative Search Control Method of Multiple UAVs for Moving Target

    Directory of Open Access Journals (Sweden)

    Chang-jian Ru

    2015-01-01

    Full Text Available To reduce the impact of uncertainties caused by unknown motion parameters on the search plan for moving targets, and to improve the efficiency of UAV search, a novel distributed multi-UAV cooperative search control method for moving targets is proposed in this paper. Based on the detection results of onboard sensors, the target probability map is updated using Bayesian theory. A Gaussian distribution of the target transition probability density function is introduced to calculate the prediction probability of moving target existence, and the target probability map can then be further updated in real time. A performance index function combining target cost, environment cost, and cooperative cost is constructed, and the cooperative search problem can be transformed into a central optimization problem. To improve computational efficiency, the distributed model predictive control method is presented, from which the control command of each UAV can be obtained. The simulation results have verified that the proposed method can better avoid blind UAV search and effectively improve the overall efficiency of the team.

  14. Neutron distribution modeling based on integro-probabilistic approach of discrete ordinates method

    International Nuclear Information System (INIS)

    Khromov, V.V.; Kryuchkov, E.F.; Tikhomirov, G.V.

    1992-01-01

    This paper describes a universal nodal method for calculating the neutron distribution in reactor and shielding problems, based on the use of influence functions and factors of local-integrated volume and surface neutron sources in phase subregions. This method avoids the limited capabilities of the collision-probability method with respect to the detailed calculation of the angular neutron flux dependence, scattering anisotropy and empty channels. The proposed method may be considered a modification of the S n method with the advantage of eliminating ray effects. The theory and algorithm of the method are described, followed by examples of its application to the calculation of the neutron distribution in a three-dimensional model of a fusion reactor blanket and in a highly heterogeneous reactor with an empty channel

  15. Event- and interval-based measurement of stuttering: a review.

    Science.gov (United States)

    Valente, Ana Rita S; Jesus, Luis M T; Hall, Andreia; Leahy, Margaret

    2015-01-01

    an acceptable agreement. Explanations for the high reproducibility values, as well as the choice of parameters for reporting those data, are discussed. Both interval- and event-based methodologies used trained or experienced judges for inter- and intra-judge determination, and the data exceeded the reference values for good reproducibility. Inter- and intra-judge values were reported on different metric scales in the event- and interval-based method studies, making it unfeasible to quantify the agreement between the two methods. © 2014 Royal College of Speech and Language Therapists.

  16. Optimal test intervals for shutdown systems for the Cernavoda nuclear power station

    International Nuclear Information System (INIS)

    Negut, Gh.; Laslau, F.

    1993-01-01

    Cernavoda nuclear power station required a complete PSA study. As part of this study, an important goal for enhancing the effectiveness of plant operation is to establish optimal test intervals for the important engineered safety systems. The paper briefly presents the current methods for optimizing test intervals. Vesely's method was used to establish optimal test intervals, and the FRANTIC code was used to survey the influence of the test intervals on system availability. The applications were performed on Shutdown System no. 1, a shutdown system provided with solid rods, and on Shutdown System no. 2, provided with poison injection. The shutdown systems receive nine totally independent scram signals that dictate the test interval. Fault trees for both safety systems were developed. For the fault tree solutions an original code developed in our Institute was used. The results, intended to be implemented in the technical specifications for testing and operation of Cernavoda NPS, are presented
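
    The trade-off that drives such optimizations can be stated compactly; this is the textbook unavailability balance for a periodically tested standby component, not necessarily Vesely's exact formulation: mean unavailability is roughly lambda*T/2 from undetected failures plus t_test/T from test downtime, minimized at T* = sqrt(2*t_test/lambda). A Python sketch with illustrative numbers:

        import math

        def average_unavailability(T, lam, t_test):
            # Undetected-failure term lam*T/2 plus test-downtime fraction t_test/T.
            return lam * T / 2.0 + t_test / T

        def optimal_interval(lam, t_test):
            # d/dT (lam*T/2 + t_test/T) = 0  gives  T* = sqrt(2*t_test/lam).
            return math.sqrt(2.0 * t_test / lam)

        lam = 1e-5     # assumed standby failure rate per hour (illustrative)
        t_test = 2.0   # assumed test downtime in hours (illustrative)
        T_star = optimal_interval(lam, t_test)
        print(T_star, average_unavailability(T_star, lam, t_test))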

  17. Leontief Input-Output Method for The Fresh Milk Distribution Linkage Analysis

    Directory of Open Access Journals (Sweden)

    Riski Nur Istiqomah

    2016-11-01

    Full Text Available This research discusses linkage analysis and identifies the key sector in fresh milk distribution using the Leontief Input-Output method. This method is one of the applications of mathematics in economics. The current fresh milk distribution system comprises dairy farmers → collectors → fresh milk processing industries → processed milk distributors → consumers. The distribution is then merged between the collectors' activity and the fresh milk processing industry. The data used are primary and secondary data collected in June 2016 in Kecamatan Jabung, Kabupaten Malang. The collected data are then analysed using the Leontief Input-Output matrix and the Python (PyIO 2.1) software. The result is that merging the collectors' and the fresh milk processing industry's activities shows high indices of forward and backward linkages. This shows that the merger of the two activities is the key sector, which plays an important role in developing all activities in fresh milk distribution.
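
    The underlying computation is the standard Leontief machinery: total output x = (I - A)^-1 * f, with backward and forward linkages read off the column and row sums of the Leontief inverse. A Python sketch with an invented 3-sector coefficient matrix, not the paper's survey data:

        import numpy as np

        # Hypothetical technical-coefficient matrix A (illustrative numbers).
        A = np.array([[0.10, 0.30, 0.05],
                      [0.20, 0.15, 0.25],
                      [0.05, 0.10, 0.10]])
        L = np.linalg.inv(np.eye(3) - A)   # Leontief inverse (I - A)^-1

        # Final demand f gives total output x = L f.
        f = np.array([100.0, 50.0, 80.0])
        x = L @ f

        # Rasmussen-style linkage indices: a sector is a "key sector" when
        # both its normalized backward and forward linkage indices exceed 1.
        mean_all = L.mean()
        backward = L.sum(axis=0) / (L.shape[0] * mean_all)   # column sums
        forward = L.sum(axis=1) / (L.shape[0] * mean_all)    # row sums
        print(x, backward, forward)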

  18. PARALLEL AND ADAPTIVE UNIFORM-DISTRIBUTED REGISTRATION METHOD FOR CHANG’E-1 LUNAR REMOTE SENSED IMAGERY

    Directory of Open Access Journals (Sweden)

    X. Ning

    2012-08-01

    To resolve the above-mentioned registration difficulties, a parallel and adaptive uniform-distributed registration method for CE-1 lunar remote sensed imagery is proposed in this paper. Based on 6 pairs of randomly selected images, both the standard SIFT algorithm and the parallel and adaptive uniform-distributed registration method were executed, and their versatility and effectiveness were assessed. The experimental results indicate that applying the parallel and adaptive uniform-distributed registration method dramatically increases the efficiency of CE-1 lunar remote sensed imagery registration. The proposed method can therefore acquire uniformly distributed registration results more effectively, solving the registration difficulties of hard-to-obtain results, long computation times, and non-uniform distribution.

  19. Standardization of 32P activity determination method in soil-root cores for root distribution studies

    International Nuclear Information System (INIS)

    Sharma, R.B.; Ghildyal, B.P.

    1976-01-01

    The root distribution of wheat variety UP 301 was obtained by determining the 32P activity in soil-root cores by two methods, viz. ignition and triacid digestion. The root distribution obtained by these two methods was compared with that from the standard root-core washing procedure. The percent error in root distribution as determined by the triacid digestion method was within ±2.1 to ±9.0, as against ±5.5 to ±21.2 for the ignition method. Thus the triacid digestion method proved better than the ignition method. (author)

  20. Circadian profile of QT interval and QT interval variability in 172 healthy volunteers

    DEFF Research Database (Denmark)

    Bonnemeier, Hendrik; Wiegand, Uwe K H; Braasch, Wiebke

    2003-01-01

    of sleep. QT and R-R intervals revealed a characteristic day-night-pattern. Diurnal profiles of QT interval variability exhibited a significant increase in the morning hours (6-9 AM; P ... lower at day- and nighttime. Aging was associated with an increase of QT interval mainly at daytime and a significant shift of the T wave apex towards the end of the T wave. The circadian profile of ventricular repolarization is strongly related to the mean R-R interval, however, there are significant...

  1. Advanced airflow distribution methods for reducing exposure of indoor pollution

    DEFF Research Database (Denmark)

    Cao, Guangyu; Nielsen, Peter Vilhelm; Melikov, Arsen

    2017-01-01

    The adverse effects of various indoor pollutants on occupants' health have been recognized. In public spaces flu viruses may spread from person to person by airflow generated by various traditional ventilation methods, like natural ventilation and mixing ventilation (MV). Personalized ventilation (PV) supplies clean air close to the occupant and directly into the breathing zone. Studies show that it improves the inhaled air quality and reduces the risk of airborne cross-infection in comparison with total volume (TV) ventilation. However, it is still challenging for PV and other advanced air distribution methods to reduce the exposure to gaseous and particulate pollutants under disturbed conditions and to ensure thermal comfort at the same time. The objective of this study is to analyse the performance of different advanced airflow distribution methods for protection of occupants from exposure to indoor...

  2. Advanced airflow distribution methods for reducing exposure of indoor pollution

    DEFF Research Database (Denmark)

    Cao, Guangyu; Nielsen, Peter Vilhelm; Melikov, Arsen Krikor

    The adverse effects of various indoor pollutants on occupants' health have been recognized. In public spaces flu viruses may spread from person to person by airflow generated by various traditional ventilation methods, like natural ventilation and mixing ventilation (MV). Personalized ventilation (PV) supplies clean air close to the occupant and directly into the breathing zone. Studies show that it improves the inhaled air quality and reduces the risk of airborne cross-infection in comparison with total volume (TV) ventilation. However, it is still challenging for PV and other advanced air distribution methods to reduce the exposure to gaseous and particulate pollutants under disturbed conditions and to ensure thermal comfort at the same time. The objective of this study is to analyse the performance of different advanced airflow distribution methods for protection of occupants from exposure to indoor...

  3. Another method of dead time correction

    International Nuclear Information System (INIS)

    Sabol, J.

    1988-01-01

    A new method for correcting counting losses caused by the non-extended dead time of pulse detection systems is presented. The approach is based on the distribution of time intervals between pulses at the output of the system. The method was verified both experimentally and by Monte Carlo simulations. The results show that the suggested technique is more reliable and accurate than other methods based on a separate measurement of the dead time. (author) 5 refs
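
    The abstract does not give the estimator itself, but the classical interval-based relationship for a non-extended (non-paralyzable) dead time, on which such a method builds, is easy to sketch: each recorded interval is the dead time plus an exponentially distributed waiting time, so the true rate follows from the mean interval. A Python sketch with illustrative rates:

        import numpy as np

        rng = np.random.default_rng(0)
        n_true = 5_000.0   # assumed true event rate (1/s), illustrative
        tau = 20e-6        # assumed non-extended dead time (s), illustrative

        # Output intervals of a non-paralyzable system: dead time plus an
        # exponential waiting time at the true rate.
        intervals = tau + rng.exponential(1.0 / n_true, size=100_000)

        # Interval-distribution estimate: E[interval] = tau + 1/n, hence
        # n = m / (1 - m*tau) with m the observed (dead-time-affected) rate.
        m = 1.0 / intervals.mean()
        n_hat = m / (1.0 - m * tau)
        print(m, n_hat)   # n_hat should recover ~n_true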

  4. Likelihood inference for COM-Poisson cure rate model with interval-censored data and Weibull lifetimes.

    Science.gov (United States)

    Pal, Suvra; Balakrishnan, N

    2017-10-01

    In this paper, we consider a competing cause scenario and assume the number of competing causes to follow a Conway-Maxwell Poisson distribution, which can capture both the over-dispersion and under-dispersion usually encountered in discrete data. Assuming that the population of interest has a cure component and that the data are interval-censored, as opposed to the usually considered right-censored data, the main contribution is in developing the steps of the expectation maximization algorithm for the determination of the maximum likelihood estimates of the model parameters of the flexible Conway-Maxwell Poisson cure rate model with Weibull lifetimes. An extensive Monte Carlo simulation study is carried out to demonstrate the performance of the proposed estimation method. Model discrimination within the Conway-Maxwell Poisson distribution is addressed using the likelihood ratio test and information-based criteria to select a suitable competing cause distribution that provides the best fit to the data. A simulation study is also carried out to demonstrate the loss in efficiency when selecting an improper competing cause distribution, which justifies the use of a flexible family of distributions for the number of competing causes. Finally, the proposed methodology and the flexibility of the Conway-Maxwell Poisson distribution are illustrated with two known data sets from the literature: smoking cessation data and breast cosmesis data.

  5. Incorporating Wind Power Forecast Uncertainties Into Stochastic Unit Commitment Using Neural Network-Based Prediction Intervals.

    Science.gov (United States)

    Quan, Hao; Srinivasan, Dipti; Khosravi, Abbas

    2015-09-01

    Penetration of renewable energy resources, such as wind and solar power, into power systems significantly increases the uncertainties in system operation, stability, and reliability in smart grids. In this paper, nonparametric neural network-based prediction intervals (PIs) are implemented for forecast uncertainty quantification. Instead of a single-level PI, wind power forecast uncertainties are represented by a list of PIs. These PIs are then decomposed into quantiles of wind power. A new scenario generation method is proposed to handle wind power forecast uncertainties. For each hour, an empirical cumulative distribution function (ECDF) is fitted to these quantile points. The Monte Carlo simulation method is used to generate scenarios from the ECDF. The wind power scenarios are then incorporated into a stochastic security-constrained unit commitment (SCUC) model. A heuristic genetic algorithm is utilized to solve the stochastic SCUC problem. Five deterministic and four stochastic case studies incorporating interval forecasts of wind power are implemented. The results of these cases are presented and discussed together. Generation costs and the scheduled and real-time economic dispatch reserves of different unit commitment strategies are compared. The experimental results show that the stochastic model is more robust than the deterministic ones and thus decreases the risk in system operations of smart grids.
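
    The PI-to-scenario pipeline can be sketched end to end: central PIs are decomposed into quantile points, a piecewise-linear ECDF is fitted through them, and scenarios are drawn by inverse-transform Monte Carlo. The interval bounds below are invented for illustration:

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical central PIs for one hour of wind power (MW) at
        # confidence levels 10%..90%, each giving a (lower, upper) bound.
        pi_levels = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
        lowers = np.array([48.0, 44.0, 40.0, 35.0, 28.0])
        uppers = np.array([52.0, 56.0, 61.0, 67.0, 76.0])

        # A central PI of level p yields the (1-p)/2 and (1+p)/2 quantiles.
        probs = np.concatenate([(1 - pi_levels[::-1]) / 2, (1 + pi_levels) / 2])
        quants = np.concatenate([lowers[::-1], uppers])

        # Piecewise-linear ECDF through the quantile points; sample scenarios
        # by inverse transform.
        u = rng.uniform(probs[0], probs[-1], size=1000)
        scenarios = np.interp(u, probs, quants)
        print(scenarios.mean(), np.percentile(scenarios, [5, 50, 95]))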

  6. Distributed AC power flow method for AC and AC-DC hybrid ...

    African Journals Online (AJOL)

    ... on voltage level and R/X ratio in the formulation itself. DPFM is applied to a 10-bus, low-voltage microgrid system, giving a better voltage profile. Keywords: Microgrid (MG), Distributed Energy Resources (DER), Particle Swarm Optimization (PSO), Time varying inertia weight (TVIW), Distributed power flow method (DPFM) ...

  7. Synchronization Methods for Three Phase Distributed Power Generation Systems

    DEFF Research Database (Denmark)

    Timbus, Adrian Vasile; Teodorescu, Remus; Blaabjerg, Frede

    2005-01-01

    Nowadays, there is a general trend to increase electricity production using Distributed Power Generation Systems (DPGS) based on renewable energy resources such as wind, sun or hydrogen. If these systems are not properly controlled, their connection to the utility network can generate problems on the grid side. Therefore, considerations about power generation, safe running and grid synchronization must be made before connecting these systems to the utility network. This paper mainly deals with the grid synchronization issues of distributed systems. An overview of the synchronization methods...

  8. Time interval between successive trading in foreign currency market: from microscopic to macroscopic

    Science.gov (United States)

    Sato, Aki-Hiro

    2004-12-01

    Recently, it has been shown that the inter-transaction interval (ITI) distribution of foreign currency rates has a fat tail. In order to understand this statistical property, an ITI dealer model with N interacting agents is proposed. Numerical simulations confirm that the ITI distribution of the dealer model has a power-law tail. A random multiplicative process (RMP) can be approximately derived from the ITI of the dealer model. Consequently, we conclude that the power-law tail of the ITI distribution of the dealer model is a result of the RMP.
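
    A generic RMP with a reflecting floor reproduces the power-law tail the abstract refers to; the lognormal factors and floor below are illustrative choices, not the dealer-model dynamics:

        import numpy as np

        rng = np.random.default_rng(2)

        # Random multiplicative process x_{t+1} = b_t * x_t with a lower
        # reflecting floor; such processes develop power-law tails.
        steps, x_min = 200_000, 1e-3
        x = np.empty(steps)
        x[0] = 1.0
        for t in range(steps - 1):
            b = rng.lognormal(mean=-0.01, sigma=0.3)   # E[log b] < 0
            x[t + 1] = max(b * x[t], x_min)

        # Tail check: the survival function on log-log axes is roughly linear.
        xs = np.sort(x)
        surv = 1.0 - np.arange(1, steps + 1) / steps
        slope = np.polyfit(np.log(xs[-5000:-10]), np.log(surv[-5000:-10]), 1)[0]
        print("approximate tail exponent:", -slope)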

  9. On-line reconstruction of in-core power distribution by harmonics expansion method

    International Nuclear Information System (INIS)

    Wang Changhui; Wu Hongchun; Cao Liangzhi; Yang Ping

    2011-01-01

    Highlights: → A harmonics expansion method for on-line in-core power reconstruction is proposed. → A harmonics data library is pre-generated off-line and a code named COMS is developed. → Numerical results show that the maximum relative error of the reconstruction is less than 5.5%. → This method has a high computational speed compared to traditional methods. - Abstract: Fixed in-core detectors are most suitable for real-time monitoring of in-core power distributions in pressurized water reactors (PWRs). In this paper, a harmonics expansion method is used to reconstruct the in-core power distribution of a PWR on-line. In this method, the in-core power distribution is expanded in the harmonics of one reference case. The expansion coefficients are calculated using signals provided by fixed in-core detectors. To conserve computing time and improve reconstruction precision, a harmonics data library containing the harmonics of different reference cases is constructed. Upon on-line reconstruction of the in-core power distribution, the two closest reference cases are retrieved from the harmonics data library to produce expanded harmonics by interpolation. The Unit 1 reactor of the DayaBay Nuclear Power Plant (DayaBay NPP) in China is considered for verification. The maximum relative error between the measurement and reconstruction results is less than 5.5%, and the computing time is about 0.53 s for a single reconstruction, indicating that this method is suitable for the on-line monitoring of PWRs.
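
    The coefficient-fitting step lends itself to a compact sketch: the expansion coefficients follow from the detector signals by linear least squares, after which the full map is a weighted sum of harmonics. The harmonics matrix, detector positions and noise level below are made up:

        import numpy as np

        rng = np.random.default_rng(3)

        # Hypothetical setup: power at n_pos core positions, n_h harmonics.
        n_pos, n_h, n_det = 400, 6, 40
        H = rng.random((n_pos, n_h))        # harmonics of a reference case
        a_true = np.array([1.0, 0.15, -0.08, 0.05, -0.03, 0.02])
        power_true = H @ a_true

        # Fixed detectors see the power at a subset of positions (with noise).
        det_idx = rng.choice(n_pos, size=n_det, replace=False)
        signals = power_true[det_idx] * (1 + 0.01 * rng.standard_normal(n_det))

        # Expansion coefficients by least squares, then full reconstruction.
        a_hat, *_ = np.linalg.lstsq(H[det_idx], signals, rcond=None)
        power_hat = H @ a_hat
        print(np.max(np.abs(power_hat - power_true) / power_true))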

  10. An improved in situ method for determining depth distributions of gamma-ray emitting radionuclides

    International Nuclear Information System (INIS)

    Benke, R.R.; Kearfott, K.J.

    2001-01-01

    In situ gamma-ray spectrometry determines the quantities of radionuclides in some medium with a portable detector. The main limitation of in situ gamma-ray spectrometry lies in determining the depth distribution of radionuclides. This limitation is addressed by developing an improved in situ method for determining the depth distributions of gamma-ray emitting radionuclides in large area sources. This paper implements a unique collimator design with conventional radiation detection equipment. Cylindrically symmetric collimators were fabricated to allow only those gamma-rays emitted from a selected range of polar angles (measured off the detector axis) to be detected. Positioned with its axis normal to the surface of the medium, each collimator enables the detection of gamma-rays emitted from a different range of polar angles and preferential depths. Previous in situ methods require a priori knowledge of the depth distribution shape. The absolute method presented in this paper, however, determines the depth distribution as a histogram and does not rely on such assumptions. Other advantages over previous in situ methods are that this method requires only a single gamma-ray emission, provides more detailed depth information, and offers a superior ability to characterize complex depth distributions. Collimated spectrometer measurements of buried area sources demonstrated the ability of the method to yield accurate depth information. Based on the results of actual measurements, this method increases the potential of in situ gamma-ray spectrometry as an independent characterization tool in situations with unknown radionuclide depth distributions

  11. Multiscale multifractal DCCA and complexity behaviors of return intervals for Potts price model

    Science.gov (United States)

    Wang, Jie; Wang, Jun; Stanley, H. Eugene

    2018-02-01

    To investigate the characteristics of extreme events in financial markets and the corresponding return intervals among these events, we use a Potts dynamic system to construct a random financial time series model of the attitudes of market traders. We use multiscale multifractal detrended cross-correlation analysis (MM-DCCA) and Lempel-Ziv complexity (LZC) to perform numerical research on the return intervals for two significant Chinese stock market indices and for the proposed model. The new MM-DCCA method is based on the Hurst surface and provides more interpretable cross-correlations of the dynamic mechanism between different return interval series. We scale the LZC method with different exponents to illustrate the complexity of return intervals at different scales. Empirical studies indicate that the proposed return intervals from the Potts system and the real stock market indices hold similar statistical properties.
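
    The LZC side is easy to make concrete. Below is a common LZ76-style phrase-counting implementation applied to a median-binarized series; the binarization rule is an assumption, not necessarily the authors' coarse-graining:

        import numpy as np

        def lempel_ziv_complexity(s: str) -> int:
            # Count distinct phrases in an LZ76-style parsing of the string.
            i, c, n = 0, 0, len(s)
            while i < n:
                length = 1
                # Extend the phrase while it already occurs earlier in s.
                while i + length <= n and s[i:i + length] in s[:i + length - 1]:
                    length += 1
                c += 1
                i += length
            return c

        # Binarize a series around its median, then compute its complexity.
        x = np.random.default_rng(4).standard_normal(1000)
        symbols = "".join("1" if v > np.median(x) else "0" for v in x)
        print(lempel_ziv_complexity(symbols))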

  12. The "Interval Walking in Colorectal Cancer" (I-WALK-CRC) study: Design, methods and recruitment results of a randomized controlled feasibility trial.

    Science.gov (United States)

    Banck-Petersen, Anna; Olsen, Cecilie K; Djurhuus, Sissal S; Herrstedt, Anita; Thorsen-Streit, Sarah; Ried-Larsen, Mathias; Østerlind, Kell; Osterkamp, Jens; Krarup, Peter-Martin; Vistisen, Kirsten; Mosgaard, Camilla S; Pedersen, Bente K; Højman, Pernille; Christensen, Jesper F

    2018-03-01

    Low physical activity level is associated with poor prognosis in patients with colorectal cancer (CRC). To increase physical activity, technology-based platforms are emerging and provide intriguing opportunities to prescribe and monitor active lifestyle interventions. The "Interval Walking in Colorectal Cancer" (I-WALK-CRC) study explores the feasibility and efficacy of a home-based interval-walking intervention delivered by a smart-phone application in order to improve the cardio-metabolic health profile of CRC survivors. The aim of the present report is to describe the design, methods and recruitment results of the I-WALK-CRC study. Methods/Results: The I-WALK-CRC study is a randomized controlled trial designed to evaluate the feasibility and efficacy of a home-based interval walking intervention compared to a waiting-list control group for physiological and patient-reported outcomes. Patients who had completed surgery for local stage disease and patients who had completed surgery and any adjuvant chemotherapy for locally advanced stage disease were eligible for inclusion. Between October 1st, 2015, and February 1st, 2017, 136 inquiries were recorded; 83 patients were eligible for enrollment, and 42 patients accepted participation. Age and employment status were associated with participation, as participants were significantly younger (60.5 vs 70.8 years, P ... CRC survivors was feasible, but we aim to improve the recruitment rate in future studies. Further, the study clearly favored younger participants. The I-WALK-CRC study will provide important information regarding the feasibility and efficacy of a home-based walking exercise program in CRC survivors.

  13. Higher moments method for generalized Pareto distribution in flood frequency analysis

    Science.gov (United States)

    Zhou, C. R.; Chen, Y. F.; Huang, Q.; Gu, S. H.

    2017-08-01

    The generalized Pareto distribution (GPD) has proven to be the ideal distribution for fitting peaks-over-threshold series in flood frequency analysis. Several moments-based estimators are applied to estimate the parameters of the GPD. Higher linear moments (LH moments) and higher probability weighted moments (HPWM) are linear combinations of probability weighted moments (PWM). In this study, the relationship between them is explored. A series of statistical experiments and a case study are used to compare their performances. The results show that if the same PWM are used in the LH moments and HPWM methods, the parameters estimated by these two methods are unbiased. In particular, when the same PWM are used, the PWM method (or the HPWM method when the order equals 0) shows identical results in parameter estimation with the linear moments (L-moments) method. Additionally, this phenomenon is significant when r ≥ 1 and the same-order PWM are used in the HPWM and LH moments methods.
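
    For the simplest member of this family (plain L-moments, the order-0 case), the GPD fit reduces to two closed-form expressions. A Python sketch under Hosking's parameterisation with zero location, not the paper's higher-order LH/HPWM variants:

        import numpy as np

        def gpd_lmoments(x):
            # Fit GPD F(x) = 1 - (1 - k*x/sigma)**(1/k) by L-moments.
            xs = np.sort(x)
            n = xs.size
            b0 = xs.mean()                                   # PWM beta_0
            b1 = np.sum((np.arange(n) / (n - 1)) * xs) / n   # PWM beta_1
            l1, l2 = b0, 2 * b1 - b0                         # first two L-moments
            k = l1 / l2 - 2                                  # shape
            sigma = (1 + k) * l1                             # scale
            return k, sigma

        # Check on exponential data (a GPD with k = 0): k_hat should be ~0.
        x = np.random.default_rng(5).exponential(scale=2.0, size=20_000)
        print(gpd_lmoments(x))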

  14. Analysis and synthesis for interval type-2 fuzzy-model-based systems

    CERN Document Server

    Li, Hongyi; Lam, Hak-Keung; Gao, Yabin

    2016-01-01

    This book develops a set of reference methods capable of modeling uncertainties existing in membership functions, and analyzing and synthesizing the interval type-2 fuzzy systems with desired performances. It also provides numerous simulation results for various examples, which fill certain gaps in this area of research and may serve as benchmark solutions for the readers. Interval type-2 T-S fuzzy models provide a convenient and flexible method for analysis and synthesis of complex nonlinear systems with uncertainties.

  15. Combination of the method of basic precipitation of lanthanons with the ion exchange distribution method by means of ammonium acetate

    International Nuclear Information System (INIS)

    Hubicki, W.; Hubicka, H.

    1980-01-01

    The method of basic precipitation of lanthanons was combined with the ion exchange distribution method using ammonium acetate. As a result of 1:2 chromatogram development, good results for the Sm-Nd separation were obtained: fractions of 99.9% Nd2O3 and Pr6O11, and 99.5% La2O3. It was found that the way the column was packed greatly influenced the efficiency of ion distribution. (author)

  16. Parameter Estimations and Optimal Design of Simple Step-Stress Model for Gamma Dual Weibull Distribution

    Directory of Open Access Journals (Sweden)

    Hamdy Mohamed Salem

    2018-03-01

    Full Text Available This paper considers life-testing experiments and how they are affected by stress factors, namely temperature, electrical load, cycling rate and pressure. A major type of accelerated life test is the step-stress model, which allows the experimenter to increase stress levels beyond normal use during the experiment in order to observe item failures. The test items are assumed to follow a Gamma Dual Weibull distribution. Different methods for estimating the parameters are discussed. These include maximum likelihood estimation and confidence interval estimation, which, based on asymptotic normality, generates narrow intervals for the unknown distribution parameters with high probability. The MathCAD (2001) program is used to illustrate the optimal time procedure through numerical examples.

  17. Methods to determine fast-ion distribution functions from multi-diagnostic measurements

    DEFF Research Database (Denmark)

    Jacobsen, Asger Schou; Salewski, Mirko

    Understanding the behaviour of fast ions in a fusion plasma is very important, since the fusion-born alpha particles are expected to be the main source of heating in a fusion power plant. Preferably, the entire fast-ion velocity-space distribution function would be measured. However, no fast... ...-ion diagnostic views, it is possible to infer the distribution function using a tomography approach. Several inversion methods for solving this tomography problem in velocity space are implemented and compared. It is found that the best quality is obtained when using inversion methods which penalise steep...

  18. Time interval measurement between two emissions: Kr + Au

    Energy Technology Data Exchange (ETDEWEB)

    Aboufirassi, M.; Bougault, R.; Brou, R.; Colin, J.; Durand, D.; Genoux-Lubain, A.; Horn, D.; Laville, J.L.; Le Brun, C.; Lecolley, J.F.; Lefebvres, F.; Lopez, O.; Louvel, M.; Mahi, M.; Steckmeyer, J.C.; Tamain, B. [Lab. de Physique Corpusculaire, Caen Univ., 14 (France)]; LPC (Caen) - CRN (Strasbourg) - GANIL Collaboration

    1998-04-01

    To illustrate the method allowing the determination of emission time intervals, the results obtained with the Kr + Au system at 43 and 60 A.MeV are presented. The experiments were performed with the NAUTILUS exclusive detectors. Central collisions were selected by means of a relative-velocity criterion to reject events containing a forward-emitted fragment. For the two bombardment energies the data analysis shows the formation of a compound of mass around A = 200. By comparing the fragment dynamical variables with simulations, one can draw conclusions about the simultaneity of the compound deexcitation processes. It was found that 5 MeV/A is able to reproduce the characteristics of the detected fragments. Also, it was found that, to reproduce the dynamical characteristics of the fragments issued from central collisions, it was not necessary to superimpose a radial collective energy upon the Coulomb and thermal motion. The distribution of relative angles between detected fragments is used here as a chronometer: for simultaneous ruptures the small relative angles are forbidden by the Coulomb repulsion, while for sequential processes this interdiction is lifted more and more as the interval between the two emissions lengthens. For the system discussed here, the comparison between simulation and data was carried out for the extreme cases, i.e., for a vanishing and an infinite time interval between the two emissions, respectively. More sophisticated simulations to describe the angular distributions between the emitted fragments were also developed. 2 refs.

  19. NHSBSP type 1 interval cancers: a scientifically valid grouping?

    International Nuclear Information System (INIS)

    Porter, G.J.R.; Evans, A.J.; Burrell, H.C.; Lee, A.H.S.; Chakrabarti, J.

    2007-01-01

    Aim: To assess whether there are differences in the pathological features or survival between the new National Health Service Breast Screening Programme (NHSBSP) interval cancer classification system category of type 1 interval cancers, and the previously used, separate categories of occult, unclassified, and true interval cancers. Materials and methods: The prognostic pathological features (grade, lymph node stage, size, vascular invasion, oestrogen receptor status, and histological type) and survival of 428 type 1 interval invasive breast cancers were analysed by subgroup (occult, unclassified and true interval). Results: Occult cancers compared with other type 1 interval cancers were of significantly lower grade [38 of 52 (73%) versus 151 of 340 (44%) grade 1 or 2, p = 0.0005], more likely to be smaller size [37 of 51 (73%) versus 158 of 341 (46%) <20 mm, p = 0.0003] and more frequently of lobular type at histology [14 of 42 (32%) versus 50 of 286 (17%), p = 0.03]. There was no significant difference in pathological features of unclassified tumours compared with other type 1 tumours. There was no significant survival difference between different type 1 subgroups (p = 0.12). Conclusion: The NHSBSP type 1 interval cancers are a heterogeneous grouping with markedly differing pathological features. However, no significant survival difference is seen between the different type 1 subgroups

  1. Sparsity-weighted outlier FLOODing (OFLOOD) method: Efficient rare event sampling method using sparsity of distribution.

    Science.gov (United States)

    Harada, Ryuhei; Nakamura, Tomotake; Shigeta, Yasuteru

    2016-03-30

    As an extension of the Outlier FLOODing (OFLOOD) method [Harada et al., J. Comput. Chem. 2015, 36, 763], the sparsity of the outliers defined by a hierarchical clustering algorithm, FlexDice, was considered to achieve an efficient conformational search as sparsity-weighted "OFLOOD." In OFLOOD, FlexDice detects areas of sparse distribution as outliers. The outliers are regarded as candidates that have high potential to promote conformational transitions and are employed as initial structures for conformational resampling by restarting molecular dynamics simulations. When detecting outliers, FlexDice assigns each outlier a rank in the hierarchy, which relates to the sparsity of the distribution. In this study, we define lower-rank (first-ranked), medium-rank (second-ranked), and highest-rank (third-ranked) outliers, respectively. For instance, the first-ranked outliers are located in a given conformational space away from the clusters (highly sparse distribution), whereas the third-ranked outliers are near the clusters (a moderately sparse distribution). To achieve the conformational search efficiently, resampling from the outliers with a given rank is performed. As demonstrations, this method was applied to several model systems: alanine dipeptide, Met-enkephalin, Trp-cage, T4 lysozyme, and glutamine binding protein. In each demonstration, the present method successfully reproduced transitions among metastable states. In particular, the first-ranked OFLOOD highly accelerated the exploration of conformational space by expanding the edges. In contrast, the third-ranked OFLOOD intensively reproduced local transitions among neighboring metastable states. For quantitative evaluation of the sampled snapshots, free energy calculations were performed with a combination of umbrella samplings, providing rigorous landscapes of the biomolecules. © 2015 Wiley Periodicals, Inc.

  2. QT Interval in Pregnant and Non-pregnant Women

    Directory of Open Access Journals (Sweden)

    Majid Zamani

    2014-03-01

    Full Text Available Introduction: Prolongation of the QT interval might result in dangerous cardiac arrhythmias, including Torsades de Pointes (TdP), consequently leading to syncope or death. A limited number of studies carried out in this respect to date have shown that the QT interval might increase during pregnancy. On the other hand, it has been shown that each pregnancy might increase the risk of cardiac events in patients with a long QT interval. Therefore, the present study was undertaken to compare QT intervals in pregnant and non-pregnant women. Methods: The pregnant women group consisted of 40 women in the second and third trimesters of pregnancy, and the non-pregnant control group consisted of healthy women 18-35 years of age. All the patients underwent standard 12-lead electrocardiography (ECG). The QT interval was measured for each patient at lead II. The mean corrected QT interval (QTc) and QT dispersion (QTd) were compared between the two groups. Results: Mean heart rates in the pregnant and non-pregnant groups were 98.55±14.09 and 72.53±13.17 beats/minute (P<0.001). QTd and QTc means were in the normal range in both groups; however, these variables were 49.50±12.80 and 43.03±18.47 milliseconds in the pregnant group and 39.5±9.59 and 40.38±17.20 milliseconds in the control group, respectively (P<0.001). Conclusion: The QT interval was longer in pregnant women compared to non-pregnant women; however, it was in the normal range in both groups. Therefore, it is important to monitor and manage risk factors involved in prolongation of the QT interval and to prevent the concurrence of these factors with pregnancy.

  3. Construction of Interval Wavelet Based on Restricted Variational Principle and Its Application for Solving Differential Equations

    OpenAIRE

    Mei, Shu-Li; Lv, Hong-Liang; Ma, Qin

    2008-01-01

    Based on the restricted variational principle, a novel method for interval wavelet construction is proposed. Owing to the excellent local properties of the quasi-Shannon wavelet, its interval wavelet is constructed and then applied to solve ordinary differential equations. Parameter choices for the interval wavelet method are discussed and its numerical performance is demonstrated.

  4. Score Function of Distribution and Revival of the Moment Method

    Czech Academy of Sciences Publication Activity Database

    Fabián, Zdeněk

    2016-01-01

    Vol. 45, No. 4 (2016), pp. 1118-1136 ISSN 0361-0926 R&D Projects: GA MŠk(CZ) LG12020 Institutional support: RVO:67985807 Keywords: characteristics of distributions * data characteristics * general moment method * Huber moment estimator * parametric methods * score function Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.311, year: 2016

  5. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition of generic run-time error types, design of methods of observing application software behavior during execution, and design of methods of evaluating run-time constraints. In the definition of error types it is attempted to cover all relevant aspects of the application software behavior. Methods of observation and constraint evaluation are designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design...

  6. Thermodynamic method for generating random stress distributions on an earthquake fault

    Science.gov (United States)

    Barall, Michael; Harris, Ruth A.

    2012-01-01

    This report presents a new method for generating random stress distributions on an earthquake fault, suitable for use as initial conditions in a dynamic rupture simulation. The method employs concepts from thermodynamics and statistical mechanics. A pattern of fault slip is considered to be analogous to a micro-state of a thermodynamic system. The energy of the micro-state is taken to be the elastic energy stored in the surrounding medium. Then, the Boltzmann distribution gives the probability of a given pattern of fault slip and stress. We show how to decompose the system into independent degrees of freedom, which makes it computationally feasible to select a random state. However, due to the equipartition theorem, straightforward application of the Boltzmann distribution leads to a divergence which predicts infinite stress. To avoid equipartition, we show that the finite strength of the fault acts to restrict the possible states of the system. By analyzing a set of earthquake scaling relations, we derive a new formula for the expected power spectral density of the stress distribution, which allows us to construct a computer algorithm free of infinities. We then present a new technique for controlling the extent of the rupture by generating a random stress distribution thousands of times larger than the fault surface, and selecting a portion which, by chance, has a positive stress perturbation of the desired size. Finally, we present a new two-stage nucleation method that combines a small zone of forced rupture with a larger zone of reduced fracture energy.
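
    The spectral-synthesis step (drawing a random field whose power spectral density matches a prescribed target) can be sketched generically by filtering white noise in the Fourier domain. The k^-2 target and 3 MPa scaling below are placeholders, not the spectrum derived in the report:

        import numpy as np

        rng = np.random.default_rng(6)

        # 1-D random stress profile with a target power spectral density.
        n, dx = 4096, 100.0           # samples and spacing (m), illustrative
        k = np.fft.rfftfreq(n, d=dx)  # spatial frequencies
        psd = np.zeros_like(k)
        psd[1:] = k[1:] ** -2.0       # assumed power-law target PSD ~ k^-2

        noise = np.fft.rfft(rng.standard_normal(n))
        stress = np.fft.irfft(noise * np.sqrt(psd), n=n)
        stress *= 3e6 / stress.std()  # scale to a 3 MPa std-dev perturbation
        print(stress.min(), stress.max())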

  7. Design and Analysis of Schemes for Adapting Migration Intervals in Parallel Evolutionary Algorithms.

    Science.gov (United States)

    Mambrini, Andrea; Sudholt, Dirk

    2015-01-01

    The migration interval is one of the fundamental parameters governing the dynamic behaviour of island models. Yet, there is little understanding on how this parameter affects performance, and how to optimally set it given a problem in hand. We propose schemes for adapting the migration interval according to whether fitness improvements have been found. As long as no improvement is found, the migration interval is increased to minimise communication. Once the best fitness has improved, the migration interval is decreased to spread new best solutions more quickly. We provide a method for obtaining upper bounds on the expected running time and the communication effort, defined as the expected number of migrants sent. Example applications of this method to common example functions show that our adaptive schemes are able to compete with, or even outperform, the optimal fixed choice of the migration interval, with regard to running time and communication effort.
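
    One of the schemes described (double the migration interval while the best fitness stagnates, reset it once an improvement arrives) can be sketched on a toy island model. The (1+1) EA on OneMax and all parameters below are illustrative, not the paper's analysis setting:

        import random

        class Island:
            # A tiny (1+1) EA on OneMax, just to make the scheme concrete.
            def __init__(self, n, rng):
                self.n, self.rng = n, rng
                self.x = [rng.randint(0, 1) for _ in range(n)]
            def fitness(self):
                return sum(self.x)
            def step(self):
                y = [b ^ (self.rng.random() < 1.0 / self.n) for b in self.x]
                if sum(y) >= self.fitness():
                    self.x = y
            def receive(self, migrant):
                if sum(migrant) > self.fitness():
                    self.x = list(migrant)

        def adaptive_island_model(num_islands=4, n=64, generations=500, seed=7):
            master = random.Random(seed)
            islands = [Island(n, random.Random(master.random()))
                       for _ in range(num_islands)]
            tau, best, gen = 1, -1, 0
            while gen < generations:
                steps = min(tau, generations - gen)
                for _ in range(steps):             # evolve between migrations
                    for isl in islands:
                        isl.step()
                gen += steps
                current = max(isl.fitness() for isl in islands)
                if current > best:
                    best, tau = current, 1         # improvement: migrate often
                else:
                    tau *= 2                       # stagnation: back off
                migrants = [isl.x for isl in islands]
                for i, isl in enumerate(islands):
                    isl.receive(migrants[i - 1])   # ring topology migration
            return best

        print(adaptive_island_model())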

  8. Cardiac time intervals by tissue Doppler imaging M-mode echocardiography

    DEFF Research Database (Denmark)

    Biering-Sørensen, Tor

    2016-01-01

    for myocardial myocytes to achieve an LV pressure equal to that of the aorta increases, resulting in a prolongation of the isovolumic contraction time (IVCT). Furthermore, the ability of myocardial myocytes to maintain the LV pressure decreases, resulting in a reduction of the ejection time (ET). As LV diastolic...... of whether the LV is suffering from impaired systolic or diastolic function. A novel method of evaluating the cardiac time intervals has recently evolved. Using tissue Doppler imaging (TDI) M-mode through the mitral valve (MV) to estimate the cardiac time intervals may be an improved method reflecting global...

  9. Risk prediction of cardiovascular death based on the QTc interval

    DEFF Research Database (Denmark)

    Nielsen, Jonas B; Graff, Claus; Rasmussen, Peter V

    2014-01-01

    AIMS: Using a large, contemporary primary care population we aimed to provide absolute long-term risks of cardiovascular death (CVD) based on the QTc interval and to test whether the QTc interval is of value in risk prediction of CVD on an individual level. METHODS AND RESULTS: Digital electrocardiograms from 173 529 primary care patients aged 50-90 years were collected during 2001-11. The Framingham formula was used for heart rate-correction of the QT interval. Data on medication, comorbidity, and outcomes were retrieved from administrative registries. During a median follow-up period of 6...
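
    For reference, the Framingham (Sagie) correction mentioned here is the linear formula QTc = QT + 0.154(1 - RR), with QT and RR in seconds. A one-function sketch:

        def framingham_qtc(qt_ms: float, rr_ms: float) -> float:
            # QTc = QT + 0.154 * (1 - RR), QT and RR expressed in seconds.
            qt, rr = qt_ms / 1000.0, rr_ms / 1000.0
            return (qt + 0.154 * (1.0 - rr)) * 1000.0

        # Example: QT = 380 ms at heart rate 75 bpm (RR = 800 ms).
        print(framingham_qtc(380.0, 800.0))   # ~410.8 ms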

  10. An interval-based possibilistic programming method for waste management with cost minimization and environmental-impact abatement under uncertainty.

    Science.gov (United States)

    Li, Y P; Huang, G H

    2010-09-15

    Considerable public concern has been raised in the past decades since the large amounts of pollutant emissions from municipal solid waste (MSW) disposal processes pose risks to the surrounding environment and human health. Moreover, in MSW management, various uncertainties exist in the related costs, impact factors and objectives, which can affect the optimization processes and the decision schemes generated. In this study, an interval-based possibilistic programming (IBPP) method is developed for planning MSW management with minimized system cost and environmental impact under uncertainty. The developed method can deal with uncertainties expressed as interval values and fuzzy sets in the left- and right-hand sides of the constraints and objective function. An interactive algorithm is provided for solving the IBPP problem, which does not lead to more complicated intermediate submodels and has a relatively low computational requirement. The developed model is applied to a case study of planning a MSW management system, where the mixed integer linear programming (MILP) technique is introduced into the IBPP framework to facilitate dynamic analysis for decisions of timing, sizing and siting in terms of capacity expansion for waste-management facilities. Three cases based on different waste-management policies are examined. The results obtained indicate that inclusion of environmental impacts in the optimization model can change the traditional waste-allocation pattern based merely on the economic-oriented planning approach. The results can help identify desired alternatives for managing MSW, with the advantage of providing compromise schemes under an integrated consideration of economic efficiency and environmental impact under uncertainty. Copyright 2010 Elsevier B.V. All rights reserved.

  11. Two new bivariate zero-inflated generalized Poisson distributions with a flexible correlation structure

    Directory of Open Access Journals (Sweden)

    Chi Zhang

    2015-05-01

    Full Text Available To model correlated bivariate count data with extra zero observations, this paper proposes two new bivariate zero-inflated generalized Poisson (ZIGP) distributions by incorporating a multiplicative factor (or dependency parameter) λ, named Type I and Type II bivariate ZIGP distributions, respectively. The proposed distributions possess a flexible correlation structure and can be used to fit either positively or negatively correlated and either over- or under-dispersed count data, in contrast to existing models that can only fit positively correlated count data with over-dispersion. The two marginal distributions of the Type I bivariate ZIGP share a common zero-inflation parameter, while the two marginal distributions of the Type II bivariate ZIGP have their own zero-inflation parameters, resulting in a much wider range of applications. The important distributional properties are explored, and some useful statistical inference methods, including maximum likelihood estimation of parameters, standard error estimation, bootstrap confidence intervals and related hypothesis tests, are developed for the two distributions. A real data set is thoroughly analyzed using the proposed distributions and statistical methods. Several simulation studies are conducted to evaluate the performance of the proposed methods.

  12. Parameter identification for structural dynamics based on interval analysis algorithm

    Science.gov (United States)

    Yang, Chen; Lu, Zixing; Yang, Zhenyu; Liang, Ke

    2018-04-01

    A parameter identification method using an interval analysis algorithm for structural dynamics is presented in this paper. The proposed uncertain identification method is investigated using the central difference method and an ARMA system. With the help of the fixed-memory least squares method and the matrix inversion lemma, a set-membership identification technique is applied to obtain the best estimate of the identified parameters in a tight and accurate region. To overcome the lack of sufficient statistical description of the uncertain parameters, this paper treats uncertainties as non-probabilistic intervals. As long as the bounds of the uncertainties are known, this algorithm can obtain not only the center estimates of the parameters but also the bounds of the errors. To improve the efficiency of the proposed method, a time-saving algorithm based on a recursive formula is presented. Finally, to verify the accuracy of the proposed method, two numerical examples are presented and evaluated by three identification criteria, respectively.

  13. FSILP: fuzzy-stochastic-interval linear programming for supporting municipal solid waste management.

    Science.gov (United States)

    Li, Pu; Chen, Bing

    2011-04-01

    Although many studies on municipal solid waste (MSW) management were conducted under the coexisting uncertain conditions of fuzziness, stochasticity, and intervals, solving the resulting conventional linear programming problems by integrating the fuzzy method with the other two was inefficient. In this study, a fuzzy-stochastic-interval linear programming (FSILP) method is developed by integrating Nguyen's method with conventional linear programming for supporting municipal solid waste management. Nguyen's method was used to convert the fuzzy and fuzzy-stochastic linear programming problems into conventional linear programs, by measuring the attainment values of fuzzy numbers and/or fuzzy random variables, as well as superiority and inferiority between triangular fuzzy numbers/triangular fuzzy-stochastic variables. The developed method can effectively tackle uncertainties described in terms of probability density functions, fuzzy membership functions, and discrete intervals. Moreover, the method can also improve upon the conventional interval fuzzy programming and two-stage stochastic programming approaches, with advantageous capabilities that are easily achieved with fewer constraints and significantly reduced computation time. The developed model was applied to a case study of a municipal solid waste management system in a city. The results indicated that reasonable solutions had been generated. The solution can help quantify the relationship between the change of system cost and the uncertainties, which could support further analysis of tradeoffs between the waste management cost and the system failure risk. Copyright © 2010 Elsevier Ltd. All rights reserved.

  14. New method for extracting tumors in PET/CT images based on the probability distribution

    International Nuclear Information System (INIS)

    Nitta, Shuhei; Hontani, Hidekata; Hukami, Tadanori

    2006-01-01

    In this report, we propose a method for extracting tumors from PET/CT images by referring to the probability distribution of pixel values in the PET image. In the proposed method, the organs that normally take up fluorodeoxyglucose (FDG) (e.g., the liver, kidneys, and brain) are first extracted, and the tumors are then extracted from the images. The distribution of pixel values in PET images differs in each region of the body. Therefore, the threshold for detecting tumors is adaptively determined by referring to the distribution. We applied the proposed method to 37 cases and evaluated its performance. This report also presents the results of experiments comparing the proposed method with another method in which the pixel values are normalized for extracting tumors. (author)

  15. QT Interval Adaption to RR Interval Changes and Correlation Analysis between Heart Rate Variability and QT Interval Variability

    Institute of Scientific and Technical Information of China (English)

    朱逸; 杨啸林; 彭屹

    2013-01-01

    As a noninvasive and continuous monitoring technique, the surface electrocardiogram plays an important and irreplaceable role in evaluating cardiac safety. The interval time series extracted from the surface electrocardiogram contain important information: among them, the RR interval series, which reflects the cardiac cycle, and interval series such as the QT interval, which reflect the duration of ventricular repolarization, are clinically the most fundamental, and research on them is of particular significance. From the perspective of applied basic research, this paper introduces the analysis methods and research progress concerning the adaptation of the QT interval to RR interval changes and the correlation between heart rate variability (HRV) and QT interval variability (QTV), and discusses their potential applications.

  16. An optimal dynamic interval preventive maintenance scheduling for series systems

    International Nuclear Information System (INIS)

    Gao, Yicong; Feng, Yixiong; Zhang, Zixian; Tan, Jianrong

    2015-01-01

    This paper studies preventive maintenance (PM) with a dynamic interval for a multi-component system. Instead of an equal interval, the PM period in the proposed dynamic interval model is not a fixed constant: it varies between interval-down and interval-up. Compared to a periodic PM scheme, controlling the maintenance frequency in this way helps reduce outage losses on frequently repaired parts and avoid under-maintenance of the equipment. According to the definition of the dynamic interval, the reliability of the system is analyzed from the failure mechanisms of its components and the different effects of non-periodic PM actions on component reliability. Following the proposed reliability model, a novel framework for solving the non-periodic PM schedule with dynamic interval, based on a multi-objective genetic algorithm, is proposed. The framework includes an updating strategy, a deleting strategy, an inserting strategy and a moving strategy, which are used to correct invalid population individuals of the algorithm. The values of the dynamic interval and the selection of PM actions for the components at every PM stage are determined by achieving a certain level of system availability with the minimum total PM-related cost. Finally, a typical rotary table system of an NC machine tool is used as an example to describe the proposed method. - Highlights: • A non-periodic preventive maintenance scheduling model is proposed. • A framework for solving the non-periodical PM schedule problem is developed. • The interval of non-periodic PM is flexible and the schedule can be better adjusted. • Dynamic interval leads to more efficient solutions than fixed interval does

  17. Generalized Confidence Intervals and Fiducial Intervals for Some Epidemiological Measures

    Directory of Open Access Journals (Sweden)

    Ionut Bebu

    2016-06-01

    Full Text Available For binary outcome data from epidemiological studies, this article investigates the interval estimation of several measures of interest in the absence or presence of categorical covariates. When covariates are present, the logistic regression model as well as the log-binomial model are investigated. The measures considered include the common odds ratio (OR from several studies, the number needed to treat (NNT, and the prevalence ratio. For each parameter, confidence intervals are constructed using the concepts of generalized pivotal quantities and fiducial quantities. Numerical results show that the confidence intervals so obtained exhibit satisfactory performance in terms of maintaining the coverage probabilities even when the sample sizes are not large. An appealing feature of the proposed solutions is that they are not based on maximization of the likelihood, and hence are free from convergence issues associated with the numerical calculation of the maximum likelihood estimators, especially in the context of the log-binomial model. The results are illustrated with a number of examples. The overall conclusion is that the proposed methodologies based on generalized pivotal quantities and fiducial quantities provide an accurate and unified approach for the interval estimation of the various epidemiological measures in the context of binary outcome data with or without covariates.

  18. Ventricular Cycle Length Characteristics Estimative of Prolonged RR Interval during Atrial Fibrillation

    Science.gov (United States)

    CIACCIO, EDWARD J.; BIVIANO, ANGELO B.; GAMBHIR, ALOK; EINSTEIN, ANDREW J.; GARAN, HASAN

    2014-01-01

    Background When atrial fibrillation (AF) is incessant, imaging during a prolonged ventricular RR interval may improve image quality. It was hypothesized that long RR intervals could be predicted from preceding RR values. Methods From the PhysioNet database, electrocardiogram RR intervals were obtained from 74 persistent AF patients. An RR interval lengthened by at least 250 ms beyond the immediately preceding RR interval (termed T0 and T1, respectively) was considered prolonged. A two-parameter scatterplot was used to predict the occurrence of a prolonged interval T0. The scatterplot parameters were: (1) RR variability (RRv) estimated as the average second derivative from 10 previous pairs of RR differences, T13–T2, and (2) Tm–T1, the difference between Tm, the mean from T13 to T2, and T1. For each patient, scatterplots were constructed using preliminary data from the first hour. The ranges of parameters 1 and 2 were adjusted to maximize the proportion of prolonged RR intervals within range. These constraints were used for prediction of prolonged RR in test data collected during the second hour. Results The mean prolonged event was 1.0 seconds in duration. Actual prolonged events were identified with a mean positive predictive value (PPV) of 80% in the test set. PPV was >80% in 36 of 74 patients. An average of 10.8 prolonged RR intervals per 60 minutes was correctly identified. Conclusions A method was developed to predict prolonged RR intervals using two parameters and prior statistical sampling for each patient. This or similar methodology may help improve cardiac imaging in many longstanding persistent AF patients. PMID:23998759
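
    The two scatterplot parameters above are simple to compute from a running RR series. The following Python sketch implements them under the stated definitions; the use of the absolute second difference for RRv and the per-patient tuning ranges are assumptions for illustration, not details taken from the paper.

        import numpy as np

        PROLONGED_MS = 250  # an interval >= 250 ms beyond T1 counts as prolonged

        def features(rr):
            """Scatterplot parameters for the beat following rr[-1] (= T1).

            rr: RR intervals in ms, oldest to newest; rr[-13] is T13.
            """
            t = np.asarray(rr[-13:], dtype=float)  # T13 ... T1
            window = t[:-1]                        # T13 ... T2 (12 values)
            # "average second derivative from 10 previous pairs of RR
            # differences" -- np.diff(window, n=2) yields exactly 10 values;
            # taking the absolute value is an assumption
            rrv = np.mean(np.abs(np.diff(window, n=2)))
            tm_minus_t1 = window.mean() - t[-1]    # Tm - T1
            return rrv, tm_minus_t1

        def predict_prolonged(rr, rrv_range, diff_range):
            """Flag the next interval as likely prolonged when both features
            fall inside ranges tuned on each patient's first hour of data
            (the ranges themselves are hypothetical here)."""
            rrv, d = features(rr)
            return (rrv_range[0] <= rrv <= rrv_range[1]
                    and diff_range[0] <= d <= diff_range[1])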

  19. Application of autoradiographic methods for contaminant distribution studies in soils

    International Nuclear Information System (INIS)

    Povetko, O.G.; Higley, K.A.

    2000-01-01

    In order to determine the physical location of contaminants in soil, solidified soil 'thin' sections, which preserve the undisturbed structural characteristics of the original soil, were prepared. This paper describes the application of different autoradiographic methods to identify the distribution of selected nuclides along key structural features of sample soils and the sizes of 'hot particles' of contaminant. These autoradiographic methods included contact autoradiography using CR-39 (Homalite Plastics) plastic alpha track detectors and neutron-induced autoradiography that produced fission fragment tracks in Lexan (Thrust Industries, Inc.) plastic detectors. Intact soil samples containing weapons-grade plutonium from the Rocky Flats Environmental Test Site and control samples from outside the site location were used in thin soil section preparation. The distribution of actinide particles was observed and analyzed through the soil section depth profile from the surface to a depth of 15 cm. The combination of the two autoradiographic methods made it possible to distinguish alpha-emitting particles of natural U, 239+240 Pu and non-fissile alpha-emitters. The locations of 990 alpha 'stars' caused by 239+240 Pu and 241 Am 'hot particles' were recorded; the particles were sized, and their size-frequency, depth and activity distributions were analyzed. Several large colloidal conglomerates of 239+240 Pu and 241 Am 'hot particles' were found in the soil profile. Their alpha and fission fragment 'star' images were microphotographed. (author)

  20. Development of advanced methods for planning electric energy distribution systems. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Goenen, T.; Foote, B.L.; Thompson, J.C.; Fagan, J.E.

    1979-10-01

    An extensive search was made to identify and collect reports published in the open literature that describe distribution planning methods and techniques. In addition, a questionnaire was prepared and sent to a large number of electric power utility companies. Many of these companies were visited and/or their distribution planners interviewed to identify and describe the distribution system planning methods and techniques used by these utilities and other commercial entities. Distribution system planning models were reviewed, and a set of new mixed-integer programming models was developed for the optimal expansion of distribution systems. The models help the planner to select: (1) optimum substation locations; (2) optimum substation expansions; (3) optimum substation transformer sizes; (4) optimum load transfers between substations; (5) optimum feeder routes and sizes, subject to a set of specified constraints. The models permit following existing rights-of-way and avoid areas where feeders and substations cannot be constructed. The results of computer runs were analyzed for adequacy in serving projected loads within regulation limits for both normal and emergency operation.

  1. Predictive Distribution of the Dirichlet Mixture Model by the Local Variational Inference Method

    DEFF Research Database (Denmark)

    Ma, Zhanyu; Leijon, Arne; Tan, Zheng-Hua

    2014-01-01

    the predictive likelihood of the new upcoming data, especially when the amount of training data is small. The Bayesian estimation of a Dirichlet mixture model (DMM) is, in general, not analytically tractable. In our previous work, we have proposed a global variational inference-based method for approximately calculating the posterior distributions of the parameters in the DMM analytically. In this paper, we extend our previous study for the DMM and propose an algorithm to calculate the predictive distribution of the DMM with the local variational inference (LVI) method. The true predictive distribution of the DMM is analytically intractable. By considering the concave property of the multivariate inverse beta function, we introduce an upper bound to the true predictive distribution. As the global minimum of this upper bound exists, the problem is reduced to seeking an approximation to the true predictive distribution…

  2. Robust Control with Enlarged Interval of Uncertain Parameters

    Directory of Open Access Journals (Sweden)

    Marek Keresturi

    2002-01-01

    Full Text Available Robust control is advantageous for systems with a defined interval of uncertain parameters. This interval can be substantially enlarged by dividing it into a few sub-intervals. Corresponding controllers for each sub-interval may be set after approximate identification of some uncertain plant parameters. The paper deals with the application of the pole region assignment method to position control of a crane crab. The same track form is required for an uncertain burden mass and an approximate value of the rope length. Measurement of the crab position and speed is assumed; the burden deviation angle is observed. Simulation results have verified the feasibility of this design procedure.

  3. Multilayer perceptron for robust nonlinear interval regression analysis using genetic algorithms.

    Science.gov (United States)

    Hu, Yi-Chung

    2014-01-01

    On the basis of fuzzy regression, computational intelligence models such as neural networks can be applied to nonlinear interval regression analysis for dealing with uncertain and imprecise data. When training data are not contaminated by outliers, such models perform well by including almost all given training data in the data interval. Nevertheless, since training data are often corrupted by outliers, robust learning algorithms that resist outliers in interval regression analysis have been an interesting area of research. Several computational intelligence approaches are effective for resisting outliers, but their required parameters depend on whether the collected data contain outliers or not. Since it seems difficult to prespecify the degree of contamination beforehand, this paper uses a multilayer perceptron to construct the robust nonlinear interval regression model using a genetic algorithm. Outliers beyond or beneath the data interval then have only a slight effect on the determination of the data interval. Simulation results demonstrate that the proposed method performs well for contaminated datasets.

  4. Continuous Exercise but Not High Intensity Interval Training Improves Fat Distribution in Overweight Adults

    Directory of Open Access Journals (Sweden)

    Shelley E. Keating

    2014-01-01

    Full Text Available Objective. The purpose of this study was to assess the effect of high intensity interval training (HIIT) versus continuous aerobic exercise training (CONT) or placebo (PLA) on body composition by randomized controlled design. Methods. Work capacity and body composition (dual-energy X-ray absorptiometry) were measured before and after 12 weeks of intervention in 38 previously inactive overweight adults. Results. There was a significant group × time interaction for change in work capacity (P<0.001), which increased significantly in CONT (23.8±3.0%) and HIIT (22.3±3.5%) but not PLA (3.1±5.0%). There was a near-significant main effect for percentage trunk fat, with trunk fat reducing in CONT by 3.1±1.6% and in PLA by 1.1±0.4%, but not in HIIT (increase of 0.7±1.0%) (P=0.07). There was a significant reduction in android fat percentage in CONT (2.7±1.3%) and PLA (1.4±0.8%) but not HIIT (increase of 0.8±0.7%) (P=0.04). Conclusion. These data suggest that HIIT may be advocated as a time-efficient strategy for eliciting comparable fitness benefits to traditional continuous exercise in inactive, overweight adults. However, in this population HIIT does not confer the same benefit to body fat levels as continuous exercise training.

  5. Communication Systems and Study Method for Active Distribution Power systems

    DEFF Research Database (Denmark)

    Wei, Mu; Chen, Zhe

    Due to the involvement and evolvement of communication technologies in contemporary power systems, the applications of modern communication technologies in distribution power systems are becoming increasingly important. In this paper, the International Organization for Standardization (ISO) reference seven-layer model of communication systems and the main communication technologies and protocols on each corresponding layer are introduced. Some newly developed communication techniques, like Ethernet, are discussed with reference to possible applications in distributed power systems. The suitability of the communication technology to the distribution power system with active renewable energy based generation units is discussed. Subsequently, typical possible communication systems are studied by simulation. In this paper, a novel method of integrating communication system impact into power…

  6. Distribution Route Planning of Clean Coal Based on Nearest Insertion Method

    Science.gov (United States)

    Wang, Yunrui

    2018-01-01

    Clean coal technology has made considerable progress over several decades, but research on its distribution is scarce. Distribution efficiency directly affects the overall development of clean coal technology, and rational planning of distribution routes is the key to improving it. The object of this paper is a clean coal distribution system built in a county. A survey of customer demand, distribution routes and distribution vehicles in previous years showed that vehicles had been deployed by experience alone and that the number of vehicles used each day varied, resulting in wasted transport and increased energy consumption. A mathematical model was therefore established with the shortest path as the objective function, and the distribution routes were re-planned using an improved nearest-insertion method. The results show that the transportation distance was reduced by 37 km and the number of vehicles used per day decreased from an average of 5 to a fixed 4, while the real loading rate of vehicles increased by 16.25% at the same distribution volume. This realizes efficient distribution of clean coal and achieves the goal of saving energy and reducing consumption.
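
    The nearest-insertion heuristic referred to above is a standard tour-construction method. A minimal Python sketch follows; the symmetric distance matrix and the depot at index 0 are illustrative assumptions, not details from the paper.

        def nearest_insertion(dist):
            """Build a closed tour over all nodes of a symmetric distance
            matrix `dist`, starting and ending at the depot (index 0)."""
            n = len(dist)
            unvisited = set(range(1, n))
            first = min(unvisited, key=lambda j: dist[0][j])
            tour = [0, first, 0]
            unvisited.remove(first)
            while unvisited:
                # pick the unvisited node nearest to any node already in the tour
                k = min(unvisited,
                        key=lambda j: min(dist[i][j] for i in tour))
                # insert it where it causes the smallest detour
                pos = min(range(len(tour) - 1),
                          key=lambda p: (dist[tour[p]][k] + dist[k][tour[p + 1]]
                                         - dist[tour[p]][tour[p + 1]]))
                tour.insert(pos + 1, k)
                unvisited.remove(k)
            return tour

        # Example with a hypothetical 5-node distance matrix:
        # D = [[0, 4, 8, 9, 12], [4, 0, 6, 8, 9], [8, 6, 0, 10, 11],
        #      [9, 8, 10, 0, 7], [12, 9, 11, 7, 0]]
        # nearest_insertion(D) returns a visiting order starting and ending at 0.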

  7. Multiplicity distributions in small phase-space domains in central nucleus-nucleus collisions

    International Nuclear Information System (INIS)

    Baechler, J.; Hoffmann, M.; Runge, K.; Schmoetten, E.; Bartke, J.; Gladysz, E.; Kowalski, M.; Stefanski, P.; Bialkowska, H.; Bock, R.; Brockmann, R.; Sandoval, A.; Buncic, P.; Ferenc, D.; Kadija, K.; Ljubicic, A. Jr.; Vranic, D.; Chase, S.I.; Harris, J.W.; Odyniec, G.; Pugh, H.G.; Rai, G.; Teitelbaum, L.; Tonse, S.; Derado, I.; Eckardt, V.; Gebauer, H.J.; Rauch, W.; Schmitz, N.; Seyboth, P.; Seyerlein, J.; Vesztergombi, G.; Eschke, J.; Heck, W.; Kabana, S.; Kuehmichel, A.; Lahanas, M.; Lee, Y.; Le Vine, M.; Margetis, S.; Renfordt, R.; Roehrich, D.; Rothard, H.; Schmidt, E.; Schneider, I.; Stock, R.; Stroebele, H.; Wenig, S.; Fleischmann, B.; Fuchs, M.; Gazdzicki, M.; Kosiec, J.; Skrzypczak, E.; Keidel, R.; Piper, A.; Puehlhofer, F.; Nappi, E.; Posa, F.; Paic, G.; Panagiotou, A.D.; Petridis, A.; Vassileiadis, G.; Pfenning, J.; Wosiek, B.

    1992-10-01

    Multiplicity distributions of negatively charged particles have been studied in restricted phase space intervals for central S + S, O + Au and S + Au collisions at 200 GeV/nucleon. It is shown that the multiplicity distributions are well described by a negative binomial form irrespective of the size and dimensionality of the phase space domain. A clan structure analysis reveals interesting similarities between complex nuclear collisions and a simple partonic shower. The lognormal distribution agrees reasonably well with the multiplicity data in large domains, but fails in the case of small intervals. No universal scaling function was found to describe the shape of the multiplicity distributions in phase space intervals of varying size. (orig.)

  8. Five-Year Risk of Interval-Invasive Second Breast Cancer

    Science.gov (United States)

    Buist, Diana S. M.; Houssami, Nehmat; Dowling, Emily C.; Halpern, Elkan F.; Gazelle, G. Scott; Lehman, Constance D.; Henderson, Louise M.; Hubbard, Rebecca A.

    2015-01-01

    Background: Earlier detection of second breast cancers after primary breast cancer (PBC) treatment improves survival, yet mammography is less accurate in women with prior breast cancer. The purpose of this study was to examine women presenting clinically with second breast cancers after negative surveillance mammography (interval cancers), and to estimate the five-year risk of interval-invasive second cancers for women with varying risk profiles. Methods: We evaluated a prospective cohort of 15 114 women with 47 717 surveillance mammograms diagnosed with stage 0-II unilateral PBC from 1996 through 2008 at facilities in the Breast Cancer Surveillance Consortium. We used discrete time survival models to estimate the association between odds of an interval-invasive second breast cancer and candidate predictors, including demographic, PBC, and imaging characteristics. All statistical tests were two-sided. Results: The cumulative incidence of second breast cancers after five years was 54.4 per 1000 women, with 325 surveillance-detected and 138 interval-invasive second breast cancers. The five-year risk of interval-invasive second cancer for women with referent category characteristics was 0.60%. For women with the most and least favorable profiles, the five-year risk ranged from 0.07% to 6.11%. Multivariable modeling identified grade II PBC (odds ratio [OR] = 1.95, 95% confidence interval [CI] = 1.15 to 3.31), treatment with lumpectomy without radiation (OR = 3.27, 95% CI = 1.91 to 5.62), interval PBC presentation (OR = 2.01, 95% CI = 1.28 to 3.16), and heterogeneously dense breasts on mammography (OR = 1.54, 95% CI = 1.01 to 2.36) as independent predictors of interval-invasive second breast cancers. Conclusions: PBC diagnosis and treatment characteristics contribute to variation in subsequent-interval second breast cancer risk. Consideration of these factors may be useful in developing tailored post-treatment imaging surveillance plans. PMID:25904721

  9. Hypotensive Response Magnitude and Duration in Hypertensives: Continuous and Interval Exercise

    Directory of Open Access Journals (Sweden)

    Raphael Santos Teodoro de Carvalho

    2015-03-01

    Full Text Available Background: Although exercise training is known to promote post-exercise hypotension, there is currently no consistent argument about the effects of manipulating its various components (intensity, duration, rest periods, types of exercise, training methods) on the magnitude and duration of hypotensive response. Objective: To compare the effect of continuous and interval exercises on hypotensive response magnitude and duration in hypertensive patients by using ambulatory blood pressure monitoring (ABPM). Methods: The sample consisted of 20 elderly hypertensives. Each participant underwent three ABPM sessions: one control ABPM, without exercise; one ABPM after continuous exercise; and one ABPM after interval exercise. Systolic blood pressure (SBP), diastolic blood pressure (DBP), mean arterial pressure (MAP), heart rate (HR) and double product (DP) were monitored to check post-exercise hypotension and for comparison between each ABPM. Results: ABPM after continuous exercise and after interval exercise showed post-exercise hypotension and a significant reduction (p < 0.05) in SBP, DBP, MAP and DP for 20 hours as compared with control ABPM. Comparing ABPM after continuous and ABPM after interval exercise, a significant reduction (p < 0.05) in SBP, DBP, MAP and DP was observed in the latter. Conclusion: Continuous and interval exercise trainings promote post-exercise hypotension with reduction in SBP, DBP, MAP and DP in the 20 hours following exercise. Interval exercise training causes greater post-exercise hypotension and lower cardiovascular overload as compared with continuous exercise.

  10. A new hydraulic regulation method on district heating system with distributed variable-speed pumps

    International Nuclear Information System (INIS)

    Wang, Hai; Wang, Haiying; Zhu, Tong

    2017-01-01

    Highlights: • A hydraulic regulation method was presented for district heating with distributed variable speed pumps. • Information and automation technologies were utilized to support the proposed method. • A new hydraulic model was developed for distributed variable speed pumps. • A new optimization model was developed based on genetic algorithm. • Two scenarios of a multi-source looped system were illustrated to validate the method. - Abstract: Compared with the hydraulic configuration based on a conventional central circulating pump, a district heating system with a distributed variable-speed-pump configuration can often save 30–50% of the power consumed by circulating pumps with frequency inverters. However, hydraulic regulation of a distributed variable-speed-pump configuration can be more complicated than ever when all distributed pumps need to be adjusted to their designated flow rates. Especially in a multi-source looped heating network, where the distributed pumps have strongly coupled and severely non-linear hydraulic connections with each other, it can be rather difficult to maintain hydraulic balance during regulation. In this paper, with the help of advanced automation and information technologies, a new hydraulic regulation method is proposed to achieve on-site hydraulic balance for district heating systems with distributed variable-speed-pump configurations. The proposed method comprises a new hydraulic model, developed to suit the distributed variable-speed-pump configuration, and a calibration model with a genetic algorithm. By carrying out the proposed method step by step, the flow rates of all distributed pumps can be progressively adjusted to their designated values. A hypothetical district heating system with 2 heat sources and 10 substations is taken as a case study to illustrate the feasibility of the proposed method. Two scenarios were investigated respectively. In Scenario I, the

  11. Probability evolution method for exit location distribution

    Science.gov (United States)

    Zhu, Jinjie; Chen, Zhen; Liu, Xianbin

    2018-03-01

    The exit problem in the framework of large deviation theory has been a hot topic in the past few decades. The most probable escape path in the weak-noise limit has been clarified by the Freidlin-Wentzell action functional. However, noise in real physical systems cannot be arbitrarily small, and noise of finite strength may induce nontrivial phenomena such as noise-induced shift and noise-induced saddle-point avoidance. Traditional Monte Carlo simulation of noise-induced escape takes exponentially long as the noise approaches zero, with the majority of the time wasted on uninteresting wandering around the attractors. In this paper, a new method is proposed to decrease the escape simulation time by an exponentially large factor by introducing a series of interfaces and applying reinjection on them. This method can be used to calculate the exit location distribution. It is verified on two classical examples and compared with theoretical predictions. The results show that the method performs well for weak noise, while it may induce certain deviations for large noise. Finally, some possible ways to improve our method are discussed.

  12. Short-Term Wind Power Interval Forecasting Based on an EEMD-RT-RVM Model

    Directory of Open Access Journals (Sweden)

    Haixiang Zang

    2016-01-01

    Full Text Available Accurate short-term wind power forecasting is important for improving the security and economic success of power grids. Existing wind power forecasting methods are mostly deterministic point forecasts, which are vulnerable to forecasting errors and cannot effectively deal with the random nature of wind power. In order to solve these problems, we propose a short-term wind power interval forecasting model based on ensemble empirical mode decomposition (EEMD), runs test (RT), and relevance vector machine (RVM). First, in order to reduce the complexity of the data, the original wind power sequence is decomposed into a number of intrinsic mode function (IMF) components and a residual (RES) component using EEMD. Next, we use the RT method to reconstruct the components and obtain three new components ordered fine-to-coarse. Finally, we obtain the overall forecasting results (with preestablished confidence levels) by superimposing the forecasting results of each new component. Our results show that, compared with existing methods, the proposed short-term interval forecasting method has smaller forecasting errors, narrower interval widths, and larger interval coverage percentages, making it better suited to engineering applications in new energy forecasting.

  13. Interval stability for complex systems

    Science.gov (United States)

    Klinshov, Vladimir V.; Kirillov, Sergey; Kurths, Jürgen; Nekorkin, Vladimir I.

    2018-04-01

    Stability of dynamical systems against strong perturbations is an important problem of nonlinear dynamics relevant to many applications in various areas. Here, we develop a novel concept of interval stability, referring to the behavior of the perturbed system during a finite time interval. Based on this concept, we suggest new measures of stability, namely interval basin stability (IBS) and interval stability threshold (IST). IBS characterizes the likelihood that the perturbed system returns to the stable regime (attractor) in a given time. IST provides the minimal magnitude of the perturbation capable of disrupting the stable regime for a given interval of time. The suggested measures provide important information about the system susceptibility to external perturbations which may be useful for practical applications. Moreover, from a theoretical viewpoint the interval stability measures are shown to bridge the gap between linear and asymptotic stability. We also suggest numerical algorithms for quantification of the interval stability characteristics and demonstrate their potential for several dynamical systems of various nature, such as power grids and neural networks.
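
    The IBS measure lends itself to a direct Monte Carlo estimate. The Python sketch below illustrates the idea on a damped pendulum with uniformly distributed perturbations in a disc around the attractor; the test system, perturbation radius, tolerance and sample size are illustrative choices, not the paper's numerical algorithm.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Damped pendulum x' = v, v' = -a*v - sin(x); the attractor is the
        # hanging state (x, v) = (0, 0), modulo 2*pi in the angle.
        A = 0.2

        def rhs(t, y):
            return [y[1], -A * y[1] - np.sin(y[0])]

        def back_on_attractor(y, tol=0.1):
            ang = (y[0] + np.pi) % (2 * np.pi) - np.pi  # wrap angle to [-pi, pi)
            return np.hypot(ang, y[1]) < tol

        def interval_basin_stability(t_return=50.0, radius=3.0, n=200, seed=1):
            """Monte Carlo estimate of IBS: the fraction of random
            perturbations (uniform in a disc of the given radius) whose
            trajectories are back near the attractor after t_return."""
            rng = np.random.default_rng(seed)
            hits = 0
            for _ in range(n):
                r = radius * np.sqrt(rng.uniform())   # uniform over the disc
                phi = rng.uniform(0.0, 2.0 * np.pi)
                y0 = [r * np.cos(phi), r * np.sin(phi)]
                sol = solve_ivp(rhs, (0.0, t_return), y0, rtol=1e-8)
                hits += back_on_attractor(sol.y[:, -1])
            return hits / n

    Sweeping t_return from small to large values traces the transition this measure makes from short-time behavior toward the asymptotic basin stability.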

  14. A decision making method based on interval type-2 fuzzy sets: An approach for ambulance location preference

    Directory of Open Access Journals (Sweden)

    Lazim Abdullah

    2018-01-01

    Full Text Available Selecting the best location to deploy an ambulance is one of the important decisions in improving emergency medical services. The selection requires both quantitative and qualitative evaluation. The fuzzy set based approach is one of the well-known theories that help decision makers handle fuzziness, uncertainty and vagueness of information in decision making. This paper proposes a new decision making method, Interval Type-2 Fuzzy Simple Additive Weighting (IT2 FSAW), to deal with uncertainty and vagueness. The new IT2 FSAW is applied to establish a preference in ambulance location. The decision making framework defines four criteria and five alternatives of ambulance location preference. Four experts attached to a Malaysian government hospital and a university medical center were interviewed to provide linguistic evaluations prior to analysis with the new IT2 FSAW. Application of the proposed method to the case of ambulance location preference suggests that the ‘road network’ is the best alternative for ambulance location. The results indicate that the proposed method offers a consensus solution for handling the vague and qualitative criteria of ambulance location preference.

  15. Reference intervals for selected serum biochemistry analytes in cheetahs Acinonyx jubatus.

    Science.gov (United States)

    Hudson-Lamb, Gavin C; Schoeman, Johan P; Hooijberg, Emma H; Heinrich, Sonja K; Tordiffe, Adrian S W

    2016-02-26

    Published haematologic and serum biochemistry reference intervals are very scarce for captive cheetahs and even scarcer for free-ranging cheetahs. The current study was performed to establish reference intervals for selected serum biochemistry analytes in cheetahs. Baseline serum biochemistry analytes were analysed from 66 healthy Namibian cheetahs. Samples were collected from 30 captive cheetahs at the AfriCat Foundation and 36 free-ranging cheetahs from central Namibia. The effects of captivity status, age, sex and haemolysis score on the tested serum analytes were investigated. The biochemistry analytes measured were sodium, potassium, magnesium, chloride, urea and creatinine. The 90% confidence interval of the reference limits was obtained using the non-parametric bootstrap method. Reference intervals were preferentially determined by the non-parametric method and were as follows: sodium (128 mmol/L - 166 mmol/L), potassium (3.9 mmol/L - 5.2 mmol/L), magnesium (0.8 mmol/L - 1.2 mmol/L), chloride (97 mmol/L - 130 mmol/L), urea (8.2 mmol/L - 25.1 mmol/L) and creatinine (88 µmol/L - 288 µmol/L). Reference intervals from the current study were compared with International Species Information System values for cheetahs and found to be narrower. Moreover, age, sex and haemolysis score had no significant effect on the serum analytes in this study. Separate reference intervals for captive and free-ranging cheetahs were also determined. Captive cheetahs had higher urea values, most likely due to dietary factors. This study is the first to establish reference intervals for serum biochemistry analytes in cheetahs according to international guidelines. These results can be used for future health and disease assessments in both captive and free-ranging cheetahs.

  16. Statistical methods for estimating normal blood chemistry ranges and variance in rainbow trout (Salmo gairdneri), Shasta Strain

    Science.gov (United States)

    Wedemeyer, Gary A.; Nelson, Nancy C.

    1975-01-01

    Gaussian and nonparametric (percentile estimate and tolerance interval) statistical methods were used to estimate normal ranges for blood chemistry (bicarbonate, bilirubin, calcium, hematocrit, hemoglobin, magnesium, mean cell hemoglobin concentration, osmolality, inorganic phosphorus, and pH) for juvenile rainbow trout (Salmo gairdneri, Shasta strain) held under defined environmental conditions. The percentile estimate and Gaussian methods gave similar normal ranges, whereas the tolerance interval method gave consistently wider ranges for all blood variables except hemoglobin. If the underlying frequency distribution is unknown, the percentile estimate procedure would be the method of choice.
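
    Two of the three approaches named above are easy to sketch in Python (the third, the tolerance interval, additionally widens the limits by a coverage factor that depends on sample size and confidence level, consistent with the wider ranges reported). The data and coverage choice below are illustrative assumptions.

        import numpy as np
        from scipy.stats import norm

        def gaussian_range(x, cover=0.95):
            """Parametric (Gaussian) normal range: mean +/- z*SD."""
            z = norm.ppf(0.5 + cover / 2.0)         # z = 1.96 for 95%
            m, s = np.mean(x), np.std(x, ddof=1)
            return m - z * s, m + z * s

        def percentile_range(x, cover=0.95):
            """Nonparametric percentile estimate of the central range."""
            tail = 100.0 * (1.0 - cover) / 2.0
            return tuple(np.percentile(x, [tail, 100.0 - tail]))

        # e.g. for a vector of hematocrit values `hct` (hypothetical data):
        # gaussian_range(hct), percentile_range(hct)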

  17. Comparison of non-parametric methods for ungrouping coarsely aggregated data

    DEFF Research Database (Denmark)

    Rizzi, Silvia; Thinggaard, Mikael; Engholm, Gerda

    2016-01-01

    group at the highest ages. When histogram intervals are too coarse, information is lost and comparison between histograms with different boundaries is arduous. In these cases it is useful to estimate detailed distributions from grouped data. Methods: From an extensive literature search we identify five…

  18. Some simple applications of probability models to birth intervals

    International Nuclear Information System (INIS)

    Shrestha, G.

    1987-07-01

    An attempt has been made in this paper to apply some simple probability models to birth intervals under the assumption of constant fecundability and varying fecundability among women. The parameters of the probability models are estimated by using the method of moments and the method of maximum likelihood. (author). 9 refs, 2 tabs
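
    As one concrete instance of the kind of simple model the abstract refers to: under constant fecundability p, the number of months from exposure to conception is geometric with mean 1/p, and the method-of-moments estimator (which here coincides with maximum likelihood) is p̂ = 1/mean. A minimal Python sketch with illustrative data:

        import numpy as np

        # Months to conception for a sample of women -- hypothetical data.
        waits = np.array([3, 7, 1, 5, 2, 9, 4, 6])

        # Geometric model on {1, 2, ...} with success probability p has
        # mean 1/p, so the method-of-moments estimate (equal to the MLE
        # for this model) is the reciprocal of the sample mean.
        p_hat = 1.0 / waits.mean()
        print(f"estimated fecundability: {p_hat:.3f}")

    Models with varying fecundability (e.g., a beta-mixed geometric) require one more parameter and are beyond this sketch.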

  19. Networked and Distributed Control Method with Optimal Power Dispatch for Islanded Microgrids

    DEFF Research Database (Denmark)

    Li, Qiang; Peng, Congbo; Chen, Minyou

    2017-01-01

    of controllable agents. The distributed control laws derived from the first subgraph guarantee the supply-demand balance, while further control laws from the second subgraph reassign the outputs of controllable distributed generators, which ensures active and reactive power are dispatched optimally. However, … according to our proposition. Finally, the method is evaluated over seven cases via simulation. The results show that the system performs as desired, even if environmental conditions and load demand fluctuate significantly. In summary, the method can rapidly respond to fluctuations resulting in optimal…

  20. Determinants of birth interval in a rural Mediterranean population (La Alpujarra, Spain).

    Science.gov (United States)

    Polo, V; Luna, F; Fuster, V

    2000-10-01

    The fertility pattern, in terms of birth intervals, of a rural population not practicing contraception in La Alta Alpujarra Oriental (southeast Spain) is analyzed. During the first half of the 20th century, this population experienced a considerable degree of geographical and cultural isolation. Because of the population's high variability in fertility, and therefore in birth intervals, the analysis was limited to a homogeneous subsample of 154 families, each with at least five pregnancies. This limitation allowed us to analyze, among and within families, the effects of a set of variables on the interbirth pattern, and to avoid possible problems of pseudoreplication. Information on the mother's birth date and age at marriage, the children's birth and death dates, birth order, and the frequency of miscarriages was collected. Our results indicate that interbirth intervals depend on an exponential effect of maternal age, especially significant after the age of 35. This effect is probably related to the biological degenerative processes of female fertility with age. A linear increase of birth intervals with birth order within families was found, as well as a reduction of intervals among families experiencing an infant death. Our sample size was insufficient to detect possible replacement behavior in the case of infant death. High natality and mortality rates, a secular decrease in natality rates, and log-normal birth-interval and family-size distributions suggest that La Alpujarra has been a natural fertility population undergoing a demographic transition process.

  1. Projection methods for the analysis of molecular-frame photoelectron angular distributions

    International Nuclear Information System (INIS)

    Grum-Grzhimailo, A.N.; Lucchese, R.R.; Liu, X.-J.; Pruemper, G.; Morishita, Y.; Saito, N.; Ueda, K.

    2007-01-01

    A projection method is developed for extracting the nondipole contribution from the molecular frame photoelectron angular distributions of linear molecules. A corresponding convenient parametric form for the angular distributions is derived. The analysis was performed for the N 1s photoionization of the NO molecule a few eV above the ionization threshold. No detectable nondipole contribution was found at a photon energy of 412 eV.

  2. Effects of mixing methods on phase distribution in vertical bubble flow

    International Nuclear Information System (INIS)

    Monji, Hideaki; Matsui, Goichi; Sugiyama, Takayuki.

    1992-01-01

    The mechanism of phase distribution formation in a bubble flow is one of the most important problems in the control of two-phase flow systems. The effect of mixing methods on the phase distribution was experimentally investigated using upward nitrogen gas-water bubble flow under the condition of fixed flow rates. The experimental results show that the diameter of the gas injection hole influences the phase distribution through the bubble size. The location of the injection hole and the direction of injection do not influence the phase distribution of fully developed bubble flow. The transition equivalent bubble size from coring bubble flow to sliding bubble flow corresponds to the bubble shape transition. The analytical results show that the phase distribution may be predictable if the phase profile is judged from the bubble size. (author)

  3. Nationwide Multicenter Reference Interval Study for 28 Common Biochemical Analytes in China.

    Science.gov (United States)

    Xia, Liangyu; Chen, Ming; Liu, Min; Tao, Zhihua; Li, Shijun; Wang, Liang; Cheng, Xinqi; Qin, Xuzhen; Han, Jianhua; Li, Pengchang; Hou, Li'an; Yu, Songlin; Ichihara, Kiyoshi; Qiu, Ling

    2016-03-01

    A nationwide multicenter study was conducted in China to explore sources of variation of reference values and establish reference intervals for 28 common biochemical analytes, as part of the International Federation of Clinical Chemistry and Laboratory Medicine, Committee on Reference Intervals and Decision Limits (IFCC/C-RIDL) global study on reference values. A total of 3148 apparently healthy volunteers were recruited in 6 cities covering a wide area of China. Blood samples were tested in 2 central laboratories using Beckman Coulter AU5800 chemistry analyzers. Certified reference materials and a value-assigned serum panel were used for standardization of test results. Multiple regression analysis was performed to explore sources of variation. The need for partition of reference intervals was evaluated based on 3-level nested ANOVA. After secondary exclusion using the latent abnormal values exclusion method, reference intervals were derived by a parametric method using the modified Box-Cox formula. Test results of 20 analytes were made traceable to reference measurement procedures. By the ANOVA, significant sex-related and age-related differences were observed in 12 and 12 analytes, respectively. A small regional difference was observed in the results for albumin, glucose, and sodium. Multiple regression analysis revealed BMI-related changes in the results of 9 analytes for men and 6 for women. Reference intervals for the 28 analytes were computed, with 17 analytes partitioned by sex and/or age. In conclusion, reference intervals of 28 common chemistry analytes applicable to the Chinese Han population were established using the latest methodology. Reference intervals of the 20 analytes traceable to reference measurement procedures can be used as common reference intervals, whereas the others can be used as assay system-specific reference intervals in China.

  4. A convenient method of obtaining percentile norms and accompanying interval estimates for self-report mood scales (DASS, DASS-21, HADS, PANAS, and sAD).

    Science.gov (United States)

    Crawford, John R; Garthwaite, Paul H; Lawrie, Caroline J; Henry, Julie D; MacDonald, Marie A; Sutherland, Jane; Sinha, Priyanka

    2009-06-01

    A series of recent papers have reported normative data from the general adult population for commonly used self-report mood scales. To bring together and supplement these data in order to provide a convenient means of obtaining percentile norms for the mood scales. A computer program was developed that provides point and interval estimates of the percentile rank corresponding to raw scores on the various self-report scales. The program can be used to obtain point and interval estimates of the percentile rank of an individual's raw scores on the DASS, DASS-21, HADS, PANAS, and sAD mood scales, based on normative sample sizes ranging from 758 to 3822. The interval estimates can be obtained using either classical or Bayesian methods as preferred. The computer program (which can be downloaded at www.abdn.ac.uk/~psy086/dept/MoodScore.htm) provides a convenient and reliable means of supplementing existing cut-off scores for self-report mood scales.
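
    One classical way to obtain such point and interval estimates is a mid-rank percentile with a binomial-proportion interval. The Python sketch below uses the Wilson score interval; both conventions are assumptions for illustration, and the actual program's classical and Bayesian methods may differ in detail.

        import numpy as np
        from scipy.stats import norm

        def percentile_rank(raw, norms, conf=0.95):
            """Point and interval estimate of the percentile rank of `raw`
            against a normative sample `norms`."""
            norms = np.asarray(norms, dtype=float)
            n = norms.size
            # mid-rank convention: count half of the ties as "below"
            p = (np.sum(norms < raw) + 0.5 * np.sum(norms == raw)) / n
            # Wilson score interval for a binomial proportion
            z = norm.ppf(0.5 + conf / 2.0)
            centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
            half = (z / (1 + z**2 / n)) * np.sqrt(p * (1 - p) / n
                                                  + z**2 / (4 * n**2))
            return (100 * p,
                    100 * max(centre - half, 0.0),
                    100 * min(centre + half, 1.0))

        # e.g. percentile_rank(21, dass_norms) -> (point, lower, upper),
        # where `dass_norms` is a hypothetical vector of normative scores.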

  5. The T-peak–T-end Interval as a Marker of Repolarization Abnormality

    DEFF Research Database (Denmark)

    Bhuiyan, Tanveer A.; Graff, Claus; Kanters, Jørgen K.

    2015-01-01

    BACKGROUND AND OBJECTIVE: The T-peak to T-end (TpTe) interval has been suggested as an index of transmural dispersion and as a marker of drug-induced abnormal repolarization. In this study, we investigate the relation between TpTe and the QT interval. METHODS: Electrocardiograms (ECGs) from five...

  6. Statistical intervals a guide for practitioners

    CERN Document Server

    Hahn, Gerald J

    2011-01-01

    Presents a detailed exposition of statistical intervals and emphasizes applications in industry. The discussion differentiates at an elementary level among different kinds of statistical intervals and gives instruction, with numerous examples and simple math, on how to construct such intervals from sample data. This includes confidence intervals to contain a population percentile, confidence intervals on the probability of meeting a specified threshold value, and prediction intervals to include an observation in a future sample. Also has an appendix containing computer subroutines for nonparametric stati…

  7. A Dynamic Interval-Valued Intuitionistic Fuzzy Sets Applied to Pattern Recognition

    Directory of Open Access Journals (Sweden)

    Zhenhua Zhang

    2013-01-01

    Full Text Available We present dynamic interval-valued intuitionistic fuzzy sets (DIVIFS), which can improve recognition accuracy when applied to pattern recognition. By analyzing the degree of hesitancy, we propose some DIVIFS models derived from intuitionistic fuzzy sets (IFS) and interval-valued IFS (IVIFS). We then present a novel ranking condition on the distance of IFS and IVIFS and introduce some distance measures of DIVIFS satisfying the ranking condition. Finally, a pattern recognition example applied to medical diagnosis decision making is given to demonstrate the application of DIVIFS and its distances. The simulation results show that the DIVIFS method is more comprehensive and flexible than the IFS method and the IVIFS method.

  8. Quantifying uncertainty on sediment loads using bootstrap confidence intervals

    Science.gov (United States)

    Slaets, Johanna I. F.; Piepho, Hans-Peter; Schmitter, Petra; Hilger, Thomas; Cadisch, Georg

    2017-01-01

    Load estimates are more informative than constituent concentrations alone, as they allow quantification of on- and off-site impacts of environmental processes concerning pollutants, nutrients and sediment, such as soil fertility loss, reservoir sedimentation and irrigation channel siltation. While statistical models used to predict constituent concentrations have developed considerably over the last few years, measures of uncertainty on constituent loads are rarely reported. Loads are the product of two predictions, constituent concentration and discharge, integrated over a time period, which does not make it straightforward to produce a standard error or a confidence interval. In this paper, a linear mixed model is used to estimate sediment concentrations. A bootstrap method is then developed that accounts for the uncertainty in the concentration and discharge predictions, allows temporal correlation in the constituent data, and can be used when data transformations are required. The method was tested for a small watershed in Northwest Vietnam for the period 2010-2011. The results showed that the confidence intervals were asymmetric, with the highest uncertainty in the upper limit, and that a load of 6262 Mg year-1 had a 95 % confidence interval of (4331, 12 267) in 2010 and a load of 5543 Mg year-1 had an interval of (3593, 8975) in 2011. Additionally, the approach demonstrated that direct estimates from the data were biased downwards compared to bootstrap median estimates. These results imply that constituent loads predicted from regression-type water quality models could frequently be underestimating sediment yields and their environmental impact.
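
    The core bootstrap idea is compact. The Python sketch below shows a plain percentile bootstrap for a load integrated from paired concentration and discharge series; the i.i.d. paired resampling, units and time step are illustrative simplifications, whereas the paper's scheme additionally propagates model uncertainty and temporal correlation.

        import numpy as np

        def bootstrap_load_ci(conc, q, dt_hours, n_boot=2000, conf=0.95, seed=42):
            """Percentile-bootstrap confidence interval for a constituent load.

            conc: concentrations (mg/L); q: discharge (L/s);
            dt_hours: time step between observations."""
            conc = np.asarray(conc, dtype=float)
            q = np.asarray(q, dtype=float)
            dt_s = dt_hours * 3600.0

            def load(c, qq):                 # mg -> Mg (1 Mg = 1e9 mg)
                return np.sum(c * qq * dt_s) / 1e9

            rng = np.random.default_rng(seed)
            n = conc.size
            boots = np.empty(n_boot)
            for b in range(n_boot):
                idx = rng.integers(0, n, n)  # resample time points, paired
                boots[b] = load(conc[idx], q[idx])
            tail = 100.0 * (1.0 - conf) / 2.0
            return load(conc, q), np.percentile(boots, [tail, 100.0 - tail])

    Comparing the direct estimate with the bootstrap median, as the authors do, reveals any downward bias of the plug-in load.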

  9. Research on distributed optical fiber sensing data processing method based on LabVIEW

    Science.gov (United States)

    Li, Zhonghu; Yang, Meifang; Wang, Luling; Wang, Jinming; Yan, Junhong; Zuo, Jing

    2018-01-01

    The pipeline leak detection and leak location problem has received extensive attention in industry. In this paper, a distributed optical fiber sensing system is designed for a heat supply pipeline, and the data processing method for distributed optical fiber sensing based on LabVIEW is studied in detail. The hardware system includes a laser, sensing optical fiber, wavelength division multiplexer, photoelectric detector, data acquisition card and computer. The software system, developed in LabVIEW, adopts a wavelet denoising method to process the temperature information, which improves the SNR. By extracting characteristic values of the fiber temperature information, the system realizes temperature measurement, leak location, and measurement signal storage and retrieval. Compared with the traditional negative pressure wave method or acoustic signal method, the distributed optical fiber temperature measuring system can measure several temperatures in one measurement and locate the leak point accurately. It has broad application prospects.

  10. Method to measure autonomic control of cardiac function using time interval parameters from impedance cardiography

    International Nuclear Information System (INIS)

    Meijer, Jan H; Boesveldt, Sanne; Elbertse, Eskeline; Berendse, H W

    2008-01-01

    The time difference between the electrocardiogram and the impedance cardiogram can be considered a measure of the time delay between the electrical and mechanical activities of the heart. This time interval, characterized by the pre-ejection period (PEP), is related to the sympathetic autonomic nervous control of cardiac activity. PEP, however, is difficult to measure in practice. Therefore, a novel parameter, the initial systolic time interval (ISTI), is introduced to provide a more practical measure. The use of ISTI instead of PEP was evaluated in three groups: young healthy subjects, patients with Parkinson's disease, and a group of elderly, healthy subjects of comparable age. PEP and ISTI were studied under two conditions: at rest and after an exercise stimulus. Under both conditions, PEP and ISTI behaved largely similarly in the three groups and were significantly correlated. It is concluded that ISTI can be used as a substitute for PEP and, therefore, to evaluate autonomic neuropathy in both clinical and extramural settings. Measurement of ISTI can also be used to non-invasively monitor the electromechanical cardiac time interval, and the associated autonomic activity, under physiological circumstances.

  11. Synchronization of Switched Interval Networks and Applications to Chaotic Neural Networks

    OpenAIRE

    Cao, Jinde; Alofi, Abdulaziz; Al-Mazrooei, Abdullah; Elaiw, Ahmed

    2013-01-01

    This paper investigates the synchronization problem of switched delay networks with interval parameter uncertainty. Based on the theories of switched systems and the drive-response technique, a mathematical model of the switched interval drive-response error system is established. Without constructing Lyapunov-Krasovskii functions, the matrix measure method is introduced for the first time to switched time-varying delay networks and combined with the Halanay inequality technique; synchroniza…

  12. Advanced Interval Type-2 Fuzzy Sliding Mode Control for Robot Manipulator

    Directory of Open Access Journals (Sweden)

    Ji-Hwan Hwang

    2017-01-01

    Full Text Available In this paper, advanced interval type-2 fuzzy sliding mode control (AIT2FSMC) for a robot manipulator is proposed. The proposed AIT2FSMC is a combination of an interval type-2 fuzzy system and sliding mode control. The interval type-2 fuzzy system is designed to resemble a feedback linearization (FL) control law, and a sliding mode controller is designed to compensate for the approximation error between the FL control law and the interval type-2 fuzzy system. The tuning algorithms are derived in the sense of the Lyapunov stability theorem. A two-link rigid robot manipulator with nonlinearity is used as the test system, and simulation results are presented to show the effectiveness of the proposed method, which controls the unknown system well.

  13. Concurrent variable-interval variable-ratio schedules in a dynamic choice environment.

    Science.gov (United States)

    Bell, Matthew C; Baum, William M

    2017-11-01

    Most studies of operant choice have focused on presenting subjects with a fixed pair of schedules across many experimental sessions. Using these methods, studies of concurrent variable-interval variable-ratio schedules helped to evaluate theories of choice. More recently, a growing literature has focused on dynamic choice behavior. Those dynamic choice studies have analyzed behavior on a number of different time scales using concurrent variable-interval schedules. Following the dynamic choice approach, the present experiment examined performance on concurrent variable-interval variable-ratio schedules in a rapidly changing environment. Our objectives were to compare performance on concurrent variable-interval variable-ratio schedules with extant data on concurrent variable-interval variable-interval schedules using a dynamic choice procedure and to extend earlier work on concurrent variable-interval variable-ratio schedules. We analyzed performances at different time scales, finding strong similarities between concurrent variable-interval variable-interval and concurrent variable-interval variable-ratio performance within dynamic choice procedures. Time-based measures revealed almost identical performance in the two procedures compared with response-based measures, supporting the view that choice is best understood as time allocation. Performance at the smaller time scale of visits accorded with the tendency seen in earlier research toward developing a pattern of strong preference for and long visits to the richer alternative paired with brief "samples" at the leaner alternative ("fix and sample"). © 2017 Society for the Experimental Analysis of Behavior.

  14. Two-sorted Point-Interval Temporal Logics

    DEFF Research Database (Denmark)

    Balbiani, Philippe; Goranko, Valentin; Sciavicco, Guido

    2011-01-01

    There are two natural and well-studied approaches to temporal ontology and reasoning: point-based and interval-based. Usually, interval-based temporal reasoning deals with points as particular, duration-less intervals. Here we develop an explicitly two-sorted point-interval temporal logical framework whereby time instants (points) and time periods (intervals) are considered on a par, and the perspective can shift between them within the formal discourse. We focus on fragments involving only modal operators that correspond to the inter-sort relations between points and intervals. We analyze…

  15. Multiparticle distributions in limited rapidity intervals and the violation of asymptotic KNO scaling

    International Nuclear Information System (INIS)

    De Dias Deus, J.

    1986-03-01

    A simple model-independent analysis of UA5 collaboration p̄p collider data on rapidity-cut charged particle distributions strongly suggests: i) independent particle emission from sources or clusters; ii) exact negative binomial multiparticle distributions. The violation of asymptotic KNO scaling is shown to arise from the fast growth with energy of the reduced correlations C_k(0)/C_1^k(0). A comparison with recently published e+e− data at √s = 29 GeV is presented.
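
    For reference, the negative binomial form invoked here is, in standard notation (mean multiplicity $\bar{n}$ and shape parameter $k$),

        P_n = \frac{\Gamma(n+k)}{\Gamma(k)\, n!}
              \left(\frac{\bar{n}}{\bar{n}+k}\right)^{n}
              \left(\frac{k}{\bar{n}+k}\right)^{k},

    where asymptotic KNO scaling corresponds to an energy-independent k; the decrease of k with energy (equivalently, the growth of the reduced correlations) is what produces the violation discussed in the abstract.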

  16. Influence of sampling interval and number of projections on the quality of SR-XFMT reconstruction

    International Nuclear Information System (INIS)

    Deng Biao; Yu Xiaohan; Xu Hongjie

    2007-01-01

    Synchrotron Radiation based X-ray Fluorescent Microtomography (SR-XFMT) is a nondestructive technique for detecting elemental composition and distribution inside a specimen with high spatial resolution and sensitivity. In this paper, a computer simulation of an SR-XFMT experiment is performed. The influence of the sampling interval and the number of projections on the quality of SR-XFMT image reconstruction is analyzed. It is found that the sampling interval has a greater effect on the quality of reconstruction than the number of projections. (authors)

  17. High-intensity interval training: Modulating interval duration in overweight/obese men.

    Science.gov (United States)

    Smith-Ryan, Abbie E; Melvin, Malia N; Wingfield, Hailee L

    2015-05-01

    High-intensity interval training (HIIT) is a time-efficient strategy shown to induce various cardiovascular and metabolic adaptations. Little is known about the optimal tolerable combination of intensity and volume necessary for adaptations, especially in clinical populations. In a randomized controlled pilot design, we evaluated the effects of two types of interval training protocols, varying in intensity and interval duration, on clinical outcomes in overweight/obese men. Twenty-five men [body mass index (BMI) > 25 kg·m⁻²] completed baseline body composition measures: fat mass (FM), lean mass (LM) and percent body fat (%BF) and fasting blood glucose, lipids and insulin (IN). A graded exercise cycling test was completed for peak oxygen consumption (VO2peak) and power output (PO). Participants were randomly assigned to high-intensity short interval (1MIN-HIIT), high-intensity interval (2MIN-HIIT) or control groups. 1MIN-HIIT and 2MIN-HIIT completed 3 weeks of cycling interval training, 3 days/week, consisting of either 10 × 1 min bouts at 90% PO with 1 min rests (1MIN-HIIT) or 5 × 2 min bouts with 1 min rests at undulating intensities (80%-100%) (2MIN-HIIT). There were no significant training effects on FM (Δ1.06 ± 1.25 kg) or %BF (Δ1.13% ± 1.88%), compared to CON. Increases in LM were not significant but increased by 1.7 kg and 2.1 kg for the 1MIN and 2MIN-HIIT groups, respectively. Increases in VO2peak were also not significant for the 1MIN (3.4 ml·kg⁻¹·min⁻¹) or 2MIN groups (2.7 ml·kg⁻¹·min⁻¹). IN sensitivity (HOMA-IR) improved for both training groups (Δ-2.78 ± 3.48 units; p < 0.05) compared to CON. HIIT may be an effective short-term strategy to improve cardiorespiratory fitness and IN sensitivity in overweight males.

  18. Adjusted Wald Confidence Interval for a Difference of Binomial Proportions Based on Paired Data

    Science.gov (United States)

    Bonett, Douglas G.; Price, Robert M.

    2012-01-01

    Adjusted Wald intervals for binomial proportions in one-sample and two-sample designs have been shown to perform about as well as the best available methods. The adjusted Wald intervals are easy to compute and have been incorporated into introductory statistics courses. An adjusted Wald interval for paired binomial proportions is proposed here and…

  19. Uniform distribution and quasi-Monte Carlo methods discrepancy, integration and applications

    CERN Document Server

    Kritzer, Peter; Pillichshammer, Friedrich; Winterhof, Arne

    2014-01-01

    The survey articles in this book focus on number theoretic point constructions, uniform distribution theory, and quasi-Monte Carlo methods. As deterministic versions of the Monte Carlo method, quasi-Monte Carlo rules enjoy increasing popularity, with many fruitful applications in mathematical practice, as for example in finance, computer graphics, and biology.

  20. Modeling error distributions of growth curve models through Bayesian methods.

    Science.gov (United States)

    Zhang, Zhiyong

    2016-06-01

    Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems from blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.
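
    To illustrate the core idea of explicitly specifying a non-normal error distribution, here is a minimal random-walk Metropolis sketch in Python for a linear growth model with heavy-tailed errors. The linear form, Student-t errors with four degrees of freedom, flat priors and step size are all illustrative assumptions, unrelated to the paper's SAS implementation.

        import numpy as np
        from scipy.stats import t as student_t

        def metropolis_growth(t, y, n_iter=20000, step=0.05, seed=0):
            """Random-walk Metropolis for y_i = b0 + b1*t_i + e_i with
            Student-t(4) errors and flat (improper) priors on b0, b1
            and log(sigma) -- acceptable for a sketch."""
            rng = np.random.default_rng(seed)

            def logpost(theta):
                b0, b1, log_s = theta
                resid = y - b0 - b1 * t
                return student_t.logpdf(resid, df=4,
                                        scale=np.exp(log_s)).sum()

            theta = np.array([y.mean(), 0.0, np.log(y.std())])
            lp = logpost(theta)
            draws = np.empty((n_iter, 3))
            for i in range(n_iter):
                prop = theta + step * rng.standard_normal(3)
                lp_prop = logpost(prop)
                if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject
                    theta, lp = prop, lp_prop
                draws[i] = theta
            return draws[n_iter // 2:]                    # discard burn-in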

  1. An interval-valued reliability model with bounded failure rates

    DEFF Research Database (Denmark)

    Kozine, Igor; Krymsky, Victor

    2012-01-01

    The approach to deriving interval-valued reliability measures described in this paper is distinctive from other imprecise reliability models in that it overcomes the issue of having to impose an upper bound on time to failure. It rests on the presupposition that a constant interval-valued failure rate is known, possibly along with other reliability measures, precise or imprecise. The Lagrange method is used to solve the constrained optimization problem to derive new reliability measures of interest. The obtained results call for an exponential-wise approximation of failure probability density…

  2. Correction of measured multiplicity distributions by the simulated annealing method

    International Nuclear Information System (INIS)

    Hafidouni, M.

    1993-01-01

    Simulated annealing is a method used to solve combinatorial optimization problems. It is used here for the correction of the observed multiplicity distribution from S-Pb collisions at 200 GeV/c per nucleon. (author) 11 refs., 2 figs
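
    A hedged toy version of the technique on an invented unfolding problem (the S-Pb analysis itself is far more involved): recover a "true" spectrum x from an observed one m = R @ x, with R a known smearing matrix, by annealing on chi-squared:

```python
import numpy as np

rng = np.random.default_rng(2)
nbins = 8
# Tridiagonal response matrix: 70% stays in the bin, 15% leaks to each neighbour.
R = np.eye(nbins) * 0.7 + np.eye(nbins, k=1) * 0.15 + np.eye(nbins, k=-1) * 0.15
true = np.exp(-np.arange(nbins) / 2.0) * 1000
observed = R @ true

x = observed.copy()                       # initial guess
cost = np.sum((R @ x - observed) ** 2)
T = 1000.0
while T > 1e-3:
    cand = np.clip(x + rng.normal(0, 5.0, nbins), 0, None)  # non-negative counts
    c = np.sum((R @ cand - observed) ** 2)
    # accept improvements always, worse moves with Boltzmann probability
    if c < cost or rng.random() < np.exp((cost - c) / T):
        x, cost = cand, c
    T *= 0.999                            # geometric cooling schedule
print(np.round(x, 1), "vs true", np.round(true, 1))
```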

  3. Computation of robustly stabilizing PID controllers for interval systems.

    Science.gov (United States)

    Matušů, Radek; Prokop, Roman

    2016-01-01

    The paper is focused on the computation of all possible robustly stabilizing Proportional-Integral-Derivative (PID) controllers for plants with interval uncertainty. The main idea of the proposed method is based on the technique of Tan et al. for calculating (nominally) stabilizing PI and PID controllers, or robustly stabilizing PI controllers, by plotting the stability boundary locus in either the P-I plane or the P-I-D space. Refinement of the existing method by considering 16 segment plants instead of 16 Kharitonov plants provides an elegant and efficient tool for finding all robustly stabilizing PID controllers for an interval system. The validity and relatively effortless application of the presented theoretical concepts are demonstrated through a computation and simulation example in which the uncertain mathematical model of an experimental oblique wing aircraft is robustly stabilized.
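
    A hedged sketch of the interval-polynomial side of the problem: Kharitonov's four polynomials decide robust Hurwitz stability of an interval polynomial, here checked naively via roots. The paper's stability-boundary-locus construction for PID design is more involved and is not reproduced:

```python
import numpy as np
from itertools import cycle

def kharitonov_stable(lo, hi):
    """lo, hi: coefficient bounds listed by ascending power of s."""
    # Letter patterns (by ascending power) for the four Kharitonov polynomials.
    patterns = [("l", "l", "h", "h"), ("h", "h", "l", "l"),
                ("l", "h", "h", "l"), ("h", "l", "l", "h")]
    for pat in patterns:
        coeffs = [lo[i] if p == "l" else hi[i]
                  for i, p in zip(range(len(lo)), cycle(pat))]
        roots = np.roots(coeffs[::-1])        # np.roots expects descending powers
        if np.any(roots.real >= 0):           # any right-half-plane root: unstable
            return False
    return True

# Interval polynomial: s^3 + [2,3] s^2 + [4,5] s + [1,2]
print(kharitonov_stable(lo=[1, 4, 2, 1], hi=[2, 5, 3, 1]))   # True
```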

  4. Fitting statistical distributions the generalized lambda distribution and generalized bootstrap methods

    CERN Document Server

    Karian, Zaven A

    2000-01-01

    Throughout the physical and social sciences, researchers face the challenge of fitting statistical distributions to their data. Although the study of statistical modelling has made great strides in recent years, the number and variety of distributions to choose from, all with their own formulas, tables, diagrams, and general properties, continue to create problems. For a specific application, which of the dozens of distributions should one use? What if none of them fit well? Fitting Statistical Distributions helps answer those questions. Focusing on techniques used successfully across many fields, the authors present all of the relevant results related to the Generalized Lambda Distribution (GLD), the Generalized Bootstrap (GB), and Monte Carlo simulation (MC). They provide the tables, algorithms, and computer programs needed for fitting continuous probability distributions to data in a wide variety of circumstances, covering bivariate as well as univariate distributions, and including situations where moments do...
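
    A quick numerical stand-in for the fitting task the book treats rigorously: matching empirical quantiles to the GLD quantile function in its Ramberg-Schmeiser form by least squares. The book's moment- and percentile-matching tables are the authoritative route; this is only a sketch:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
data = rng.normal(10, 2, size=2000)

u = np.linspace(0.01, 0.99, 99)
emp_q = np.quantile(data, u)

def gld_q(params, u):
    """GLD quantile function, Ramberg-Schmeiser form."""
    l1, l2, l3, l4 = params
    return l1 + (u**l3 - (1 - u)**l4) / l2

res = least_squares(lambda p: gld_q(p, u) - emp_q,
                    x0=[10.0, 0.2, 0.15, 0.15],
                    bounds=([-np.inf, 1e-3, -0.9, -0.9], [np.inf, 10, 10, 10]))
print("lambda =", np.round(res.x, 3))   # l3 ~ l4 ~ 0.13 for normal-like data
```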

  5. Method of imaging the electrical conductivity distribution of a subsurface

    Science.gov (United States)

    Johnson, Timothy C.

    2017-09-26

    A method of imaging electrical conductivity distribution of a subsurface containing metallic structures with known locations and dimensions is disclosed. Current is injected into the subsurface to measure electrical potentials using multiple sets of electrodes, thus generating electrical resistivity tomography measurements. A numeric code is applied to simulate the measured potentials in the presence of the metallic structures. An inversion code is applied that utilizes the electrical resistivity tomography measurements and the simulated measured potentials to image the subsurface electrical conductivity distribution and remove effects of the subsurface metallic structures with known locations and dimensions.

  6. New method for exact measurement of thermal neutron distribution in elementary cell

    International Nuclear Information System (INIS)

    Takac, S.M.; Krcevinac, S.B.

    1966-06-01

    Exact measurement of the thermal neutron density distribution in an elementary cell necessitates knowledge of the perturbations introduced in the cell by the measuring device. A new method has been developed in which special emphasis is placed on evaluating these perturbations by measuring the response to perturbations deliberately introduced in the elementary cell. The unperturbed distribution is obtained by extrapolation to zero perturbation. The final distributions for different lattice pitches were compared with a THERMOS-type calculation. Very good agreement was reached, resolving the long-standing disagreement between THERMOS calculations and measured density distributions (author)
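
    The essence of the method is extrapolation to zero perturbation, which can be sketched in a few lines; all numbers below are invented for illustration:

```python
import numpy as np

perturbation = np.array([1.0, 2.0, 3.0, 4.0])        # relative probe size
response = np.array([0.962, 0.931, 0.897, 0.868])    # measured relative density

# Fit a straight line and read off the value at zero perturbation.
slope, intercept = np.polyfit(perturbation, response, deg=1)
print(f"unperturbed density estimate: {intercept:.3f}")
```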

  8. A code for obtaining temperature distribution by finite element method

    International Nuclear Information System (INIS)

    Bloch, M.

    1984-01-01

    The ELEFIB Fortran computer code, which uses the finite element method to calculate temperature distributions for linear and two-dimensional problems, in steady state or in the transient phase of heat transfer, is presented. The formulation of the equations uses the Galerkin method. Some examples are shown and the results are compared with other papers. The comparative evaluation shows that the code gives good values. (M.C.K.) [pt
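
    A minimal one-dimensional analogue of what such a code does (Galerkin linear elements for steady conduction; ELEFIB itself also covers two-dimensional and transient cases):

```python
import numpy as np

# Steady heat conduction -k u'' = q on (0, L) with u(0) = u(L) = 0.
k, q, L, n = 1.0, 1.0, 1.0, 10            # conductivity, source, length, elements
h = L / n
K = np.zeros((n + 1, n + 1))              # global stiffness matrix
f = np.zeros(n + 1)                       # global load vector
for e in range(n):                        # assemble element contributions
    K[e:e+2, e:e+2] += (k / h) * np.array([[1, -1], [-1, 1]])
    f[e:e+2] += q * h / 2                 # consistent load for constant source
Ki, fi = K[1:-1, 1:-1], f[1:-1]           # apply Dirichlet boundary conditions
u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(Ki, fi)

x = np.linspace(0, L, n + 1)
exact = q / (2 * k) * x * (L - x)         # analytic solution for comparison
print(np.max(np.abs(u - exact)))          # ~1e-16: linear FEM is nodally exact here
```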

  9. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition… and constraint evaluation is designed for the most interesting error types. These include: a) semantic errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design… of error detection methods includes a high-level software specification. This has the purpose of illustrating that the design can be used in practice…
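
    A hedged sketch of the monitoring idea for error type (c), timing errors: system-level code checks an inter-event deadline at run time. The constraint model below is invented for illustration; the thesis defines its own specification approach:

```python
import time

class DeadlineMonitor:
    """Flags events whose gap from the previous event exceeds a deadline."""

    def __init__(self, max_interval_s: float):
        self.max_interval_s = max_interval_s
        self.last = None

    def event(self) -> bool:
        """Record an event; return True if the timing constraint was violated."""
        now = time.monotonic()
        violated = self.last is not None and (now - self.last) > self.max_interval_s
        self.last = now
        return violated

mon = DeadlineMonitor(max_interval_s=0.05)
for delay in (0.01, 0.02, 0.08):          # third event misses its deadline
    time.sleep(delay)
    if mon.event():
        print(f"timing error detected: gap of ~{delay:.2f}s exceeds 0.05s")
```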

  10. Association between the physical activity and heart rate corrected-QT interval in older adults.

    Science.gov (United States)

    Michishita, Ryoma; Fukae, Chika; Mihara, Rikako; Ikenaga, Masahiro; Morimura, Kazuhiro; Takeda, Noriko; Yamada, Yosuke; Higaki, Yasuki; Tanaka, Hiroaki; Kiyonaga, Akira

    2015-07-01

    Increased physical activity can reduce the incidence of cardiovascular disease and the mortality rate. In contrast, a prolonged heart rate corrected-QT (QTc) interval is associated with an increased risk of arrhythmias, sudden cardiac death and coronary artery disease. The present cross-sectional study was designed to clarify the association between the physical activity level and the QTc interval in older adults. The participants included 586 older adults (267 men and 319 women, age 71.2 ± 4.7 years) without a history of cardiovascular disease and not taking cardioactive drugs. Electrocardiography was recorded with a standard resting 12-lead electrocardiograph, and the QTc interval was calculated according to Hodges' formula. The physical activity level was assessed using a triaxial accelerometer. The participants were divided into four categories defined by quartiles of the QTc interval distribution. After adjusting for age, body mass index, waist circumference and the number of steps, the time spent in inactivity was higher and the time spent in light physical activity was significantly lower in the longest QTc interval group than in the shortest QTc interval group in both sexes (P < 0.05); there were no significant differences in other physical activities among the four groups in either sex. These results suggest that a decreased physical activity level, especially inactivity and light-intensity physical activity, was associated with the QTc interval in older adults. © 2014 Japan Geriatrics Society.
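
    Hodges' correction mentioned above is a simple linear adjustment. A sketch, with QT in milliseconds and heart rate in beats per minute, using the formula as commonly stated rather than quoted from the study:

```python
# Hodges' heart-rate correction: QTc = QT + 1.75 * (HR - 60).

def qtc_hodges(qt_ms: float, hr_bpm: float) -> float:
    """Heart-rate corrected QT interval in milliseconds."""
    return qt_ms + 1.75 * (hr_bpm - 60.0)

print(qtc_hodges(400.0, 75.0))   # 426.25 ms
```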

  11. Reference intervals for selected serum biochemistry analytes in cheetahs (Acinonyx jubatus

    Directory of Open Access Journals (Sweden)

    Gavin C. Hudson-Lamb

    2016-02-01

    Published haematologic and serum biochemistry reference intervals are very scarce for captive cheetahs, and even more so for free-ranging cheetahs. The current study was performed to establish reference intervals for selected serum biochemistry analytes in cheetahs. Baseline serum biochemistry analytes were analysed from 66 healthy Namibian cheetahs: 30 captive cheetahs at the AfriCat Foundation and 36 free-ranging cheetahs from central Namibia. The effects of captivity status, age, sex and haemolysis score on the tested serum analytes were investigated. The biochemistry analytes measured were sodium, potassium, magnesium, chloride, urea and creatinine. The 90% confidence interval of the reference limits was obtained using the non-parametric bootstrap method. Reference intervals were preferentially determined by the non-parametric method and were as follows: sodium (128 mmol/L – 166 mmol/L), potassium (3.9 mmol/L – 5.2 mmol/L), magnesium (0.8 mmol/L – 1.2 mmol/L), chloride (97 mmol/L – 130 mmol/L), urea (8.2 mmol/L – 25.1 mmol/L) and creatinine (88 µmol/L – 288 µmol/L). Reference intervals from the current study were compared with International Species Information System values for cheetahs and found to be narrower. Moreover, age, sex and haemolysis score had no significant effect on the serum analytes in this study. Separate reference intervals for captive and free-ranging cheetahs were also determined. Captive cheetahs had higher urea values, most likely due to dietary factors. This study is the first to establish reference intervals for serum biochemistry analytes in cheetahs according to international guidelines. These results can be used for future health and disease assessments in both captive and free-ranging cheetahs.
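
    A hedged sketch of the reference-interval procedure described (non-parametric 2.5th-97.5th percentile limits with bootstrap confidence intervals); the data below are simulated sodium-like values, not the cheetah measurements:

```python
import numpy as np

rng = np.random.default_rng(4)
values = rng.normal(147, 9, size=66)            # simulated healthy-animal values

lo, hi = np.percentile(values, [2.5, 97.5])     # non-parametric reference limits
boot = np.array([np.percentile(rng.choice(values, values.size, replace=True),
                               [2.5, 97.5]) for _ in range(2000)])
lo_ci = np.percentile(boot[:, 0], [5, 95])      # 90% CI of the lower limit
hi_ci = np.percentile(boot[:, 1], [5, 95])      # 90% CI of the upper limit
print(f"RI: {lo:.0f}-{hi:.0f}; lower-limit CI {lo_ci}, upper-limit CI {hi_ci}")
```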

  12. Some Characterizations of Convex Interval Games

    NARCIS (Netherlands)

    Brânzei, R.; Tijs, S.H.; Alparslan-Gok, S.Z.

    2008-01-01

    This paper focuses on new characterizations of convex interval games using the notions of exactness and superadditivity. We also relate big boss interval games with concave interval games and obtain characterizations of big boss interval games in terms of exactness and subadditivity.

  13. Mean-field approximation for spacing distribution functions in classical systems

    Science.gov (United States)

    González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.

    2012-01-01

    We propose a mean-field method to calculate approximately the spacing distribution functions p(n)(s) in one-dimensional classical many-particle systems. We compare our method with two other commonly used methods, the independent interval approximation and the extended Wigner surmise. In our mean-field approach, p(n)(s) is calculated from a set of Langevin equations, which are decoupled by using a mean-field approximation. We find that in spite of its simplicity, the mean-field approximation provides good results in several systems. We offer many examples illustrating that the three previously mentioned methods give a reasonable description of the statistical behavior of the system. The physical interpretation of each method is also discussed.
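
    For contrast with the paper's interacting systems, a minimal sketch of two classical baselines it mentions: exponential (Poisson) nearest-neighbour spacings from an uncorrelated 1D point process versus the Wigner surmise:

```python
import numpy as np

rng = np.random.default_rng(5)
points = np.sort(rng.random(100_000))       # uncorrelated 1D point process
s = np.diff(points)
s /= s.mean()                               # normalize mean spacing to 1

hist, edges = np.histogram(s, bins=30, range=(0, 4), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
poisson = np.exp(-mid)                                  # exponential spacing law
wigner = (np.pi * mid / 2) * np.exp(-np.pi * mid**2 / 4)  # Wigner surmise
print(np.abs(hist - poisson).max())         # small: matches the exponential law
print(np.abs(hist - wigner).max())          # large: far from the Wigner surmise
```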

  14. Reference interval of thyroxine and thyrotropin of healthy term ...

    African Journals Online (AJOL)

    Objective: To establish a local reference interval of serum thyroxine (T4) and serum thyroid stimulating hormone (TSH) of healthy Nigerian newborns in Jos University Teaching Hospital, Jos. Materials and Methods: One hundred and sixty healthy term Nigerian newborns who fulfilled the criteria for inclusion were ...

  15. Integral equations with difference kernels on finite intervals

    CERN Document Server

    Sakhnovich, Lev A

    2015-01-01

    This book focuses on solving integral equations with difference kernels on finite intervals. The corresponding problem on the semiaxis was previously solved by N. Wiener–E. Hopf and by M.G. Krein. The problem on finite intervals, though significantly more difficult, may be solved using our method of operator identities. This method is also actively employed in inverse spectral problems, operator factorization and nonlinear integral equations. Applications of the obtained results to optimal synthesis, light scattering, diffraction, and hydrodynamics problems are discussed in this book, which also describes how the theory of operators with difference kernels is applied to stable processes and used to solve the famous M. Kac problems on stable processes. In this second edition these results are extensively generalized and include the case of all Levy processes. We present the convolution expression for the well-known Ito formula of the generator operator, a convolution expression that has proven to be fruitful...

  16. Most probable dimension value and most flat interval methods for automatic estimation of dimension from time series

    International Nuclear Information System (INIS)

    Corana, A.; Bortolan, G.; Casaleggio, A.

    2004-01-01

    We present and compare two automatic methods for dimension estimation from time series. Both methods, based on conceptually different approaches, work on the derivative of the bi-logarithmic plot of the correlation integral versus the correlation length (log-log plot). The first method searches for the most probable dimension values (MPDV) and associates to each of them a possible scaling region. The second one searches for the most flat intervals (MFI) in the derivative of the log-log plot. The automatic procedures include the evaluation of the candidate scaling regions using two reliability indices. The data set used to test the methods consists of time series from known model attractors with and without the addition of noise, structured time series, and electrocardiographic signals from the MIT-BIH ECG database. Statistical analysis of the results was carried out by means of a paired t-test, and no statistically significant differences were found in the large majority of the trials. Consistent results are also obtained when dealing with 'difficult' time series. In general, for a more robust and reliable estimate, the use of both methods may represent a good solution when time series from complex systems are analyzed. Although we present results for the correlation dimension only, the procedures can also be used for the automatic estimation of generalized q-order dimensions and the pointwise dimension. We think that the proposed methods, by eliminating the need for operator intervention, allow a faster and more objective analysis, thus improving the usefulness of dimension analysis for the characterization of time series obtained from complex dynamical systems.
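
    A hedged sketch of the most-flat-interval idea on synthetic data: estimate the correlation integral C(r), take the local slope of the log-log plot, and report the flattest slope window as the dimension estimate. The paper's reliability indices and MPDV variant are omitted:

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(6)
pts = rng.random((2000, 2))                 # uniform points in the unit square (dim = 2)

d = pdist(pts)                              # all pairwise distances
r = np.logspace(-2, -0.5, 25)
C = np.array([(d < ri).mean() for ri in r]) # correlation integral
slope = np.gradient(np.log(C), np.log(r))   # local dimension estimate

w = 5                                       # window length for the flatness search
flatness = [slope[i:i + w].std() for i in range(len(slope) - w)]
i = int(np.argmin(flatness))                # most flat interval
print("estimated dimension:", slope[i:i + w].mean())   # ~2 for this data set
```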

  17. Multivariate interval-censored survival data

    DEFF Research Database (Denmark)

    Hougaard, Philip

    2014-01-01

    Interval censoring means that an event time is only known to lie in an interval (L,R], with L the last examination time before the event, and R the first after. In the univariate case, parametric models are easily fitted, whereas for non-parametric models, the mass is placed on some intervals, de...

  18. Forensic use of the Greulich and Pyle atlas: prediction intervals and relevance

    Energy Technology Data Exchange (ETDEWEB)

    Chaumoitre, K.; Panuel, M. [APHM, Hopital Nord, Department of Medical Imaging, Marseille (France); Faculte de Medecine de Marseille, UMR 7268 ADES, Aix-Marseille Universite-EFS-CNRS, Marseille Cedex 15 (France); Saliba-Serre, B.; Adalian, P.; Signoli, M.; Leonetti, G. [Faculte de Medecine de Marseille, UMR 7268 ADES, Aix-Marseille Universite-EFS-CNRS, Marseille Cedex 15 (France)

    2017-03-15

    The Greulich and Pyle (GP) atlas is one of the most frequently used methods of bone age (BA) estimation. Our aim is to assess its accuracy and to calculate the prediction intervals at 95% for forensic use. The study was conducted on a multi-ethnic sample of 2614 individuals (1423 boys and 1191 girls) referred to the university hospital of Marseille (France) for simple injuries. Hand radiographs were analysed using the GP atlas. Reliability of GP atlas and agreement between BA and chronological age (CA) were assessed and prediction intervals at 95% were calculated. The repeatability was excellent and the reproducibility was good. Pearson's linear correlation coefficient between CA and BA was 0.983. The mean difference between BA and CA was -0.18 years (boys) and 0.06 years (girls). The prediction interval at 95% for CA was given for each GP category and ranged between 1.2 and more than 4.5 years. The GP atlas is a reproducible and repeatable method that is still accurate for the present population, with a high correlation between BA and CA. The prediction intervals at 95% are wide, reflecting individual variability, and should be known when the method is used in forensic cases. (orig.)
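
    For readers unfamiliar with prediction intervals, a hedged regression-based sketch of a 95% prediction interval for chronological age given bone age. The paper computes its intervals per GP category; the data below are simulated:

```python
import numpy as np

rng = np.random.default_rng(7)
ba = rng.uniform(2, 18, 400)                      # bone age (years)
ca = ba + rng.normal(0, 0.8, ba.size)             # chronological age

n = ba.size
b1, b0 = np.polyfit(ba, ca, 1)                    # slope, intercept
resid = ca - (b0 + b1 * ba)
s = np.sqrt(resid @ resid / (n - 2))              # residual standard error

x0 = 10.0                                         # new bone-age reading
se_pred = s * np.sqrt(1 + 1 / n + (x0 - ba.mean())**2 / np.sum((ba - ba.mean())**2))
half = 1.96 * se_pred                             # ~t quantile for large n
print(f"CA prediction: {b0 + b1 * x0:.1f} +/- {half:.1f} years")
```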

  19. Enhancing the selection of backoff interval using fuzzy logic over wireless Ad Hoc networks.

    Science.gov (United States)

    Ranganathan, Radha; Kannan, Kathiravan

    2015-01-01

    IEEE 802.11 is the de facto standard for medium access over wireless ad hoc networks. The collision avoidance mechanism (random binary exponential backoff, BEB) of the IEEE 802.11 DCF (distributed coordination function) is inefficient and unfair, especially under heavy load. In the literature, many algorithms have been proposed to tune the contention window (CW) size. However, these algorithms make every node select its backoff interval between [0, CW] in a random and uniform manner. This randomness is incorporated to avoid collisions among the nodes, but it can change the optimal order and frequency of channel access among competing nodes, which results in unfairness and increased delay. In this paper, we propose an algorithm that schedules medium access in a fair and effective manner. The algorithm enhances IEEE 802.11 DCF with an additional level of contention resolution that prioritizes the contending nodes according to their queue length and waiting time. Each node computes its unique backoff interval using fuzzy logic, based on input parameters collected from contending nodes through overhearing. We evaluate our algorithm against the IEEE 802.11 and GDCF (gentle distributed coordination function) protocols using the ns-2.35 simulator and show that it achieves good performance.
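
    A hedged toy version of the scheme: memberships on queue length and waiting time collapse to an "urgency" degree that biases backoff slot selection toward earlier slots for backlogged, long-waiting nodes. All membership parameters below are invented:

```python
import random

def ramp(x, a, b):
    """Saturating membership: 0 below a, 1 above b, linear in between."""
    return min(1.0, max(0.0, (x - a) / (b - a)))

def fuzzy_backoff(queue_len, wait_s, cw=32):
    # degree to which the node is "urgent" (long queue or long wait)
    urgency = max(ramp(queue_len, 5, 20), ramp(wait_s, 0.5, 2.0))
    # urgent nodes draw their slot from the early part of [0, cw)
    upper = max(1, int(cw * (1.0 - 0.8 * urgency)))
    return random.randrange(upper)

print(fuzzy_backoff(queue_len=18, wait_s=1.8))   # small slot: high priority
print(fuzzy_backoff(queue_len=1, wait_s=0.1))    # anywhere in [0, 32)
```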

  20. Evaluation about the performance of E-government based on interval-valued intuitionistic fuzzy set.

    Science.gov (United States)

    Zhang, Shuai; Yu, Dejian; Wang, Yan; Zhang, Wenyu

    2014-01-01

    Evaluation is an important approach to promoting the development of E-Government. Given the rapid development of E-Government around the world, E-Government performance evaluation has become a hot issue in academia. In this paper, we develop a new evaluation method for the development of E-Government based on the interval-valued intuitionistic fuzzy set, a powerful technique for expressing the uncertainty of real situations. First, we extend the geometric Heronian mean (GHM) operator to the interval-valued intuitionistic fuzzy environment and propose the interval-valued intuitionistic fuzzy GHM (IIFGHM) operator. Then, we investigate the relationships between the IIFGHM operator and some existing ones, such as the generalized interval-valued intuitionistic fuzzy HM (GIIFHM) and the interval-valued intuitionistic fuzzy weighted Bonferroni mean operator. Furthermore, we validate the effectiveness of the proposed method using a real case concerning E-Government evaluation in Hangzhou City, China.
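
    A heavily simplified, hedged illustration: the classical Heronian mean H(x) = 2/(n(n+1)) * sum over i <= j of sqrt(x_i * x_j), applied endpoint-wise to plain intervals. The paper's IIFGHM operator acts on interval-valued intuitionistic fuzzy numbers with its own membership algebra, which is not reproduced here:

```python
import math
from itertools import combinations_with_replacement

def heronian(xs):
    """Classical Heronian mean of non-negative numbers."""
    n = len(xs)
    pairs = combinations_with_replacement(xs, 2)     # all i <= j pairs
    return 2.0 / (n * (n + 1)) * sum(math.sqrt(a * b) for a, b in pairs)

scores = [(0.4, 0.6), (0.5, 0.8), (0.3, 0.7)]        # interval scores from 3 experts
lo = heronian([s[0] for s in scores])                # aggregate lower endpoints
hi = heronian([s[1] for s in scores])                # aggregate upper endpoints
print(f"aggregated interval: [{lo:.3f}, {hi:.3f}]")
```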