WorldWideScience

Sample records for interval distribution method

  1. Measurement of subcritical multiplication by the interval distribution method

    International Nuclear Information System (INIS)

    Nelson, G.W.

    1985-01-01

    The prompt decay constant or the subcritical neutron multiplication may be determined by measuring the distribution of the time intervals between successive neutron counts. The distribution data are analyzed by least-squares fitting to a theoretical distribution function derived from a point-reactor probability model. Published results of measurements with one- and two-detector systems are discussed. Data collection times are shorter, and statistical errors smaller, the nearer the system is to delayed critical. Several of the measurements indicate that a shorter data collection time and higher accuracy are possible with the interval distribution method than with the Feynman variance method.
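
    As a rough illustration of the fitting step, the sketch below generates synthetic inter-count times from a simplified interval density containing a correlated-chain term, histograms them, and recovers the decay constant by least squares. The density form and every parameter value are invented for illustration; the point-model distribution analyzed in the paper is more involved.

```python
# Hypothetical sketch: least-squares fit of a time-interval histogram to a
# simplified density p(t) = C * exp(-r*t) * (1 + A*exp(-alpha*t)), where the
# second factor is a correlated (fission-chain) term decaying with the prompt
# decay constant alpha.  All values are synthetic.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def model(t, C, r, A, alpha):
    return C * np.exp(-r * t) * (1.0 + A * np.exp(-alpha * t))

# Draw synthetic intervals from the model density by rejection sampling.
true = dict(C=1.0, r=50.0, A=0.6, alpha=400.0)
t_max, p_max = 0.1, model(0.0, **true)
samples = []
while len(samples) < 20_000:
    t = rng.uniform(0.0, t_max, 10_000)
    keep = rng.uniform(0.0, p_max, 10_000) < model(t, **true)
    samples.extend(t[keep])

counts, edges = np.histogram(samples[:20_000], bins=100, density=True)
mid = 0.5 * (edges[:-1] + edges[1:])

popt, _ = curve_fit(model, mid, counts, p0=(30.0, 40.0, 0.5, 300.0))
print(f"fitted prompt decay constant: {popt[3]:.0f} 1/s (true 400)")
```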

  2. Interval methods: An introduction

    DEFF Research Database (Denmark)

    Achenie, L.E.K.; Kreinovich, V.; Madsen, Kaj

    2006-01-01

    This chapter contains selected papers presented at the Minisymposium on Interval Methods of the PARA'04 Workshop "State-of-the-Art in Scientific Computing". The emphasis of the workshop was on high-performance computing (HPC). The ongoing development of ever more advanced computers provides the potential for solving increasingly difficult computational problems. However, given the complexity of modern computer architectures, the task of realizing this potential needs careful attention. A main concern of HPC is the development of software that optimizes the performance of a given computer. An important characteristic of computer performance in scientific computing is the accuracy of the computation results. Often, we can estimate this accuracy by using traditional statistical techniques. However, in many practical situations, we do not know the probability distributions of different…
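
    Interval methods address exactly this situation: each uncertain input is replaced by an interval guaranteed to contain it, and arithmetic is defined so that the computed interval encloses every possible exact result. A minimal sketch follows (the class is illustrative and omits the outward rounding a rigorous implementation would need):

```python
# Minimal interval arithmetic: every operation returns an interval that
# encloses all results obtainable from any points inside the operands.
# (Directed/outward rounding is omitted for brevity.)
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

# A measurement known only to lie in [9.8, 10.2], another in [1.9, 2.1]:
x, y = Interval(9.8, 10.2), Interval(1.9, 2.1)
print(x * y)        # Interval(lo=18.62, hi=21.42) encloses every product
print(x + y, x - y)
```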

  3. An innovative method for offshore wind farm site selection based on the interval number with probability distribution

    Science.gov (United States)

    Wu, Yunna; Chen, Kaifeng; Xu, Hu; Xu, Chuanbo; Zhang, Haobo; Yang, Meng

    2017-12-01

    There is insufficient research on offshore wind farm site selection in China, and the current methods for site selection have some defects. First, information loss arises in two ways: from the implicit assumption that the probability distribution on the interval number is uniform, and from ignoring the value of decision makers' (DMs') common opinion in evaluating the criteria information. Second, the differences in DMs' utility functions have failed to receive attention. An innovative method is proposed in this article to remedy these drawbacks. First, a new form of interval number and its weighted operator are proposed to reflect the uncertainty and reduce information loss. Second, a new stochastic dominance degree is proposed to quantify the interval number with a probability distribution. Third, a two-stage method integrating the weighted operator with the stochastic dominance degree is proposed to evaluate the alternatives. Finally, a case from China demonstrates the effectiveness of the method.

  4. Neutron generation time of the reactor 'crocus' by an interval distribution method for counts collected by two detectors

    International Nuclear Information System (INIS)

    Haldy, P.-A.; Chikouche, M.

    1975-01-01

    The distribution of time intervals between a count in one neutron detector and the subsequent event registered in a second one is considered. A 'four-interval' probability generating function was derived, by means of which the expression for the distribution of the time intervals, lasting from a triggering detection in the first detector to the subsequent count in the second one, could be obtained. The experimental work was conducted in the zero-power reactor Crocus, using a neutron source provided by spontaneous fission, a BF3 counter as the first detector and a 3He detector as the second instrument. (U.K.)

  5. Probability Distribution for Flowing Interval Spacing

    International Nuclear Information System (INIS)

    Kuzio, S.

    2001-01-01

    The purpose of this analysis is to develop a probability distribution for flowing interval spacing. A flowing interval is defined as a fractured zone that transmits flow in the Saturated Zone (SZ), as identified through borehole flow meter surveys (Figure 1). This analysis uses the term ''flowing interval spacing'' as opposed to fractured spacing, which is typically used in the literature. The term fracture spacing was not used in this analysis because the data used identify a zone (or a flowing interval) that contains fluid-conducting fractures but does not distinguish how many or which fractures comprise the flowing interval. The flowing interval spacing is measured between the midpoints of each flowing interval. Fracture spacing within the SZ is defined as the spacing between fractures, with no regard to which fractures are carrying flow. The Development Plan associated with this analysis is entitled, ''Probability Distribution for Flowing Interval Spacing'', (CRWMS M and O 2000a). The parameter from this analysis may be used in the TSPA SR/LA Saturated Zone Flow and Transport Work Direction and Planning Documents: (1) ''Abstraction of Matrix Diffusion for SZ Flow and Transport Analyses'' (CRWMS M and O 1999a) and (2) ''Incorporation of Heterogeneity in SZ Flow and Transport Analyses'', (CRWMS M and O 1999b). A limitation of this analysis is that the probability distribution of flowing interval spacing may underestimate the effect of incorporating matrix diffusion processes in the SZ transport model because of the possible overestimation of the flowing interval spacing. Larger flowing interval spacing results in a decrease in the matrix diffusion processes. This analysis may overestimate the flowing interval spacing because the number of fractures that contribute to a flowing interval cannot be determined from the data. Because each flowing interval probably has more than one fracture contributing to a flowing interval, the true flowing interval spacing could be
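
    The basic spacing computation is straightforward; a sketch with invented survey depths follows (the report itself derives the actual probability distribution from site data):

```python
# Hypothetical sketch: compute flowing interval spacings as distances between
# interval midpoints from one borehole flow-meter survey.  Depths are invented.
import numpy as np

# (top_m, bottom_m) of flowing intervals, ordered by depth
flowing_intervals = [(412.0, 415.5), (448.0, 450.0), (503.5, 509.0),
                     (561.0, 562.5), (640.0, 647.0)]

midpoints = np.array([(top + bottom) / 2.0 for top, bottom in flowing_intervals])
spacings = np.diff(midpoints)

print("spacings (m):", np.round(spacings, 1))
print(f"mean {spacings.mean():.1f} m, median {np.median(spacings):.1f} m")
```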

  6. Probability Distribution for Flowing Interval Spacing

    International Nuclear Information System (INIS)

    S. Kuzio

    2004-01-01

    Fracture spacing is a key hydrologic parameter in analyses of matrix diffusion. Although the individual fractures that transmit flow in the saturated zone (SZ) cannot be identified directly, it is possible to determine the fractured zones that transmit flow from flow meter survey observations. The fractured zones that transmit flow as identified through borehole flow meter surveys have been defined in this report as flowing intervals. The flowing interval spacing is measured between the midpoints of each flowing interval. The determination of flowing interval spacing is important because the flowing interval spacing parameter is a key hydrologic parameter in SZ transport modeling, which impacts the extent of matrix diffusion in the SZ volcanic matrix. The output of this report is input to the ''Saturated Zone Flow and Transport Model Abstraction'' (BSC 2004 [DIRS 170042]). Specifically, the analysis of data and development of a data distribution reported herein is used to develop the uncertainty distribution for the flowing interval spacing parameter for the SZ transport abstraction model. Figure 1-1 shows the relationship of this report to other model reports that also pertain to flow and transport in the SZ. Figure 1-1 also shows the flow of key information among the SZ reports. It should be noted that Figure 1-1 does not contain a complete representation of the data and parameter inputs and outputs of all SZ reports, nor does it show inputs external to this suite of SZ reports. Use of the developed flowing interval spacing probability distribution is subject to the limitations of the assumptions discussed in Sections 5 and 6 of this analysis report. The number of fractures in a flowing interval is not known. Therefore, the flowing intervals are assumed to be composed of one flowing zone in the transport simulations. This analysis may overestimate the flowing interval spacing because the number of fractures that contribute to a flowing interval cannot be

  7. Confidence intervals for the lognormal probability distribution

    International Nuclear Information System (INIS)

    Smith, D.L.; Naberejnev, D.G.

    2004-01-01

    The present communication addresses the topic of symmetric confidence intervals for the lognormal probability distribution. This distribution is frequently utilized to characterize inherently positive, continuous random variables that are selected to represent many physical quantities in applied nuclear science and technology. The basic formalism is outlined herein and a conjured numerical example is provided for illustration. It is demonstrated that when the uncertainty reflected in a lognormal probability distribution is large, the use of a confidence interval provides much more useful information about the variable used to represent a particular physical quantity than can be had by adhering to the notion that the mean value and standard deviation of the distribution ought to be interpreted as best value and corresponding error, respectively. Furthermore, it is shown that if the uncertainty is very large a disturbing anomaly can arise when one insists on interpreting the mean value and standard deviation as the best value and corresponding error, respectively. Reliance on using the mode and median as alternative parameters to represent the best available knowledge of a variable with large uncertainties is also shown to entail limitations. Finally, a realistic physical example involving the decay of radioactivity over a time period that spans many half-lives is presented and analyzed to further illustrate the concepts discussed in this communication.
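
    The anomaly is easy to reproduce numerically. In the sketch below (parameter values invented; the equal-tailed central interval stands in for the paper's symmetric-coverage interval), "mean ± one standard deviation" extends below zero for a strictly positive lognormal variable, while the probability interval does not:

```python
import numpy as np
from scipy.stats import lognorm

mu, sigma = 0.0, 1.5                        # large relative uncertainty
dist = lognorm(s=sigma, scale=np.exp(mu))   # scipy's lognormal parameterization

mean, sd = dist.mean(), dist.std()
print(f"mean ± sd    : [{mean - sd:.2f}, {mean + sd:.2f}]")   # lower end < 0
lo, hi = dist.ppf([0.025, 0.975])           # central 95% probability interval
print(f"95% interval : [{lo:.2f}, {hi:.2f}]")
print(f"median {dist.median():.2f}, mode {np.exp(mu - sigma**2):.2f}")
```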

  8. A note on birth interval distributions

    International Nuclear Information System (INIS)

    Shrestha, G.

    1989-08-01

    A considerable amount of work has been done on birth interval analysis in mathematical demography. This paper is prepared with the intention of reviewing some probability models, proposed by different researchers, for the intervals between live births. (author). 14 refs

  9. Coefficient Omega Bootstrap Confidence Intervals: Nonnormal Distributions

    Science.gov (United States)

    Padilla, Miguel A.; Divers, Jasmin

    2013-01-01

    The performance of the normal theory bootstrap (NTB), the percentile bootstrap (PB), and the bias-corrected and accelerated (BCa) bootstrap confidence intervals (CIs) for coefficient omega was assessed through a Monte Carlo simulation under conditions not previously investigated. Of particular interest were nonnormal Likert-type and binary items…

  10. Interval sampling methods and measurement error: a computer simulation.

    Science.gov (United States)

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
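
    A minimal version of such a simulation (all durations and counts invented): events of fixed length are scattered over an observation period, and momentary time sampling (MTS), partial-interval recording (PIR), and whole-interval recording (WIR) are scored against the true fraction of time occupied.

```python
import numpy as np

rng = np.random.default_rng(1)
T, width, dur, n_events = 600.0, 10.0, 4.0, 20        # all in seconds

starts = np.sort(rng.uniform(0.0, T - dur, n_events))

def occupied(t):
    """True wherever at least one event is running at time(s) t."""
    t = np.atleast_1d(t)
    return ((starts <= t[:, None]) & (t[:, None] < starts + dur)).any(axis=1)

edges = np.arange(0.0, T + width, width)
mts = pir = wir = 0
for a, b in zip(edges[:-1], edges[1:]):
    occ = occupied(np.linspace(a, b, 50, endpoint=False))
    mts += bool(occupied(b - 1e-9)[0])    # momentary sample at interval end
    pir += bool(occ.any())                # event at any moment of the interval
    wir += bool(occ.all())                # event throughout the interval

n = len(edges) - 1
true_frac = occupied(np.linspace(0.0, T, 60_000, endpoint=False)).mean()
print(f"true {true_frac:.3f}  MTS {mts/n:.3f}  PIR {pir/n:.3f}  WIR {wir/n:.3f}")
```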

  11. Time interval approach to the pulsed neutron logging method

    International Nuclear Information System (INIS)

    Zhao Jingwu; Su Weining

    1994-01-01

    The time interval between neighbouring neutrons emitted from a steady-state neutron source can be treated as that from a time-dependent neutron source. In the rock space, the neutron flux is given by the neutron diffusion equation and is composed of an infinite number of terms, each consisting of two die-away curves. The delay action is discussed and used to measure the time interval with only one detector in the experiment. Nuclear reactions with the time distribution due to different types of radiation observed in neutron well-logging methods are presented with a view to obtaining the rock nuclear parameters from the time-interval technique.

  12. Rationalizing method of replacement intervals by using Bayesian statistics

    International Nuclear Information System (INIS)

    Kasai, Masao; Notoya, Junichi; Kusakari, Yoshiyuki

    2007-01-01

    This study presents formulations for rationalizing the replacement intervals of equipment and/or parts that take into account the probability density functions (PDFs) of the parameters of the failure distribution functions (FDFs), and compares the intervals optimized by these formulations with those from conventional formulations that use only representative values of the FDF parameters instead of their PDFs. The failure data are generated by Monte Carlo simulation, since real failure data were not available. The PDFs of the FDF parameters are obtained by the Bayesian method, and the representative values are obtained by maximum likelihood estimation and the Bayesian method. We found that the method using the PDFs obtained by the Bayesian method yields longer replacement intervals than the one using representative parameter values. (author)
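
    The flavor of the comparison can be sketched with a standard age-replacement cost model. This is an illustrative stand-in for the paper's formulation: the Weibull model, cost ratio, and mock "posterior" draws are all invented, and a real analysis would obtain the posterior by Bayesian updating on observed failures.

```python
import numpy as np

cp, cf = 1.0, 10.0          # preventive vs. failure replacement cost (invented)

def cost_rate(T, shape, scale):
    """Long-run expected cost per unit time of age replacement at age T."""
    t = np.linspace(1e-6, T, 1000)
    R = np.exp(-(t / scale) ** shape)          # Weibull survival function
    mean_cycle = np.trapz(R, t)                # expected cycle length
    F_T = 1.0 - np.exp(-(T / scale) ** shape)  # probability of failing before T
    return (cp * (1.0 - F_T) + cf * F_T) / mean_cycle

rng = np.random.default_rng(2)
# Mock posterior over (shape, scale) representing parameter uncertainty.
post = np.column_stack([rng.normal(2.0, 0.3, 200).clip(0.5),
                        rng.normal(100.0, 15.0, 200).clip(10.0)])

Ts = np.linspace(10.0, 150.0, 57)
point = [cost_rate(T, 2.0, 100.0) for T in Ts]                    # plug-in values
bayes = [np.mean([cost_rate(T, k, s) for k, s in post]) for T in Ts]

print("T* using representative values:", Ts[np.argmin(point)])
print("T* averaging over the PDFs    :", Ts[np.argmin(bayes)])
```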

  13. Chosen interval methods for solving linear interval systems with special type of matrix

    Science.gov (United States)

    Szyszka, Barbara

    2013-10-01

    The paper is devoted to selected direct interval methods for solving linear interval systems with a special type of matrix: a band matrix with a parameter, obtained from a finite difference problem. Such linear systems occur while solving the one-dimensional wave equation (a partial differential equation of hyperbolic type) by the central difference interval method of the second order. Interval methods are constructed so that the errors of the method are enclosed in the obtained results; the presented linear interval systems therefore contain elements that determine the errors of the difference method. The chosen direct algorithms have been applied for solving the linear systems because they introduce no method error of their own. All calculations were performed in floating-point interval arithmetic.

  14. Confidence Intervals for True Scores Using the Skew-Normal Distribution

    Science.gov (United States)

    Garcia-Perez, Miguel A.

    2010-01-01

    A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…
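
    For reference, a minimal version of the Score (Wilson) interval mentioned above, for x correct responses out of n items (the example numbers are invented):

```python
import math

def wilson_interval(x, n, z=1.96):
    """Score (Wilson) confidence interval for a binomial proportion x/n."""
    p = x / n
    denom = 1.0 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

print(wilson_interval(18, 25))   # e.g. a raw score of 18 out of 25 items
```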

  15. On a linear method in bootstrap confidence intervals

    Directory of Open Access Journals (Sweden)

    Andrea Pallini

    2007-10-01

    A linear method for the construction of asymptotic bootstrap confidence intervals is proposed. We approximate asymptotically pivotal and non-pivotal quantities, which are smooth functions of means of n independent and identically distributed random variables, by using a sum of n independent smooth functions of the same analytical form. Errors are of order O_p(n^{-3/2}) and O_p(n^{-2}), respectively. The linear method allows a straightforward approximation of bootstrap cumulants, by considering the set of n independent smooth functions as an original random sample to be resampled with replacement.

  16. Confidence Intervals for Asbestos Fiber Counts: Approximate Negative Binomial Distribution.

    Science.gov (United States)

    Bartley, David; Slaven, James; Harper, Martin

    2017-03-01

    The negative binomial distribution is adopted for analyzing asbestos fiber counts so as to account for both the sampling errors in capturing only a finite number of fibers and the inevitable human variation in identifying and counting sampled fibers. A simple approximation to this distribution is developed for the derivation of quantiles and approximate confidence limits. The success of the approximation depends critically on the use of Stirling's expansion to sufficient order, on exact normalization of the approximating distribution, on reasonable perturbation of quantities from the normal distribution, and on accurately approximating sums by inverse-trapezoidal integration. Accuracy of the approximation developed is checked through simulation and also by comparison to traditional approximate confidence intervals in the specific case that the negative binomial distribution approaches the Poisson distribution. The resulting statistics are shown to relate directly to early research into the accuracy of asbestos sampling and analysis. Uncertainty in estimating mean asbestos fiber concentrations given only a single count is derived. Decision limits (limits of detection) and detection limits are considered for controlling false-positive and false-negative detection assertions and are compared to traditional limits computed assuming normal distributions. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2017.
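
    The limiting case used for that check is easy to reproduce numerically: holding the mean fixed while the negative binomial dispersion parameter grows, the count quantiles converge to Poisson quantiles. The mean and parameter values below are arbitrary.

```python
from scipy.stats import nbinom, poisson

mean = 8.0                        # expected fiber count (arbitrary)
for n in (2, 10, 100, 10_000):    # negative binomial dispersion parameter
    p = n / (n + mean)            # keeps the distribution mean equal to `mean`
    print(f"NB(n={n:>6}):", nbinom.ppf([0.025, 0.975], n, p))
print("Poisson     :", poisson.ppf([0.025, 0.975], mean))
```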

  17. A method to elicit beliefs as most likely intervals

    NARCIS (Netherlands)

    Schlag, K.H.; van der Weele, J.J.

    2015-01-01

    We show how to elicit the beliefs of an expert in the form of a "most likely interval", a set of future outcomes that are deemed more likely than any other outcome. Our method, called the Most Likely Interval elicitation rule (MLI), asks the expert for an interval and pays according to how well the

  18. Interval estimation methods of the mean in small sample situation and the results' comparison

    International Nuclear Information System (INIS)

    Wu Changli; Guo Chunying; Jiang Meng; Lin Yuangen

    2009-01-01

    The methods of interval estimation for a sample mean, namely the classical method, the bootstrap method, the Bayesian bootstrap method, the jackknife method, and the spread method of the empirical characteristic distribution function, are described. Numerical calculations of the sample-mean intervals are carried out for sample sizes of 4, 5, and 6. The results indicate that the bootstrap method and the Bayesian bootstrap method are much more appropriate than the others in small-sample situations. (authors)
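
    Two of the listed methods can be sketched in a few lines for a tiny sample (values invented): the ordinary percentile bootstrap resamples the observations, while the Bayesian bootstrap reweights them with Dirichlet(1, …, 1) weights.

```python
import numpy as np

rng = np.random.default_rng(3)
sample = np.array([4.8, 5.1, 5.6, 4.3, 5.0])   # n = 5, invented values
B = 10_000

boot = np.array([rng.choice(sample, size=sample.size, replace=True).mean()
                 for _ in range(B)])
weights = rng.dirichlet(np.ones(sample.size), size=B)
bayes = weights @ sample                        # weighted means

for name, means in (("bootstrap", boot), ("Bayesian bootstrap", bayes)):
    lo, hi = np.percentile(means, [2.5, 97.5])
    print(f"{name:18s} 95% interval for the mean: [{lo:.2f}, {hi:.2f}]")
```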

  19. Optimal interval for major maintenance actions in electricity distribution networks

    Energy Technology Data Exchange (ETDEWEB)

    Louit, Darko; Pascual, Rodrigo [Centro de Mineria, Pontificia Universidad Catolica de Chile, Av. Vicuna MacKenna, 4860 Santiago (Chile); Banjevic, Dragan [Centre for Maintenance Optimization and Reliability Engineering, University of Toronto, 5 King' s College Rd., Toronto, Ontario (Canada)

    2009-09-15

    Many systems require the periodic undertaking of major (preventive) maintenance actions (MMAs) such as overhauls in mechanical equipment, reconditioning of train lines, resurfacing of roads, etc. In the long term, these actions contribute to achieving a lower rate of occurrence of failures, though in many cases they increase the intensity of the failure process shortly after being performed, resulting in a non-monotonic trend for failure intensity. Also, in the special case of distributed assets such as communications and energy networks, pipelines, etc., it is likely that the maintenance action takes place sequentially over an extended period of time, implying that different sections of the network underwent the MMAs at different periods. This forces the development of a model based on a relative time scale (i.e. time since last major maintenance event) and the combination of data from different sections of a grid, under a normalization scheme. Additionally, extended maintenance times and sequential execution of the MMAs make it difficult to identify failures occurring before and after the preventive maintenance action. This results in the loss of important information for the characterization of the failure process. A simple model is introduced to determine the optimal MMA interval considering such restrictions. Furthermore, a case study illustrates the optimal tree trimming interval around an electricity distribution network. (author)

  20. Counting Raindrops and the Distribution of Intervals Between Them.

    Science.gov (United States)

    Van De Giesen, N.; Ten Veldhuis, M. C.; Hut, R.; Pape, J. J.

    2017-12-01

    Drop size distributions are often assumed to follow a generalized gamma function, characterized by one parameter, Λ [1]. In principle, this Λ can be estimated by measuring the arrival rate of raindrops. The arrival rate should follow a Poisson distribution. By measuring the distribution of the time intervals between drops arriving at a certain surface area, one should not only be able to estimate the arrival rate but also to assess the robustness of the underlying steady-state assumption. It is important to note that many rainfall radar systems also assume fixed drop size distributions, and associated arrival rates, to derive rainfall rates. By testing these relationships with a simple device, we will be able to improve both land-based and space-based radar rainfall estimates. Here, an open-hardware sensor design is presented, consisting of a 3D-printed housing for a piezoelectric element, some simple electronics and an Arduino. The target audience for this device are citizen scientists who want to contribute to collecting rainfall information beyond the standard rain gauge. The core of the sensor is a simple piezo-buzzer, as found in many devices such as watches and fire alarms. When a raindrop falls on a piezo-buzzer, a small voltage is generated, which can be used to register the drop's arrival time. By registering the intervals between raindrops, the associated Poisson distribution can be estimated. In addition to the hardware, we will present the first results of a measuring campaign in Myanmar that will have run from August to October 2017. All design files and descriptions are available through GitHub: https://github.com/nvandegiesen/Intervalometer. This research is partially supported through the TWIGA project, funded by the European Commission's H2020 program under call SC5-18-2017 'Novel in-situ observation systems'. Reference [1]: Uijlenhoet, R., and J. N. M. Stricker. "A consistent rainfall parameterization based on the exponential raindrop size
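
    The statistical core of the idea fits in a few lines: for steady rain the drop arrivals form a Poisson process, so inter-drop intervals are exponential; the rate follows from the mean interval, and the exponential assumption can be checked with a goodness-of-fit test. The data here are simulated stand-ins for sensor timestamps.

```python
import numpy as np
from scipy.stats import expon, kstest

rng = np.random.default_rng(4)
true_rate = 12.0                                    # drops/s (invented)
intervals = rng.exponential(1.0 / true_rate, 500)   # stand-in for sensor data

rate_hat = 1.0 / intervals.mean()
# KS test against the fitted exponential; the p-value is approximate because
# the rate is estimated from the same data.
stat, pvalue = kstest(intervals, expon(scale=intervals.mean()).cdf)
print(f"estimated arrival rate {rate_hat:.1f} drops/s, KS p-value {pvalue:.2f}")
```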

  1. The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution

    Science.gov (United States)

    Shin, H.; Heo, J.; Kim, T.; Jung, Y.

    2007-12-01

    The generalized logistic (GL) distribution has been widely used for frequency analysis. However, there has been little study of the confidence intervals that indicate the prediction accuracy of the GL distribution. In this paper, the estimation of confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of the sample size, return period, and parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals of quantiles. As the results show, the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase as the return period increases and decrease as the sample size increases. PWM performs better than the other methods in terms of RRMSE when the data are almost symmetric, while ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data. The results show little difference in the estimated quantiles between ML and PWM, but distinct differences for MOM.

  2. Indirect methods for reference interval determination - review and recommendations.

    Science.gov (United States)

    Jones, Graham R D; Haeckel, Rainer; Loh, Tze Ping; Sikaris, Ken; Streichert, Thomas; Katayev, Alex; Barth, Julian H; Ozarda, Yesim

    2018-04-19

    Reference intervals are a vital part of the information supplied by clinical laboratories to support interpretation of numerical pathology results such as are produced in clinical chemistry and hematology laboratories. The traditional method for establishing reference intervals, known as the direct approach, is based on collecting samples from members of a preselected reference population, making the measurements and then determining the intervals. An alternative approach is to perform analysis of results generated as part of routine pathology testing and using appropriate statistical techniques to determine reference intervals. This is known as the indirect approach. This paper from a working group of the International Federation of Clinical Chemistry (IFCC) Committee on Reference Intervals and Decision Limits (C-RIDL) aims to summarize current thinking on indirect approaches to reference intervals. The indirect approach has some major potential advantages compared with direct methods. The processes are faster, cheaper and do not involve patient inconvenience, discomfort or the risks associated with generating new patient health information. Indirect methods also use the same preanalytical and analytical techniques used for patient management and can provide very large numbers for assessment. Limitations to the indirect methods include possible effects of diseased subpopulations on the derived interval. The IFCC C-RIDL aims to encourage the use of indirect methods to establish and verify reference intervals, to promote publication of such intervals with clear explanation of the process used and also to support the development of improved statistical techniques for these studies.

  3. Analyzing Big Data with the Hybrid Interval Regression Methods

    Directory of Open Access Journals (Sweden)

    Chia-Hui Huang

    2014-01-01

    Big data is a new trend at present, forcing significant impacts on information technologies. In big data applications, one of the most important issues is dealing with large-scale data sets that often require computation resources provided by public cloud services. How to analyze big data efficiently becomes a big challenge. In this paper, we combine interval regression with the smooth support vector machine (SSVM) to analyze big data. The SSVM was recently proposed as an alternative to the standard SVM and has been shown to be more efficient than the traditional SVM in processing large-scale data. In addition, a soft margin method is proposed to adjust the excursion of the separation margin, making the approach effective in the gray zone where the distribution of the data is hard to describe and the separation margin between classes is unclear.

  4. The Distribution of the Product Explains Normal Theory Mediation Confidence Interval Estimation.

    Science.gov (United States)

    Kisbu-Sakarya, Yasemin; MacKinnon, David P; Miočević, Milica

    2014-05-01

    The distribution of the product has several useful applications. One of these applications is its use to form confidence intervals for the indirect effect as the product of 2 regression coefficients. The purpose of this article is to investigate how the moments of the distribution of the product explain normal theory mediation confidence interval coverage and imbalance. Values of the critical ratio for each random variable are used to demonstrate how the moments of the distribution of the product change across values of the critical ratio observed in research studies. Results of the simulation study showed that as skewness in absolute value increases, coverage decreases. And as skewness in absolute value and kurtosis increases, imbalance increases. The difference between testing the significance of the indirect effect using the normal theory versus the asymmetric distribution of the product is further illustrated with a real data example. This article is the first study to show the direct link between the distribution of the product and indirect effect confidence intervals and clarifies the results of previous simulation studies by showing why normal theory confidence intervals for indirect effects are often less accurate than those obtained from the asymmetric distribution of the product or from resampling methods.
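
    A Monte Carlo version of the distribution-of-product interval makes the asymmetry easy to see (the path coefficients and standard errors below are invented):

```python
import numpy as np

rng = np.random.default_rng(5)
a_hat, se_a = 0.40, 0.12     # X -> M path estimate and standard error
b_hat, se_b = 0.35, 0.10     # M -> Y path (adjusted for X)

# Simulate the product of the two coefficient sampling distributions.
ab = rng.normal(a_hat, se_a, 1_000_000) * rng.normal(b_hat, se_b, 1_000_000)
lo, hi = np.percentile(ab, [2.5, 97.5])

# Normal-theory interval with the first-order (Sobel) standard error.
half = 1.96 * np.sqrt(a_hat**2 * se_b**2 + b_hat**2 * se_a**2)
print(f"product-distribution CI: [{lo:.3f}, {hi:.3f}]  (asymmetric)")
print(f"normal-theory CI       : [{a_hat*b_hat - half:.3f}, {a_hat*b_hat + half:.3f}]")
```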

  5. Binomial Distribution Sample Confidence Intervals Estimation 1. Sampling and Medical Key Parameters Calculation

    Directory of Open Access Journals (Sweden)

    Tudor DRUGAN

    2003-08-01

    The aim of the paper was to present the usefulness of the binomial distribution in studying contingency tables, and the problems of approximating the binomial distribution by the normal one (its limits, advantages, and disadvantages). Classifying the medical key parameters reported in the medical literature and expressing them in contingency-table units based on their mathematical expressions reduces the discussion of confidence intervals from 34 parameters to 9 mathematical expressions. The problem of obtaining different information from the computed confidence interval for a specified method (confidence interval boundaries, percentages of the experimental errors, the standard deviation of the experimental errors, and the deviation relative to the significance level) was solved through the implementation of original algorithms in the PHP programming language. Expressions containing two binomial variables were treated separately. An original method of computing the confidence interval for two-variable expressions was proposed and implemented. The graphical representation of expressions of two binomial variables in which the variation domain of one variable depends on the other was a real problem, because most software uses interpolation in graphical representation, so the surface maps were quadratic instead of triangular. Based on an original algorithm, a module was implemented in PHP to draw triangular surface plots. All of the implementations described above were used in computing the confidence intervals and estimating their performance for binomial-distribution sample sizes and variables.
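
    As a companion sketch (in Python rather than the paper's PHP), the exact Clopper-Pearson interval for a single binomial proportion can be computed from beta-distribution quantiles; the counts in the example are invented.

```python
from scipy.stats import beta

def clopper_pearson(x, n, alpha=0.05):
    """Exact confidence interval for a binomial proportion with x of n events."""
    lo = beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
    return lo, hi

print(clopper_pearson(7, 50))   # e.g. 7 positive results out of 50 subjects
```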

  6. Robotic fish tracking method based on suboptimal interval Kalman filter

    Science.gov (United States)

    Tong, Xiaohong; Tang, Chao

    2017-11-01

    Autonomous Underwater Vehicle (AUV) research has focused on tracking and positioning, precise guidance, return to dock, and other fields. The robotic fish, a form of AUV, has become a popular application in intelligent education, civil, and military fields. In nonlinear tracking analysis of robotic fish, it was found that the interval Kalman filter algorithm encloses all possible filter results, but the resulting interval is wide and relatively conservative, and the interval data vector is uncertain before implementation. This paper proposes an optimization algorithm, the suboptimal interval Kalman filter. The suboptimal scheme replaces the interval matrix inverse with its worst-case inverse, approximates the nonlinear state and measurement equations more closely than the standard interval Kalman filter, increases the accuracy of the nominal dynamic system model, and improves the speed and precision of the tracking system. Monte Carlo simulation results show that the trajectory estimate of the suboptimal interval Kalman filter algorithm is better than those of the interval Kalman filter method and the standard Kalman filter.

  7. Trajectory Optimization Based on Multi-Interval Mesh Refinement Method

    Directory of Open Access Journals (Sweden)

    Ningbo Li

    2017-01-01

    In order to improve the optimization accuracy and convergence rate for trajectory optimization of an air-to-air missile, a multi-interval mesh refinement Radau pseudospectral method was introduced. This method made the mesh endpoints converge to the practical nonsmooth points and decreased the overall number of collocation points to improve the convergence rate and computational efficiency. The trajectory was divided into four phases according to the working time of the engine and the handover from midcourse to terminal guidance, and the optimization model was then built. The multi-interval mesh refinement Radau pseudospectral method with different collocation points in each mesh interval was used to solve the trajectory optimization model. Moreover, this method was compared with the traditional h method. Simulation results show that this method can decrease the dimensionality of the nonlinear programming (NLP) problem and therefore improve the efficiency of pseudospectral methods for solving trajectory optimization problems.

  8. Process control and optimization with simple interval calculation method

    DEFF Research Database (Denmark)

    Pomerantsev, A.; Rodionova, O.; Høskuldsson, Agnar

    2006-01-01

    Methods of process control and optimization are presented and illustrated with a real-world example. The optimization methods are based on PLS block modeling as well as on the simple interval calculation (SIC) methods of interval prediction and object status classification. It is proposed to employ the series of expanding PLS/SIC models in order to support the on-line process improvements. This method helps to predict the effect of planned actions on the product quality and thus enables passive quality control. We have also considered an optimization approach that proposes the correcting actions for the quality improvement in the course of production. The latter is an active quality optimization, which takes into account the actual history of the process. The advocated approach is allied to the conventional method of multivariate statistical process control (MSPC), as it also employs the historical process…

  9. Time Interval to Initiation of Contraceptive Methods Following ...

    African Journals Online (AJOL)

    Objectives: The objectives of the study were to determine factors affecting the interval between a woman's last childbirth and the initiation of contraception. Materials and Methods: This was a retrospective study. Family planning clinic records of the Barau Dikko Teaching Hospital Kaduna from January 2000 to March 2014 ...

  10. [Nonparametric method of estimating survival functions containing right-censored and interval-censored data].

    Science.gov (United States)

    Xu, Yonghong; Gao, Xiaohuan; Wang, Zhengxi

    2014-04-01

    Missing data represent a general problem in many scientific fields, especially in medical survival analysis, and interpolation is one of the important methods for dealing with censored data. However, most interpolation methods replace the censored data with exact values, which distorts the real distribution of the censored data and reduces the probability that the real value falls within the imputed data. In order to solve this problem, we propose a nonparametric method of estimating the survival function of right-censored and interval-censored data and compare its performance to the self-consistent (SC) algorithm. Compared with average interpolation and nearest-neighbor interpolation, the proposed method replaces right-censored data with interval-censored data, greatly improving the probability that the real value falls within the imputation interval. It then uses empirical distribution theory to estimate the survival function of right-censored and interval-censored data. The results of numerical examples and a real breast cancer data set demonstrate that the proposed method has higher accuracy and better robustness for different proportions of censored data. This paper provides a good method for comparing the performance of clinical treatments by estimating the survival of patients, which should be helpful in medical survival data analysis.

  11. Reference interval computation: which method (not) to choose?

    Science.gov (United States)

    Pavlov, Igor Y; Wilson, Andrew R; Delgado, Julio C

    2012-07-11

    When different methods are applied to reference interval (RI) calculation, the results can sometimes be substantially different, especially for small reference groups. If no reliable RI data are available, there is no way to confirm which method generates results closest to the true RI. We randomly drew samples from a public database for 33 markers. For each sample, RIs were calculated by bootstrapping, parametric, and Box-Cox transformed parametric methods. Results were compared to the values of the population RI. For approximately half of the 33 markers, the results of all 3 methods were within 3% of the true reference value. For the other markers, parametric results were either unavailable or deviated considerably from the true values. The transformed parametric method was more accurate than bootstrapping for a sample size of 60, very close to bootstrapping for a sample size of 120, but in some cases unavailable. We recommend against using untransformed parametric calculations to determine RIs. The transformed parametric method utilizing the Box-Cox transformation would be the preferable way to calculate RIs, provided it satisfies a normality test. If not, bootstrapping is always available, and is almost as accurate and precise as the transformed parametric method. Copyright © 2012 Elsevier B.V. All rights reserved.
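
    Two of the compared approaches can be sketched directly (the reference sample is simulated; a real study would use measured reference values):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
ref = rng.lognormal(mean=1.0, sigma=0.35, size=120)   # mock reference sample

# Percentile-bootstrap estimate of the 2.5th and 97.5th percentiles.
boot = np.array([np.percentile(rng.choice(ref, ref.size, replace=True),
                               [2.5, 97.5]) for _ in range(2000)])
print("bootstrap RI:", boot.mean(axis=0).round(2))

# Box-Cox transformed parametric estimate, used only if normality is plausible.
z, lam = stats.boxcox(ref)
if stats.shapiro(z).pvalue > 0.05:
    lo, hi = np.mean(z) + np.array([-1.96, 1.96]) * np.std(z, ddof=1)

    def inverse_boxcox(v):
        return (v * lam + 1.0) ** (1.0 / lam) if lam != 0 else np.exp(v)

    print("Box-Cox parametric RI:",
          round(inverse_boxcox(lo), 2), round(inverse_boxcox(hi), 2))
```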

  12. Design of time interval generator based on hybrid counting method

    Energy Technology Data Exchange (ETDEWEB)

    Yao, Yuan [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Wang, Zhaoqi [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Lu, Houbing [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Hefei Electronic Engineering Institute, Hefei 230037 (China); Chen, Lian [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Jin, Ge, E-mail: goldjin@ustc.edu.cn [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China)

    2016-10-01

    Time Interval Generators (TIGs) are frequently used for the characterization or timing operations of instruments in particle physics experiments. Though some “off-the-shelf” TIGs can be employed, the need for a custom test or control system makes a TIG implemented in a programmable device desirable. Nowadays, the feasibility of using Field Programmable Gate Arrays (FPGAs) to implement particle physics instrumentation has been validated in the design of Time-to-Digital Converters (TDCs) for precise time measurement. The FPGA-TDC technique is based on Tapped Delay Line (TDL) architectures, whose delay cells are down to a few tens of picoseconds. In this case, FPGA-based TIGs with a high delay step are preferable, allowing customized particle physics instrumentation and other utilities to be implemented on the same FPGA device. A hybrid counting method for designing TIGs with both high resolution and wide range is presented in this paper. The combination of two different counting methods realizing an integratable TIG is described in detail. A specially designed multiplexer for tap selection is introduced in particular. The special structure of the multiplexer is devised to minimize the differing additional delays caused by the unpredictable routings from different taps to the output. A Kintex-7 FPGA is used for the hybrid counting-based implementation of a TIG, providing a resolution up to 11 ps and an interval range up to 8 s.

  13. Design of time interval generator based on hybrid counting method

    International Nuclear Information System (INIS)

    Yao, Yuan; Wang, Zhaoqi; Lu, Houbing; Chen, Lian; Jin, Ge

    2016-01-01

    Time Interval Generators (TIGs) are frequently used for the characterization or timing operations of instruments in particle physics experiments. Though some “off-the-shelf” TIGs can be employed, the need for a custom test or control system makes a TIG implemented in a programmable device desirable. Nowadays, the feasibility of using Field Programmable Gate Arrays (FPGAs) to implement particle physics instrumentation has been validated in the design of Time-to-Digital Converters (TDCs) for precise time measurement. The FPGA-TDC technique is based on Tapped Delay Line (TDL) architectures, whose delay cells are down to a few tens of picoseconds. In this case, FPGA-based TIGs with a high delay step are preferable, allowing customized particle physics instrumentation and other utilities to be implemented on the same FPGA device. A hybrid counting method for designing TIGs with both high resolution and wide range is presented in this paper. The combination of two different counting methods realizing an integratable TIG is described in detail. A specially designed multiplexer for tap selection is introduced in particular. The special structure of the multiplexer is devised to minimize the differing additional delays caused by the unpredictable routings from different taps to the output. A Kintex-7 FPGA is used for the hybrid counting-based implementation of a TIG, providing a resolution up to 11 ps and an interval range up to 8 s.

  14. A quick method to calculate QTL confidence interval

    Indian Academy of Sciences (India)

    2011-08-19

    … experimental design and analysis to reveal the real molecular nature of the … bootstrap samples form the bootstrap distribution of QTL location. The 2.5 and … relative probability to harbour a true QTL, hence the x-LOD rule is not stable … Darvasi A. and Soller M. 1997 A simple method to calculate resolving power …

  15. The unified method: III. Nonlinearizable problems on the interval

    International Nuclear Information System (INIS)

    Lenells, J; Fokas, A S

    2012-01-01

    Boundary value problems for integrable nonlinear evolution PDEs formulated on the finite interval can be analyzed by the unified method introduced by one of the authors and extensively used in the literature. The implementation of this general method to this particular class of problems yields the solution in terms of the unique solution of a matrix Riemann–Hilbert problem formulated in the complex k-plane (the Fourier plane), which has a jump matrix with explicit (x, t)-dependence involving six scalar functions of k, called the spectral functions. Two of these functions depend on the initial data, whereas the other four depend on all boundary values. The most difficult step of the new method is the characterization of the latter four spectral functions in terms of the given initial and boundary data, i.e. the elimination of the unknown boundary values. Here, we present an effective characterization of the spectral functions in terms of the given initial and boundary data. We present two different characterizations of this problem. One is based on the analysis of the so-called global relation, on the analysis of the equations obtained from the global relation via certain transformations leaving the dispersion relation of the associated linearized PDE invariant and on the computation of the large k asymptotics of the eigenfunctions defining the relevant spectral functions. The other is based on the analysis of the global relation and on the introduction of the so-called Gelfand–Levitan–Marchenko representations of the eigenfunctions defining the relevant spectral functions. We also show that these two different characterizations are equivalent and that in the limit when the length of the interval tends to infinity, the relevant formulas reduce to the analogous formulas obtained recently for the case of boundary value problems formulated on the half-line. (paper)

  16. Adaptive Kalman Filter Based on Adjustable Sampling Interval in Burst Detection for Water Distribution System

    Directory of Open Access Journals (Sweden)

    Doo Yong Choi

    2016-04-01

    Rapid detection of bursts and leaks in water distribution systems (WDSs) can reduce the social and economic costs incurred through direct loss of water into the ground, additional energy demand for water supply, and service interruptions. Many real-time burst detection models have been developed in accordance with the use of supervisory control and data acquisition (SCADA) systems and the establishment of district meter areas (DMAs). Nonetheless, no consideration has been given to how frequently a flow meter measures and transmits data for predicting breaks and leaks in pipes. This paper analyzes the effect of sampling interval when an adaptive Kalman filter is used for detecting bursts in a WDS. A new sampling algorithm is presented that adjusts the sampling interval depending on the normalized residuals of flow after filtering. The proposed algorithm is applied to a virtual sinusoidal flow curve and real DMA flow data obtained from Jeongeup city in South Korea. The simulation results prove that the self-adjusting algorithm for determining the sampling interval is efficient and maintains reasonable accuracy in burst detection. The proposed sampling method has significant potential for water utilities to build and operate real-time DMA monitoring systems combined with smart customer metering systems.
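
    An illustrative toy version of the adjustable-interval idea (not the paper's algorithm; the flow signal, noise levels, and adaptation rule are all invented): a scalar Kalman filter tracks district inflow, shortens the sampling interval when the normalized innovation grows, and relaxes it when the flow is well predicted.

```python
import numpy as np

rng = np.random.default_rng(7)
q, r = 0.5, 4.0              # process / measurement noise variances (invented)
x_hat, P = 100.0, 10.0       # initial state estimate and its variance
dt, t = 5.0, 0.0             # sampling interval (min) and simulation clock

def true_flow(t):            # synthetic inflow with a burst at t = 300 min
    return 100.0 + 10.0 * np.sin(t / 60.0) + (30.0 if t > 300.0 else 0.0)

while t < 600.0:
    t += dt
    z = true_flow(t) + rng.normal(0.0, np.sqrt(r))   # new meter reading
    P += q * dt                                      # predict (random walk)
    S = P + r                                        # innovation variance
    resid = (z - x_hat) / np.sqrt(S)                 # normalized innovation
    K = P / S                                        # Kalman gain
    x_hat += K * (z - x_hat)                         # update state
    P *= 1.0 - K                                     # update variance
    if abs(resid) > 3.0:
        print(f"t = {t:5.1f} min: possible burst (residual {resid:+.1f})")
    # Adapt the sampling interval to the innovation magnitude.
    dt = 1.0 if abs(resid) > 2.0 else min(dt * 1.5, 15.0)
```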

  17. Interval MULTIMOORA method with target values of attributes based on interval distance and preference degree: biomaterials selection

    Science.gov (United States)

    Hafezalkotob, Arian; Hafezalkotob, Ashkan

    2017-06-01

    A target-based MADM method covers beneficial and non-beneficial attributes besides target values for some attributes. Such techniques are considered the comprehensive form of MADM approaches. Target-based MADM methods can also be used in traditional decision-making problems in which only beneficial and non-beneficial attributes exist. In many practical selection problems, some attributes have given target values. In some such problems, the values of the decision matrix and the target-based attributes can be provided as intervals. Some target-based decision-making methods have recently been developed; however, a research gap exists in the area of MADM techniques with target-based attributes under uncertainty of information. We extend the MULTIMOORA method for solving practical material selection problems in which material properties and their target values are given as interval numbers. We employ various concepts of interval computations to reduce degeneration of uncertain data. In this regard, we use interval arithmetic and introduce an innovative formula for the interval distance of interval numbers to create an interval target-based normalization technique. Furthermore, we use a pairwise preference matrix based on the concept of the degree of preference of interval numbers to calculate the maximum, minimum, and ranking of these numbers. Two decision-making problems regarding biomaterials selection for hip and knee prostheses are discussed. Preference degree-based ranking lists for the subordinate parts of the extended MULTIMOORA method are generated by calculating the relative degrees of preference for the arranged assessment values of the biomaterials. The resultant rankings for the problem are compared with the outcomes of other target-based models in the literature.

  18. Time Interval to Initiation of Contraceptive Methods Following ...

    African Journals Online (AJOL)

    2018-01-30

    … interval between a woman's last childbirth and the initiation of contraception. Materials and … DF = degree of freedom; χ2 = chi-square test … practice of modern contraception among single women in a rural and urban …

  19. Bootstrap confidence intervals for three-way methods

    NARCIS (Netherlands)

    Kiers, Henk A.L.

    Results from exploratory three-way analysis techniques such as CANDECOMP/PARAFAC and Tucker3 analysis are usually presented without giving insight into uncertainties due to sampling. Here a bootstrap procedure is proposed that produces percentile intervals for all output parameters. Special…

  20. Measurements of the charged particle multiplicity distribution in restricted rapidity intervals

    CERN Document Server

    Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; Pepé-Altarelli, M; Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; Schune, M H; Simion, S; Veillet, J J; 
Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; Triggiani, G; Vannini, C; Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, Z; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G

    1995-01-01

    Charged particle multiplicity distributions have been measured with the ALEPH detector in restricted rapidity intervals |Y| ≤ 0.5, 1.0, 1.5, 2.0 along the thrust axis and also without restriction on rapidity. The distribution for the full range can be parametrized by a log-normal distribution. For smaller windows one finds a more complicated structure, which is understood to arise from perturbative effects. The negative-binomial distribution fails to describe the data both with and without the restriction on rapidity. The JETSET model is found to describe all aspects of the data, while the width predicted by HERWIG is in significant disagreement.

  1. Resampling methods in Microsoft Excel® for estimating reference intervals.

    Science.gov (United States)

    Theodorsson, Elvar

    2015-01-01

    Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes native functions which lend themselves well to this purpose, including recommended interpolation procedures for estimating the 2.5 and 97.5 percentiles. The purpose of this paper is to introduce the reader to resampling estimation techniques in general, and to using Microsoft Excel® 2010 for estimating reference intervals in particular. Parametric methods are preferable to resampling methods when the distribution of observations in the reference sample is Gaussian or can be transformed to that distribution, even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and the number of reference individuals and corresponding samples is on the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples.

  2. Charged particle multiplicity distributions in restricted rapidity intervals in Z0 hadronic decays

    International Nuclear Information System (INIS)

    Uvarov, V.

    1991-01-01

    The multiplicity distributions of charged particles in restricted rapidity intervals in Z0 hadronic decays measured by the DELPHI detector are presented. The data reveal a shoulder structure, best visible for intervals of intermediate size, i.e. for rapidity limits around ±1.5. The whole set of distributions including the shoulder structure is reproduced by the Lund Parton Shower model. The structure is found to be due to important contributions from 3- and 4-jet events with a hard gluon jet. A different model, based on the concept of independently produced groups of particles, 'clans', fluctuating both in number per event and particle content per clan, has also been used to analyse the present data. The results show that for each interval of rapidity the average number of clans per event is approximately the same as at lower energies. (author) 11 refs., 3 figs

  3. A general method for enclosing solutions of interval linear equations

    Czech Academy of Sciences Publication Activity Database

    Rohn, Jiří

    2012-01-01

    Vol. 6, No. 4 (2012), pp. 709-717. ISSN 1862-4472. R&D Projects: GA ČR GA201/09/1957; GA ČR GC201/08/J020. Institutional research plan: CEZ:AV0Z10300504. Keywords: interval linear equations * solution set * enclosure * absolute value inequality. Subject RIV: BA - General Mathematics. Impact factor: 1.654, year: 2012

  4. Multifractal distribution of spike intervals for two oscillators coupled by unreliable pulses

    International Nuclear Information System (INIS)

    Kestler, Johannes; Kinzel, Wolfgang

    2006-01-01

    Two neurons coupled by unreliable synapses are modelled by leaky integrate-and-fire neurons and stochastic on-off synapses. The dynamics is mapped to an iterated function system. Numerical calculations yield a multifractal distribution of interspike intervals. The covering, information and correlation dimensions are calculated as a function of synaptic strength and transmission probability. (letter to the editor)

  5. A text zero-watermarking method based on keyword dense interval

    Science.gov (United States)

    Yang, Fan; Zhu, Yuesheng; Jiang, Yifeng; Qing, Yin

    2017-07-01

    Digital watermarking has been recognized as a useful technology for the copyright protection and authentication of digital information. However, previous methods rarely focused on the key content of the digital carrier. The idea of protecting the key content is more targeted and can be applied to different kinds of digital information, including text, image and video. In this paper, we take text as the research object and propose a text zero-watermarking method that uses the keyword dense interval (KDI) as the key content. First, we construct the zero-watermarking model by introducing the concept of the KDI and giving a method for KDI extraction. Second, we design the detection model, which includes secondary generation of the zero-watermark and a method for computing the similarity of keyword distributions. In addition, experiments are carried out, and the results show that the proposed method gives better performance than other available methods, especially under sentence-transformation and synonym-substitution attacks.

  6. Monitoring molecular interactions using photon arrival-time interval distribution analysis

    Science.gov (United States)

    Laurence, Ted A [Livermore, CA]; Weiss, Shimon [Los Angeles, CA]

    2009-10-06

    A method for analyzing/monitoring the properties of species that are labeled with fluorophores. A detector is used to detect photons emitted from species that are labeled with one or more fluorophores and located in a confocal detection volume. The arrival time of each of the photons is determined. The interval of time between various photon pairs is then determined to provide photon pair intervals. The number of photons that have arrival times within the photon pair intervals is also determined. The photon pair intervals are then used in combination with the corresponding counts of intervening photons to analyze properties and interactions of the molecules including brightness, concentration, coincidence and transit time. The method can be used for analyzing single photon streams and multiple photon streams.
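
    A hedged sketch of the bookkeeping at the core of the method: for photon pairs separated by a chosen number of detections, measure the time interval and count the intervening photons. The variable names are illustrative and not taken from the patent; the downstream brightness/coincidence analysis is not reproduced.

      import numpy as np

      rng = np.random.default_rng(0)
      # Toy photon stream: 10,000 arrival times (seconds) in one second of data.
      arrival_times = np.sort(rng.uniform(0.0, 1.0, size=10_000))

      def pair_intervals(times, lag):
          """Intervals between photon i and photon i+lag; lag-1 photons intervene."""
          intervals = times[lag:] - times[:-lag]
          return intervals, lag - 1

      for lag in (1, 2, 5):
          dt, k = pair_intervals(arrival_times, lag)
          print(f"lag={lag}: mean interval {dt.mean():.2e} s, {k} intervening photons")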

  7. A New Method Based on TOPSIS and Response Surface Method for MCDM Problems with Interval Numbers

    Directory of Open Access Journals (Sweden)

    Peng Wang

    2015-01-01

    As the preferences of the decision maker (DM) are often ambiguous, we face many multiple criteria decision-making (MCDM) problems with interval numbers in daily life. Although several methods have been applied to solve this sort of problem, they are often complex to comprehend and sometimes difficult to implement, and their calculation processes are inefficient when a new alternative is added or removed. In view of this weakness, this paper presents a new method based on TOPSIS and the response surface method (RSM) for MCDM problems with interval numbers, RSM-TOPSIS-IN for short. The key point of this approach is the application of the deviation degree matrix, which ensures that the DM can obtain a simple response surface (RS) model to rank the alternatives. In order to demonstrate the feasibility and effectiveness of the proposed method, three illustrative MCDM problems with interval numbers are analysed: (a) selection of an investment program, (b) selection of the right partner, and (c) assessment of road transport technologies. The comparison of ranking results shows that the RSM-TOPSIS-IN method is in good agreement with those derived by earlier researchers, indicating that it is suitable for solving MCDM problems with interval numbers.

  8. Preventive maintenance and the interval availability distribution of an unreliable production system

    International Nuclear Information System (INIS)

    Dijkhuizen, G. van; Heijden, M. van der

    1999-01-01

    Traditionally, the optimal preventive maintenance interval for an unreliable production system has been determined by maximizing its limiting availability. Nowadays, it is widely recognized that this performance measure does not always provide relevant information for practical purposes. This is particularly true for order-driven manufacturing systems, in which due date performance has become a more important, and even a competitive factor. Under these circumstances, the so-called interval availability distribution is often seen as a more appropriate performance measure. Surprisingly enough, the relation between preventive maintenance and interval availability has received little attention in the existing literature. In this article, a series of mathematical models and optimization techniques is presented, with which the optimal preventive maintenance interval can be determined from an interval availability point of view, rather than from a limiting availability perspective. Computational results for a class of representative test problems indicate that significant improvements of up to 30% in the guaranteed interval availability can be obtained, by increasing preventive maintenance frequencies somewhere between 10 and 70%.

  9. Method of forecasting power distribution

    International Nuclear Information System (INIS)

    Kaneto, Kunikazu.

    1981-01-01

    Purpose: To obtain forecasting results of high accuracy by reflecting the signals from neutron detectors disposed in the reactor core in the forecasting results. Method: An on-line computer transfers to a simulator process data, such as coolant temperatures and flow rates in each section, and various measurement signals, such as control rod positions, from the nuclear reactor. The simulator calculates the present power distribution before the control operation. The signals from the neutron detectors at each position in the reactor core are estimated from the power distribution, and errors are determined from the estimated and measured values to obtain a smooth error distribution in the axial direction. Then, input conditions at the time to be forecast are set by a data setter. The simulator calculates the forecast power distribution after the control operation based on the set conditions. The forecast power distribution is corrected using the error distribution. (Yoshino, Y.)

  10. Resampling Approach for Determination of the Method for Reference Interval Calculation in Clinical Laboratory Practice

    Science.gov (United States)

    Pavlov, Igor Y.; Wilson, Andrew R.; Delgado, Julio C.

    2010-01-01

    Reference intervals (RI) play a key role in clinical interpretation of laboratory test results. Numerous articles are devoted to analyzing and discussing various methods of RI determination. The two most widely used approaches are the parametric method, which assumes data normality, and a nonparametric, rank-based procedure. The decision about which method to use is usually made arbitrarily. The goal of this study was to demonstrate that using a resampling approach for the comparison of RI determination techniques could help researchers select the right procedure. Three methods of RI calculation—parametric, transformed parametric, and quantile-based bootstrapping—were applied to multiple random samples drawn from 81 values of complement factor B observations and from a computer-simulated normally distributed population. It was shown that differences in RI between legitimate methods could be up to 20% or even more. The transformed parametric method was found to be the best method for the calculation of RI of non-normally distributed factor B estimations, producing an unbiased RI and the lowest confidence limits and interquartile ranges. For a simulated Gaussian population, parametric calculations, as expected, were the best; quantile-based bootstrapping produced biased results at low sample sizes, and the transformed parametric method generated heavily biased RI. The resampling approach could help compare different RI calculation methods. An algorithm showing a resampling procedure for choosing the appropriate method for RI calculations is included. PMID:20554803

  11. Malmquist Productivity Index by Extended VIKOR Method Using Interval Numbers

    OpenAIRE

    Fallah, Mohammad; Mohajeri, Amir; Najafi, Esmaeil

    2013-01-01

    The VIKOR method was developed for multicriteria optimization of complex systems. It determines the compromise ranking list and the compromise solution obtained with the given weights. This method focuses on ranking and selecting from a set of alternatives in the presence of conflicting criteria. Here, the VIKOR method is used for two times t and t+1. In order to calculate the progress or regression via Malmquist productivity index, the positive and negative ideals at ...

  12. Binomial Distribution Sample Confidence Intervals Estimation 7. Absolute Risk Reduction and ARR-like Expressions

    Directory of Open Access Journals (Sweden)

    Andrei ACHIMAŞ CADARIU

    2004-08-01

    Assessment of a controlled clinical trial requires the interpretation of key parameters such as the control event rate, experimental event rate, relative risk, absolute risk reduction, relative risk reduction, and number needed to treat when the effects of the treatment are dichotomous variables. Defined as the difference in the event rate between treatment and control groups, the absolute risk reduction is the parameter that allows computing the number needed to treat. The absolute risk reduction is computed when the experimental treatment reduces the risk of an undesirable outcome/event. In the medical literature, when the absolute risk reduction is reported with its confidence intervals, the method used is the asymptotic one, even though it is well known that it may be inadequate. The aim of this paper is to introduce and assess nine methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions. Computer implementations of the methods use the PHP language. The comparison of methods uses the experimental errors, the standard deviations, and the deviation relative to the imposed significance level for specified sample sizes. Six methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions were assessed using random binomial variables and random sample sizes. The experiments show that the ADAC and ADAC1 methods obtain the best overall performance in computing confidence intervals for the absolute risk reduction.
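
    For concreteness, a minimal sketch of the asymptotic (Wald) interval that the paper criticizes as potentially inadequate; the ADAC and ADAC1 variants assessed in the paper are not reproduced here, and the example counts are invented.

      from math import sqrt

      def arr_wald_ci(events_ctrl, n_ctrl, events_exp, n_exp, z=1.96):
          cer = events_ctrl / n_ctrl   # control event rate
          eer = events_exp / n_exp     # experimental event rate
          arr = cer - eer              # absolute risk reduction
          se = sqrt(cer * (1 - cer) / n_ctrl + eer * (1 - eer) / n_exp)
          return arr, (arr - z * se, arr + z * se)

      arr, ci = arr_wald_ci(events_ctrl=30, n_ctrl=100, events_exp=18, n_exp=100)
      print(f"ARR = {arr:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")  # NNT = 1/ARR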

  13. Effect of a data buffer on the recorded distribution of time intervals for random events

    Energy Technology Data Exchange (ETDEWEB)

    Barton, J C [Polytechnic of North London (UK)]

    1976-03-15

    The use of a data buffer enables the distribution of the time intervals between events to be studied for times less than the recording-system dead-time, but the usual negative exponential distribution for random events has to be modified. The theory for this effect is developed for an n-stage buffer followed by an asynchronous recorder. Results are evaluated for values of n from 1 to 5. In the language of queueing theory, the system studied is of type M/D/1/n+1, i.e. with constant service time and a finite number of places.
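
    The modified interval distribution is easy to reproduce by simulation. A sketch under stated assumptions: Poisson events feed an n-place buffer ahead of a recorder with constant dead time, the time-stamps of accepted events are eventually written out, and events arriving at a full buffer are lost; the paper's analytical M/D/1/n+1 treatment is not reproduced.

      import numpy as np

      rng = np.random.default_rng(42)

      def accepted_intervals(rate, dead_time, n_buffer, n_events):
          """Intervals between the time-stamps that survive the buffer + recorder."""
          arrivals = np.cumsum(rng.exponential(1.0 / rate, size=n_events))
          pending, accepted, recorder_free = [], [], 0.0
          for t in arrivals:
              # The recorder drains the buffer, one item per dead_time, when free.
              while pending and recorder_free <= t:
                  recorder_free = max(pending.pop(0), recorder_free) + dead_time
              if len(pending) < n_buffer:
                  pending.append(t)
                  accepted.append(t)    # time-stamp preserved and later written
              # else: buffer full, the event is lost
          return np.diff(accepted)

      gaps = accepted_intervals(rate=5e3, dead_time=1e-4, n_buffer=3, n_events=200_000)
      expected = 1 - np.exp(-5e3 * 1e-4)    # lossless Poisson prediction
      print(f"share below dead time: {np.mean(gaps < 1e-4):.3f} "
            f"(pure exponential: {expected:.3f})")

    Intervals shorter than the dead time survive, which is the point of the buffer, but their relative frequency deviates from the pure exponential law because of losses at high buffer occupancy.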

  14. The Distribution of the Interval between Events of a Cox Process with Shot Noise Intensity

    Directory of Open Access Journals (Sweden)

    Angelos Dassios

    2008-01-01

    Applying the theory of piecewise deterministic Markov processes, the probability generating function of a Cox process incorporating a shot noise process as the claim intensity is obtained. We also derive the Laplace transform of the distribution of the shot noise process at claim jump times, using the stationarity assumption for the shot noise process at all times. Based on this Laplace transform and on the probability generating function of a Cox process with shot noise intensity, we obtain the distribution of the interval between events of a Cox process with shot noise intensity for insurance claims and its moments, that is, mean and variance.

  15. Robust stability analysis for Markovian jumping interval neural networks with discrete and distributed time-varying delays

    International Nuclear Information System (INIS)

    Balasubramaniam, P.; Lakshmanan, S.; Manivannan, A.

    2012-01-01

    Highlights: ► Robust stability analysis for Markovian jumping interval neural networks is considered. ► Both linear fractional and interval uncertainties are considered. ► A new LKF is constructed with triple integral terms. ► The MATLAB LMI control toolbox is used to validate the theoretical results. ► Numerical examples are given to illustrate the effectiveness of the proposed method. - Abstract: This paper investigates robust stability analysis for Markovian jumping interval neural networks with discrete and distributed time-varying delays. The parameter uncertainties are assumed to be bounded in given compact sets. The delay is assumed to be time-varying and to belong to a given interval, which means that the lower and upper bounds of the interval time-varying delays are available. Based on a new Lyapunov–Krasovskii functional (LKF), some inequality techniques and stochastic stability theory, new delay-dependent stability criteria have been obtained in terms of linear matrix inequalities (LMIs). Finally, two numerical examples are given to illustrate the reduced conservatism and the effectiveness of our theoretical results.

  16. An Extended TOPSIS Method for Multiple Attribute Decision Making based on Interval Neutrosophic Uncertain Linguistic Variables

    Directory of Open Access Journals (Sweden)

    Said Broumi

    2015-03-01

    The interval neutrosophic uncertain linguistic variables can easily express indeterminate and inconsistent information in the real world, and TOPSIS is a very effective decision making method with increasingly extensive applications. In this paper, we extend the TOPSIS method to deal with interval neutrosophic uncertain linguistic information, and propose an extended TOPSIS method to solve multiple attribute decision making problems in which the attribute values take the form of interval neutrosophic uncertain linguistic variables and the attribute weights are unknown. Firstly, the operational rules and properties of the interval neutrosophic variables are introduced. Then the distance between two interval neutrosophic uncertain linguistic variables is proposed, the attribute weights are calculated by the maximizing deviation method, and the closeness coefficients to the ideal solution are computed for each alternative. Finally, an illustrative example is given to illustrate the decision making steps and the effectiveness of the proposed method.

  17. Non-Gaussian distributions of melodic intervals in music: The Lévy-stable approximation

    Science.gov (United States)

    Niklasson, Gunnar A.; Niklasson, Maria H.

    2015-11-01

    The analysis of structural patterns in music is of interest in order to increase our fundamental understanding of music, as well as for devising algorithms for computer-generated music, so-called algorithmic composition. Musical melodies can be analyzed in terms of a “music walk” between the pitches of successive tones in a notescript, in analogy with the “random walk” model commonly used in physics. We find that the distribution of melodic intervals between tones can be approximated with a Lévy-stable distribution. Since music also exhibits self-affine scaling, we propose that the “music walk” should be modelled as a Lévy motion. We find that the Lévy motion model captures basic structural patterns in classical as well as in folk music.

  18. Interval-Censored Time-to-Event Data Methods and Applications

    CERN Document Server

    Chen, Ding-Geng

    2012-01-01

    Interval-Censored Time-to-Event Data: Methods and Applications collects the most recent techniques, models, and computational tools for interval-censored time-to-event data. Top biostatisticians from academia, the biopharmaceutical industry, and government agencies discuss how these advances are impacting clinical trials and biomedical research. Divided into three parts, the book begins with an overview of interval-censored data modeling, including nonparametric estimation, survival functions, regression analysis, multivariate data analysis, competing risks analysis, and other models for interval-censored data.

  19. Distributed optimization system and method

    Science.gov (United States)

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2003-06-10

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agents can be physical agents, such as robots, or software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  20. Comparison of the methods for determination of calibration and verification intervals of measuring devices

    Directory of Open Access Journals (Sweden)

    Toteva Pavlina

    2017-01-01

    The paper presents different methods for determining and optimising the verification intervals of technical devices for monitoring and measurement, based on the requirements of some widely used international standards, e.g. ISO 9001, ISO/IEC 17020, ISO/IEC 17025, etc., maintained by various organizations using measuring devices in practice. A comparative analysis of the reviewed methods is conducted in terms of opportunities for assessing the adequacy of the interval(s) for calibration of measuring devices and their optimisation as accepted by an organization, i.e. an extension or reduction depending on the obtained results. The advantages and disadvantages of the reviewed methods are discussed, and recommendations for their applicability are provided.

  1. Solving the interval type-2 fuzzy polynomial equation using the ranking method

    Science.gov (United States)

    Rahman, Nurhakimah Ab.; Abdullah, Lazim

    2014-07-01

    Polynomial equations with trapezoidal and triangular fuzzy numbers have attracted some interest among researchers in mathematics, engineering and the social sciences. Several methods have been developed to solve these equations. In this study we introduce the interval type-2 fuzzy polynomial equation and solve it using the ranking method of fuzzy numbers. The ranking method concept was first proposed to find the real roots of fuzzy polynomial equations. Here, the ranking method is applied to find the real roots of the interval type-2 fuzzy polynomial equation. We transform the interval type-2 fuzzy polynomial equation into a system of crisp polynomial equations. This transformation is performed using the ranking method of fuzzy numbers based on three parameters, namely value, ambiguity and fuzziness. Finally, we illustrate our approach with a numerical example.

  2. Two sample Bayesian prediction intervals for order statistics based on the inverse exponential-type distributions using right censored sample

    Directory of Open Access Journals (Sweden)

    M.M. Mohie El-Din

    2011-10-01

    In this paper, two-sample Bayesian prediction intervals for order statistics (OS) are obtained. The prediction is based on a certain class of inverse exponential-type distributions using a right censored sample. A general class of prior density functions is used, and the predictive cumulative function is obtained in the two-sample case. The class of inverse exponential-type distributions includes several important distributions such as the inverse Weibull distribution, the inverse Burr distribution, the loglogistic distribution, the inverse Pareto distribution and the inverse paralogistic distribution. Special cases of the inverse Weibull model, such as the inverse exponential model and the inverse Rayleigh model, are considered.

  3. A New Uncertain Analysis Method for the Prediction of Acoustic Field with Random and Interval Parameters

    Directory of Open Access Journals (Sweden)

    Mingjie Wang

    2016-01-01

    For the frequency response analysis of acoustic fields with random and interval parameters, a nonintrusive uncertain analysis method named the Polynomial Chaos Response Surface (PCRS) method is proposed. In the proposed method, the polynomial chaos expansion method is employed to deal with the random parameters, and the response surface method is used to handle the interval parameters. The PCRS method does not require efforts to modify model equations due to its nonintrusive characteristic. By means of the PCRS method combined with the existing interval analysis method, the lower and upper bounds of the expectation, variance, and probability density function of the frequency response can be efficiently evaluated. Two numerical examples are conducted to validate the accuracy and efficiency of the approach. The results show that the PCRS method is more efficient than the direct Monte Carlo simulation (MCS) method based on the original numerical model, without causing significant loss of accuracy.

  4. Exponential operations and aggregation operators of interval neutrosophic sets and their decision making methods.

    Science.gov (United States)

    Ye, Jun

    2016-01-01

    An interval neutrosophic set (INS) is a subclass of a neutrosophic set and a generalization of an interval-valued intuitionistic fuzzy set, and then the characteristics of INS are independently described by the interval numbers of its truth-membership, indeterminacy-membership, and falsity-membership degrees. However, the exponential parameters (weights) of all the existing exponential operational laws of INSs and the corresponding exponential aggregation operators are crisp values in interval neutrosophic decision making problems. As a supplement, this paper firstly introduces new exponential operational laws of INSs, where the bases are crisp values or interval numbers and the exponents are interval neutrosophic numbers (INNs), which are basic elements in INSs. Then, we propose an interval neutrosophic weighted exponential aggregation (INWEA) operator and a dual interval neutrosophic weighted exponential aggregation (DINWEA) operator based on these exponential operational laws and introduce comparative methods based on cosine measure functions for INNs and dual INNs. Further, we develop decision-making methods based on the INWEA and DINWEA operators. Finally, a practical example on the selecting problem of global suppliers is provided to illustrate the applicability and rationality of the proposed methods.

  5. A nonparametric statistical method for determination of a confidence interval for the mean of a set of results obtained in a laboratory intercomparison

    International Nuclear Information System (INIS)

    Veglia, A.

    1981-08-01

    In cases where sets of data are obviously not normally distributed, the application of a nonparametric method for the estimation of a confidence interval for the mean is more suitable than other methods, because such a method requires few assumptions about the population of data. A two-step statistical method is proposed which can be applied to any set of analytical results: elimination of outliers by a nonparametric method based on Tchebycheff's inequality, and determination of a confidence interval for the mean by a nonparametric method based on the binomial distribution. The method is appropriate only for samples of size n ≥ 10.
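
    The report itself is the reference for the exact two-step recipe; as an illustration of the binomial-distribution argument it relies on, here is the standard distribution-free confidence interval for the median built from order statistics (the report's construction for the mean differs in detail).

      import numpy as np
      from scipy.stats import binom

      def median_ci(values, confidence=0.95):
          """Order-statistic CI (x_(j), x_(n+1-j)) with exact binomial coverage."""
          x = np.sort(np.asarray(values, dtype=float))
          n = x.size
          j = 1
          # Push j inward while the tighter interval still has enough coverage.
          while j + 1 <= n // 2 and \
                binom.cdf(n - j - 1, n, 0.5) - binom.cdf(j, n, 0.5) >= confidence:
              j += 1
          coverage = binom.cdf(n - j, n, 0.5) - binom.cdf(j - 1, n, 0.5)
          return x[j - 1], x[n - j], coverage

      rng = np.random.default_rng(7)
      lo, hi, cov = median_ci(rng.exponential(1.0, size=25))   # n >= 10, as required
      print(f"median in ({lo:.2f}, {hi:.2f}) with exact coverage {cov:.3f}")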

  6. The time interval distribution of sand–dust storms in theory: testing with observational data for Yanchi, China

    International Nuclear Information System (INIS)

    Liu, Guoliang; Zhang, Feng; Hao, Lizhen

    2012-01-01

    We previously introduced a time record model for studying the duration of sand–dust storms. In the model, X is the normalized wind speed and Xr is the normalized wind speed threshold for a sand–dust storm. X is represented by a random signal with a normal Gaussian distribution, and storms occur when X ≥ Xr. From this model, the time interval distribution N = A exp(−bt) can be deduced, wherein N is the number of time intervals with length greater than t, A and b are constants, and b is related to Xr. In this study, sand–dust storm data recorded in spring at the Yanchi meteorological station in China were analysed to verify whether the time interval distribution of the sand–dust storms agrees with this relation. We found that the distribution of the time intervals between successive sand–dust storms in April agrees well with the exponential equation. However, the interval distribution for the data covering the entire spring period displayed a better fit to the Weibull equation and depended on the variation of the sand–dust storm threshold wind speed. (paper)
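
    Testing the N = A·exp(−bt) law against data amounts to a linear fit of log N versus t, where N(t) counts the intervals longer than t. A small sketch with synthetic exponential intervals standing in for the Yanchi records:

      import numpy as np

      rng = np.random.default_rng(3)
      intervals = rng.exponential(scale=5.0, size=400)   # toy gaps between storms, days

      t_grid = np.linspace(0.0, 20.0, 40)
      counts = np.array([(intervals > t).sum() for t in t_grid])
      mask = counts > 0                                  # log requires positive counts

      slope, intercept = np.polyfit(t_grid[mask], np.log(counts[mask]), 1)
      print(f"A ≈ {np.exp(intercept):.1f}, b ≈ {-slope:.3f} per day (true value 0.2)")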

  7. VIKOR Method for Interval Neutrosophic Multiple Attribute Group Decision-Making

    Directory of Open Access Journals (Sweden)

    Yu-Han Huang

    2017-11-01

    In this paper, we extend the VIKOR (VIsekriterijumska optimizacija i KOmpromisno Resenje) method to multiple attribute group decision-making (MAGDM) with interval neutrosophic numbers (INNs). Firstly, the basic concepts of INNs are briefly presented. The method first aggregates all individual decision-makers' assessment information based on an interval neutrosophic weighted averaging (INWA) operator, and then employs the extended classical VIKOR method to solve MAGDM problems with INNs. The validity and stability of this method are verified by example analysis and sensitivity analysis, and its superiority is illustrated by a comparison with existing methods.

  8. Synchronization of Markovian jumping stochastic complex networks with distributed time delays and probabilistic interval discrete time-varying delays

    International Nuclear Information System (INIS)

    Li Hongjie; Yue Dong

    2010-01-01

    The paper investigates the synchronization stability problem for a class of complex dynamical networks with Markovian jumping parameters and mixed time delays. The complex networks consist of m modes, and the networks switch from one mode to another according to a Markovian chain with known transition probability. The mixed time delays are composed of discrete and distributed delays; the discrete time delay is assumed to be random and its probability distribution is known a priori. In terms of the probability distribution of the delays, a new type of system model with probability-distribution-dependent parameter matrices is proposed. Based on stochastic analysis techniques and the properties of the Kronecker product, delay-dependent synchronization stability criteria in the mean square are derived in the form of linear matrix inequalities which can be readily solved by using the LMI toolbox in MATLAB; the solvability of the derived conditions depends not only on the size of the delay, but also on the probability of the delay taking values in some intervals. Finally, a numerical example is given to illustrate the feasibility and effectiveness of the proposed method.

  9. A modified hybrid uncertain analysis method for dynamic response field of the LSOAAC with random and interval parameters

    Science.gov (United States)

    Zi, Bin; Zhou, Bin

    2016-07-01

    For the prediction of the dynamic response field of the luffing system of an automobile crane (LSOAAC) with random and interval parameters, a hybrid uncertain model is introduced. In the hybrid uncertain model, the parameters with certain probability distribution are modeled as random variables, whereas the parameters with lower and upper bounds are modeled as interval variables instead of given precise values. Based on the hybrid uncertain model, the hybrid uncertain dynamic response equilibrium equation, in which different random and interval parameters are simultaneously included in input and output terms, is constructed. Then a modified hybrid uncertain analysis method (MHUAM) is proposed. In the MHUAM, based on the random interval perturbation method, the first-order Taylor series expansion and the first-order Neumann series, the dynamic response expression of the LSOAAC is developed. Moreover, the mathematical characteristics of the extrema of the bounds of the dynamic response are determined by the random interval moment method and a monotonic analysis technique. Compared with the hybrid Monte Carlo method (HMCM) and the interval perturbation method (IPM), numerical results show the feasibility and efficiency of the MHUAM for solving hybrid LSOAAC problems. The effects of different uncertain models and parameters on the LSOAAC response field are also investigated deeply, and numerical results indicate that the impact made by the randomness in the thrust of the luffing cylinder F is larger than that made by the gravity of the weight in suspension Q. In addition, the impact made by the uncertainty in the displacement between the lower end of the lifting arm and the luffing cylinder a is larger than that made by the length of the lifting arm L.

  10. Calculation Methods for Wallenius’ Noncentral Hypergeometric Distribution

    DEFF Research Database (Denmark)

    Fog, Agner

    2008-01-01

    Two different probability distributions are both known in the literature as "the" noncentral hypergeometric distribution. Wallenius' noncentral hypergeometric distribution can be described by an urn model without replacement with bias. Fisher's noncentral hypergeometric distribution is the conditional distribution of independent binomial variates given their sum. No reliable calculation method for Wallenius' noncentral hypergeometric distribution has hitherto been described in the literature. Several new methods for calculating probabilities from Wallenius' noncentral hypergeometric distribution are derived. Range of applicability, numerical problems, and efficiency are discussed for each method. Approximations to the mean and variance are also discussed. This distribution has important applications in models of biased sampling and in models of evolutionary systems.
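
    Because Wallenius' distribution is defined by a biased urn drawn without replacement, its probabilities can always be estimated by simulating the urn directly. The sketch below does exactly that; it is transparent but slow, which is why the paper's dedicated numerical methods matter.

      import numpy as np

      rng = np.random.default_rng(11)

      def wallenius_sample(m1, m2, n, omega):
          """Number of type-1 balls in n biased draws without replacement."""
          k1 = k2 = 0
          for _ in range(n):
              w1 = omega * (m1 - k1)   # remaining type-1 balls, each with weight omega
              w2 = 1.0 * (m2 - k2)     # remaining type-2 balls, weight 1
              if rng.random() < w1 / (w1 + w2):
                  k1 += 1
              else:
                  k2 += 1
          return k1

      draws = [wallenius_sample(m1=10, m2=15, n=12, omega=2.0) for _ in range(20_000)]
      values, freq = np.unique(draws, return_counts=True)
      print(dict(zip(values.tolist(), np.round(freq / len(draws), 3))))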

  11. The Interval-Valued Triangular Fuzzy Soft Set and Its Method of Dynamic Decision Making

    OpenAIRE

    Xiaoguo Chen; Hong Du; Yue Yang

    2014-01-01

    A concept of interval-valued triangular fuzzy soft set is presented, and some operations of “AND,” “OR,” intersection, union and complement, and so forth are defined. Then some relative properties are discussed and several conclusions are drawn. A dynamic decision making model is built based on the definition of interval-valued triangular fuzzy soft set, in which period weight is determined by the exponential decay method. The arithmetic weighted average operator of interval-valued triangular...

  12. In-Hospital Basic Life Support: Major Differences in Duration, Retraining Intervals, and Training Methods - A Danish Nationwide Study

    DEFF Research Database (Denmark)

    Rasmussen, Ditte K; Glerup Lauridsen, Kasper; Staerk, Mathilde

    2017-01-01

    Introduction: High-quality chest compressions and early defibrillation are essential to improve survival following in-hospital cardiac arrest. Efficient training in basic life support (BLS) for clinical staff is therefore important. This study aimed to investigate duration, training methods and retraining intervals for BLS training of clinical staff in Danish hospitals. Methods: We included all public, somatic hospitals in Denmark with a cardiac arrest team. Online questionnaires were distributed to resuscitation officers in each hospital. Questionnaires inquired information on: A) Course duration and retraining interval, and B) Training methods and setting. Results: In total, 44 hospitals replied (response rate: 96%). BLS training for clinical staff was conducted in 41 hospitals (93%). Median (Q1;Q3) course duration was 1.5 (1;2.5) hours. Retraining was conducted every year (17%), every second year (56...

  13. Global Robust Stability of Switched Interval Neural Networks with Discrete and Distributed Time-Varying Delays of Neural Type

    Directory of Open Access Journals (Sweden)

    Huaiqin Wu

    2012-01-01

    By combining the theories of switched systems and interval neural networks, a mathematical model of switched interval neural networks with discrete and distributed time-varying delays of neural type is presented. A set of interval parameter uncertainty neural networks with discrete and distributed time-varying delays of neural type are used as the individual subsystems, and an arbitrary switching rule is assumed to coordinate the switching between these networks. By applying the augmented Lyapunov-Krasovskii functional approach and linear matrix inequality (LMI) techniques, a delay-dependent criterion is achieved to ensure that such switched interval neural networks are globally asymptotically robustly stable in terms of LMIs. The unknown gain matrix is determined by solving these delay-dependent LMIs. Finally, an illustrative example is given to demonstrate the validity of the theoretical results.

  14. Analytical method for determining the channel-temperature distribution

    International Nuclear Information System (INIS)

    Kurbatov, I.M.

    1992-01-01

    The distribution of the predicted temperature over the volume or cross section of the active zone is important for thermal calculations of reactors taking into account random deviations. This requires a laborious calculation which includes the following steps: separation of the nominal temperature field, within the temperature range, into intervals, in each of which the temperature is set equal to its average value in the interval; determination of the number of channels whose temperature falls within each interval; construction of the channel-temperature distribution in each interval in accordance with the weighted error function; and summation of the number of channels with the same temperature over all intervals. This procedure can be greatly simplified with the help of methods which eliminate numerous variant calculations when the nominal temperature field is "refined" up to the optimal field according to different criteria. In the present paper a universal analytical method is proposed for determining, by changing the coefficients in the channel-temperature distribution function, the form of this function that reflects all conditions of operation of the elements in the active zone. The problem is solved for the temperature of the coolant at the outlet from the reactor channels.

  15. No Additional Benefits of Block- Over Evenly-Distributed High-Intensity Interval Training within a Polarized Microcycle.

    Science.gov (United States)

    McGawley, Kerry; Juudas, Elisabeth; Kazior, Zuzanna; Ström, Kristoffer; Blomstrand, Eva; Hansson, Ola; Holmberg, Hans-Christer

    2017-01-01

    Introduction: The current study aimed to investigate the responses to block- versus evenly-distributed high-intensity interval training (HIT) within a polarized microcycle. Methods: Twenty well-trained junior cross-country skiers (10 males, age 17.6 ± 1.5 and 10 females, age 17.3 ± 1.5) completed two, 3-week periods of training (EVEN and BLOCK) in a randomized, crossover-design study. In EVEN, 3 HIT sessions (5 × 4-min of diagonal-stride roller-skiing) were completed at a maximal sustainable intensity each week while low-intensity training (LIT) was distributed evenly around the HIT. In BLOCK, the same 9 HIT sessions were completed in the second week while only LIT was completed in the first and third weeks. Heart rate (HR), session ratings of perceived exertion (sRPE), and perceived recovery (pREC) were recorded for all HIT and LIT sessions, while distance covered was recorded for each HIT interval. The recovery-stress questionnaire for athletes (RESTQ-Sport) was completed weekly. Before and after EVEN and BLOCK, resting saliva and muscle samples were collected and an incremental test and 600-m time-trial (TT) were completed. Results: Pre- to post-testing revealed no significant differences between EVEN and BLOCK for changes in resting salivary cortisol, testosterone, or IgA, or for changes in muscle capillary density, fiber area, fiber composition, enzyme activity (CS, HAD, and PFK) or the protein content of VEGF or PGC-1α. Neither were any differences observed in the changes in skiing economy, [Formula: see text] or 600-m time-trial performance between interventions. These findings were coupled with no significant differences between EVEN and BLOCK for distance covered during HIT, summated HR zone scores, total sRPE training load, overall pREC or overall recovery-stress state. However, 600-m TT performance improved from pre- to post-training, irrespective of intervention ( P = 0.003), and a number of hormonal and muscle biopsy markers were also significantly

  16. The Interval Slope Method for Long-Term Forecasting of Stock Price Trends

    Directory of Open Access Journals (Sweden)

    Chun-xue Nie

    2016-01-01

    A stock price is a typical but complex type of time series data. Effective prediction of long-term time series data can be used to schedule an investment strategy and obtain higher profit. Due to economic, environmental, and other factors, it is very difficult to obtain a precise long-term stock price prediction. The exponentially segmented pattern (ESP) is introduced here and used to predict the fluctuation of different stock data over five future prediction intervals. A new feature of stock pricing during each subinterval, named the interval slope, can characterize fluctuations in stock price over specific periods. The cumulative distribution function (CDF) of the MSE was compared to those of MMSE-BC and SVR. We concluded that the interval slope developed here can capture more complex dynamics of stock price trends. The mean stock price can then be predicted over specific time intervals relatively accurately, in which multiple mean values over time intervals are used to express the time series in the long term. In this way, the prediction of long-term stock prices can be more precise and the accumulation of errors can be prevented.
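
    A hedged sketch of the interval-slope feature itself: partition the series into fixed-length subintervals and summarize each by its least-squares slope. The ESP segmentation and the downstream predictor from the paper are not reproduced, and the window length here is arbitrary.

      import numpy as np

      def interval_slopes(prices, window):
          """Least-squares slope of each non-overlapping window of the series."""
          prices = np.asarray(prices, dtype=float)
          t = np.arange(window)
          slopes = []
          for start in range(0, prices.size - window + 1, window):
              segment = prices[start:start + window]
              slopes.append(np.polyfit(t, segment, 1)[0])   # slope per time step
          return np.array(slopes)

      rng = np.random.default_rng(5)
      closes = 100 + np.cumsum(rng.normal(0.05, 1.0, size=250))   # toy daily closes
      print(interval_slopes(closes, window=25).round(3))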

  17. A Fourier transform method for the selection of a smoothing interval

    International Nuclear Information System (INIS)

    Kekre, H.B.; Madan, V.K.; Bairi, B.R.

    1989-01-01

    A novel method for the selection of a smoothing interval for the widely used Savitzky and Golay smoothing filter is proposed. Complementary bandwidths for the nuclear spectral data and the smoothing filter are defined. The criterion for the selection of the smoothing interval is based on matching the bandwidth of the spectral data to that of the filter. Using the above method, five real observed spectral peaks of different full width at half maximum, viz. 23.5, 19.5, 17, 8.5 and 6.5 channels, were smoothed and the results are presented. (orig.)
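
    A hedged sketch of the bandwidth-matching idea: estimate the band containing most of the spectral power of the data via the FFT, compute the passband of candidate Savitzky-Golay filters from their frequency response, and keep the widest window whose passband still covers the data band. The 95% power criterion and the -3 dB passband definition are illustrative stand-ins for the paper's complementary-bandwidth criterion.

      import numpy as np
      from scipy.signal import savgol_coeffs, freqz

      def signal_bandwidth(y, power=0.95):
          """Frequency (cycles/channel) below which `power` of the spectrum lies."""
          spectrum = np.abs(np.fft.rfft(y - y.mean())) ** 2
          freqs = np.fft.rfftfreq(y.size)
          cumulative = np.cumsum(spectrum) / spectrum.sum()
          return freqs[np.searchsorted(cumulative, power)]

      def filter_bandwidth(window, polyorder=2):
          """-3 dB point of the Savitzky-Golay smoothing filter, cycles/channel."""
          w, h = freqz(savgol_coeffs(window, polyorder), worN=2048)
          f = w / (2 * np.pi)
          return f[np.argmax(np.abs(h) < 10 ** (-3 / 20))]

      rng = np.random.default_rng(9)
      x = np.arange(256)
      peak = np.exp(-0.5 * ((x - 128) / 8) ** 2) + rng.normal(0, 0.03, x.size)
      bw = signal_bandwidth(peak)
      window = max(w for w in range(5, 51, 2) if filter_bandwidth(w) >= bw)
      print(f"signal bandwidth ≈ {bw:.3f} cycles/channel; chosen window = {window}")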

  18. Method of high precision interval measurement in pulse laser ranging system

    Science.gov (United States)

    Wang, Zhen; Lv, Xin-yuan; Mao, Jin-jin; Liu, Wei; Yang, Dong

    2013-09-01

    Laser ranging is widely used because it offers high measuring precision, fast measuring speed, no need for cooperative targets and strong resistance to electromagnetic interference; the time interval measurement is the key parameter affecting the performance of the whole system. The precision of a pulsed laser ranging system is determined by the precision of the time interval measurement. The principal structure of a laser ranging system is introduced, and a method of high precision time interval measurement in a pulse laser ranging system is established in this paper. Based on an analysis of the factors affecting the range precision, a pulse rising-edge discriminator was adopted to produce the timing marks for the start-stop time discrimination, and a TDC-GP2 high precision interval measurement system based on a TMS320F2812 DSP was designed to improve the measurement precision. Experimental results indicate that the time interval measurement method in this paper can achieve higher range accuracy. Compared with traditional time interval measurement systems, the method simplifies the system design and reduces the influence of bad weather conditions; furthermore, it satisfies the requirements of low cost and miniaturization.

  19. The Interval-Valued Triangular Fuzzy Soft Set and Its Method of Dynamic Decision Making

    Directory of Open Access Journals (Sweden)

    Xiaoguo Chen

    2014-01-01

    A concept of the interval-valued triangular fuzzy soft set is presented, and some operations of “AND,” “OR,” intersection, union, complement, and so forth are defined. Some relative properties are then discussed and several conclusions are drawn. A dynamic decision making model is built based on the definition of the interval-valued triangular fuzzy soft set, in which the period weight is determined by the exponential decay method. The arithmetic weighted average operator of the interval-valued triangular fuzzy soft set is given based on the idea of aggregation, thereby aggregating interval-valued triangular fuzzy soft sets of different time series into a collective interval-valued triangular fuzzy soft set. The formulas for the selection and decision values of different objects are given, and the optimal decision is reached according to the decision values. Finally, the steps of this method are summarized, and an example is given to explain the application of the method.

  20. A comparison of confidence interval methods for the concordance correlation coefficient and intraclass correlation coefficient with small number of raters.

    Science.gov (United States)

    Feng, Dai; Svetnik, Vladimir; Coimbra, Alexandre; Baumgartner, Richard

    2014-01-01

    The intraclass correlation coefficient (ICC) with fixed raters or, equivalently, the concordance correlation coefficient (CCC) for continuous outcomes is a widely accepted aggregate index of agreement in settings with a small number of raters. Quantifying the precision of the CCC by constructing its confidence interval (CI) is important in early drug development applications, in particular in the qualification of biomarker platforms. In recent years, several new methods have been proposed for the construction of CIs for the CCC, but a comprehensive comparison has not been attempted. The methods consist of the delta method and jackknifing, each with and without Fisher's Z-transformation, and Bayesian methods with vague priors. In this study, we carried out a simulation study, with data simulated from a multivariate normal as well as a heavier-tailed distribution (t-distribution with 5 degrees of freedom), to compare the state-of-the-art methods for assigning a CI to the CCC. When the data are normally distributed, jackknifing with Fisher's Z-transformation (JZ) tended to provide superior coverage, and the difference between it and the closest competitor, the Bayesian method with the Jeffreys prior, was in general minimal. For the nonnormal data, the jackknife methods, especially the JZ method, provided the coverage probabilities closest to nominal, in contrast to the others, which yielded overly liberal coverage. Approaches based on the delta method and the Bayesian method with a conjugate prior generally provided slightly narrower intervals and larger lower bounds than the others, though this was offset by their poor coverage. Finally, we illustrate the utility of the CIs for the CCC in an example of a wake after sleep onset (WASO) biomarker, which is frequently used in clinical sleep studies of drugs for the treatment of insomnia.
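
    A minimal sketch of the JZ construction the simulations favour: leave-one-out jackknife pseudovalues of the Fisher Z-transformed CCC, a t-interval on the transformed scale, and a back-transform. The sample-CCC estimator below is the usual moment form; the paper's exact implementation details are not reproduced.

      import numpy as np
      from scipy import stats

      def ccc(x, y):
          """Sample concordance correlation coefficient (moment estimator)."""
          sxy = np.cov(x, y, bias=True)[0, 1]
          return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

      def ccc_jackknife_z_ci(x, y, alpha=0.05):
          n = len(x)
          z_full = np.arctanh(ccc(x, y))
          idx = np.arange(n)
          z_loo = np.array([np.arctanh(ccc(x[idx != i], y[idx != i])) for i in idx])
          pseudo = n * z_full - (n - 1) * z_loo        # jackknife pseudovalues
          z_bar, se = pseudo.mean(), pseudo.std(ddof=1) / np.sqrt(n)
          t = stats.t.ppf(1 - alpha / 2, df=n - 1)
          return np.tanh(z_bar), np.tanh([z_bar - t * se, z_bar + t * se])

      rng = np.random.default_rng(2)
      rater1 = rng.normal(size=60)
      rater2 = rater1 + rng.normal(scale=0.5, size=60)
      est, (lo, hi) = ccc_jackknife_z_ci(rater1, rater2)
      print(f"CCC ≈ {est:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")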

  1. Distribution method optimization : inventory flexibility

    NARCIS (Netherlands)

    Asipko, D.

    2010-01-01

    This report presents the outcome of the Logistics Design Project carried out for Nike Inc. The project has two goals: to create a model to measure a flexibility aspect of inventory usage in different Nike distribution channels, and to analyze opportunities for changing the decision model of splitting

  2. Treatment of uncertainty through the interval smart/swing weighting method: a case study

    Directory of Open Access Journals (Sweden)

    Luiz Flávio Autran Monteiro Gomes

    2011-12-01

    An increasingly competitive market means that many decisions must be taken quickly and precisely in complex, high-risk scenarios. This combination of factors makes it necessary to use decision aiding methods that provide a means of dealing with uncertainty in the judgement of the alternatives. This work presents the use of the MAUT method combined with the INTERVAL SMART/SWING WEIGHTING method. Although multicriteria decision aiding was not conceived specifically for tackling uncertainty, the combined use of MAUT and the INTERVAL SMART/SWING WEIGHTING method allows decision problems under uncertainty to be approached. The main concepts involved in these two methods are described, and their joint application to a case study concerning the selection of a printing service supplier is presented. The case study makes use of the WINPRE software as a support tool for the calculation of dominance. It is concluded that the proposed approach can be applied to decision making problems under uncertainty.

  3. Geometric computations with interval and new robust methods applications in computer graphics, GIS and computational geometry

    CERN Document Server

    Ratschek, H

    2003-01-01

    This undergraduate and postgraduate text will familiarise readers with interval arithmetic and related tools to gain reliable and validated results and logically correct decisions for a variety of geometric computations, plus the means for alleviating the effects of the errors. It also considers computations on geometric point-sets, which are neither robust nor reliable when processed with standard methods. The authors provide two effective tools for obtaining correct results: (a) interval arithmetic, and (b) ESSA, the new powerful algorithm which improves many geometric computations and makes th
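
    To make the book's premise concrete, here is a toy use of interval arithmetic for one geometric decision, the 2-D orientation test: if the interval evaluation of the determinant straddles zero, rounding could flip the sign, so the code reports the case as uncertain instead of guessing. The fixed eps inflation is a crude stand-in for proper directed rounding, and ESSA is not reproduced here.

      from dataclasses import dataclass

      @dataclass(frozen=True)
      class Interval:
          lo: float
          hi: float
          def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
          def __sub__(self, o): return Interval(self.lo - o.hi, self.hi - o.lo)
          def __mul__(self, o):
              p = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
              return Interval(min(p), max(p))

      def orientation(ax, ay, bx, by, cx, cy, eps=1e-12):
          """Sign of the cross product (b-a) x (c-a) with eps-inflated inputs."""
          iv = lambda v: Interval(v - eps, v + eps)   # crude model of rounding error
          det = (iv(bx) - iv(ax)) * (iv(cy) - iv(ay)) \
              - (iv(by) - iv(ay)) * (iv(cx) - iv(ax))
          if det.lo > 0: return "left turn"
          if det.hi < 0: return "right turn"
          return "uncertain (near-degenerate)"

      print(orientation(0, 0, 1, 0, 2, 1e-13))   # indistinguishable from collinear
      print(orientation(0, 0, 1, 0, 2, 1.0))     # clearly a left turn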

  4. Solutions of interval type-2 fuzzy polynomials using a new ranking method

    Science.gov (United States)

    Rahman, Nurhakimah Ab.; Abdullah, Lazim; Ghani, Ahmad Termimi Ab.; Ahmad, Noor'Ani

    2015-10-01

    A ranking method for fuzzy polynomial equations was introduced a few years ago. The concept of the ranking method was proposed to find the actual roots of fuzzy polynomials (if they exist). Fuzzy polynomials are transformed into a system of crisp polynomials using a ranking method based on three parameters, namely value, ambiguity and fuzziness. However, it was found that solutions based on these three parameters are quite inefficient at producing answers. Therefore, in this study a new ranking method with four parameters has been developed with the aim of overcoming this inherent weakness. The new ranking method is then applied to interval type-2 fuzzy polynomials, covering the interval type-2 fuzzy polynomial equation, dual fuzzy polynomial equations and systems of fuzzy polynomials. The efficiency of the new ranking method is then considered numerically for triangular fuzzy numbers and trapezoidal fuzzy numbers. Finally, the approximate solutions produced in the numerical examples indicate that the new ranking method successfully produces actual roots for interval type-2 fuzzy polynomials.

  5. QT interval in healthy dogs: which method of correcting the QT interval in dogs is appropriate for use in small animal clinics?

    Directory of Open Access Journals (Sweden)

    Maira S. Oliveira

    2014-05-01

    The electrocardiographic (ECG) QT interval is influenced by fluctuations in heart rate (HR), which may lead to misinterpretation of its length. Considering that alterations in QT interval length reflect abnormalities of ventricular repolarisation which predispose to the occurrence of arrhythmias, this variable must be properly evaluated. The aim of this work is to determine which method of correcting the QT interval is the most appropriate for dogs across different ranges of normal HR (different breeds). Healthy adult dogs (n=130; German Shepherd, Boxer, Pit Bull Terrier, and Poodle) were submitted to ECG examination, and QT intervals were determined in triplicate from the bipolar limb lead II and corrected for the effects of HR through the application of three published formulae involving quadratic, cubic or linear regression. The mean corrected QT values (QTc) obtained using the various formulae were significantly different (p<0.05), while those derived according to the equation QTcV = QT + 0.087(1 − RR) were the most consistent (linear regression). QTcV values were strongly correlated (r=0.83) with the QT interval and showed a coefficient of variation of 8.37% and a 95% confidence interval of 0.22-0.23 s. Owing to its simplicity and reliability, the QTcV was considered the most appropriate for the correction of the QT interval in dogs.
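
    The winning formula is simple enough to state in code. A small sketch, with QT and RR in seconds; the linear formula is the one given in the abstract (it matches Van de Water's published correction), while the square-root and cube-root corrections are included for comparison on the assumption that they correspond to the "quadratic" and "cubic" regressions mentioned, which the abstract does not name.

      def qtc_van_de_water(qt_s, rr_s):
          """The abstract's most consistent correction: QTcV = QT + 0.087*(1 - RR)."""
          return qt_s + 0.087 * (1.0 - rr_s)

      def qtc_sqrt(qt_s, rr_s):      # Bazett-style correction (assumed "quadratic")
          return qt_s / rr_s ** 0.5

      def qtc_cbrt(qt_s, rr_s):      # Fridericia-style correction (assumed "cubic")
          return qt_s / rr_s ** (1.0 / 3.0)

      qt, rr = 0.22, 0.50            # a dog at 120 bpm (RR = 60/120 s); toy values
      for name, f in [("QTcV", qtc_van_de_water), ("sqrt", qtc_sqrt), ("cbrt", qtc_cbrt)]:
          print(f"{name}: {f(qt, rr):.3f} s")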

  6. The role of retinopathy distribution and other lesion types for the definition of examination intervals during screening for diabetic retinopathy.

    Science.gov (United States)

    Ometto, Giovanni; Erlandsen, Mogens; Hunter, Andrew; Bek, Toke

    2017-06-01

    It has previously been shown that the intervals between screening examinations for diabetic retinopathy can be optimized by including individual risk factors for the development of the disease in the risk assessment. However, in some cases, the risk model calculating the screening interval may recommend a different interval than an experienced clinician. The purpose of this study was to evaluate the influence of factors unrelated to diabetic retinopathy and the distribution of lesions for discrepancies between decisions made by the clinician and the risk model. Therefore, fundus photographs from 90 screening examinations where the recommendations of the clinician and a risk model had been discrepant were evaluated. Forty features were defined to describe the type and location of the lesions, and classification and ranking techniques were used to assess whether the features could predict the discrepancy between the grader and the risk model. Suspicion of tumours, retinal degeneration and vascular diseases other than diabetic retinopathy could explain why the clinician recommended shorter examination intervals than the model. Additionally, the regional distribution of microaneurysms/dot haemorrhages was important for defining a photograph as belonging to the group where both the clinician and the risk model had recommended a short screening interval as opposed to the other decision alternatives. Features unrelated to diabetic retinopathy and the regional distribution of retinal lesions may affect the recommendation of the examination interval during screening for diabetic retinopathy. The development of automated computerized algorithms for extracting information about the type and location of retinal lesions could be expected to further optimize examination intervals during screening for diabetic retinopathy. © 2016 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  7. Effect of a High-intensity Interval Training method on maximum oxygen consumption in Chilean schoolchildren

    Directory of Open Access Journals (Sweden)

    Sergio Galdames-Maliqueo

    2017-12-01

    Introduction: The low levels of maximum oxygen consumption (VO2max) evaluated in Chilean schoolchildren suggest the need for training programmes that improve aerobic capacity. Objective: To analyze the effect of a high-intensity interval training method on maximum oxygen consumption in Chilean schoolchildren. Materials and methods: Thirty-two eighth-grade high school students, divided into two groups, took part in the study (experimental group = 16 students and control group = 16 students). The main variable analyzed was maximum oxygen consumption, measured through the Course Navette test. A high-intensity interval training method was applied based on the maximal aerobic speed obtained from the test. A mixed ANOVA was used for the statistical analysis. Results: The experimental group showed a significant increase in maximum oxygen consumption between pretest and posttest when compared with the control group (p < 0.0001). Conclusion: The results of the study showed a positive effect of high-intensity interval training on maximum oxygen consumption. It is concluded that high-intensity interval training is an effective training methodology for Chilean schoolchildren.

  8. Continuous Exercise but Not High Intensity Interval Training Improves Fat Distribution in Overweight Adults

    Directory of Open Access Journals (Sweden)

    Shelley E. Keating

    2014-01-01

    Objective. The purpose of this study was to assess the effect of high intensity interval training (HIIT) versus continuous aerobic exercise training (CONT) or placebo (PLA) on body composition in a randomized controlled design. Methods. Work capacity and body composition (dual-energy X-ray absorptiometry) were measured before and after 12 weeks of intervention in 38 previously inactive overweight adults. Results. There was a significant group × time interaction for change in work capacity (P<0.001), which increased significantly in CONT (23.8±3.0%) and HIIT (22.3±3.5%) but not PLA (3.1±5.0%). There was a near-significant main effect for percentage trunk fat, with trunk fat reducing in CONT by 3.1±1.6% and in PLA by 1.1±0.4%, but not in HIIT (increase of 0.7±1.0%) (P=0.07). There was a significant reduction in android fat percentage in CONT (2.7±1.3%) and PLA (1.4±0.8%) but not HIIT (increase of 0.8±0.7%) (P=0.04). Conclusion. These data suggest that HIIT may be advocated as a time-efficient strategy for eliciting comparable fitness benefits to traditional continuous exercise in inactive, overweight adults. However, in this population HIIT does not confer the same benefit to body fat levels as continuous exercise training.

  9. Distribution network planning method considering distributed generation for peak cutting

    International Nuclear Information System (INIS)

    Ouyang Wu; Cheng Haozhong; Zhang Xiubin; Yao Liangzhong

    2010-01-01

    Conventional distribution planning methods based on peak load bring about large investment, high risk and low utilization efficiency. A distribution network planning method considering distributed generation (DG) for peak cutting is proposed in this paper. The new integrated distribution network planning method with DG implementation aims to minimize the sum of feeder investments, DG investments, energy loss cost and the additional cost of DG for peak cutting. Using solution techniques combining a genetic algorithm (GA) with a heuristic approach, the proposed model determines the optimal planning scheme, including the feeder network and the siting and sizing of DG. The strategy for the siting and sizing of DG, which is based on the radial structure of the distribution network, reduces the complexity of solving the optimization model and eases the computational burden substantially. Furthermore, the operation schedule of DG at different load levels is also provided.

  10. An Interval Estimation Method of Patent Keyword Data for Sustainable Technology Forecasting

    Directory of Open Access Journals (Sweden)

    Daiho Uhm

    2017-11-01

    Technology forecasting (TF) is forecasting the future state of a technology. It is exciting to know the future of technologies, because technology changes the way we live and enhances the quality of our lives. In particular, TF is an important area in the management of technology (MOT) for R&D strategy and new product development. Consequently, there are many studies on TF. Patent analysis is one method of TF, because patents contain substantial information regarding developed technology. The conventional methods of patent analysis are based on quantitative approaches such as statistics and machine learning. The most traditional TF methods based on patent analysis share a common problem: the sparsity of the patent keyword data structured from collected patent documents. After preprocessing with text mining techniques, most frequencies of technological keywords in patent data have values of zero. This problem degrades the performance of TF and complicates the analysis of patent keyword data. To solve it, we propose an interval estimation method (IEM). Using an adjusted Wald confidence interval, called the Agresti–Coull confidence interval, we construct our IEM for efficient TF. In addition, we apply the proposed method to forecast the technology of an innovative company. To show how our work can be applied in the real domain, we conduct a case study using Apple technology.
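
    The abstract names the Agresti–Coull (adjusted Wald) interval but gives no formula; as a minimal illustrative sketch (standard textbook form, not the paper's code), the interval for a keyword observed in x of n patent documents can be computed as below. Even an all-zero keyword gets a usable, non-degenerate upper bound, which is the point of the adjustment for sparse data.

      from statistics import NormalDist

      def agresti_coull(x: int, n: int, conf: float = 0.95):
          """Adjusted Wald (Agresti-Coull) interval for a binomial proportion."""
          z = NormalDist().inv_cdf(1 - (1 - conf) / 2)  # two-sided critical value
          n_adj = n + z ** 2                            # adjusted sample size
          p_adj = (x + z ** 2 / 2) / n_adj              # adjusted proportion
          half = z * (p_adj * (1 - p_adj) / n_adj) ** 0.5
          return max(0.0, p_adj - half), min(1.0, p_adj + half)

      # a keyword with zero observed frequency in 150 patents (the sparsity case)
      print(agresti_coull(0, 150))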

  11. The establishment of tocopherol reference intervals for Hungarian adult population using a validated HPLC method.

    Science.gov (United States)

    Veres, Gábor; Szpisjak, László; Bajtai, Attila; Siska, Andrea; Klivényi, Péter; Ilisz, István; Földesi, Imre; Vécsei, László; Zádori, Dénes

    2017-09-01

    Evidence suggests that a decreased α-tocopherol (the most biologically active substance in the vitamin E group) level can cause neurological symptoms, most likely ataxia. The aim of the current study was to first provide reference intervals for serum tocopherols in the adult Hungarian population with an appropriate sample size, recruiting healthy control subjects and neurological patients suffering from conditions without symptoms of ataxia, myopathy or cognitive deficiency. A validated HPLC method applying a diode array detector and rac-tocol as internal standard was utilized for that purpose. Furthermore, serum cholesterol levels were determined as well for data normalization. The calculated 2.5-97.5% reference intervals for α-, β/γ- and δ-tocopherols were 24.62-54.67, 0.81-3.69 and 0.29-1.07 μM, respectively, whereas the tocopherol/cholesterol ratios were 5.11-11.27, 0.14-0.72 and 0.06-0.22 μmol/mmol, respectively. The establishment of these reference intervals may improve the diagnostic accuracy of tocopherol measurements in certain neurological conditions with decreased tocopherol levels. Moreover, the current study draws special attention to possible pitfalls in the complex process of determining reference intervals, including the selection of the study population, the application of the internal standard, method validation and the calculation of tocopherol/cholesterol ratios. Copyright © 2017 John Wiley & Sons, Ltd.
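
    As an illustration of the mechanics only: a central 2.5-97.5% reference interval of the kind quoted above can be obtained nonparametrically from sample percentiles. The sketch below uses synthetic values, not the study's measurements.

      import numpy as np

      rng = np.random.default_rng(0)
      # synthetic alpha-tocopherol concentrations (umol/L) -- illustrative only
      alpha_toc = rng.normal(loc=38.0, scale=7.5, size=400).clip(min=0.1)

      lower, upper = np.percentile(alpha_toc, [2.5, 97.5])
      print(f"reference interval: {lower:.2f} - {upper:.2f} umol/L")

      # normalisation as in the study: tocopherol/cholesterol ratio (umol/mmol)
      cholesterol = rng.normal(loc=5.0, scale=0.9, size=400).clip(min=0.1)
      ratio_lo, ratio_hi = np.percentile(alpha_toc / cholesterol, [2.5, 97.5])
      print(f"ratio interval: {ratio_lo:.2f} - {ratio_hi:.2f} umol/mmol")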

  12. Detection of bursts in neuronal spike trains by the mean inter-spike interval method

    Institute of Scientific and Technical Information of China (English)

    Lin Chen; Yong Deng; Weihua Luo; Zhen Wang; Shaoqun Zeng

    2009-01-01

    Bursts are electrical spikes firing with a high frequency, and they are the most important property in synaptic plasticity and information processing in the central nervous system. However, bursts are difficult to identify because bursting activities or patterns vary with physiological conditions or external stimuli. In this paper, a simple method to automatically detect bursts in spike trains is described. This method auto-adaptively sets a parameter (the mean inter-spike interval) according to intrinsic properties of the detected burst spike trains, without any arbitrary choices or operator judgment. When the mean value of several successive inter-spike intervals is not larger than the parameter, a burst is identified. By this method, bursts can be automatically extracted from different bursting patterns of cultured neurons on multi-electrode arrays, as accurately as by visual inspection. Furthermore, significant changes in burst variables caused by electrical stimulus have been found in the spontaneous activity of a neuronal network. These results suggest that the mean inter-spike interval method is robust for detecting changes in burst patterns and characteristics induced by environmental alterations.
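
    A hedged sketch of the rule, assuming the auto-adaptive parameter is the mean ISI of the whole train and a burst is flagged whenever the mean of a few successive ISIs does not exceed it (the published parameterization may differ in detail):

      import numpy as np

      def detect_burst_starts(spike_times, window=3):
          """Return indices of ISI windows whose mean does not exceed the
          train's own mean ISI (the auto-adaptively set parameter)."""
          isis = np.diff(spike_times)
          threshold = isis.mean()  # set from intrinsic properties of the train
          return [i for i in range(len(isis) - window + 1)
                  if isis[i:i + window].mean() <= threshold]

      spikes = np.array([0.00, 0.01, 0.02, 0.03, 0.50, 1.00, 1.01, 1.02, 1.03, 2.00])
      print(detect_burst_starts(spikes))  # windows inside the two bursts are flagged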

  13. Count-to-count time interval distribution analysis in a fast reactor

    International Nuclear Information System (INIS)

    Perez-Navarro Gomez, A.

    1973-01-01

    The most important kinetic parameters have been measured at the zero power fast reactor CORAL-I by means of the reactor noise analysis in the time domain, using measurements of the count-to-count time intervals. (Author) 69 refs

  14. RANDOM FUNCTIONS AND INTERVAL METHOD FOR PREDICTING THE RESIDUAL RESOURCE OF BUILDING STRUCTURES

    Directory of Open Access Journals (Sweden)

    Shmelev Gennadiy Dmitrievich

    2017-11-01

    Subject: the possibility of using random functions and the interval prediction method for estimating the residual life of building structures in buildings currently in use. Research objectives: coordination of the ranges of values used to develop predictions with the random functions that characterize the processes being predicted. Materials and methods: in performing this research, the method of random functions and the method of interval prediction were used. Results: in the course of this work, the basic properties of random functions, including the properties of families of random functions, were studied. The coordination of time-varying impacts and loads on building structures is considered from the viewpoint of their influence on structures and of representing the structures' behaviour in the form of random functions. Several models of random functions are proposed for predicting individual parameters of structures, and for each of the proposed models its scope of application is defined. The article notes that the considered forecasting approach has been used many times at various sites. In addition, the available results allowed the authors to develop a methodology for assessing the technical condition and residual life of building structures for facilities currently in use. Conclusions: we studied the possibility of using random functions and processes for forecasting the residual service lives of structures in buildings and engineering constructions, and considered an interval forecasting approach to estimating changes in the defining parameters of building structures and their technical condition. A comprehensive technique for forecasting the residual life of building structures using the interval approach is proposed.

  15. A Combined Weighting Method Based on Hybrid of Interval Evidence Fusion and Random Sampling

    OpenAIRE

    Yan, Ying; Suo, Bin

    2017-01-01

    Due to the complexity of the system and lack of expertise, epistemic uncertainties may be present in the experts' judgment on the importance of certain indices during group decision-making. A novel combination weighting method is proposed to solve the index weighting problem when various uncertainties are present in expert comments. Based on the idea of evidence theory, various types of uncertain evaluation information are uniformly expressed through interval evidence structures. A similarity matrix between interval evidences is constructed, and the experts' information is fused.

  16. An Interval-Valued Intuitionistic Fuzzy TOPSIS Method Based on an Improved Score Function

    Directory of Open Access Journals (Sweden)

    Zhi-yong Bai

    2013-01-01

    This paper proposes an improved score function for the effective ranking of interval-valued intuitionistic fuzzy sets (IVIFSs), together with an interval-valued intuitionistic fuzzy TOPSIS method based on that score function, to solve multicriteria decision-making problems in which all the preference information provided by decision-makers is expressed as interval-valued intuitionistic fuzzy decision matrices, each element of which is characterized by an IVIFS value, and in which the information about criterion weights is known. We apply the proposed score function to calculate the separation measures of each alternative from the positive and negative ideal solutions in order to determine the relative closeness coefficients. According to the values of the closeness coefficients, the alternatives can be ranked and the most desirable one(s) selected in the decision-making process. Finally, two illustrative examples of multicriteria fuzzy decision-making problems are used to demonstrate the applicability and effectiveness of the proposed decision-making method.
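
    The improved score function itself is specific to the paper, but the TOPSIS backbone it plugs into (separation from the positive and negative ideal solutions, then relative closeness) follows a standard pattern; a crisp-number sketch for orientation:

      import numpy as np

      def topsis(decision_matrix, weights, benefit_mask):
          """Crisp TOPSIS sketch; the paper replaces crisp values with IVIFS
          scores, but the closeness-coefficient logic is the same."""
          X = np.asarray(decision_matrix, dtype=float)
          V = X / np.linalg.norm(X, axis=0) * weights        # weighted normalised
          ideal = np.where(benefit_mask, V.max(axis=0), V.min(axis=0))
          anti = np.where(benefit_mask, V.min(axis=0), V.max(axis=0))
          d_pos = np.linalg.norm(V - ideal, axis=1)          # separation measures
          d_neg = np.linalg.norm(V - anti, axis=1)
          closeness = d_neg / (d_pos + d_neg)
          return np.argsort(-closeness), closeness

      rank, cc = topsis([[7, 9, 9], [8, 7, 8], [9, 6, 8]],
                        weights=[0.5, 0.3, 0.2],
                        benefit_mask=[True, True, True])
      print(rank, cc.round(3))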

  17. A Combined Weighting Method Based on Hybrid of Interval Evidence Fusion and Random Sampling

    Directory of Open Access Journals (Sweden)

    Ying Yan

    2017-01-01

    Due to the complexity of the system and lack of expertise, epistemic uncertainties may be present in the experts' judgment on the importance of certain indices during group decision-making. A novel combination weighting method is proposed to solve the index weighting problem when various uncertainties are present in expert comments. Based on the idea of evidence theory, various types of uncertain evaluation information are uniformly expressed through interval evidence structures. A similarity matrix between interval evidences is constructed, and the experts' information is fused. Comment grades are quantified using interval numbers, and a cumulative probability function for evaluating the importance of indices is constructed from the fused information. Finally, index weights are obtained by Monte Carlo random sampling. The method can process expert information with varying degrees of uncertainty and offers good compatibility. It avoids the difficulty of effectively fusing high-conflict group decision-making information and the large information loss after fusion. Original expert judgments are retained objectively throughout the processing procedure. Constructing the cumulative probability function and running the random sampling require no human intervention or judgment, and the method can easily be implemented in computer programs, giving it a clear advantage in evaluation practice for very large index systems.

  18. Computing interval-valued reliability measures: application of optimal control methods

    DEFF Research Database (Denmark)

    Kozin, Igor; Krymsky, Victor

    2017-01-01

    The paper describes an approach to deriving interval-valued reliability measures given partial statistical information on the occurrence of failures. We apply methods of optimal control theory, in particular Pontryagin's principle of maximum, to solve the non-linear optimisation problem and derive the probabilistic interval-valued quantities of interest. It is proven that the optimisation problem can be translated into another problem statement that can be solved on the class of piecewise continuous probability density functions (pdfs). This class often consists of piecewise exponential pdfs, which appear as soon as the constraints include bounds on the failure rate of the component under consideration. Finding the number of switching points of the piecewise continuous pdfs and their values becomes the focus of the approach described in the paper. Examples are provided.

  19. Various methods for the estimation of the post mortem interval from Calliphoridae: A review

    Directory of Open Access Journals (Sweden)

    Ruchi Sharma

    2015-03-01

    Forensic entomology is recognized in many countries as an important tool for legal investigations. Unfortunately, it has not received much attention in India as an important investigative tool. The maggots of the flies crawling on dead bodies are widely considered to be just another disgusting element of decay and are not collected at the time of autopsy. They can aid in death investigations (time since death, manner of death, etc.). This paper reviews the various methods of post mortem interval estimation using Calliphoridae to make investigators, law personnel and researchers aware of the importance of entomology in criminal investigations. The various problems confronted by forensic entomologists in estimating the time since death are also discussed, and there is a need for further research in the field as well as the laboratory. Correct estimation of the post mortem interval is one of the most important aspects of legal medicine.

  20. Methods for confidence interval estimation of a ratio parameter with application to location quotients

    Directory of Open Access Journals (Sweden)

    Beyene Joseph

    2005-10-01

    Background: The location quotient (LQ) ratio, a measure designed to quantify and benchmark the degree of relative concentration of an activity in the analysis of area localization, has received considerable attention in the geographic and economics literature. This index can also naturally be applied in the context of population health to quantify and compare health outcomes across spatial domains. However, one commonly observed limitation of the LQ is its widespread use as only a point estimate without an accompanying confidence interval. Methods: In this paper we present statistical methods that can be used to construct confidence intervals for location quotients. The delta and Fieller's methods are generic approaches for a ratio parameter, and the generalized linear modelling framework is a useful re-parameterization, particularly helpful for generating profile-likelihood based confidence intervals for the location quotient. A simulation experiment is carried out to assess the performance of each of the analytic approaches, and a health utilization data set is used for illustration. Results: Both the simulation results and the findings from the empirical data show that the different analytical methods produce very similar confidence limits for location quotients. When the incidence of the outcome is not rare and sample sizes are large, the confidence limits are almost indistinguishable. The confidence limits from the generalized linear model approach might be preferable in small sample situations. Conclusion: The LQ is a useful measure which allows quantification and comparison of health and other outcomes across defined geographical regions. It is a very simple index to compute and has a straightforward interpretation. Reporting this estimate with appropriate confidence limits using the methods presented in this paper will make the measure particularly attractive for policy and decision makers.
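
    For instance, applying the delta method to log(LQ), with each share treated as a binomial proportion, gives the following back-of-envelope interval (a sketch under those assumptions, not the paper's code):

      import math
      from statistics import NormalDist

      def lq_delta_ci(x_local, n_local, x_total, n_total, conf=0.95):
          """Delta-method CI for LQ = (x_local/n_local) / (x_total/n_total)."""
          p1, p2 = x_local / n_local, x_total / n_total
          lq = p1 / p2
          # approximate variance of log(LQ) by the delta method
          var_log = (1 - p1) / (n_local * p1) + (1 - p2) / (n_total * p2)
          z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
          half = z * math.sqrt(var_log)
          return lq, lq * math.exp(-half), lq * math.exp(half)

      # 30 events among 500 locally vs 400 among 10,000 overall -> LQ = 1.5
      print(lq_delta_ci(30, 500, 400, 10_000))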

  1. An interval fixed-mix stochastic programming method for greenhouse gas mitigation in energy systems under uncertainty

    International Nuclear Information System (INIS)

    Xie, Y.L.; Li, Y.P.; Huang, G.H.; Li, Y.F.

    2010-01-01

    In this study, an interval fixed-mix stochastic programming (IFSP) model is developed for greenhouse gas (GHG) emissions reduction management under uncertainties. In the IFSP model, methods of interval-parameter programming (IPP) and fixed-mix stochastic programming (FSP) are introduced into an integer programming framework, such that the developed model can tackle uncertainties described in terms of interval values and probability distributions over a multi-stage context. Moreover, it can reflect dynamic decisions for facility-capacity expansion during the planning horizon. The developed model is applied to a case of planning GHG-emission mitigation, demonstrating that IFSP is applicable to reflecting complexities of multi-uncertainty, dynamic and interactive energy management systems, and capable of addressing the problem of GHG-emission reduction. A number of scenarios corresponding to different GHG-emission mitigation levels are examined; the results suggest that reasonable solutions have been generated. They can be used for generating plans for energy resource/electricity allocation and capacity expansion and help decision makers identify desired GHG mitigation policies under various economic costs and environmental requirements.

  2. Multiparticle distributions in limited rapidity intervals and the violation of asymptotic KNO scaling

    International Nuclear Information System (INIS)

    De Dias Deus, J.

    1986-03-01

    A simple model-independent analysis of UA5 collaboration p anti-p collider data on charged-particle distributions in limited rapidity intervals strongly suggests: i) independent particle emission from sources or clusters; ii) exact negative binomial multiparticle distributions. The violation of asymptotic KNO scaling is shown to arise from the fast growth with energy of the reduced correlations C_k(0)/C_1^k(0). A comparison with recently published e+e- data at √s = 29 GeV is presented.

  3. Logarithmic Similarity Measure between Interval-Valued Fuzzy Sets and Its Fault Diagnosis Method

    Directory of Open Access Journals (Sweden)

    Zhikang Lu

    2018-02-01

    Fault diagnosis is an important task for the normal operation and maintenance of equipment. In many real situations, the diagnosis data cannot provide deterministic values and are usually imprecise or uncertain. Thus, interval-valued fuzzy sets (IVFSs) are very suitable for expressing imprecise or uncertain fault information in real problems, yet the existing literature scarcely deals with fault diagnosis problems, such as gasoline engines and steam turbines, using IVFSs, even though the similarity measure is one of the important tools in fault diagnosis. Therefore, this paper proposes, for the first time, a new similarity measure of IVFSs based on the logarithmic function, together with a corresponding fault diagnosis method. Using the logarithmic similarity measure between the fault knowledge and diagnosis-testing samples with interval-valued fuzzy information, together with its relation indices, we can determine the fault type and the ranking order of faults corresponding to the relation indices. The misfire fault diagnosis of a gasoline engine and the vibrational fault diagnosis of a turbine are then presented to demonstrate the simplicity and effectiveness of the proposed diagnosis method. The fault diagnosis results for the gasoline engine and steam turbine show that the proposed method not only gives the main fault types of the gasoline engine and steam turbine, but also provides useful information for multi-fault analyses and for predicting future fault trends. Hence, the logarithmic similarity measure and its fault diagnosis method are the main contributions of this study, and they provide a useful new way to perform fault diagnosis with interval-valued fuzzy information.

  4. An extension of compromise ranking method with interval numbers for the evaluation of renewable energy sources

    Directory of Open Access Journals (Sweden)

    M. Mousavi

    2014-06-01

    Evaluating and prioritizing appropriate renewable energy sources is inevitably a complex decision process in which various information and conflicting attributes should be taken into account. For this purpose, multi-attribute decision making (MADM) methods can assist managers or decision makers in formulating renewable energy source priorities by considering important objectives and attributes. In this paper, a new extension of the compromise ranking method with interval numbers is presented for the prioritization of renewable energy sources, based on the performance similarity of alternatives to ideal solutions. To demonstrate the applicability of the proposed decision method, an application example is provided and the computational results are analyzed. The results illustrate that the presented method is viable for solving the evaluation and prioritization problem of renewable energy sources.
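
    Compromise ranking is commonly identified with the VIKOR scheme of group utility S, individual regret R and compromise index Q; a crisp-number sketch of that backbone (the paper's extension replaces the crisp entries with interval numbers):

      import numpy as np

      def vikor(X, w, benefit, v=0.5):
          """Crisp VIKOR sketch: smaller Q means a better compromise."""
          X = np.asarray(X, dtype=float)
          f_star = np.where(benefit, X.max(axis=0), X.min(axis=0))   # best
          f_minus = np.where(benefit, X.min(axis=0), X.max(axis=0))  # worst
          D = w * (f_star - X) / (f_star - f_minus)
          S, R = D.sum(axis=1), D.max(axis=1)
          Q = v * (S - S.min()) / (S.max() - S.min()) \
              + (1 - v) * (R - R.min()) / (R.max() - R.min())
          return np.argsort(Q)

      # three hypothetical energy sources scored on efficiency, cost, availability
      print(vikor([[0.7, 200, 8], [0.8, 150, 6], [0.6, 180, 9]],
                  w=np.array([0.4, 0.3, 0.3]),
                  benefit=np.array([True, False, True])))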

  5. An Extended TOPSIS Method for the Multiple Attribute Decision Making Problems Based on Interval Neutrosophic Set

    Directory of Open Access Journals (Sweden)

    Pingping Chi

    2013-03-01

    The interval neutrosophic set (INS) makes it easier to express incomplete, indeterminate and inconsistent information, and TOPSIS is one of the most commonly used and effective methods for multiple attribute decision making; in general, however, it can only process attribute values given as crisp numbers. In this paper, we extend TOPSIS to INSs and, with respect to multiple attribute decision making problems in which the attribute weights are unknown and the attribute values take the form of INSs, propose an extended TOPSIS method. Firstly, the definition of an INS and its operational laws are given, and the distance between INSs is defined. Then, the attribute weights are determined based on the maximizing deviation method, and an extended TOPSIS method is developed to rank the alternatives. Finally, an illustrative example is given to verify the developed approach and to demonstrate its practicality and effectiveness.

  6. Bootstrap resampling: a powerful method of assessing confidence intervals for doses from experimental data

    International Nuclear Information System (INIS)

    Iwi, G.; Millard, R.K.; Palmer, A.M.; Preece, A.W.; Saunders, M.

    1999-01-01

    Bootstrap resampling provides a versatile and reliable statistical method for estimating the accuracy of quantities which are calculated from experimental data. It is an empirically based method, in which large numbers of simulated datasets are generated by computer from existing measurements, so that approximate confidence intervals of the derived quantities may be obtained by direct numerical evaluation. A simple introduction to the method is given via a detailed example of estimating 95% confidence intervals for cumulated activity in the thyroid following injection of 99mTc-sodium pertechnetate, using activity-time data from 23 subjects. The application of the approach to estimating confidence limits for the self-dose to the kidney following injection of the 99mTc-DTPA organ imaging agent, based on uptake data from 19 subjects, is also illustrated. Results are then given for estimates of doses to the foetus following administration of 99mTc-sodium pertechnetate for clinical reasons during pregnancy, averaged over 25 subjects. The bootstrap method is well suited for applications in radiation dosimetry, including uncertainty, reliability and sensitivity analysis of dose coefficients in biokinetic models, but it can also be applied in a wide range of other biomedical situations. (author)
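
    The core loop is short; a minimal sketch of a 95% percentile bootstrap interval for a mean, on made-up measurements rather than the study's data:

      import numpy as np

      rng = np.random.default_rng(42)
      # illustrative uptake measurements (arbitrary units), not the study's data
      uptake = np.array([3.1, 2.7, 3.5, 2.9, 3.8, 3.3, 2.5, 3.0, 3.6, 2.8,
                         3.2, 3.4, 2.6, 3.7, 3.1, 2.9, 3.3, 3.0, 3.5, 2.7,
                         3.2, 2.8, 3.4])

      B = 10_000                       # number of simulated datasets
      boot_means = np.array([rng.choice(uptake, uptake.size, replace=True).mean()
                             for _ in range(B)])
      lo, hi = np.percentile(boot_means, [2.5, 97.5])
      print(f"mean {uptake.mean():.2f}, 95% bootstrap CI ({lo:.2f}, {hi:.2f})")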

  7. Magnetic Resonance Imaging in the measurement of whole body muscle mass: A comparison of interval gap methods

    International Nuclear Information System (INIS)

    Hellmanns, K.; McBean, K.; Thoirs, K.

    2015-01-01

    Purpose: Magnetic Resonance Imaging (MRI) is commonly used in body composition research to measure whole body skeletal muscle mass (SM). MRI calculation methods for SM can vary by analysing the images at different slice intervals (or interval gaps) along the length of the body. This study compared SM measurements made from MRI images of apparently healthy individuals using different interval gap methods, to determine the error associated with each technique. It was anticipated that the results would inform researchers of the optimum interval gap for detecting a predetermined minimum change in SM. Methods: A method comparison study was used to compare eight interval gap methods (interval gaps of 40, 50, 60, 70, 80, 100, 120 and 140 mm) against a reference 10 mm interval gap method for measuring SM from twenty MRI image sets acquired from apparently healthy participants. Pearson product-moment correlation analysis was used to determine the association between methods. Total error was calculated as the sum of the bias (systematic error) and the random error (limits of agreement) of the mean differences. Percentage error was used to demonstrate proportional error. Results: Pearson product-moment correlation analysis between the reference method and all interval gap methods demonstrated strong and significant associations (r > 0.99, p < 0.0001). The 40 mm interval gap method was comparable with the 10 mm interval reference method and had a low error (total error 0.95 kg, −3.4%). Analysis methods using wider interval gap techniques demonstrated larger errors than reported for dual-energy x-ray absorptiometry (DXA), a technique which is more available, less expensive, and less time consuming than MRI analysis of SM. Conclusions: Researchers using MRI to measure SM can be confident in using a 40 mm interval gap technique when analysing the images to detect minimum changes of less than 1 kg. The use of wider intervals will introduce error that is no better than that reported for DXA.

  8. On the Distribution of Zeros and Poles of Rational Approximants on Intervals

    Directory of Open Access Journals (Sweden)

    V. V. Andrievskii

    2012-01-01

    The distribution of zeros and poles of best rational approximants is well understood for the functions f(x) = |x|^α, α > 0. If f ∈ C[−1,1] is not holomorphic on [−1,1], the distribution of the zeros of best rational approximants is governed by the equilibrium measure of [−1,1] under the additional assumption that the rational approximants are restricted to a bounded degree of the denominator. This phenomenon was discovered first for polynomial approximation. In this paper, we investigate the asymptotic distribution of zeros (respectively, a-values) and poles of best real rational approximants of degree at most n to a function f ∈ C[−1,1] that is real-valued, but not holomorphic on [−1,1]. Generalizations to the lower half of the Walsh table are indicated.

  9. The intervals method: a new approach to analyse finite element outputs using multivariate statistics

    Directory of Open Access Journals (Sweden)

    Jordi Marcé-Nogué

    2017-10-01

    Background: In this paper, we propose a new method, named the intervals' method, to analyse data from finite element models in a comparative multivariate framework. As a case study, several armadillo mandibles are analysed, showing that the proposed method is useful to distinguish and characterise biomechanical differences related to diet/ecomorphology. Methods: The intervals' method consists of generating a set of variables, each one defined by an interval of stress values. Each variable is expressed as a percentage of the area of the mandible occupied by those stress values. Afterwards these newly generated variables can be analysed using multivariate methods. Results: Applying this novel method to the biological case study of whether armadillo mandibles differ according to dietary groups, we show that the intervals' method is a powerful tool to characterize biomechanical performance and how this relates to different diets. This allows us to positively discriminate between specialist and generalist species. Discussion: We show that the proposed approach is a useful methodology not affected by the characteristics of the finite element mesh. Additionally, the positive discriminating results obtained when analysing a difficult case study suggest that the proposed method could be a very useful tool for comparative studies in finite element analysis using multivariate statistical approaches.
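
    In code, generating the interval variables amounts to binning per-element stresses and accumulating element areas; a hedged sketch (variable names and bin layout are illustrative, not the authors'):

      import numpy as np

      def interval_variables(stress, area, breaks):
          """Percentage of total model area whose element stress falls in each
          interval; one such row per specimen feeds the multivariate analysis."""
          stress, area = np.asarray(stress, float), np.asarray(area, float)
          pct = [100.0 * area[(stress >= lo) & (stress < hi)].sum() / area.sum()
                 for lo, hi in zip(breaks[:-1], breaks[1:])]
          return np.array(pct)

      breaks = np.linspace(0, 40, 9)   # 8 interval variables over 0-40 MPa
      mandible_a = interval_variables([5, 12, 18, 25, 33], [2, 3, 1, 1, 0.5], breaks)
      mandible_b = interval_variables([3, 8, 15, 21, 38], [1, 4, 2, 0.5, 0.5], breaks)
      print(mandible_a.round(1), mandible_b.round(1))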

  10. The intervals method: a new approach to analyse finite element outputs using multivariate statistics

    Science.gov (United States)

    De Esteban-Trivigno, Soledad; Püschel, Thomas A.; Fortuny, Josep

    2017-01-01

    Background In this paper, we propose a new method, named the intervals’ method, to analyse data from finite element models in a comparative multivariate framework. As a case study, several armadillo mandibles are analysed, showing that the proposed method is useful to distinguish and characterise biomechanical differences related to diet/ecomorphology. Methods The intervals’ method consists of generating a set of variables, each one defined by an interval of stress values. Each variable is expressed as a percentage of the area of the mandible occupied by those stress values. Afterwards these newly generated variables can be analysed using multivariate methods. Results Applying this novel method to the biological case study of whether armadillo mandibles differ according to dietary groups, we show that the intervals’ method is a powerful tool to characterize biomechanical performance and how this relates to different diets. This allows us to positively discriminate between specialist and generalist species. Discussion We show that the proposed approach is a useful methodology not affected by the characteristics of the finite element mesh. Additionally, the positive discriminating results obtained when analysing a difficult case study suggest that the proposed method could be a very useful tool for comparative studies in finite element analysis using multivariate statistical approaches. PMID:29043107

  11. Generalized Analysis of a Distribution Separation Method

    Directory of Open Access Journals (Sweden)

    Peng Zhang

    2016-04-01

    Separating two probability distributions from a mixture model that is a combination of the two is essential to a wide range of applications. For example, in information retrieval (IR), there often exists a mixture distribution consisting of a relevance distribution that we need to estimate and an irrelevance distribution that we hope to get rid of. Recently, a distribution separation method (DSM) was proposed to approximate the relevance distribution by separating a seed irrelevance distribution from the mixture distribution. It was successfully applied to an IR task, namely pseudo-relevance feedback (PRF), where the query expansion model is often a mixture term distribution. Although initially developed in the context of IR, DSM is indeed a general mathematical formulation for probability distribution separation. Thus, it is important to further generalize its basic analysis and to explore its connections to other related methods. In this article, we first extend DSM's theoretical analysis, which was originally based on the Pearson correlation coefficient, to entropy-related measures, including the KL-divergence (Kullback–Leibler divergence), the symmetrized KL-divergence and the JS-divergence (Jensen–Shannon divergence). Second, we investigate the distribution separation idea in a well-known method, namely the mixture model feedback (MMF) approach. We prove that MMF also complies with the linear combination assumption, and then DSM's linear separation algorithm can largely simplify the EM algorithm in MMF. These theoretical analyses, as well as further empirical evaluation results, demonstrate the advantages of our DSM approach.
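
    Under the linear combination assumption, the mixture is M = λS + (1−λ)R for a seed distribution S and mixing weight λ; once λ is estimated, the unknown component R is recovered algebraically. A toy sketch of that separation step:

      import numpy as np

      def separate(mixture, seed, lam):
          """Recover R from M = lam*S + (1-lam)*R (DSM's linear assumption)."""
          r = (np.asarray(mixture) - lam * np.asarray(seed)) / (1.0 - lam)
          r = np.clip(r, 0.0, None)   # guard against small negative estimates
          return r / r.sum()          # renormalise to a probability distribution

      # toy term distributions over a five-word vocabulary
      relevance = np.array([0.40, 0.25, 0.15, 0.12, 0.08])    # unknown in practice
      irrelevance = np.array([0.05, 0.10, 0.20, 0.30, 0.35])  # seed distribution
      mixture = 0.3 * irrelevance + 0.7 * relevance
      print(separate(mixture, irrelevance, lam=0.3).round(3)) # ~ recovers relevance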

  12. Adaptive Kalman Filter Based on Adjustable Sampling Interval in Burst Detection for Water Distribution System

    OpenAIRE

    Doo Yong Choi; Seong-Won Kim; Min-Ah Choi; Zong Woo Geem

    2016-01-01

    Rapid detection of bursts and leaks in water distribution systems (WDSs) can reduce the social and economic costs incurred through direct loss of water into the ground, additional energy demand for water supply, and service interruptions. Many real-time burst detection models have been developed in accordance with the use of supervisory control and data acquisition (SCADA) systems and the establishment of district meter areas (DMAs). Nonetheless, no consideration has been given to how frequen...

  13. Improved Accuracy of Nonlinear Parameter Estimation with LAV and Interval Arithmetic Methods

    Directory of Open Access Journals (Sweden)

    Humberto Muñoz

    2009-06-01

    The reliable solution of nonlinear parameter estimation problems is an important computational problem in many areas of science and engineering, including such applications as real-time optimization. Its goal is to estimate accurate model parameters that provide the best fit to measured data, despite small-scale noise in the data or occasional large-scale measurement errors (outliers). In general, the estimation techniques are based on some kind of least squares or maximum likelihood criterion, and these require the solution of a nonlinear and non-convex optimization problem. Classical solution methods for these problems are local methods, and may not be reliable for finding the global optimum, with no guarantee the best model parameters have been found. Interval arithmetic can be used to compute completely and reliably the global optimum for the nonlinear parameter estimation problem. Finally, experimental results will compare the least squares, l2, and the least absolute value, l1, estimates using interval arithmetic in a chemical engineering application.
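
    Interval-arithmetic global optimisation does not fit in a short snippet, but the l2-versus-l1 contrast reported in the experiments is easy to reproduce: the least absolute value criterion is far less sensitive to an outlier. A sketch on invented data:

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(1)
      x = np.linspace(0, 10, 30)
      y = 2.0 * x + 1.0 + rng.normal(0, 0.3, x.size)  # true parameters (2, 1)
      y[5] += 15.0                                    # one large measurement error

      def l1_loss(theta):                             # least absolute value
          a, b = theta
          return np.abs(y - (a * x + b)).sum()

      ls_fit = np.polyfit(x, y, 1)                    # l2 (least squares)
      lav_fit = minimize(l1_loss, x0=ls_fit, method="Nelder-Mead").x
      print("l2 :", ls_fit.round(3))                  # dragged by the outlier
      print("l1 :", lav_fit.round(3))                 # close to (2, 1)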

  14. Methods for Distributed Optimal Energy Management

    DEFF Research Database (Denmark)

    Brehm, Robert

    The presented research deals with the fundamental underlying methods and concepts of how the growing number of distributed generation units based on renewable energy resources and distributed storage devices can be most efficiently integrated into the existing utility grid. In contrast to conventional centralised optimal energy flow management systems, focus is set herein on how optimal energy management can be achieved in a decentralised distributed architecture such as a multi-agent system. Distributed optimisation methods are introduced, targeting optimisation of energy flow in virtual ... -consumption of renewable energy resources in low voltage grids. It can be shown that this method prevents mutual discharging of batteries and prevents peak loads; a supervisory control instance can dictate the level of autarchy from the utility grid. Further it is shown that the problem of optimal energy flow management ...

  15. Probabilistic modeling using bivariate normal distributions for identification of flow and displacement intervals in longwall overburden

    Energy Technology Data Exchange (ETDEWEB)

    Karacan, C.O.; Goodman, G.V.R. [NIOSH, Pittsburgh, PA (United States). Off Mine Safety & Health Research

    2011-01-15

    Gob gas ventholes (GGV) are used to control methane emissions in longwall mines by capturing it within the overlying fractured strata before it enters the work environment. In order for GGVs to effectively capture more methane and less mine air, the length of the slotted sections and their proximity to top of the coal bed should be designed based on the potential gas sources and their locations, as well as the displacements in the overburden that will create potential flow paths for the gas. In this paper, an approach to determine the conditional probabilities of depth-displacement, depth-flow percentage, depth-formation and depth-gas content of the formations was developed using bivariate normal distributions. The flow percentage, displacement and formation data as a function of distance from coal bed used in this study were obtained from a series of borehole experiments contracted by the former US Bureau of Mines as part of a research project. Each of these parameters was tested for normality and was modeled using bivariate normal distributions to determine all tail probabilities. In addition, the probability of coal bed gas content as a function of depth was determined using the same techniques. The tail probabilities at various depths were used to calculate conditional probabilities for each of the parameters. The conditional probabilities predicted for various values of the critical parameters can be used with the measurements of flow and methane percentage at gob gas ventholes to optimize their performance.
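
    The conditional (tail) probabilities described above follow from the standard conditional distribution of a bivariate normal; a sketch with invented parameters (the paper's fitted values are not reproduced here):

      from statistics import NormalDist

      # illustrative bivariate normal parameters for (depth, displacement)
      mu_x, sigma_x = 60.0, 20.0   # distance above the coal bed, m
      mu_y, sigma_y = 0.25, 0.10   # displacement, m
      rho = -0.6                   # illustrative correlation

      def p_displacement_exceeds(y0, x):
          """P(Y > y0 | X = x) for a bivariate normal (X, Y)."""
          cond_mean = mu_y + rho * sigma_y * (x - mu_x) / sigma_x
          cond_sd = sigma_y * (1 - rho ** 2) ** 0.5
          return 1.0 - NormalDist(cond_mean, cond_sd).cdf(y0)

      print(p_displacement_exceeds(y0=0.3, x=40.0))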

  16. Risky Group Decision-Making Method for Distribution Grid Planning

    Science.gov (United States)

    Li, Cunbin; Yuan, Jiahang; Qi, Zhiqiang

    2015-12-01

    With the rapid growth of electricity use and of renewable energy, more and more research pays attention to distribution grid planning. To address the drawbacks of existing research, this paper proposes a new risky group decision-making method for distribution grid planning. Firstly, a mixed index system with qualitative and quantitative indices is built. Considering the fuzziness of language evaluation, the cloud model is chosen to realize the "qualitative to quantitative" transformation, and interval number decision matrices are constructed according to the "3En" principle. An m-dimensional interval number decision vector is regarded as a super cuboid in m-dimensional attribute space, and a two-level orthogonal experiment is used to arrange points uniformly and dispersedly within it. The number of points is determined by testing the number of two-level orthogonal arrays, and these points compose a distribution point set that stands for the decision-making project. In order to eliminate the influence of correlation among indices, the Mahalanobis distance is used to calculate the distance from each solution to the others, so that dynamic solutions are viewed as the reference. Secondly, because the decision-maker's attitude can affect the results, this paper defines a prospect value function based on the SNR from the Mahalanobis-Taguchi system and obtains the comprehensive prospect value and ranking of each plan. At last, the validity and reliability of this method are illustrated by examples, which show that the method is more valuable and superior to the others.

  17. A global multicenter study on reference values: 1. Assessment of methods for derivation and comparison of reference intervals.

    Science.gov (United States)

    Ichihara, Kiyoshi; Ozarda, Yesim; Barth, Julian H; Klee, George; Qiu, Ling; Erasmus, Rajiv; Borai, Anwar; Evgina, Svetlana; Ashavaid, Tester; Khan, Dilshad; Schreier, Laura; Rolle, Reynan; Shimizu, Yoshihisa; Kimura, Shogo; Kawano, Reo; Armbruster, David; Mori, Kazuo; Yadav, Binod K

    2017-04-01

    The IFCC Committee on Reference Intervals and Decision Limits coordinated a global multicenter study on reference values (RVs) to explore rational and harmonizable procedures for derivation of reference intervals (RIs) and investigate the feasibility of sharing RIs through evaluation of sources of variation of RVs on a global scale. For the common protocol, rather lenient criteria for reference individuals were adopted to facilitate harmonized recruitment with planned use of the latent abnormal values exclusion (LAVE) method. As of July 2015, 12 countries had completed their study with total recruitment of 13,386 healthy adults. 25 analytes were measured chemically and 25 immunologically. A serum panel with assigned values was measured by all laboratories. RIs were derived by parametric and nonparametric methods. The effect of LAVE methods is prominent in analytes which reflect nutritional status, inflammation and muscular exertion, indicating that inappropriate results are frequent in any country. The validity of the parametric method was confirmed by the presence of analyte-specific distribution patterns and successful Gaussian transformation using the modified Box-Cox formula in all countries. After successful alignment of RVs based on the panel test results, nearly half the analytes showed variable degrees of between-country differences. This finding, however, requires confirmation after adjusting for BMI and other sources of variation. The results are reported in the second part of this paper. The collaborative study enabled us to evaluate rational methods for deriving RIs and comparing the RVs based on real-world datasets obtained in a harmonized manner. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
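
    As a simplified illustration of the parametric route the study validates, using scipy's standard Box-Cox rather than the paper's modified formula, and synthetic data: transform toward normality, take mean ± 1.96 SD, and back-transform.

      import numpy as np
      from scipy import stats
      from scipy.special import inv_boxcox

      rng = np.random.default_rng(7)
      values = rng.lognormal(mean=3.0, sigma=0.25, size=500)  # skewed analyte

      transformed, lam = stats.boxcox(values)        # power-normalising transform
      m, s = transformed.mean(), transformed.std(ddof=1)
      lo, hi = inv_boxcox(m - 1.96 * s, lam), inv_boxcox(m + 1.96 * s, lam)
      print(f"parametric 95% reference interval: {lo:.1f} - {hi:.1f}")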

  18. Comparison of force fields and calculation methods for vibration intervals of isotopic H+3 molecules

    International Nuclear Information System (INIS)

    Carney, G.D.; Adler-Golden, S.M.; Lesseski, D.C.

    1986-01-01

    This paper reports (a) improved values for low-lying vibration intervals of H₃⁺, H₂D⁺, D₂H⁺ and D₃⁺ calculated using the variational method and Simons--Parr--Finlan representations of the Carney--Porter and Dykstra--Swope ab initio H₃⁺ potential energy surfaces, (b) quartic normal coordinate force fields for isotopic H₃⁺ molecules, (c) comparisons of variational and second-order perturbation theory, and (d) convergence properties of the Lai--Hagstrom internal coordinate vibrational Hamiltonian. Standard deviations between experimental and ab initio fundamental vibration intervals of H₃⁺, H₂D⁺, D₂H⁺ and D₃⁺ for these potential surfaces are 6.9 (Carney--Porter) and 1.2 cm⁻¹ (Dykstra--Swope). The standard deviations between perturbation theory and exact variational fundamentals are 5 and 10 cm⁻¹ for the respective surfaces. The internal coordinate Hamiltonian is found to be less efficient than the previously employed 't' coordinate Hamiltonian for these molecules, except in the case of H₂D⁺

  19. Health-Care Waste Treatment Technology Selection Using the Interval 2-Tuple Induced TOPSIS Method

    Directory of Open Access Journals (Sweden)

    Chao Lu

    2016-06-01

    Health-care waste (HCW) management is a major challenge for municipalities, particularly in the cities of developing nations. Selecting the best treatment technology for HCW can be regarded as a complex multi-criteria decision making (MCDM) issue involving a number of alternatives and multiple evaluation criteria. In addition, decision makers tend to express their personal assessments via multi-granularity linguistic term sets because of different backgrounds and knowledge, some of which may be imprecise, uncertain and incomplete. Therefore, the main objective of this study is to propose a new hybrid decision making approach combining interval 2-tuple induced distance operators with the technique for order preference by similarity to an ideal solution (TOPSIS) for tackling HCW treatment technology selection problems with linguistic information. The proposed interval 2-tuple induced TOPSIS (ITI-TOPSIS) can not only model the uncertainty and diversity of the assessment information given by decision makers, but also reflect the complex attitudinal characters of decision makers and provide much more complete information for the selection of the optimum disposal alternative. Finally, an empirical example in Shanghai, China is provided to illustrate the proposed decision making method, and the results show that the ITI-TOPSIS proposed in this paper can solve the problem of HCW treatment technology selection effectively.

  20. Semiorders, Intervals Orders and Pseudo Orders Preference Structures in Multiple Criteria Decision Aid Methods

    Directory of Open Access Journals (Sweden)

    Fernández Barberis, Gabriela

    2013-06-01

    During the last decades, a considerable number of Multicriteria Decision Aid (MCDA) methods have been proposed to help the decision maker select the best compromise alternative. Meanwhile, the PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluations) family of outranking methods and their applications have attracted much attention from academics and practitioners. In this paper, an extension of these methods is presented, consisting of analyzing their functioning under New Preference Structures (NPS). The preference structures taken into account are semiorders, interval orders and pseudo orders. These structures markedly improve the modelization, as they give more flexibility, amplitude and certainty to the formulation of preferences, since they abandon the Complete Transitive Comparability Axiom of Preferences and substitute it with the Partial Comparability Axiom of Preferences. Noteworthy are the introduction of incomparability relations into the analysis and the consideration of preference structures that accept the intransitivity of indifference. The NPS incorporation is carried out in the three phases that the PROMETHEE methodology comprises: preference structure enrichment, dominance relation enrichment and exploitation of the outranking relation for decision aid, in order finally to solve the alternative-ranking problem through PROMETHEE I or PROMETHEE II, according to whether a partial or a complete ranking, respectively, is required under the NPS.

  1. Reduction Method for Active Distribution Networks

    DEFF Research Database (Denmark)

    Raboni, Pietro; Chen, Zhe

    2013-01-01

    On-line security assessment is traditionally performed by Transmission System Operators at the transmission level, ignoring the effective response of distributed generators and small loads. On the other hand, the computation time and the amount of real-time data required to also include distribution networks would be too large. In this paper, an adaptive aggregation method for subsystems with power electronic interfaced generators and voltage dependent loads is proposed. With this tool it may be relatively easier to include distribution networks in security assessment. The method is validated by comparing the results obtained in PSCAD® with the detailed network model and with the reduced one. Moreover, the control schemes of a wind turbine and a photovoltaic plant included in the detailed network model are described.

  2. Method for calculating the variance and prediction intervals for biomass estimates obtained from allometric equations

    CSIR Research Space (South Africa)

    Kirton, A

    2010-08-01

    The report describes how the variance and prediction intervals (confidence intervals for predicted values) for biomass estimates obtained from allometric equations can be calculated, using an example of estimating tree biomass from stem diameter. It explains how to deal with relationships which are in the power function form - a common form...
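
    For a power-function relationship B = a·D^b, the usual route is a linear fit on log-transformed data, a prediction interval in log space, and back-transformation; a hedged sketch on invented data (the report's own treatment may differ in detail, e.g. bias correction):

      import numpy as np
      from scipy import stats

      # illustrative stem diameters (cm) and biomasses (kg), not the report's data
      d = np.array([5, 8, 12, 15, 20, 25, 30, 35, 40, 45], dtype=float)
      b = 0.08 * d ** 2.4 * np.exp(np.random.default_rng(3).normal(0, 0.15, d.size))

      X, Y = np.log(d), np.log(b)
      n = X.size
      slope, intercept, *_ = stats.linregress(X, Y)
      s = np.sqrt(((Y - (intercept + slope * X)) ** 2).sum() / (n - 2))

      def prediction_interval(d_new, conf=0.95):
          x0 = np.log(d_new)
          se = s * np.sqrt(1 + 1 / n + (x0 - X.mean()) ** 2 / ((X - X.mean()) ** 2).sum())
          t = stats.t.ppf(1 - (1 - conf) / 2, df=n - 2)
          centre = intercept + slope * x0
          return np.exp(centre - t * se), np.exp(centre + t * se)  # back-transform

      print(prediction_interval(22.0))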

  3. Supplier evaluation in manufacturing environment using compromise ranking method with grey interval numbers

    Directory of Open Access Journals (Sweden)

    Prasenjit Chatterjee

    2012-04-01

    Evaluation of a proper supplier for manufacturing organizations is one of the most challenging problems in a real-time manufacturing environment, owing to the wide variety of customer demands. It has become more and more complicated to meet the challenges of international competitiveness, as decision makers need to assess a wide range of alternative suppliers based on a set of conflicting criteria. Thus, the main objective of supplier selection is to select a highly capable supplier through which all the set goals regarding the purchasing and manufacturing activity can be achieved. For these reasons, supplier selection has received considerable attention from academicians and researchers. This paper presents a combined multi-criteria decision making methodology for supplier evaluation for given industrial applications. The proposed methodology is based on a compromise ranking method combined with grey interval numbers, considering different cardinal and ordinal criteria and their relative importance. A 'supplier selection index' is also proposed to help evaluate and rank the alternative suppliers. Two examples are illustrated to demonstrate the potentiality and applicability of the proposed method.

  4. Multidrug-resistant tuberculosis treatment failure detection depends on monitoring interval and microbiological method

    Science.gov (United States)

    White, Richard A.; Lu, Chunling; Rodriguez, Carly A.; Bayona, Jaime; Becerra, Mercedes C.; Burgos, Marcos; Centis, Rosella; Cohen, Theodore; Cox, Helen; D'Ambrosio, Lia; Danilovitz, Manfred; Falzon, Dennis; Gelmanova, Irina Y.; Gler, Maria T.; Grinsdale, Jennifer A.; Holtz, Timothy H.; Keshavjee, Salmaan; Leimane, Vaira; Menzies, Dick; Milstein, Meredith B.; Mishustin, Sergey P.; Pagano, Marcello; Quelapio, Maria I.; Shean, Karen; Shin, Sonya S.; Tolman, Arielle W.; van der Walt, Martha L.; Van Deun, Armand; Viiklepp, Piret

    2016-01-01

    Debate persists about monitoring method (culture or smear) and interval (monthly or less frequently) during treatment for multidrug-resistant tuberculosis (MDR-TB). We analysed existing data and estimated the effect of monitoring strategies on timing of failure detection. We identified studies reporting microbiological response to MDR-TB treatment and solicited individual patient data from authors. Frailty survival models were used to estimate pooled relative risk of failure detection in the last 12 months of treatment; hazard of failure using monthly culture was the reference. Data were obtained for 5410 patients across 12 observational studies. During the last 12 months of treatment, failure detection occurred in a median of 3 months by monthly culture; failure detection was delayed by 2, 7, and 9 months relying on bimonthly culture, monthly smear and bimonthly smear, respectively. Risk (95% CI) of failure detection delay resulting from monthly smear relative to culture is 0.38 (0.34–0.42) for all patients and 0.33 (0.25–0.42) for HIV-co-infected patients. Failure detection is delayed by reducing the sensitivity and frequency of the monitoring method. Monthly monitoring of sputum cultures from patients receiving MDR-TB treatment is recommended. Expanded laboratory capacity is needed for high-quality culture, and for smear microscopy and rapid molecular tests. PMID:27587552

  5. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks

    OpenAIRE

    Chaoyang Shi; Bi Yu Chen; William H. K. Lam; Qingquan Li

    2017-01-01

    Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are f...

  6. Assessment of nuclear concentration uncertainties in isotope kinetics by the interval calculation method

    International Nuclear Information System (INIS)

    Kolesov, V.; Kamaev, D.; Hitrick, D.; Ukraitsev, V.

    2008-01-01

    Basically, the problem of the dependence of fuel cycle characteristic uncertainties on group constant and decay parameter uncertainties can be solved (to some extent) by sensitivity analysis. However, such a procedure is rather labour-consuming and does not give guaranteed estimates for the derived parameters, since, strictly speaking, it works only for small deviations, being based on linearization of the mathematical problem. The suggested and realized technique for estimating fuel cycle characteristic uncertainties is based on so-called interval analysis (or interval calculations). The basic advantage of this technique is the opportunity to derive guaranteed estimates. In practical terms, this solution consists in introducing a new special data type, interval data, into the codes and defining all arithmetic operations on it. Interval-type data are now in real practical use, and there are many realizations of interval arithmetic implemented in different ways. (orig.)
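
    The core idea, a data type that carries guaranteed lower and upper bounds through every arithmetic operation, fits in a few lines; a minimal sketch (production implementations also control the rounding direction):

      from dataclasses import dataclass

      @dataclass(frozen=True)
      class Interval:
          lo: float
          hi: float

          def __add__(self, other):
              return Interval(self.lo + other.lo, self.hi + other.hi)

          def __sub__(self, other):
              return Interval(self.lo - other.hi, self.hi - other.lo)

          def __mul__(self, other):
              p = (self.lo * other.lo, self.lo * other.hi,
                   self.hi * other.lo, self.hi * other.hi)
              return Interval(min(p), max(p))

      # e.g. an uncertain group constant times an uncertain flux
      sigma, phi = Interval(0.95, 1.05), Interval(3.8, 4.2)
      print(sigma * phi)   # guaranteed enclosure of all possible products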

  7. A new view to uncertainty in Electre III method by introducing interval numbers

    Directory of Open Access Journals (Sweden)

    Mohammad Kazem Sayyadi

    2012-07-01

    Full Text Available The Electre III is a widely accepted multi attribute decision making model, which takes into account the uncertainty and vagueness. Uncertainty concept in Electre III is introduced by indifference, preference and veto thresholds, but sometimes determining their accurate values can be very hard. In this paper we represent the values of performance matrix as interval numbers and we define the links between interval numbers and concordance matrix .Without changing the concept of concordance, in our propose concept, Electre III is usable in decision making problems with interval numbers.

  8. A parallel optimization method for product configuration and supplier selection based on interval

    Science.gov (United States)

    Zheng, Jian; Zhang, Meng; Li, Guoxi

    2017-06-01

    In the process of design and manufacturing, product configuration is an important approach to product development, and supplier selection is an essential component of supply chain management. To reduce the risk of procurement and maximize the profits of enterprises, this study proposes to combine product configuration and supplier selection and to express the multiple uncertainties as interval numbers. An integrated optimization model of interval product configuration and supplier selection was established, and NSGA-II was put forward to locate the Pareto-optimal solutions of the interval multiobjective optimization model.

  9. Avionics Configuration Assessment for Flightdeck Interval Management: A Comparison of Avionics and Notification Methods

    Science.gov (United States)

    Latorella, Kara A.

    2015-01-01

    Flightdeck Interval Management is one of the NextGen operational concepts that FAA is sponsoring to realize requisite National Airspace System (NAS) efficiencies. Interval Management will reduce variability in temporal deviations at a position, and thereby reduce the buffers typically applied by controllers - resulting in higher arrival rates and more efficient operations. Ground software generates a strategic schedule of aircraft pairs. Air Traffic Control (ATC) provides an IM clearance with the IM spacing objective (i.e., the TTF, and at which point to achieve the appropriate spacing from this aircraft) to the IM aircraft. Pilots must dial FIM speeds into the speed window on the Mode Control Panel in a timely manner, and attend to deviations between actual speed and the instantaneous FIM profile speed. Here, the crew is assumed to be operating the aircraft with autothrottles on, with autopilot engaged, and the autoflight system in Vertical Navigation (VNAV) and Lateral Navigation (LNAV); and is responsible for safely flying the aircraft while maintaining situation awareness of their ability to follow FIM speed commands and to achieve the FIM spacing goal. The objective of this study is to examine whether three Notification Methods and four Avionics Conditions affect pilots' performance, ratings on constructs associated with performance (workload, situation awareness), or opinions on acceptability. Three Notification Methods were examined (alternative combinations of visual and aural alerts that notified pilots of the onset of a speed target and of conformance deviations from the required speed profile, and that reminded them if they failed to enter the speed within 10 seconds). These Notification Methods were: VVV (visuals for all three events), VAV (visuals for all three events, plus an aural for speed conformance deviations), and AAA (visual indications and the same aural to indicate all three of these events). Avionics Conditions were defined by the instrumentation (and location) used to

  10. A New Method of Multiattribute Decision-Making Based on Interval-Valued Hesitant Fuzzy Soft Sets and Its Application

    Directory of Open Access Journals (Sweden)

    Yan Yang

    2017-01-01

    Full Text Available Combining interval-valued hesitant fuzzy soft sets (IVHFSSs) and a new comparative law, we propose a new method which can effectively solve multiattribute decision-making (MADM) problems. Firstly, a characteristic function of two interval values and a new comparative law for interval-valued hesitant fuzzy elements (IVHFEs) based on the possibility degree are proposed. Then, we give two important definitions for IVHFSSs, the interval-valued hesitant fuzzy soft quasi subset and soft quasi equality, based on the new comparative law. Finally, an algorithm is presented to solve MADM problems. We also use the proposed method to evaluate the importance of major components of the well drilling mud pump.
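
    The comparative law itself is not reproduced in the abstract, but a widely used possibility-degree formula for comparing two plain intervals conveys the idea. A minimal Python sketch of that standard textbook formulation (not necessarily the paper's exact law):

```python
def possibility_degree(a, b):
    """Possibility degree P(a >= b) for intervals a = (a_lo, a_hi),
    b = (b_lo, b_hi); returns a value in [0, 1]."""
    a_lo, a_hi = a
    b_lo, b_hi = b
    span = (a_hi - a_lo) + (b_hi - b_lo)
    if span == 0:                       # both intervals are crisp numbers
        return 1.0 if a_lo >= b_lo else 0.0
    return max(1.0 - max((b_hi - a_lo) / span, 0.0), 0.0)

print(possibility_degree((2, 5), (3, 4)))   # partial overlap: value in (0, 1)
```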

  11. Multipath interference test method for distributed amplifiers

    Science.gov (United States)

    Okada, Takahiro; Aida, Kazuo

    2005-12-01

    A method for testing distributed amplifiers is presented; the multipath interference (MPI) is detected as a beat spectrum between the multipath signal and the direct signal using a binary frequency-shift keying (FSK) test signal. The lightwave source is composed of a DFB-LD that is directly modulated by a pulse stream passing through an equalizer, and emits an FSK signal with a frequency deviation of about 430 MHz at a repetition rate of 80-100 kHz. The receiver consists of a photodiode and an electrical spectrum analyzer (ESA). The base-band power spectrum peak that appears at the frequency of the FSK frequency deviation can be converted to the amount of MPI using a calibration chart. The test method has improved the minimum detectable MPI to as low as -70 dB, compared with -50 dB for the conventional test method. The detailed design and performance of the proposed method are discussed, including the MPI simulator for the calibration procedure, computer simulations for evaluating the error caused by the FSK repetition rate and the length of the fiber under test, and experiments on single-mode fibers and a distributed Raman amplifier.

  12. Power-law inter-spike interval distributions infer a conditional maximization of entropy in cortical neurons.

    Directory of Open Access Journals (Sweden)

    Yasuhiro Tsubo

    Full Text Available The brain is considered to use a relatively small amount of energy for its efficient information processing. Under a severe restriction on energy consumption, the maximization of mutual information (MMI), which is adequate for designing artificial processing machines, may not suit the brain. The MMI attempts to send information as accurately as possible, and this usually requires a sufficient energy supply for establishing clearly discretized communication bands. Here, we derive an alternative hypothesis for the neural code from neuronal activities recorded juxtacellularly in the sensorimotor cortex of behaving rats. Our hypothesis states that in vivo cortical neurons maximize the entropy of neuronal firing under two constraints, one limiting the energy consumption (as assumed previously) and one restricting the uncertainty in output spike sequences at a given firing rate. Thus, the conditional maximization of firing-rate entropy (CMFE) solves a tradeoff between the energy cost and noise in the neuronal response. In short, the CMFE sends a rich variety of information through broader communication bands (i.e., widely distributed firing rates) at the cost of accuracy. We demonstrate that the CMFE is reflected in the long-tailed, typically power-law, distributions of inter-spike intervals obtained for the majority of the recorded neurons. In other words, the power-law tails are more consistent with the CMFE than with the MMI. Thus, we propose a mathematical principle by which cortical neurons may represent information about synaptic input in their output spike trains.

  13. Optimal methods for fitting probability distributions to propagule retention time in studies of zoochorous dispersal.

    Science.gov (United States)

    Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi

    2016-02-01

    Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. We
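
    As a rough illustration of the best-performing strategy (fitting the parametric CDF to the cumulative distribution of observed values by least squares), the Python sketch below fits a lognormal to hypothetical interval-censored retention counts; all numbers are invented for illustration:

```python
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

# Hypothetical sampling-interval upper bounds (hours) and counts of
# propagules recovered within each interval.
bounds = np.array([1, 2, 4, 8, 16, 32], dtype=float)
counts = np.array([5, 14, 22, 18, 9, 2], dtype=float)

# Empirical cumulative proportion retained up to each interval bound.
ecdf = np.cumsum(counts) / counts.sum()

def lognorm_cdf(t, sigma, scale):
    return stats.lognorm.cdf(t, s=sigma, scale=scale)

# Non-linear least-squares fit of the lognormal CDF to the cumulative
# distribution, mirroring the strategy found most accurate above.
(sigma, scale), _ = curve_fit(lognorm_cdf, bounds, ecdf, p0=(1.0, 4.0))
print(f"fitted lognormal: sigma = {sigma:.3f}, median = {scale:.2f} h")
```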

  14. Structural hybrid reliability index and its convergent solving method based on random–fuzzy–interval reliability model

    OpenAIRE

    Hai An; Ling Zhou; Hui Sun

    2016-01-01

    Aiming to resolve the problems of a variety of uncertainty variables that coexist in the engineering structure reliability analysis, a new hybrid reliability index to evaluate structural hybrid reliability, based on the random–fuzzy–interval model, is proposed in this article. The convergent solving method is also presented. First, the truncated probability reliability model, the fuzzy random reliability model, and the non-probabilistic interval reliability model are introduced. Then, the new...

  15. NONLINEAR ASSIGNMENT-BASED METHODS FOR INTERVAL-VALUED INTUITIONISTIC FUZZY MULTI-CRITERIA DECISION ANALYSIS WITH INCOMPLETE PREFERENCE INFORMATION

    OpenAIRE

    TING-YU CHEN

    2012-01-01

    In the context of interval-valued intuitionistic fuzzy sets, this paper develops nonlinear assignment-based methods to manage imprecise and uncertain subjective ratings under incomplete preference structures and thereby determines the optimal ranking order of the alternatives for multiple criteria decision analysis. By comparing each interval-valued intuitionistic fuzzy number's score function, accuracy function, membership uncertainty index, and hesitation uncertainty index, a ranking proced...

  16. An Integrated Method for Interval Multi-Objective Planning of a Water Resource System in the Eastern Part of Handan

    Directory of Open Access Journals (Sweden)

    Meiqin Suo

    2017-07-01

    Full Text Available In this study, an integrated solving method is proposed for interval multi-objective planning. The proposed method is based on fuzzy linear programming and an interactive two-step method. It can not only provide objectively optimal values for multiple objectives at the same time, but also effectively offer a globally optimal interval solution. Meanwhile, the degree of satisfaction related to the different objective functions is obtained. The integrated solving method for interval multi-objective planning is then applied to a case study of planning the joint scheduling of multiple water resources under uncertainty in the eastern part of Handan, China. The solutions obtained are useful for decision makers in easing the contradiction between the supply of multiple water resources and the demand from different water users. Moreover, the method can provide the optimal comprehensive benefits of economy, society, and the environment.

  17. Extension of a chaos control method to unstable trajectories on infinite- or finite-time intervals: Experimental verification

    International Nuclear Information System (INIS)

    Yagasaki, Kazuyuki

    2007-01-01

    In experiments on single and coupled pendula, we demonstrate the effectiveness of a new control method based on dynamical systems theory for stabilizing unstable aperiodic trajectories defined on infinite- or finite-time intervals. The basic idea of the method is similar to that of the OGY method, which is a well-known chaos control method. Extended concepts of the stable and unstable manifolds of hyperbolic trajectories are used here

  18. The Rational Third-Kind Chebyshev Pseudospectral Method for the Solution of the Thomas-Fermi Equation over Infinite Interval

    Directory of Open Access Journals (Sweden)

    Majid Tavassoli Kajani

    2013-01-01

    Full Text Available We propose a pseudospectral method for solving the Thomas-Fermi equation, a nonlinear ordinary differential equation on a semi-infinite interval. The approach is based on the rational third-kind Chebyshev pseudospectral method, which is in fact a combination of Tau and collocation methods. The method reduces the solution of this problem to the solution of a system of algebraic equations. Comparison with some numerical solutions shows that the present solution is highly accurate.

  19. Point and interval forecasts of mortality rates and life expectancy: A comparison of ten principal component methods

    Directory of Open Access Journals (Sweden)

    Han Lin Shang

    2011-07-01

    Full Text Available Using the age- and sex-specific data of 14 developed countries, we compare the point and interval forecast accuracy and bias of ten principal component methods for forecasting mortality rates and life expectancy. The ten methods are variants and extensions of the Lee-Carter method. Based on one-step forecast errors, the weighted Hyndman-Ullah method provides the most accurate point forecasts of mortality rates and the Lee-Miller method is the least biased. For the accuracy and bias of life expectancy, the weighted Hyndman-Ullah method performs the best for female mortality and the Lee-Miller method for male mortality. While all methods underestimate variability in mortality rates, the more complex Hyndman-Ullah methods are more accurate than the simpler methods. The weighted Hyndman-Ullah method provides the most accurate interval forecasts for mortality rates, while the robust Hyndman-Ullah method provides the best interval forecast accuracy for life expectancy.
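
    All ten methods extend the Lee-Carter model, whose core fit and forecast are compact enough to sketch. The Python snippet below is a minimal illustration only (log rates, SVD fit, random walk with drift); the compared variants add weighting, robustness and smoothing, and interval forecasts would further require the variances of the drift and fitting errors:

```python
import numpy as np

def lee_carter_forecast(log_m, horizon):
    """Minimal Lee-Carter fit plus random-walk-with-drift forecast.

    log_m: (ages x years) matrix of log central death rates.
    Returns point forecasts of the log rates for `horizon` future years.
    """
    a = log_m.mean(axis=1)                       # average age pattern
    U, s, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
    b, k = U[:, 0], s[0] * Vt[0]                 # age response, period index
    b, k = b / b.sum(), k * b.sum()              # usual identifiability scaling
    drift = (k[-1] - k[0]) / (len(k) - 1)        # random walk with drift
    k_future = k[-1] + drift * np.arange(1, horizon + 1)
    return a[:, None] + np.outer(b, k_future)
```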

  20. A Method on the Item Investment Risk Interval Decision-making of Processing Ranking Style

    Institute of Scientific and Technical Information of China (English)

    CHEN Li-wen

    2002-01-01

    In this paper, on the basis of the defects of riskful-type and indefinite-type decisions, the concept of the probability-scheduling type of item investment decision is given, and a linear programming model and its solution are worked out. The feasibility of a probability-scheduling type item investment plan is studied by applying the properties of interval arithmetic.

  1. Probability evolution method for exit location distribution

    Science.gov (United States)

    Zhu, Jinjie; Chen, Zhen; Liu, Xianbin

    2018-03-01

    The exit problem in the framework of large deviation theory has been a hot topic in the past few decades. The most probable escape path in the weak-noise limit has been clarified by the Freidlin-Wentzell action functional. However, noise in real physical systems cannot be arbitrarily small, and noise of finite strength may induce nontrivial phenomena, such as noise-induced shift and noise-induced saddle-point avoidance. Traditional Monte Carlo simulation of noise-induced escape takes an exponentially long time as the noise approaches zero, and the majority of that time is wasted on uninteresting wandering around the attractors. In this paper, a new method is proposed to decrease the escape simulation time by an exponentially large factor by introducing a series of interfaces and applying reinjection on them. The method can be used to calculate the exit location distribution. It is verified on two classical examples and compared with theoretical predictions. The results show that the method performs well for weak noise but may induce certain deviations for large noise. Finally, some possible ways to improve the method are discussed.

  2. Method to measure autonomic control of cardiac function using time interval parameters from impedance cardiography

    International Nuclear Information System (INIS)

    Meijer, Jan H; Boesveldt, Sanne; Elbertse, Eskeline; Berendse, H W

    2008-01-01

    The time difference between the electrocardiogram and impedance cardiogram can be considered as a measure for the time delay between the electrical and mechanical activities of the heart. This time interval, characterized by the pre-ejection period (PEP), is related to the sympathetic autonomous nervous control of cardiac activity. PEP, however, is difficult to measure in practice. Therefore, a novel parameter, the initial systolic time interval (ISTI), is introduced to provide a more practical measure. The use of ISTI instead of PEP was evaluated in three groups: young healthy subjects, patients with Parkinson's disease, and a group of elderly, healthy subjects of comparable age. PEP and ISTI were studied under two conditions: at rest and after an exercise stimulus. Under both conditions, PEP and ISTI behaved largely similarly in the three groups and were significantly correlated. It is concluded that ISTI can be used as a substitute for PEP and, therefore, to evaluate autonomic neuropathy both in clinical and extramural settings. Measurement of ISTI can also be used to non-invasively monitor the electromechanical cardiac time interval, and the associated autonomic activity, under physiological circumstances

  3. Analysis of Low Frequency Oscillation Using the Multi-Interval Parameter Estimation Method on a Rolling Blackout in the KEPCO System

    Directory of Open Access Journals (Sweden)

    Kwan-Shik Shim

    2017-04-01

    Full Text Available This paper describes a multiple time interval (“multi-interval”) parameter estimation method. The multi-interval parameter estimation method estimates parameters from a new multi-interval prediction error polynomial that can simultaneously consider multiple time intervals. The roots of the multi-interval prediction error polynomial include the effect of each time interval, and the important modes can be estimated by solving one polynomial for multiple time intervals or signals. The algorithm of the multi-interval parameter estimation method proposed in this paper is applied to a test function and to data measured from a PMU (phasor measurement unit) installed in the KEPCO (Korea Electric Power Corporation) system. The results confirm that the proposed multi-interval parameter estimation method accurately and reliably estimates important parameters.

  4. TL glow ratios at different temperature intervals of integration in thermoluminescence method. Comparison of Japanese standard (MHLW notified) method with CEN standard methods

    International Nuclear Information System (INIS)

    Todoriki, Setsuko; Saito, Kimie; Tsujimoto, Yuka

    2008-01-01

    The effect of the temperature interval used for integrating TL intensities on the TL glow ratio was examined by comparing the notified method of the Ministry of Health, Labour and Welfare (MHLW method) with EN1788. Two kinds of un-irradiated geological standard rock and three kinds of spices (black pepper, turmeric, and oregano) irradiated at 0.3 kGy or 1.0 kGy were subjected to TL analysis. Although the TL glow ratio of the andesite exceeded 0.1 according to the calculation of the MHLW notified method (integration interval 70-490 degC), the maximum of the first glow was observed at 300 degC or more, attributable to the influence of natural radioactivity and thus distinguishable from food irradiation. When the integration interval was set to 166-227 degC according to EN1788, the TL glow ratios became remarkably smaller than 0.1, and the evaluation of the un-irradiated samples became clearer. For the spices, the TL glow ratios by the MHLW notified method fell below 0.1 in un-irradiated samples and exceeded 0.1 in irradiated ones. Moreover, the Glow1 maximum temperatures of the irradiated samples were observed in the range 168-196 degC, while those of un-irradiated samples were 258 degC or more. Therefore, all samples were correctly judged by the criteria of the MHLW method. However, with the integration temperature range defined by EN1788, the TL glow ratio of un-irradiated samples became remarkably smaller than with the MHLW method, and the discrimination of irradiated from un-irradiated samples became clearer. (author)

  5. A NEW METHOD FOR CONSTRUCTING CONFIDENCE INTERVAL FOR CPM BASED ON FUZZY DATA

    Directory of Open Access Journals (Sweden)

    Bahram Sadeghpour Gildeh

    2011-06-01

    Full Text Available A measurement control system ensures that measuring equipment and measurement processes are fit for their intended use, which is important in achieving product quality objectives. In most real-life applications, the observations are fuzzy. In some cases the specification limits (SLs) are not precise numbers and are expressed in fuzzy terms, so that the classical capability indices cannot be applied. In this paper we obtain a 100(1 - α)% fuzzy confidence interval for the Cpm fuzzy process capability index, where instead of precise quality we have two membership functions for the specification limits.

  6. [Confidence interval or p-value--similarities and differences between two important methods of statistical inference of quantitative studies].

    Science.gov (United States)

    Harari, Gil

    2014-01-01

    Statistical significance, also known as the p-value, and the CI (confidence interval) are common statistical measures and are essential for the statistical analysis of studies in medicine and the life sciences. These measures provide complementary information about the statistical probability and conclusions regarding the clinical significance of study findings. This article is intended to describe the methodologies, compare the methods, assess their suitability for the different needs of study-results analysis, and explain the situations in which each method should be used.

  7. A generic method for the evaluation of interval type-2 fuzzy linguistic summaries.

    Science.gov (United States)

    Boran, Fatih Emre; Akay, Diyar

    2014-09-01

    Linguistic summarization has turned out to be an important knowledge discovery technique, providing the most relevant natural-language sentences in a human-consistent manner. While many studies on linguistic summarization have handled ordinary fuzzy sets [type-1 fuzzy sets (T1FSs)] for modeling words, only a few of them have dealt with interval type-2 fuzzy sets (IT2FSs), even though IT2FSs are better capable of handling the uncertainties associated with words. Furthermore, the existing studies work with the scalar-cardinality-based degree of truth, which might lead to inconsistency in the evaluation of interval type-2 fuzzy (IT2F) linguistic summaries. In this paper, to overcome this shortcoming, we propose a novel probabilistic degree of truth for evaluating IT2F linguistic summaries in the form of type-I and type-II quantified sentences. We also extend the properties that should be fulfilled by any degree of truth for linguistic summarization with T1FSs to the IT2F environment. We not only prove that our probabilistic degree of truth satisfies the given properties, but also illustrate by examples that it provides more consistent results than the existing degree of truth in the literature. Furthermore, we carry out an application on linguistic summarization of time series data of the Europe Brent Spot Price, comparing the results achieved with our approach and with the existing degree of truth in the literature.

  8. Effect of insertion method and postinsertion time interval prior to force application on the removal torque of orthodontic miniscrews.

    Science.gov (United States)

    Sharifi, Maryam; Ghassemi, Amirreza; Bayani, Shahin

    2015-01-01

    The success of orthodontic miniscrews in providing stable anchorage depends on their stability. The purpose of this study was to assess the effect of the insertion method and the postinsertion time interval on the removal torque of miniscrews as an indicator of their stability. Seventy-two miniscrews (Jeil Medical) were inserted into the femoral bones of three male German Shepherd dogs and assigned to nine groups of eight miniscrews. Three insertion methods, including hand-driven, motor-driven with 5.0-Ncm insertion torque, and motor-driven with 20.0-Ncm insertion torque, were tested, as were three time intervals of 0, 2, and 6 weeks between miniscrew insertion and removal. Removal torque values were measured in newton centimeters by a removal torque tester (IMADA). Data were analyzed by one-way analysis of variance (ANOVA) followed by the Bonferroni post hoc test at a .05 level of significance. A miniscrew survival rate of 93% was observed in this study. The highest mean removal torque among the three postinsertion intervals (2.4 ± 0.59 Ncm) was obtained immediately after miniscrew insertion, with a statistically significant difference from the other two time intervals (P < .05). The highest removal torque values were thus obtained immediately after insertion.

  9. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks

    Directory of Open Access Journals (Sweden)

    Chaoyang Shi

    2017-12-01

    Full Text Available Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.

  10. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks.

    Science.gov (United States)

    Shi, Chaoyang; Chen, Bi Yu; Lam, William H K; Li, Qingquan

    2017-12-06

    Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.
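
    The fusion step relies on Dempster's rule of combination, which is easy to state in isolation. The Python sketch below combines two hypothetical pieces of evidence about which travel-time bin a path falls in; it illustrates only the generic rule, not the paper's full pipeline of spatial imputation and optimization-based updating:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions given as dicts mapping
    frozenset focal elements (sets of travel-time bins) to masses."""
    combined, conflict = {}, 0.0
    for (A, wa), (B, wb) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                  # mass on empty intersections
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {A: w / (1.0 - conflict) for A, w in combined.items()}

# Hypothetical evidence from a point detector and an interval detector:
m_point = {frozenset({"10-15"}): 0.6, frozenset({"10-15", "15-20"}): 0.4}
m_interval = {frozenset({"15-20"}): 0.5, frozenset({"10-15", "15-20"}): 0.5}
print(dempster_combine(m_point, m_interval))
```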

  11. A new method for assessing judgmental distributions

    NARCIS (Netherlands)

    Moors, J.J.A.; Schuld, M.H.; Mathijssen, A.C.A.

    1995-01-01

    For a number of statistical applications, subjective estimates of some distributional parameters - or even of complete densities - are needed. The literature agrees that it is wise behaviour to ask only for some quantiles of the distribution; from these, the desired quantities are extracted. Quite a lot

  12. Review of Congestion Management Methods for Distribution Networks with High Penetration of Distributed Energy Resources

    DEFF Research Database (Denmark)

    Huang, Shaojun; Wu, Qiuwei; Liu, Zhaoxi

    2014-01-01

    This paper reviews the existing congestion management methods for distribution networks with high penetration of DERs documented in the recent research literature. The congestion management methods reviewed can be grouped into two categories – market methods and direct control methods. The market methods consist of dynamic tariff, distribution capacity market, shadow price and flexible service market. The direct control methods comprise network reconfiguration, reactive power control and active power control. Based on the review of the existing methods...

  13. Allocation of ESS by interval optimization method considering impact of ship swinging on hybrid PV/diesel ship power system

    International Nuclear Information System (INIS)

    Wen, Shuli; Lan, Hai; Hong, Ying-Yi; Yu, David C.; Zhang, Lijun; Cheng, Peng

    2016-01-01

    Highlights: • An uncertainty model of PV generation on board is developed based on experiments. • The moving and swinging of the ship are considered in the optimal ESS sizing problem. • The optimal size of the ESS in a hybrid PV/diesel/ESS ship power system is obtained by the interval optimization method. • Different cases are studied to show the significance of considering the swinging effects on the cost. - Abstract: Owing to the low efficiency of traditional ships and the serious environmental pollution that they cause, the use of solar energy and an energy storage system (ESS) in a ship's power system is increasingly attracting attention. However, the swinging of a ship raises crucial challenges, associated with uncertainties in solar energy, in designing an optimal system for a large oil tanker. In this study, a series of experiments is performed to investigate the characteristics of a photovoltaic (PV) system on a moving ship. Based on the experimental results, an interval uncertainty model of on-board PV generation is established, which considers the effect of the swinging of the ship. Through the power balance equations, the outputs of the diesel generator and the ESS on a large oil tanker are also modeled as interval variables. An interval optimization method is developed to determine the optimal size of the ESS in this hybrid ship power system so as to reduce the fuel cost, the capital cost of the ESS, and emissions of greenhouse gases. Variations of the ship load are analyzed using a new method that takes five operating conditions into account. Several cases are compared in detail to demonstrate the effectiveness of the proposed algorithm.

  14. An accurate calibration method for high pressure vibrating tube densimeters in the density interval (700 to 1600) kg·m⁻³

    International Nuclear Information System (INIS)

    Sanmamed, Yolanda A.; Dopazo-Paz, Ana; Gonzalez-Salgado, Diego; Troncoso, Jacobo; Romani, Luis

    2009-01-01

    A calibration procedure of vibrating tube densimeters for the density measurement of liquids in the intervals (700 to 1600) kg·m⁻³, (283.15 to 323.15) K, and (0.1 to 60) MPa is presented. It is based on modeling the vibrating tube as a thick tube clamped at one end (cantilever) whose stress and thermal behaviour follows the ideas proposed in the Forced Path Mechanical Calibration (FPMC) model. Model parameters are determined using two calibration fluids with densities certified at atmospheric pressure (dodecane and tetrachloroethylene) and a third one with densities known as a function of pressure (water). The procedure is applied to the Anton Paar 512P densimeter, yielding density measurements with an expanded uncertainty of less than 0.2 kg·m⁻³ in the working intervals. This accuracy comes from the combination of several factors: the densimeter behaves linearly in the working density interval, the densities of both calibration fluids cover that interval and have a very low uncertainty, and the mechanical behaviour of the tube is well characterized by the considered model. The main application of this method is the precise measurement of high-density fluids, for which most calibration procedures are inaccurate.
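
    For context, the classical vibrating-tube working equation relates density to the squared oscillation period, rho = A*tau^2 + B, with A and B fixed by two reference fluids at each temperature and pressure; the FPMC approach refines this with a mechanical model of the tube. A minimal Python sketch of the classical two-fluid calibration, with purely hypothetical numbers:

```python
def calibrate(rho1, tau1, rho2, tau2):
    """Two-fluid calibration of a vibrating-tube densimeter, assuming the
    textbook relation rho = A * tau**2 + B at fixed T and p."""
    A = (rho1 - rho2) / (tau1**2 - tau2**2)
    B = rho1 - A * tau1**2
    return A, B

def density(tau, A, B):
    return A * tau**2 + B

# Hypothetical periods (s) for dodecane and tetrachloroethylene at 298.15 K:
A, B = calibrate(745.2, 2.6181e-3, 1614.2, 2.7435e-3)
print(density(2.6700e-3, A, B))   # density of an unknown sample, kg/m3
```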

  15. Structural hybrid reliability index and its convergent solving method based on random–fuzzy–interval reliability model

    Directory of Open Access Journals (Sweden)

    Hai An

    2016-08-01

    Full Text Available To address the coexistence of a variety of uncertainty variables in engineering structural reliability analysis, a new hybrid reliability index based on the random–fuzzy–interval model is proposed in this article to evaluate structural hybrid reliability, together with a convergent solving method. First, the truncated probability reliability model, the fuzzy random reliability model, and the non-probabilistic interval reliability model are introduced. Then, the definition of the new hybrid reliability index is presented based on the random–fuzzy–interval model. Furthermore, the calculation flowchart of the hybrid reliability index is presented, and the index is solved using a modified limit-step-length iterative algorithm that ensures convergence. The validity of the convergent algorithm for the hybrid reliability model is verified through calculation examples from the literature. In the end, a numerical example demonstrates that the hybrid reliability index is applicable to the wear reliability assessment of mechanisms, where truncated random variables, fuzzy random variables, and interval variables coexist. The demonstration also shows the good convergence of the iterative algorithm proposed in this article.

  16. Estimation of optimum time interval for neutron- γ discrimination by simplified digital charge collection method

    International Nuclear Information System (INIS)

    Singh, Harleen; Singh, Sarabjeet

    2014-01-01

    The discrimination of mixed radiation fields is of prime importance due to its application in neutron detection, which bears on radiation safety, nuclear material detection, etc. Liquid scintillators are among the most important radiation detectors for this purpose because in these detectors the decay of a neutron pulse is slower than that of a gamma pulse. Techniques like zero crossing and charge comparison are very popular and are implemented using analogue electronics. In recent years, owing to the availability of fast ADCs and FPGAs, digital methods for the discrimination of mixed-field radiation have been investigated. Some of the digital time-domain techniques developed are pulse gradient analysis (PGA), the simplified digital charge collection (SDCC) method, and the digital zero crossing method. The performance of these methods depends on the appropriate selection of the gate time over which the pulse is processed. In this paper, the SDCC method is investigated for a neutron-gamma mixed field. The main focus of the study is to determine the optimum gate time, which is very important in neutron-gamma discrimination analysis in a mixed radiation field. A comparison with the charge collection (CC) method is also investigated
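
    The abstract does not reproduce the SDCC formulas, but the underlying charge-comparison idea can be sketched: integrate the digitized pulse over a long gate and over its tail only, and use the tail-to-total ratio as the discrimination parameter, scanning the gate lengths to locate the optimum. A hedged Python sketch with hypothetical gate values:

```python
import numpy as np

def psd_ratio(pulse, t0, short_gate, long_gate):
    """Charge-comparison pulse-shape discrimination parameter.

    pulse: baseline-subtracted digitized waveform, t0: pulse-onset index.
    The tail-to-total charge ratio separates neutrons (slower scintillation
    decay) from gammas; the gate lengths (in samples) are the tunable
    quantities whose optimum the study above is concerned with."""
    total = pulse[t0:t0 + long_gate].sum()
    tail = pulse[t0 + short_gate:t0 + long_gate].sum()
    return tail / total if total > 0 else 0.0

# Synthetic single-exponential "gamma" and two-component "neutron" pulses:
t = np.arange(64)
gamma_like = np.exp(-t / 3.0)
neutron_like = 0.7 * np.exp(-t / 3.0) + 0.3 * np.exp(-t / 25.0)
print(psd_ratio(gamma_like, 0, 8, 48), psd_ratio(neutron_like, 0, 8, 48))
```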

  17. Analyses of moments in pseudorapidity intervals at √s = 546 GeV by means of two probability distributions in pure-birth process

    International Nuclear Information System (INIS)

    Biyajima, M.; Shirane, K.; Suzuki, N.

    1988-01-01

    Moments in pseudorapidity intervals at the CERN Sp̄pS collider (√s = 546 GeV) are analyzed by means of two probability distributions in the pure-birth stochastic process. Our results show that a probability distribution obtained from the Poisson distribution as an initial condition is more useful than that obtained from the Kronecker δ function. Analyses of moments by Koba-Nielsen-Olesen scaling functions derived from solutions of the pure-birth stochastic process are also made. Moreover, analyses of preliminary data at √s = 200 and 900 GeV are added

  18. Analysis of calculating methods for failure distribution function based on maximal entropy principle

    International Nuclear Information System (INIS)

    Guo Chunying; Lin Yuangen; Jiang Meng; Wu Changli

    2009-01-01

    The computation of failure distribution functions of electronic devices exposed to gamma rays is discussed here. First, the possible device failure distribution models are determined through tests of statistical hypotheses using the test data. The results show that the devices' failure data can obey multiple distributions when the test data are few. To decide on the optimum failure distribution model, the maximal entropy principle is used and the elementary failure models are determined. Then, the Bootstrap estimation method is used to simulate the interval estimation of the mean and the standard deviation. On this basis, the maximal entropy principle is used again and the simulated annealing method is applied to find the optimum values of the mean and the standard deviation. Accordingly, the electronic devices' optimum failure distributions are finally determined and the survival probabilities are calculated. (authors)
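
    The Bootstrap step mentioned above is standard and easy to illustrate on its own. The Python sketch below computes percentile bootstrap confidence intervals for the mean and standard deviation of a hypothetical failure data set; the maximal-entropy and simulated-annealing stages of the paper are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_interval(data, stat, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap confidence interval for a statistic."""
    data = np.asarray(data)
    reps = np.array([stat(rng.choice(data, size=data.size, replace=True))
                     for _ in range(n_boot)])
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

# Hypothetical failure doses (arbitrary units):
doses = [12.1, 13.4, 11.8, 14.2, 12.9, 13.7, 12.5]
print("mean CI:", bootstrap_interval(doses, np.mean))
print("std  CI:", bootstrap_interval(doses, np.std))
```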

  19. Pinning Synchronization for Complex Networks with Interval Coupling Delay by Variable Subintervals Method and Finsler’s Lemma

    Directory of Open Access Journals (Sweden)

    Dawei Gong

    2017-01-01

    Full Text Available The pinning synchronization problem for complex networks with interval delays is studied in this paper. First, by using an inequality derived from the Newton-Leibniz formula, a new synchronization criterion is obtained. Second, combining Finsler's Lemma with homogeneous matrices, convergent linear matrix inequality (LMI) relaxations for synchronization analysis are proposed with matrix-valued coefficients. Third, a new variable subintervals method is applied to extend the obtained results. Different from previous results, the interval delays are divided into subdelays, which introduces more free weighting matrices. Fourth, the results are expressed as LMIs, which can be easily analyzed or tested. Finally, the stability of the networks is proved via Lyapunov's stability theorem, and the simulation of the trajectories demonstrates the practicality of the proposed pinning control.

  20. Sampling Methods for Wallenius' and Fisher's Noncentral Hypergeometric Distributions

    DEFF Research Database (Denmark)

    Fog, Agner

    2008-01-01

    Several methods for generating variates with univariate and multivariate Wallenius' and Fisher's noncentral hypergeometric distributions are developed. Methods for the univariate distributions include: simulation of urn experiments, inversion by binary search, inversion by chop-down search from the mode, ratio-of-uniforms rejection method, and rejection by sampling in the tau domain. Methods for the multivariate distributions include: simulation of urn experiments, conditional method, Gibbs sampling, and Metropolis-Hastings sampling. These methods are useful for Monte Carlo simulation of models of biased sampling and models of evolution and for calculating moments and quantiles of the distributions.
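
    The simplest of the listed univariate methods, simulation of the urn experiment, can be sketched directly: balls are taken one at a time without replacement, each draw biased by the odds ratio. This sequential bias is exactly what distinguishes Wallenius' distribution from Fisher's, where items are effectively taken simultaneously. A minimal Python sketch:

```python
import random

def wallenius_draw(m1, m2, n, omega, rng=None):
    """One variate from Wallenius' noncentral hypergeometric distribution,
    by simulating the biased urn: m1 red and m2 white balls, n taken one
    at a time, red favoured by the odds ratio omega. Returns the number
    of red balls taken."""
    rng = rng or random.Random()
    red = white = 0
    for _ in range(n):
        w_red = (m1 - red) * omega
        w_white = m2 - white
        if rng.random() * (w_red + w_white) < w_red:
            red += 1
        else:
            white += 1
    return red

print([wallenius_draw(10, 15, 12, 2.5) for _ in range(5)])
```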

  1. Distributed optimization for systems design : an augmented Lagrangian coordination method

    NARCIS (Netherlands)

    Tosserams, S.

    2008-01-01

    This thesis presents a coordination method for the distributed design optimization of engineering systems. The design of advanced engineering systems such as aircraft, automated distribution centers, and microelectromechanical systems (MEMS) involves multiple components that together realize the

  2. The Multi-Attribute Group Decision-Making Method Based on Interval Grey Trapezoid Fuzzy Linguistic Variables

    Directory of Open Access Journals (Sweden)

    Kedong Yin

    2017-12-01

    Full Text Available With respect to multi-attribute group decision-making (MAGDM) problems, where attribute values take the form of interval grey trapezoid fuzzy linguistic variables (IGTFLVs) and the weights (including expert and attribute weights) are unknown, improved grey relational MAGDM methods are proposed. First, the concept of IGTFLV, the operational rules, the distance between IGTFLVs, and the projection formula between the two IGTFLV vectors are defined. Second, the expert weights are determined by using the maximum proximity method based on the projection values between the IGTFLV vectors. The attribute weights are determined by the maximum deviation method and the priorities of alternatives are determined by improved grey relational analysis. Finally, an example is given to prove the effectiveness of the proposed method and the flexibility of IGTFLV.

  3. The Multi-Attribute Group Decision-Making Method Based on Interval Grey Trapezoid Fuzzy Linguistic Variables.

    Science.gov (United States)

    Yin, Kedong; Wang, Pengyu; Li, Xuemei

    2017-12-13

    With respect to multi-attribute group decision-making (MAGDM) problems, where attribute values take the form of interval grey trapezoid fuzzy linguistic variables (IGTFLVs) and the weights (including expert and attribute weight) are unknown, improved grey relational MAGDM methods are proposed. First, the concept of IGTFLV, the operational rules, the distance between IGTFLVs, and the projection formula between the two IGTFLV vectors are defined. Second, the expert weights are determined by using the maximum proximity method based on the projection values between the IGTFLV vectors. The attribute weights are determined by the maximum deviation method and the priorities of alternatives are determined by improved grey relational analysis. Finally, an example is given to prove the effectiveness of the proposed method and the flexibility of IGTFLV.

  4. Method of estimating the reactor power distribution

    International Nuclear Information System (INIS)

    Mitsuta, Toru; Fukuzaki, Takaharu; Doi, Kazuyori; Kiguchi, Takashi.

    1984-01-01

    Purpose: To improve the calculation accuracy of the power distribution and thereby improve the reliability of the power distribution monitor. Constitution: In detector-containing strings disposed within the reactor core, movable neutron flux monitors are provided in addition to the conventionally installed fixed-position neutron monitors. During periodic monitoring, a power distribution X1 is calculated from a physical reactor core model. Then, a higher-power position X2 is detected by the position detectors, and the value X2 is sent to a neutron flux monitor driving device to move the movable monitors to the higher-power position in each of the strings. After the movement, the value X1 is corrected by a correcting device using measured values from the movable and fixed monitors, and the corrected value is sent to a reactor core monitoring device. Upon failure of a fixed monitor, its position is sent to the monitor driving device and the movable monitors are moved to that position for measurement. (Sekiya, K.)

  5. Modified Taylor series method for solving nonlinear differential equations with mixed boundary conditions defined on finite intervals.

    Science.gov (United States)

    Vazquez-Leal, Hector; Benhammouda, Brahim; Filobello-Nino, Uriel Antonio; Sarmiento-Reyes, Arturo; Jimenez-Fernandez, Victor Manuel; Marin-Hernandez, Antonio; Herrera-May, Agustin Leobardo; Diaz-Sanchez, Alejandro; Huerta-Chua, Jesus

    2014-01-01

    In this article, we propose the application of a modified Taylor series method (MTSM) for the approximation of nonlinear problems described on finite intervals. The difficulty of applying the Taylor series method with mixed boundary conditions is circumvented using shooting constants and extra derivatives of the problem. To show the benefits of this proposal, three different kinds of problems are solved: a three-point boundary value problem (BVP) of third order with a hyperbolic sine nonlinearity, a two-point BVP for a second-order nonlinear differential equation with an exponential nonlinearity, and a two-point BVP for a third-order nonlinear differential equation with a radical nonlinearity. The results show that the MTSM is capable of generating easily computable and highly accurate approximations for nonlinear equations.
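
    The shooting idea that the MTSM builds on can be illustrated with standard numerical tools. The Python sketch below solves a Bratu-type two-point BVP with an exponential nonlinearity (a stand-in, not necessarily the paper's exact test problem) by treating the unknown initial slope as a shooting constant; the paper propagates such constants through a Taylor series instead of a numerical integrator:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Bratu-type problem: y'' + exp(y) = 0, y(0) = y(1) = 0. Guess the missing
# initial slope s = y'(0), integrate, and adjust s until y(1) = 0.
def residual(s):
    sol = solve_ivp(lambda x, y: [y[1], -np.exp(y[0])],
                    (0.0, 1.0), [0.0, s], rtol=1e-9, atol=1e-9)
    return sol.y[0, -1]                 # y(1), driven to zero

s_star = brentq(residual, 0.0, 2.0)     # bracket for the lower solution branch
print(f"missing slope y'(0) = {s_star:.6f}")
```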

  6. Integrated Power Flow and Short Circuit Calculation Method for Distribution Network with Inverter Based Distributed Generation

    OpenAIRE

    Yang, Shan; Tong, Xiangqian

    2016-01-01

    Power flow calculation and short circuit calculation are the basis of theoretical research for distribution network with inverter based distributed generation. The similarity of equivalent model for inverter based distributed generation during normal and fault conditions of distribution network and the differences between power flow and short circuit calculation are analyzed in this paper. Then an integrated power flow and short circuit calculation method for distribution network with inverte...

  7. Distributed gas detection system and method

    Science.gov (United States)

    Challener, William Albert; Palit, Sabarni; Karp, Jason Harris; Kasten, Ansas Matthias; Choudhury, Niloy

    2017-11-21

    A distributed gas detection system includes one or more hollow core fibers disposed in different locations, one or more solid core fibers optically coupled with the one or more hollow core fibers and configured to receive light of one or more wavelengths from a light source, and an interrogator device configured to receive at least some of the light propagating through the one or more solid core fibers and the one or more hollow core fibers. The interrogator device is configured to identify the location of a gas of interest by examining absorption of at least one of the wavelengths of the light in at least one of the hollow core fibers.

  8. Extending the alias Monte Carlo sampling method to general distributions

    International Nuclear Information System (INIS)

    Edwards, A.L.; Rathkopf, J.A.; Smidt, R.K.

    1991-01-01

    The alias method is a Monte Carlo sampling technique that offers significant advantages over more traditional methods. It equals the accuracy of table lookup and the speed of equally probable bins. The original formulation of this method sampled from discrete distributions and was easily extended to histogram distributions. We have extended the method further to applications more germane to Monte Carlo particle transport codes: continuous distributions. This paper presents the alias method as originally derived and our extensions to simple continuous distributions represented by piecewise linear functions. We also present a method to interpolate accurately between distributions tabulated at points other than the point of interest. We present timing studies that demonstrate the method's increased efficiency over table lookup and show further speedup achieved through vectorization. 6 refs., 12 figs., 2 tabs
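
    For reference, the discrete alias construction that the paper starts from fits in a few lines (the Walker/Vose form); the paper's continuous extension adds an interpolation step within the selected bin, omitted here:

```python
import random

def build_alias(probs):
    """Build Walker/Vose alias tables for a discrete distribution in O(n)."""
    n = len(probs)
    scaled = [p * n for p in probs]
    prob, alias = [0.0] * n, [0] * n
    small = [i for i, q in enumerate(scaled) if q < 1.0]
    large = [i for i, q in enumerate(scaled) if q >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - scaled[s]            # donate mass to the small bin
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                     # leftovers are numerically ~1
        prob[i] = 1.0
    return prob, alias

def alias_sample(prob, alias, rng=random):
    """Draw one index: uniform bin choice, then one biased coin flip."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]

prob, alias = build_alias([0.1, 0.2, 0.3, 0.4])
counts = [0] * 4
for _ in range(100_000):
    counts[alias_sample(prob, alias)] += 1
print(counts)   # roughly proportional to the input probabilities
```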

  9. METHODS OF MEASURING THE EFFECTS OF LIGHTNING BY SIMULATING ITS STRIKES WITH THE INTERVAL ASSESSMENT OF THE RESULTS OF MEASUREMENTS

    Directory of Open Access Journals (Sweden)

    P. V. Kriksin

    2017-01-01

    Full Text Available The article presents the results of the development of new methods aimed at a more accurate interval estimate of the experimental values of the voltages that occur on the grounding devices of substations and in control-cable circuits when lightning strikes lightning rods; this estimate made it possible to increase the accuracy of the results of the study of lightning disturbances by 28 %. The more accurate interval estimate was achieved by developing a measurement model that takes into account, along with the measured values, different measurement errors, and that includes special processing of the measurement results. As a result, the interval containing the true value of the sought voltage is determined with a confidence of 95 %. The methods can be applied to the IK-1 and IKP-1 measurement complexes, consisting of an aperiodic pulse generator and a generator of high-frequency pulses together with selective voltmeters, respectively. To evaluate the effectiveness of the developed methods, a series of experimental voltage assessments of the grounding devices of ten operating high-voltage substations was carried out in accordance with both the developed methods and traditional techniques. The evaluation results confirmed the possibility of finding the true values of the voltage over a wide range, which ought to be considered in the technical diagnostics of the lightning protection of substations when analyzing measurement results and developing measures to reduce the effects of lightning. A comparative analysis of the results of measurements made in accordance with the developed methods and with traditional techniques demonstrated that the true value of the sought voltage may exceed the measured value by an average of 28 %, which ought to be considered in the further analysis of the lightning protection parameters at the facility and in the development of corrective actions. The developed methods have been

  10. New Interval-Valued Intuitionistic Fuzzy Behavioral MADM Method and Its Application in the Selection of Photovoltaic Cells

    Directory of Open Access Journals (Sweden)

    Xiaolu Zhang

    2016-10-01

    Full Text Available As one of the emerging renewable resources, the use of photovoltaic cells holds promise for offering clean and plentiful energy. The selection of the best photovoltaic cell plays a significant role for a promoter in maximizing income, minimizing costs, and attaining high maturity and reliability, which is a typical multiple attribute decision making (MADM) problem. Although many prominent MADM techniques have been developed, most of them select the optimal alternative under the hypothesis that the decision maker or expert is completely rational and that the decision data are represented by crisp values. However, in the selection process for photovoltaic cells the decision maker is usually boundedly rational and the ratings of alternatives are usually imprecise and vague. To address these kinds of complex and common issues, in this paper we develop a new interval-valued intuitionistic fuzzy behavioral MADM method. We employ interval-valued intuitionistic fuzzy numbers (IVIFNs) to express the imprecise ratings of alternatives, and we construct LINMAP-based nonlinear programming models to identify the reference points under IVIFN contexts, which avoids the subjective randomness of selecting the reference points. Finally we develop a prospect theory-based ranking method to identify the optimal alternative, which takes fully into account the decision maker's behavioral characteristics such as reference dependence, diminishing sensitivity and loss aversion in the decision-making process.
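
    The behavioral ingredients named at the end (reference dependence, diminishing sensitivity, loss aversion) are exactly those encoded by the classic Tversky-Kahneman value function. A minimal Python sketch with the standard 1992 parameter estimates, shown for orientation only, since the paper's interval-valued extension is not reproduced in the abstract:

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Tversky-Kahneman value of an outcome x relative to a reference point.

    Gains and losses are damped by alpha and beta (diminishing sensitivity);
    losses are scaled up by the loss-aversion factor lam. Parameter values
    are the classic 1992 estimates, used here purely for illustration."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

# An alternative rated 0.2 above vs. 0.2 below the reference point:
print(prospect_value(0.2), prospect_value(-0.2))   # the loss weighs more
```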

  11. Theoretical method for determining particle distribution functions of classical systems

    International Nuclear Information System (INIS)

    Johnson, E.

    1980-01-01

    An equation which involves the triplet distribution function and the three-particle direct correlation function is obtained. This equation was derived using an analogue of the Ornstein-Zernike equation. The new equation is used to develop a variational method for obtaining the triplet distribution function of uniform one-component atomic fluids from the pair distribution function. The variational method may be used with the first and second equations in the YBG hierarchy to obtain pair and triplet distribution functions. It should be easy to generalize the results to the n-particle distribution function

  12. Integrated Power Flow and Short Circuit Calculation Method for Distribution Network with Inverter Based Distributed Generation

    Directory of Open Access Journals (Sweden)

    Shan Yang

    2016-01-01

    Full Text Available Power flow calculation and short circuit calculation are the basis of theoretical research for distribution networks with inverter-based distributed generation. The similarity of the equivalent models for inverter-based distributed generation under normal and fault conditions of the distribution network, and the differences between power flow and short circuit calculation, are analyzed in this paper. An integrated power flow and short circuit calculation method for distribution networks with inverter-based distributed generation is then proposed. The proposed method models the inverter-based distributed generation as an Iθ bus, which makes it suitable for calculating the power flow of a distribution network with current-limited inverter-based distributed generation. The low-voltage ride-through capability of inverter-based distributed generation can be considered as well. Finally, tests of power flow and short-circuit current calculation are performed on a 33-bus distribution network. The results of the proposed method are compared with those of the traditional method and of simulation, which verifies the effectiveness of the integrated method suggested in this paper.
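
    The Iθ-bus treatment can be illustrated on the simplest possible network: a single radial feeder solved by the standard backward/forward sweep, with the inverter-based generation represented as a fixed current injection. This is only a hedged Python sketch of the idea with hypothetical per-unit data, not the paper's integrated algorithm:

```python
import numpy as np

def backward_forward_sweep(z, s_load, i_dg, v0=1.0 + 0j, tol=1e-8, iters=50):
    """Backward/forward sweep power flow for one radial feeder (per unit).

    z[k]: impedance of the line feeding bus k+1, s_load[k]: constant-power
    load at bus k+1, i_dg[k]: constant-current injection of an inverter-based
    DG at bus k+1 (the I-theta style model). Returns complex bus voltages.
    """
    n = len(z)
    v = np.full(n + 1, v0, dtype=complex)
    for _ in range(iters):
        # Backward sweep: bus injections (load minus DG), then branch currents.
        i_inj = np.conj(s_load / v[1:]) - i_dg
        i_branch = np.cumsum(i_inj[::-1])[::-1]
        # Forward sweep: voltage drops starting from the slack bus.
        v_new = v.copy()
        for k in range(n):
            v_new[k + 1] = v_new[k] - z[k] * i_branch[k]
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    return v

z = np.array([0.01 + 0.02j] * 3)
s_load = np.array([0.10 + 0.05j] * 3)
i_dg = np.array([0.0, 0.05, 0.0], dtype=complex)   # hypothetical DG at bus 2
print(np.abs(backward_forward_sweep(z, s_load, i_dg)))
```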

  13. Analytical method for calculating the distribution of electrostatic field

    International Nuclear Information System (INIS)

    Lai, W.

    1981-01-01

    An analytical method for calculating the distribution of the electrostatic field under any given axial gradient in tandem accelerators is described. This method possesses satisfactory accuracy compared with the results of numerical calculation

  14. Methods and Tools for Profiling and Control of Distributed Systems

    Directory of Open Access Journals (Sweden)

    Sukharev Roman

    2017-01-01

    Full Text Available The article analyzes and standardizes methods for profiling distributed systems that focus on simulation to conduct experiments and build a graph model of the system. Queueing network theory is used for the simulation modeling of distributed systems that receive and process user requests. To automate the above method of profiling distributed systems, a software application was developed with a modular structure, similar to a SCADA system.

  15. A method to measure depth distributions of implanted ions

    International Nuclear Information System (INIS)

    Arnesen, A.; Noreland, T.

    1977-04-01

    A new variant of the radiotracer method for depth distribution determinations has been tested. Depth distributions of radioactive implanted ions are determined by dissolving thin, uniform layers of evaporated material from the surface of a backing and by measuring the activity before and after the layer removal. The method has been used to determine depth distributions for 25 keV and 50 keV 57 Co ions in aluminium and gold. (Auth.)

  16. Comparison of estimation methods for fitting weibull distribution to ...

    African Journals Online (AJOL)

    Comparison of estimation methods for fitting the Weibull distribution to the natural stand of Oluwa Forest Reserve, Ondo State, Nigeria. ... Journal of Research in Forestry, Wildlife and Environment ... The result revealed that the maximum likelihood method was more accurate in fitting the Weibull distribution to the natural stand.
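
    A minimal Python illustration of the maximum likelihood fit singled out above, using SciPy on simulated diameter data since the actual stand data are not available here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated stand-in for tree diameters at breast height (cm).
dbh = stats.weibull_min.rvs(c=2.3, scale=24.0, size=200, random_state=rng)

# Maximum likelihood fit of the two-parameter Weibull (location fixed at 0).
shape, loc, scale = stats.weibull_min.fit(dbh, floc=0)
print(f"shape = {shape:.2f}, scale = {scale:.2f}")
```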

  17. Cathode power distribution system and method of using the same for power distribution

    Science.gov (United States)

    Williamson, Mark A; Wiedmeyer, Stanley G; Koehl, Eugene R; Bailey, James L; Willit, James L; Barnes, Laurel A; Blaskovitz, Robert J

    2014-11-11

    Embodiments include a cathode power distribution system and/or method of using the same for power distribution. The cathode power distribution system includes a plurality of cathode assemblies. Each cathode assembly of the plurality of cathode assemblies includes a plurality of cathode rods. The system also includes a plurality of bus bars configured to distribute current to each of the plurality of cathode assemblies. The plurality of bus bars include a first bus bar configured to distribute the current to first ends of the plurality of cathode assemblies and a second bus bar configured to distribute the current to second ends of the plurality of cathode assemblies.

  18. An interval-based possibilistic programming method for waste management with cost minimization and environmental-impact abatement under uncertainty.

    Science.gov (United States)

    Li, Y P; Huang, G H

    2010-09-15

    Considerable public concern has arisen in the past decades because the large amounts of pollutant emissions from municipal solid waste (MSW) disposal processes pose risks to the surrounding environment and human health. Moreover, in MSW management, various uncertainties exist in the related costs, impact factors and objectives, which can affect the optimization processes and the decision schemes generated. In this study, an interval-based possibilistic programming (IBPP) method is developed for planning MSW management with minimized system cost and environmental impact under uncertainty. The developed method can deal with uncertainties expressed as interval values and fuzzy sets in the left- and right-hand sides of the constraints and objective function. An interactive algorithm is provided for solving the IBPP problem, which does not lead to more complicated intermediate submodels and has a relatively low computational requirement. The developed model is applied to a case study of planning a MSW management system, where the mixed integer linear programming (MILP) technique is introduced into the IBPP framework to facilitate dynamic analysis of decisions on the timing, sizing and siting of capacity expansion for waste-management facilities. Three cases based on different waste-management policies are examined. The results indicate that the inclusion of environmental impacts in the optimization model can change the traditional waste-allocation pattern obtained from a merely economic-oriented planning approach. The results can help identify desired alternatives for managing MSW, with the advantage of providing compromise schemes under an integrated consideration of economic efficiency and environmental impact under uncertainty. Copyright 2010 Elsevier B.V. All rights reserved.

  19. A Comparison of Distribution Free and Non-Distribution Free Factor Analysis Methods

    Science.gov (United States)

    Ritter, Nicola L.

    2012-01-01

    Many researchers recognize that factor analysis can be conducted on both correlation matrices and variance-covariance matrices. Although most researchers extract factors from non-distribution free or parametric methods, researchers can also extract factors from distribution free or non-parametric methods. The nature of the data dictates the method…

  20. Influence of geometry of the discharge interval on distribution of ion and electron streams at surface of the Penning source cathode

    International Nuclear Information System (INIS)

    Egiazaryan, G.A.; Khachatrian, Zh.B.; Badalyan, E.S.; Ter-Gevorgyan, E.I.; Hovhannisyan, V.N.

    2006-01-01

    In the discharge of oscillating electrons, the mechanism of the processes, which controls the distribution of the ion and electron streams over the cathode surface, is investigated experimentally. The influence of the length of the discharge interval on value and distribution of the ion and electron streams is analyzed. The distribution both of ion and electron streams at the cathode surface is determined at different conditions of the discharge. It is shown that for given values of the anode diameter d a =31 mm and the gas pressure P=5x10 -5 Torr, the intensive stream of positive ions falls entirely on the cathode central area in the whole interval of the anode length variation (l a =1-11 cm). At the cathode, the ion current reaches the maximal value at a certain (optimal) value of the anode length that, in turn, depends on the anode voltage U a . The intensive stream of longitudinal electrons forms in the short anodes only (l a =2.5-3.5 cm) and depending on the choice of the discharge regime, may fall both on central and middle parts of the cathode

  1. Conditional prediction intervals of wind power generation

    DEFF Research Database (Denmark)

    Pinson, Pierre; Kariniotakis, Georges

    2010-01-01

    A generic method for providing prediction intervals of wind power generation is described. Prediction intervals complement the more common wind power point forecasts, by giving a range of potential outcomes for a given probability, their so-called nominal coverage rate. Ideally they inform...... on the characteristics of prediction errors for providing conditional interval forecasts. By simultaneously generating prediction intervals with various nominal coverage rates, one obtains full predictive distributions of wind generation. Adapted resampling is applied here to the case of an onshore Danish wind farm...... to the case of a large number of wind farms in Europe and Australia among others is finally discussed....
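
    A minimal sketch of how a prediction interval can be formed from resampled forecast errors; this is a generic illustration of the idea, not the authors' adapted-resampling implementation, and all numbers are synthetic:

    ```python
    # Form a central prediction interval around a wind power point forecast
    # from empirical quantiles of past forecast errors, resampled with
    # replacement. Synthetic data stand in for observed errors.
    import numpy as np

    rng = np.random.default_rng(0)
    past_errors = rng.normal(0.0, 0.08, size=500)   # stand-in forecast errors (p.u.)
    point_forecast = 0.55                           # point forecast of normalized power

    coverage = 0.90                                 # nominal coverage rate
    boot = rng.choice(past_errors, size=(200, len(past_errors)), replace=True)
    lo = np.mean(np.quantile(boot, (1 - coverage) / 2, axis=1))
    hi = np.mean(np.quantile(boot, (1 + coverage) / 2, axis=1))

    interval = (np.clip(point_forecast + lo, 0, 1), np.clip(point_forecast + hi, 0, 1))
    print(f"{coverage:.0%} prediction interval: [{interval[0]:.3f}, {interval[1]:.3f}]")
    ```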

  2. Most probable dimension value and most flat interval methods for automatic estimation of dimension from time series

    International Nuclear Information System (INIS)

    Corana, A.; Bortolan, G.; Casaleggio, A.

    2004-01-01

    We present and compare two automatic methods for dimension estimation from time series. Both methods, based on conceptually different approaches, work on the derivative of the bi-logarithmic plot of the correlation integral versus the correlation length (log-log plot). The first method searches for the most probable dimension values (MPDV) and associates to each of them a possible scaling region. The second one searches for the most flat intervals (MFI) in the derivative of the log-log plot. The automatic procedures include the evaluation of the candidate scaling regions using two reliability indices. The data set used to test the methods consists of time series from known model attractors with and without the addition of noise, structured time series, and electrocardiographic signals from the MIT-BIH ECG database. Statistical analysis of results was carried out by means of paired t-test, and no statistically significant differences were found in the large majority of the trials. Consistent results are also obtained dealing with 'difficult' time series. In general for a more robust and reliable estimate, the use of both methods may represent a good solution when time series from complex systems are analyzed. Although we present results for the correlation dimension only, the procedures can also be used for the automatic estimation of generalized q-order dimensions and pointwise dimension. We think that the proposed methods, eliminating the need of operator intervention, allow a faster and more objective analysis, thus improving the usefulness of dimension analysis for the characterization of time series obtained from complex dynamical systems
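
    A minimal sketch of the MFI idea as described above: scan the derivative of the log-log plot for the window with the least slope variation; the synthetic curve and window length are assumptions:

    ```python
    # "Most flat interval" sketch: take local slopes of the log-log
    # correlation-integral curve and find the window where the slope is most
    # nearly constant; its mean slope estimates the correlation dimension.
    import numpy as np

    def most_flat_interval(log_r, log_c, window=10):
        slopes = np.gradient(log_c, log_r)          # derivative of the log-log plot
        best = min(range(len(slopes) - window),
                   key=lambda i: np.std(slopes[i:i + window]))
        return slopes[best:best + window].mean(), (log_r[best], log_r[best + window])

    # Synthetic example: C(r) ~ r^D with D = 1.26, plus mild noise
    log_r = np.linspace(-4, 0, 80)
    log_c = 1.26 * log_r + 0.02 * np.random.default_rng(1).normal(size=80)
    dim, region = most_flat_interval(log_r, log_c)
    print(f"estimated dimension {dim:.2f} on scaling region {region}")
    ```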

  3. A decision making method based on interval type-2 fuzzy sets: An approach for ambulance location preference

    Directory of Open Access Journals (Sweden)

    Lazim Abdullah

    2018-01-01

    Selecting the best solution to deploy an ambulance in a strategic location is one of the important variables that need to be accounted for in improving emergency medical services. The selection requires both quantitative and qualitative evaluation. The fuzzy set based approach is one of the well-known theories that help decision makers handle fuzziness, uncertainty in decision making and vagueness of information. This paper proposes a new decision making method of Interval Type-2 Fuzzy Simple Additive Weighting (IT2 FSAW) to deal with uncertainty and vagueness. The new IT2 FSAW is applied to establish a preference in ambulance location. The decision making framework defines four criteria and five alternatives of ambulance location preference. Four experts attached to a Malaysian government hospital and a university medical center were interviewed to provide linguistic evaluations prior to analysis with the new IT2 FSAW. Implementation of the proposed method in the case of ambulance location preference suggests that the ‘road network’ is the best alternative for ambulance location. The results indicate that the proposed method offers a consensus solution for handling the vague and qualitative criteria of ambulance location preference.
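
    For illustration only, a heavily simplified sketch of interval-based simple additive weighting: the paper works with full interval type-2 fuzzy sets, whereas here each linguistic rating is reduced to a plain interval and alternatives are ranked by weighted-sum midpoints; all ratings and weights are hypothetical:

    ```python
    # Simplified interval SAW: each rating is an interval [lo, hi]; alternatives
    # are ranked by the midpoint of their weighted-sum interval.
    import numpy as np

    # ratings[alternative][criterion] = (lo, hi); all values hypothetical
    ratings = {
        "road network": [(7, 9), (6, 8), (8, 9), (5, 7)],
        "city center":  [(5, 7), (7, 9), (4, 6), (6, 8)],
    }
    weights = np.array([0.4, 0.2, 0.3, 0.1])        # hypothetical criterion weights

    for alt, rs in ratings.items():
        lo = sum(w * r[0] for w, r in zip(weights, rs))
        hi = sum(w * r[1] for w, r in zip(weights, rs))
        print(f"{alt}: score interval [{lo:.2f}, {hi:.2f}], midpoint {(lo + hi) / 2:.2f}")
    ```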

  4. Fast Reliability Assessing Method for Distribution Network with Distributed Renewable Energy Generation

    Science.gov (United States)

    Chen, Fan; Huang, Shaoxiong; Ding, Jinjin; Ding, Jinjin; Gao, Bo; Xie, Yuguang; Wang, Xiaoming

    2018-01-01

    This paper proposes a fast reliability assessing method for distribution grids with distributed renewable energy generation. First, the Weibull distribution and the Beta distribution are used to describe the probability distribution characteristics of wind speed and solar irradiance, respectively, and the models of the wind farm, solar park and local load are built for reliability assessment. Then, based on power system production cost simulation, probability discretization and linearized power flow, an optimal power flow problem minimizing the cost of conventional power generation is solved. Thus a fast and accurate reliability assessment for the distribution grid is implemented. The Loss Of Load Probability (LOLP) and Expected Energy Not Supplied (EENS) are selected as the reliability indices; a MATLAB simulation of the IEEE RBTS BUS6 system indicates that the fast reliability assessing method calculates the reliability indices much faster than the Monte Carlo method while maintaining accuracy.
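
    A short sketch of the stochastic input models named in the abstract: Weibull-distributed wind speed and Beta-distributed irradiance, pushed through simple, hypothetical power-conversion curves; the shape/scale parameters and turbine curve are assumptions, not values from the paper:

    ```python
    # Sample wind speed from a Weibull distribution and normalized solar
    # irradiance from a Beta distribution, then convert to generation samples
    # of the kind fed into a reliability study. All parameters hypothetical.
    import numpy as np

    rng = np.random.default_rng(42)
    v = rng.weibull(2.0, size=10_000) * 8.0        # wind speed: shape k=2, scale c=8 m/s
    s = rng.beta(2.0, 3.0, size=10_000)            # normalized irradiance in [0, 1]

    def wind_power(v, v_in=3.0, v_rated=12.0, v_out=25.0, p_rated=2.0):  # MW
        p = np.where((v >= v_in) & (v < v_rated),
                     p_rated * (v - v_in) / (v_rated - v_in), 0.0)
        return np.where((v >= v_rated) & (v < v_out), p_rated, p)

    pv_power = 1.0 * s                             # 1 MW park, linear in irradiance
    print(f"mean wind output {wind_power(v).mean():.2f} MW, "
          f"mean PV output {pv_power.mean():.2f} MW")
    ```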

  5. Neutron multiplicity measurements on 220 l waste drums containing Pu in the range 0.1-1 g 240Pu-eff with the time interval analysis method

    International Nuclear Information System (INIS)

    Baeten, P.; Bruggeman, M.; Carchon, R.; De Boeck, W.

    1998-01-01

    Measurement results are presented for the assay of plutonium in 220 l waste drums containing Pu-masses in the range 0.1-1 g 240 Pu eff obtained with the time interval analysis (TIA) method. TIA is a neutron multiplicity method based on the concept of one- and two-dimensional Rossi-alpha distributions. The main source of measurement bias in neutron multiplicity measurements at low count-rates is the unpredictable variation of the high-multiplicity neutron background of spallation neutrons induced by cosmic rays. The TIA-method was therefore equipped with a special background filter, which is designed and optimized to reduce the influence of these spallation neutrons by rejecting the high-multiplicity events. The measurement results, obtained with the background correction filter outlined in this paper, prove the repeatability and validity of the TIA-method and show that multiplicity counting with the TIA-technique is applicable for masses as low as 0.1 g 240 Pu eff even at a detection efficiency of 12%. (orig.)
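
    A sketch of how a one-dimensional Rossi-alpha distribution can be built from a detector pulse train; the synthetic Poisson train and window settings are assumptions, and the paper's multiplicity analysis and background filter are not reproduced:

    ```python
    # One-dimensional Rossi-alpha distribution: for every detected pulse,
    # histogram the delays to all subsequent pulses inside a time window.
    # Correlated (fission-chain) pairs appear as an exponential excess over
    # the flat accidental background; here the train is pure Poisson noise.
    import numpy as np

    rng = np.random.default_rng(7)
    t = np.cumsum(rng.exponential(1e-4, size=50_000))  # arrival times, ~10 kHz

    window, nbins = 1e-3, 100                          # 1 ms window
    edges = np.linspace(0.0, window, nbins + 1)
    hist = np.zeros(nbins)
    for i, ti in enumerate(t):
        j = np.searchsorted(t, ti + window)            # pulses within the window
        hist += np.histogram(t[i + 1:j] - ti, bins=edges)[0]
    print("accidentals per bin (flat for a pure Poisson train):", hist.mean())
    ```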

  6. Confidence intervals for experiments with background and small numbers of events

    International Nuclear Information System (INIS)

    Bruechle, W.

    2003-01-01

    Methods to find a confidence interval for Poisson distributed variables are illuminated, especially for the case of poor statistics. The application of 'central' and 'highest probability density' confidence intervals is compared for the case of low count-rates. A method to determine realistic estimates of the confidence intervals for Poisson distributed variables affected with background, and their ratios, is given. (orig.)
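
    For the central-interval case, a standard chi-squared construction can be sketched as follows; the naive background handling is an assumption for illustration and not the paper's full treatment of background and ratios:

    ```python
    # Central (equal-tailed) confidence interval for a Poisson mean via the
    # standard chi-squared relation, with a naive subtraction of a known
    # expected background.
    from scipy.stats import chi2

    def poisson_central_ci(n, cl=0.68):
        alpha = 1.0 - cl
        lo = 0.5 * chi2.ppf(alpha / 2, 2 * n) if n > 0 else 0.0
        hi = 0.5 * chi2.ppf(1 - alpha / 2, 2 * (n + 1))
        return lo, hi

    n_obs, b = 5, 1.2            # observed counts and known expected background
    lo, hi = poisson_central_ci(n_obs)
    print(f"signal interval: [{max(lo - b, 0):.2f}, {hi - b:.2f}]")
    ```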

  7. Confidence intervals for experiments with background and small numbers of events

    International Nuclear Information System (INIS)

    Bruechle, W.

    2002-07-01

    Methods to find a confidence interval for Poisson distributed variables are illuminated, especially for the case of poor statistics. The application of 'central' and 'highest probability density' confidence intervals is compared for the case of low count-rates. A method to determine realistic estimates of the confidence intervals for Poisson distributed variables affected with background, and their ratios, is given. (orig.)

  8. Comparing four methods to estimate usual intake distributions

    NARCIS (Netherlands)

    Souverein, O.W.; Dekkers, A.L.; Geelen, A.; Haubrock, J.; Vries, de J.H.M.; Ocke, M.C.; Harttig, U.; Boeing, H.; Veer, van 't P.

    2011-01-01

    Background/Objectives: The aim of this paper was to compare methods to estimate usual intake distributions of nutrients and foods. As ‘true’ usual intake distributions are not known in practice, the comparison was carried out through a simulation study, as well as empirically, by application to data

  9. Improving allowed outage time and surveillance test interval requirements: a study of their interactions using probabilistic methods

    International Nuclear Information System (INIS)

    Martorell, S.A.; Serradell, V.G.; Samanta, P.K.

    1995-01-01

    Technical Specifications (TS) define the limits and conditions for operating nuclear plants safely. We selected the Limiting Conditions for Operations (LCO) and the Surveillance Requirements (SR), both within TS, as the main items to be evaluated using probabilistic methods. In particular, we focused on the Allowed Outage Time (AOT) and Surveillance Test Interval (STI) requirements in the LCO and SR, respectively. Significant operating and design experience has already accumulated, revealing several problems which require modifications of some TS rules. Developments in Probabilistic Safety Assessment (PSA) allow the effects of such modifications in AOT and STI to be evaluated from a risk point of view. Thus, some changes have already been adopted in some plants. However, the combined effect of several changes in AOT and STI, i.e. through their interactions, is not addressed. This paper presents a methodology which encompasses, along with the definition of AOT and STI interactions, the quantification of these interactions in terms of risk using PSA methods, an approach for evaluating simultaneous AOT and STI modifications, and an assessment of strategies for giving flexibility to plant operation through simultaneous changes in AOT and STI using trade-off-based risk criteria

  10. Application of interval 2-tuple linguistic MULTIMOORA method for health-care waste treatment technology evaluation and selection.

    Science.gov (United States)

    Liu, Hu-Chen; You, Jian-Xin; Lu, Chao; Shan, Meng-Meng

    2014-11-01

    The management of health-care waste (HCW) is a major challenge for municipalities, particularly in the cities of developing countries. Selection of the best treatment technology for HCW can be viewed as a complicated multi-criteria decision making (MCDM) problem which requires consideration of a number of alternatives and conflicting evaluation criteria. Additionally, decision makers often use different linguistic term sets to express their assessments because of their different backgrounds and preferences, some of which may be imprecise, uncertain and incomplete. In response, this paper proposes a modified MULTIMOORA method based on interval 2-tuple linguistic variables (named ITL-MULTIMOORA) for evaluating and selecting HCW treatment technologies. In particular, both subjective and objective importance coefficients of criteria are taken into consideration in the developed approach in order to conduct a more effective analysis. Finally, an empirical case study in Shanghai, the most crowded metropolis of China, is presented to demonstrate the proposed method, and results show that the proposed ITL-MULTIMOORA can solve the HCW treatment technology selection problem effectively under uncertain and incomplete information environment. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Dynamic Subsidy Method for Congestion Management in Distribution Networks

    DEFF Research Database (Denmark)

    Huang, Shaojun; Wu, Qiuwei

    2016-01-01

    Dynamic subsidy (DS) is a locational price paid by the distribution system operator (DSO) to its customers in order to shift energy consumption to designated hours and nodes. It is promising for demand side management and congestion management. This paper proposes a new DS method for congestion...... management in distribution networks, including the market mechanism, the mathematical formulation through a two-level optimization, and the method solving the optimization by tightening the constraints and linearization. Case studies were conducted with a one node system and the Bus 4 distribution network...... of the Roy Billinton Test System (RBTS) with high penetration of electric vehicles (EVs) and heat pumps (HPs). The case studies demonstrate the efficacy of the DS method for congestion management in distribution networks. Studies in this paper show that the DS method offers the customers a fair opportunity...

  12. Method of controlling power distribution in FBR type reactors

    International Nuclear Information System (INIS)

    Sawada, Shusaku; Kaneto, Kunikazu.

    1982-01-01

    Purpose: To easily flatten the power distribution by obtaining a radial power distribution of substantially constant shape that does not depend on the burn-up cycle. Method: As fuel burn-up proceeds, the radial power distribution is affected by the accumulation of fission products in the inner blanket fuel assemblies, which varies their effect as neutron-absorbing substances. Taking note of this fact, the power distribution in a heterogeneous FBR type reactor is controlled by varying the core residence period of the inner blanket assemblies in accordance with the charging density of the inner blanket assemblies in the reactor core. (Kawakami, Y.)

  13. Methods of assessing grain-size distribution during grain growth

    DEFF Research Database (Denmark)

    Tweed, Cherry J.; Hansen, Niels; Ralph, Brian

    1985-01-01

    This paper considers methods of obtaining grain-size distributions and ways of describing them. In order to collect statistically useful amounts of data, an automatic image analyzer is used, and the resulting data are subjected to a series of tests that evaluate the differences between two related...... distributions (before and after grain growth). The distributions are measured from two-dimensional sections, and both the data and the corresponding true three-dimensional grain-size distributions (obtained by stereological analysis) are collected. The techniques described here are illustrated by reference...

  14. Neutron coincidence counting based on time interval analysis with dead time corrected one and two dimensional Rossi-alpha distributions: an application for passive neutron waste assay

    International Nuclear Information System (INIS)

    Bruggeman, M.; Baeten, P.; De Boeck, W.; Carchon, R.

    1996-03-01

    The report describes a new neutron multiplicity counting method based on Rossi-alpha distributions. The report also gives the necessary dead time correction formulas for the multiplicity counting method. The method was tested numerically using a Monte Carlo simulation of pulse trains. The use of this multiplicity method in the field of waste assay is explained: it can be used to determine the amount of fissile material in a waste drum without prior knowledge of the actual detection efficiency

  15. A Comparative Study of Distribution System Parameter Estimation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup

    2016-07-17

    In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems. Therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.

  16. Statistical properties of interval mapping methods on quantitative trait loci location: impact on QTL/eQTL analyses

    Directory of Open Access Journals (Sweden)

    Wang Xiaoqiang

    2012-04-01

    Background: Quantitative trait loci (QTL) detection on a huge number of phenotypes, like eQTL detection on transcriptomic data, can be dramatically impaired by the statistical properties of interval mapping methods. One of the major outcomes is the high number of QTL detected at marker locations. The present study aims at identifying and specifying the sources of this bias, in particular in the case of analysis of data from outbred populations. Analytical developments were carried out in a backcross situation in order to specify the bias and to propose an algorithm to control it. The outbred population context was studied through simulated data sets in a wide range of situations. The likelihood ratio test was first analyzed under the "one QTL" hypothesis in a backcross population. Designs of sib families were then simulated and analyzed using the QTL Map software. On the basis of the theoretical results in backcross, parameters such as the population size, the density of the genetic map, the QTL effect and the true location of the QTL were taken into account under the "no QTL" and the "one QTL" hypotheses. A combination of two non-parametric tests - the Kolmogorov-Smirnov test and the Mann-Whitney-Wilcoxon test - was used to identify the parameters that affected the bias and to specify how much they influenced the estimation of QTL location. Results: A theoretical expression of the bias of the estimated QTL location was obtained for a backcross type population. We demonstrated a common source of bias under the "no QTL" and the "one QTL" hypotheses and qualified the possible influence of several parameters. Simulation studies confirmed that the bias exists in outbred populations under both the "no QTL" and "one QTL" hypotheses on a linkage group. The QTL location was systematically closer to marker locations than expected, particularly in the case of low QTL effect, small population size or low density of markers, i

  17. A comparison of confidence interval methods for the intraclass correlation coefficient in community-based cluster randomization trials with a binary outcome.

    Science.gov (United States)

    Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan

    2016-04-01

    Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. However, confidence intervals constructed using the new approach combined with Smith

  18. Score Function of Distribution and Revival of the Moment Method

    Czech Academy of Sciences Publication Activity Database

    Fabián, Zdeněk

    2016-01-01

    Roč. 45, č. 4 (2016), s. 1118-1136 ISSN 0361-0926 R&D Projects: GA MŠk(CZ) LG12020 Institutional support: RVO:67985807 Keywords : characteristics of distributions * data characteristics * general moment method * Huber moment estimator * parametric methods * score function Subject RIV: BB - Applied Statistics , Operational Research Impact factor: 0.311, year: 2016

  19. A note on Nonparametric Confidence Interval for a Shift Parameter ...

    African Journals Online (AJOL)

    The method is illustrated using the Cauchy distribution as a location model. The kernel-based method is found to have a shorter interval for the shift parameter between two Cauchy distributions than the one based on the Mann-Whitney test statistic. Keywords: Best Asymptotic Normal; Cauchy distribution; Kernel estimates; ...

  20. Distributed MIMO-ISAR Sub-image Fusion Method

    Directory of Open Access Journals (Sweden)

    Gu Wenkun

    2017-02-01

    The fast fluctuation associated with a maneuvering target’s radar cross-section often affects the imaging performance stability of traditional monostatic Inverse Synthetic Aperture Radar (ISAR). To address this problem, in this study, we propose an imaging method based on the fusion of sub-images of frequency-diversity-distributed Multiple-Input Multiple-Output Inverse Synthetic Aperture Radar (MIMO-ISAR). First, we establish the analytic expression of a two-dimensional ISAR sub-image acquired by different channels of distributed MIMO-ISAR. Then, we derive the distance and azimuth distortion factors of the image acquired by the different channels. By compensating for the distortion of the ISAR image, we ultimately realize distributed MIMO-ISAR fusion imaging. Simulations verify the validity of this imaging method using distributed MIMO-ISAR.

  1. Information-theoretic methods for estimating of complicated probability distributions

    CERN Document Server

    Zong, Zhi

    2006-01-01

    Mixing various disciplines frequently produces something profound and far-reaching. Cybernetics is one often-quoted example. The mix of information theory, statistics and computing technology proves to be very useful, which has led to the recent development of information-theory based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is the fundamental task for quite some fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur

  2. Mathematical methods linear algebra normed spaces distributions integration

    CERN Document Server

    Korevaar, Jacob

    1968-01-01

    Mathematical Methods, Volume I: Linear Algebra, Normed Spaces, Distributions, Integration focuses on advanced mathematical tools used in applications and the basic concepts of algebra, normed spaces, integration, and distributions.The publication first offers information on algebraic theory of vector spaces and introduction to functional analysis. Discussions focus on linear transformations and functionals, rectangular matrices, systems of linear equations, eigenvalue problems, use of eigenvectors and generalized eigenvectors in the representation of linear operators, metric and normed vector

  3. Bootstrap Signal-to-Noise Confidence Intervals: An Objective Method for Subject Exclusion and Quality Control in ERP Studies

    Science.gov (United States)

    Parks, Nathan A.; Gannon, Matthew A.; Long, Stephanie M.; Young, Madeleine E.

    2016-01-01

    Analysis of event-related potential (ERP) data includes several steps to ensure that ERPs meet an appropriate level of signal quality. One such step, subject exclusion, rejects subject data if ERP waveforms fail to meet an appropriate level of signal quality. Subject exclusion is an important quality control step in the ERP analysis pipeline as it ensures that statistical inference is based only upon those subjects exhibiting clear evoked brain responses. This critical quality control step is most often performed simply through visual inspection of subject-level ERPs by investigators. Such an approach is qualitative, subjective, and susceptible to investigator bias, as there are no standards as to what constitutes an ERP of sufficient signal quality. Here, we describe a standardized and objective method for quantifying waveform quality in individual subjects and establishing criteria for subject exclusion. The approach uses bootstrap resampling of ERP waveforms (from a pool of all available trials) to compute a signal-to-noise ratio confidence interval (SNR-CI) for individual subject waveforms. The lower bound of this SNR-CI (SNRLB) yields an effective and objective measure of signal quality as it ensures that ERP waveforms statistically exceed a desired signal-to-noise criterion. SNRLB provides a quantifiable metric of individual subject ERP quality and eliminates the need for subjective evaluation of waveform quality by the investigator. We detail the SNR-CI methodology, establish the efficacy of employing this approach with Monte Carlo simulations, and demonstrate its utility in practice when applied to ERP datasets. PMID:26903849
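
    A minimal sketch of the bootstrap SNR-CI idea under simple assumptions (post-/pre-stimulus RMS as the SNR definition, synthetic trials); the published pipeline may define SNR differently:

    ```python
    # Bootstrap SNR confidence interval for a single subject's ERP: resample
    # trials with replacement, average into a waveform, define SNR as
    # post-stimulus RMS over pre-stimulus (baseline) RMS, and take the lower
    # percentile as the exclusion criterion SNR_LB.
    import numpy as np

    def snr_lower_bound(trials, n_baseline, n_boot=2000, alpha=0.05, rng=None):
        rng = rng or np.random.default_rng()
        n_trials = trials.shape[0]
        snrs = np.empty(n_boot)
        for b in range(n_boot):
            erp = trials[rng.integers(0, n_trials, n_trials)].mean(axis=0)
            noise = np.sqrt(np.mean(erp[:n_baseline] ** 2))
            signal = np.sqrt(np.mean(erp[n_baseline:] ** 2))
            snrs[b] = signal / noise
        return np.quantile(snrs, alpha)            # lower bound of the SNR-CI

    trials = np.random.default_rng(3).normal(size=(60, 300))  # 60 trials, 300 samples
    trials[:, 150:180] += 1.0                                 # injected "evoked response"
    print("SNR lower bound:", round(snr_lower_bound(trials, n_baseline=100), 2))
    ```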

  4. Three-Phase Harmonic Analysis Method for Unbalanced Distribution Systems

    Directory of Open Access Journals (Sweden)

    Jen-Hao Teng

    2014-01-01

    Due to the unbalanced features of distribution systems, a three-phase harmonic analysis method is essential to accurately analyze the harmonic impact on distribution systems. Moreover, harmonic analysis is the basic tool for harmonic filter design and harmonic resonance mitigation; therefore, the computational performance should also be efficient. An accurate and efficient three-phase harmonic analysis method for unbalanced distribution systems is proposed in this paper. The variations of bus voltages, bus current injections and branch currents affected by harmonic current injections can be analyzed by two relationship matrices developed from the topological characteristics of distribution systems. Some useful formulas are then derived to solve the three-phase harmonic propagation problem. After the harmonic propagation for each harmonic order is calculated, the total harmonic distortion (THD) for bus voltages can be calculated accordingly. The proposed method has better computational performance, since the time-consuming full admittance matrix inverse employed by the commonly used harmonic analysis methods is not necessary in the solution procedure. In addition, the proposed method can provide novel viewpoints in calculating the branch currents and bus voltages under harmonic pollution, which are vital for harmonic filter design. Test results demonstrate the effectiveness and efficiency of the proposed method.
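
    Once the per-order harmonic magnitudes at a bus are known, the THD follows directly; a minimal sketch with hypothetical per-unit values:

    ```python
    # After the per-order harmonic propagation is solved, the voltage THD at a
    # bus follows from its harmonic magnitudes (root-sum-square of harmonics
    # over the fundamental). Values are hypothetical.
    import numpy as np

    v_fund = 1.00                                        # fundamental bus voltage (p.u.)
    v_harm = {5: 0.030, 7: 0.022, 11: 0.012, 13: 0.008}  # harmonic order -> magnitude

    thd = np.sqrt(sum(v ** 2 for v in v_harm.values())) / v_fund
    print(f"THD = {100 * thd:.2f} %")
    ```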

  5. The frequency-independent control method for distributed generation systems

    DEFF Research Database (Denmark)

    Naderi, Siamak; Pouresmaeil, Edris; Gao, Wenzhong David

    2012-01-01

    In this paper a novel frequency-independent control method suitable for distributed generation (DG) is presented. This strategy is derived based on the abc/αβ transformation and abc/dq transformation of the ac system variables. The active and reactive currents injected by the DG are contr......
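
    The abc/αβ (Clarke) and αβ/dq (Park) transformations the strategy builds on can be sketched as follows (amplitude-invariant form; the control law itself is not reproduced):

    ```python
    # Amplitude-invariant Clarke (abc -> alpha-beta) and Park (alpha-beta -> dq)
    # transformations; theta is the rotating-frame angle.
    import numpy as np

    def abc_to_alphabeta(a, b, c):
        alpha = (2 * a - b - c) / 3.0
        beta = (b - c) / np.sqrt(3.0)
        return alpha, beta

    def alphabeta_to_dq(alpha, beta, theta):
        d = alpha * np.cos(theta) + beta * np.sin(theta)
        q = -alpha * np.sin(theta) + beta * np.cos(theta)
        return d, q

    # Balanced three-phase set: d should equal the amplitude, q should be ~0
    t, w = 0.002, 2 * np.pi * 50
    a, b, c = (np.cos(w * t), np.cos(w * t - 2 * np.pi / 3), np.cos(w * t + 2 * np.pi / 3))
    print(alphabeta_to_dq(*abc_to_alphabeta(a, b, c), theta=w * t))
    ```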

  6. Serum prolactin revisited: parametric reference intervals and cross platform evaluation of polyethylene glycol precipitation-based methods for discrimination between hyperprolactinemia and macroprolactinemia.

    Science.gov (United States)

    Overgaard, Martin; Pedersen, Susanne Møller

    2017-10-26

    Hyperprolactinemia diagnosis and treatment are often compromised by the presence of biologically inactive and clinically irrelevant higher-molecular-weight complexes of prolactin, macroprolactin. The objective of this study was to evaluate the performance of two macroprolactin screening regimes across commonly used automated immunoassay platforms. Parametric total and monomeric gender-specific reference intervals were determined for six immunoassay methods using female (n=96) and male sera (n=127) from healthy donors. The reference intervals were validated using 27 hyperprolactinemic and macroprolactinemic sera, whose content of monomeric and macroforms of prolactin was determined using gel filtration chromatography (GFC). Normative data for six prolactin assays included the range of values (2.5th-97.5th percentiles). Validation sera (hyperprolactinemic and macroprolactinemic; n=27) showed higher discordant classification [mean=2.8; 95% confidence interval (CI) 1.2-4.4] for the monomer reference interval method compared to the post-polyethylene glycol (PEG) recovery cutoff method (mean=1.8; 95% CI 0.8-2.8). The two monomer/macroprolactin discrimination methods did not differ significantly (p=0.089). Among macroprolactinemic sera evaluated by both discrimination methods, the Cobas and Architect/Kryptor prolactin assays showed the lowest and the highest number of misclassifications, respectively. Current automated immunoassays for prolactin testing require macroprolactin screening methods based on PEG precipitation in order to discriminate truly from falsely elevated serum prolactin. While the recovery cutoff and monomeric reference interval macroprolactin screening methods demonstrate similar discriminative ability, the latter method also provides the clinician with an easily interpretable monomeric prolactin concentration along with a monomeric reference interval.
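
    A minimal sketch of the post-PEG recovery cutoff screen; the 40% cutoff is a commonly used value assumed here for illustration, not necessarily the study's threshold:

    ```python
    # Post-PEG recovery screen: prolactin measured before and after
    # polyethylene glycol precipitation; a recovery below the cutoff (40% is
    # a commonly used value, assumed here) flags macroprolactinemia.
    def peg_screen(prl_total, prl_post_peg, recovery_cutoff=40.0):
        recovery = 100.0 * prl_post_peg / prl_total
        label = ("macroprolactinemia suspected" if recovery < recovery_cutoff
                 else "true hyperprolactinemia")
        return recovery, label

    recovery, label = peg_screen(prl_total=1850.0, prl_post_peg=420.0)  # mIU/L, hypothetical
    print(f"recovery {recovery:.0f}% -> {label}")
    ```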

  7. Modeling error distributions of growth curve models through Bayesian methods.

    Science.gov (United States)

    Zhang, Zhiyong

    2016-06-01

    Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.

  8. Dual reference point temperature interrogating method for distributed temperature sensor

    International Nuclear Information System (INIS)

    Ma, Xin; Ju, Fang; Chang, Jun; Wang, Weijie; Wang, Zongliang

    2013-01-01

    A novel method based on dual temperature reference points is presented to interrogate the temperature in a distributed temperature sensing (DTS) system. This new method is able to overcome deficiencies due to the impact of DC offsets and the gain difference between the two signal channels of the sensing system during temperature interrogation. Moreover, this method can in most cases avoid the need to calibrate the gain and DC offsets in the receiver, data acquisition and conversion. An improved temperature interrogation formula is presented, and the experimental results show that this method can efficiently estimate the channel amplification and system DC offset, thus improving the system accuracy. (letter)
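
    One plausible reading of the dual-reference-point idea, sketched under the assumption of a linear reading-temperature relation: two fiber sections at known temperatures fix the gain and DC offset, which are then inverted along the rest of the fiber; the numbers are invented:

    ```python
    # Two-point calibration sketch: readings at two fiber sections held at
    # known temperatures T1, T2 fix an (assumed linear) relation r = g*T + o,
    # which is then inverted along the rest of the fiber.
    import numpy as np

    T1, T2 = 20.0, 60.0            # known reference temperatures (deg C)
    r1, r2 = 1.08, 1.36            # sensor readings at the two reference sections

    g = (r2 - r1) / (T2 - T1)      # channel gain
    o = r1 - g * T1                # DC offset

    readings = np.array([1.10, 1.22, 1.30])
    print("estimated temperatures:", (readings - o) / g)
    ```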

  9. A code for obtaining temperature distribution by finite element method

    International Nuclear Information System (INIS)

    Bloch, M.

    1984-01-01

    The ELEFIB Fortran computer code, which uses the finite element method for calculating the temperature distribution of linear and two-dimensional problems in the steady-state regime or in the transient phase of heat transfer, is presented. The formulation of the equations uses the Galerkin method. Some examples are shown and the results are compared with other papers. The comparative evaluation shows that the elaborated code gives good values. (M.C.K.) [pt

  10. Distributed Interior-point Method for Loosely Coupled Problems

    DEFF Research Database (Denmark)

    Pakazad, Sina Khoshfetrat; Hansson, Anders; Andersen, Martin Skovgaard

    2014-01-01

    In this paper, we put forth distributed algorithms for solving loosely coupled unconstrained and constrained optimization problems. Such problems are usually solved using algorithms that are based on a combination of decomposition and first order methods. These algorithms are commonly very slow a...

  11. Community Based Distribution of Child Spacing Methods at ...

    African Journals Online (AJOL)

    uses volunteer CBD agents ... than us at the Hospital; male motivators by talking to their male counterparts help them to accept that their ...

  12. Correction of measured multiplicity distributions by the simulated annealing method

    International Nuclear Information System (INIS)

    Hafidouni, M.

    1993-01-01

    Simulated annealing is a method used to solve combinatorial optimization problems. It is used here for the correction of the observed multiplicity distribution from S-Pb collisions at 200 GeV/c per nucleon. (author) 11 refs., 2 figs
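
    A generic simulated-annealing loop of the kind used for such correction problems (the paper's actual cost function for unfolding the multiplicity distribution is not reproduced):

    ```python
    # Generic simulated annealing: minimize E(x) by accepting worse moves
    # with probability exp(-dE/T) while the temperature T is cooled.
    import math
    import random

    def anneal(energy, x0, step, t0=1.0, cooling=0.995, iters=20_000, seed=1):
        rng = random.Random(seed)
        x, e, t = x0, energy(x0), t0
        for _ in range(iters):
            cand = step(x, rng)
            de = energy(cand) - e
            if de < 0 or rng.random() < math.exp(-de / t):
                x, e = cand, e + de
            t *= cooling
        return x, e

    # Toy example: recover the minimum of a 1-D multimodal function
    f = lambda x: (x - 2.0) ** 2 + math.sin(8 * x)
    x_best, e_best = anneal(f, x0=-3.0, step=lambda x, r: x + r.uniform(-0.5, 0.5))
    print(f"x = {x_best:.3f}, E = {e_best:.3f}")
    ```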

  13. Negative binomial distribution fits to multiplicity distributions in restricted δη intervals from central O+Cu collisions at 14.6A GeV/c and their implication for "Intermittency"

    International Nuclear Information System (INIS)

    Tannenbaum, M.J.

    1993-01-01

    Experience in analyzing the data from Light and Heavy Ion Collisions in terms of distributions rather than moments suggests that conventional fluctuations of multiplicity and transverse energy can be well described by Gamma or Negative Binomial Distributions (NBD). Multiplicity distributions were obtained for central 16O+Cu collisions in bins of δη = 0.1, 0.2, 0.3, ..., 0.5, 1.0, where the bin of 1.0 covers 1.2 < η < 2.2 in the laboratory. NBD fits were performed to these distributions with excellent results in all δη bins. The κ parameter of the NBD fit increases linearly with the δη interval, which is a totally unexpected and particularly striking result. Due to the well known property of the NBD under convolution, this result indicates that the multiplicity distributions in adjacent bins of pseudorapidity δη ∼ 0.1 are largely statistically independent. The relationship to 2-particle correlations and "Intermittency" will be discussed

  14. Methods for reconstruction of the density distribution of nuclear power

    International Nuclear Information System (INIS)

    Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.

    2015-01-01

    Highlights: • Two methods for reconstruction of the pin power distribution are presented. • The ARM method uses an analytical solution of the 2D diffusion equation. • The PRM method uses a polynomial solution without boundary conditions. • The maximum errors in pin power reconstruction occur in the peripheral water region. • The errors are significantly smaller in the inner area of the core. - Abstract: In the analytical reconstruction method (ARM), the two-dimensional (2D) neutron diffusion equation is analytically solved for two energy groups (2G) and homogeneous nodes with the dimensions of a fuel assembly (FA). The solution employs a 2D fourth-order expansion for the axial leakage term. The Nodal Expansion Method (NEM) provides the solution average values: the four average partial currents on the surfaces of the node, the average flux in the node and the multiplying factor of the problem. The expansion coefficients for the axial leakage are determined directly from the NEM method or can be determined in the reconstruction method. A new polynomial reconstruction method (PRM) is implemented based on the 2D expansion for the axial leakage term. The ARM method uses the four average currents on the surfaces of the node and the four average fluxes in the corners of the node as boundary conditions, and the average flux in the node as a consistency condition. To determine the average fluxes in the corners of the node, an analytical solution is employed. This analytical solution uses the average fluxes on the surfaces of the node as boundary conditions, and discontinuities in the corners are incorporated. The polynomial and analytical solutions of the PRM and ARM methods, respectively, represent the homogeneous flux distributions. The detailed distributions inside a FA are estimated by the product of the homogeneous distribution and a local heterogeneous form function. Moreover, the form functions of power are used. The results show that the methods have good accuracy when compared with reference values and

  15. Multi-level methods and approximating distribution functions

    International Nuclear Information System (INIS)

    Wilson, D.; Baker, R. E.

    2016-01-01

    Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie’s direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie’s direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.

  16. Multi-level methods and approximating distribution functions

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, D., E-mail: daniel.wilson@dtc.ox.ac.uk; Baker, R. E. [Mathematical Institute, University of Oxford, Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG (United Kingdom)

    2016-07-15

    Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie’s direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie’s direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.

  17. Advanced airflow distribution methods for reducing exposure of indoor pollution

    DEFF Research Database (Denmark)

    Cao, Guangyu; Nielsen, Peter Vilhelm; Melikov, Arsen

    2017-01-01

    The adverse effect of various indoor pollutants on occupants’ health has been recognized. In public spaces flu viruses may spread from person to person by airflow generated by various traditional ventilation methods, like natural ventilation and mixing ventilation (MV). Personalized ventilation (PV) supplies clean air close to the occupant and directly into the breathing zone. Studies show that it improves the inhaled air quality and reduces the risk of airborne cross-infection in comparison with total volume (TV) ventilation. However, it is still challenging for PV and other advanced air distribution...... methods to reduce the exposure to gaseous and particulate pollutants under disturbed conditions and to ensure thermal comfort at the same time. The objective of this study is to analyse the performance of different advanced airflow distribution methods for protection of occupants from exposure to indoor...

  18. Advanced airflow distribution methods for reducing exposure of indoor pollution

    DEFF Research Database (Denmark)

    Cao, Guangyu; Nielsen, Peter Vilhelm; Melikov, Arsen Krikor

    The adverse effect of various indoor pollutants on occupants’ health has been recognized. In public spaces flu viruses may spread from person to person by airflow generated by various traditional ventilation methods, like natural ventilation and mixing ventilation (MV). Personalized ventilation (PV) supplies clean air close to the occupant and directly into the breathing zone. Studies show that it improves the inhaled air quality and reduces the risk of airborne cross-infection in comparison with total volume (TV) ventilation. However, it is still challenging for PV and other advanced air distribution methods to reduce the exposure to gaseous and particulate pollutants under disturbed conditions and to ensure thermal comfort at the same time. The objective of this study is to analyse the performance of different advanced airflow distribution methods for protection of occupants from exposure to indoor...

  19. System and Method for Monitoring Distributed Asset Data

    Science.gov (United States)

    Gorinevsky, Dimitry (Inventor)

    2015-01-01

    A computer-based monitoring system, and monitoring method implemented in computer software, for detecting, estimating, and reporting the condition states, their changes, and anomalies for many assets. The assets are of the same type, are operated over a period of time, and are outfitted with data collection systems. The proposed monitoring method accounts for the variability of working conditions for each asset by using a regression model that characterizes asset performance. The assets are of the same type but not identical. The proposed monitoring method accounts for asset-to-asset variability; it also accounts for drifts and trends in the asset condition and data. The proposed monitoring system can perform distributed processing of massive amounts of historical data without discarding any useful information, in cases where moving all the asset data into one central computing system might be infeasible. The overall processing includes distributed preprocessing of data records from each asset to produce compressed data.

  20. Synchronization Methods for Three Phase Distributed Power Generation Systems

    DEFF Research Database (Denmark)

    Timbus, Adrian Vasile; Teodorescu, Remus; Blaabjerg, Frede

    2005-01-01

    Nowadays, it is a general trend to increase the electricity production using Distributed Power Generation Systems (DPGS) based on renewable energy resources such as wind, sun or hydrogen. If these systems are not properly controlled, their connection to the utility network can generate problems...... on the grid side. Therefore, considerations about power generation, safe running and grid synchronization must be done before connecting these systems to the utility network. This paper is mainly dealing with the grid synchronization issues of distributed systems. An overview of the synchronization methods...

  1. Method of imaging the electrical conductivity distribution of a subsurface

    Science.gov (United States)

    Johnson, Timothy C.

    2017-09-26

    A method of imaging electrical conductivity distribution of a subsurface containing metallic structures with known locations and dimensions is disclosed. Current is injected into the subsurface to measure electrical potentials using multiple sets of electrodes, thus generating electrical resistivity tomography measurements. A numeric code is applied to simulate the measured potentials in the presence of the metallic structures. An inversion code is applied that utilizes the electrical resistivity tomography measurements and the simulated measured potentials to image the subsurface electrical conductivity distribution and remove effects of the subsurface metallic structures with known locations and dimensions.

  2. Novel Method of Unambiguous Moving Target Detection in Pulse-Doppler Radar with Random Pulse Repetition Interval

    Directory of Open Access Journals (Sweden)

    Liu Zhen

    2012-03-01

    Blind zones and ambiguities in range and velocity measurement are two important issues in traditional pulse-Doppler radar. By generating random deviations with respect to a mean Pulse Repetition Interval (PRI), this paper proposes a novel algorithm of Moving Target Detection (MTD) based on the Compressed Sensing (CS) theory, in which the random deviations of the PRI are converted to the Restricted Isometry Property (RIP) of the observation matrix. The ambiguities of range and velocity are eliminated by designing the signal parameters. The simulation results demonstrate that this scheme has high detection performance, with no ambiguities or blind zones. It can also shorten the coherent processing interval compared to the traditional staggered PRI mode, because only one pulse train is needed instead of several.

  3. Electrical power distribution control methods, electrical energy demand monitoring methods, and power management devices

    Science.gov (United States)

    Chassin, David P [Pasco, WA; Donnelly, Matthew K [Kennewick, WA; Dagle, Jeffery E [Richland, WA

    2011-12-06

    Electrical power distribution control methods, electrical energy demand monitoring methods, and power management devices are described. In one aspect, an electrical power distribution control method includes providing electrical energy from an electrical power distribution system, applying the electrical energy to a load, providing a plurality of different values for a threshold at a plurality of moments in time and corresponding to an electrical characteristic of the electrical energy, and adjusting an amount of the electrical energy applied to the load responsive to an electrical characteristic of the electrical energy triggering one of the values of the threshold at the respective moment in time.

  4. Communication Systems and Study Method for Active Distribution Power systems

    DEFF Research Database (Denmark)

    Wei, Mu; Chen, Zhe

    Due to the involvement and evolution of communication technologies in contemporary power systems, the applications of modern communication technologies in distribution power systems are becoming increasingly important. In this paper, the International Organization for Standardization (ISO......) reference seven-layer model of communication systems, and the main communication technologies and protocols on each corresponding layer, are introduced. Some newly developed communication techniques, like Ethernet, are discussed with reference to the possible applications in distributed power systems....... The suitability of the communication technology to the distribution power system with active renewable energy based generation units is discussed. Subsequently, typical possible communication systems are studied by simulation. In this paper, a novel method of integrating communication system impact into power...

  5. Distribution-independent hierarchical N-body methods

    International Nuclear Information System (INIS)

    Aluru, S.

    1994-01-01

    The N-body problem is to simulate the motion of N particles under the influence of mutual force fields based on an inverse square law. The problem has applications in several domains including astrophysics, molecular dynamics, fluid dynamics, radiosity methods in computer graphics and numerical complex analysis. Research efforts have focused on reducing the O(N^2) time per iteration required by the naive algorithm of computing each pairwise interaction. Widely respected among these are the Barnes-Hut and Greengard methods. Greengard claims his algorithm reduces the complexity to O(N) time per iteration. Throughout this thesis, we concentrate on rigorous, distribution-independent, worst-case analysis of the N-body methods. We show that Greengard's algorithm is not O(N), as claimed. Both Barnes-Hut and Greengard's methods depend on the same data structure, which we show is distribution-dependent. For the distribution that results in the smallest running time, we show that Greengard's algorithm is Ω(N log^2 N) in two dimensions and Ω(N log^4 N) in three dimensions. We have designed a hierarchical data structure whose size depends entirely upon the number of particles and is independent of the distribution of the particles. We show that both Greengard's and Barnes-Hut algorithms can be used in conjunction with this data structure to reduce their complexity. Apart from reducing the complexity of the Barnes-Hut algorithm, the data structure also permits more accurate error estimation. We present two- and three-dimensional algorithms for creating the data structure. The multipole method designed using this data structure has a complexity of O(N log N) in two dimensions and O(N log^2 N) in three dimensions

  6. Recurrence interval analysis of trading volumes.

    Science.gov (United States)

    Ren, Fei; Zhou, Wei-Xing

    2010-06-01

    We study the statistical properties of the recurrence intervals τ between successive trading volumes exceeding a certain threshold q. The recurrence interval analysis is carried out for the 20 liquid Chinese stocks covering a period from January 2000 to May 2009, and two Chinese indices from January 2003 to April 2009. Similar to the recurrence interval distribution of the price returns, the tail of the recurrence interval distribution of the trading volumes follows a power-law scaling, and the results are verified by the goodness-of-fit tests using the Kolmogorov-Smirnov (KS) statistic, the weighted KS statistic and the Cramér-von Mises criterion. The measurements of the conditional probability distribution and the detrended fluctuation function show that both short-term and long-term memory effects exist in the recurrence intervals between trading volumes. We further study the relationship between trading volumes and price returns based on the recurrence interval analysis method. It is found that large trading volumes are more likely to occur following large price returns, and the comovement between trading volumes and price returns is more pronounced for large trading volumes.
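
    A short sketch of the recurrence-interval construction itself, on a synthetic heavy-tailed volume series; the threshold and distribution are assumptions for illustration:

    ```python
    # Recurrence intervals: times between successive volumes exceeding a
    # threshold q (here the 95th percentile of a synthetic heavy-tailed
    # series), whose tail distribution is then inspected.
    import numpy as np

    rng = np.random.default_rng(5)
    volumes = rng.pareto(3.0, size=100_000)      # synthetic heavy-tailed volume series

    q = np.quantile(volumes, 0.95)               # threshold: 95th percentile
    tau = np.diff(np.flatnonzero(volumes > q))   # recurrence intervals (in ticks)

    scaled = tau / tau.mean()
    print(f"mean interval {tau.mean():.1f} ticks; "
          f"P(tau > 5<tau>) = {np.mean(scaled > 5):.4f}")
    ```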

  7. Application of autoradiographic methods for contaminant distribution studies in soils

    International Nuclear Information System (INIS)

    Povetko, O.G.; Higley, K.A.

    2000-01-01

    In order to determine the physical location of contaminants in soil, solidified soil 'thin' sections, which preserve the undisturbed structural characteristics of the original soil, were prepared. This paper describes the application of different autoradiographic methods to identify the distribution of selected nuclides along key structural features of sample soils and the sizes of 'hot particles' of contaminant. These autoradiographic methods included contact autoradiography using CR-39 (Homalite Plastics) plastic alpha track detectors and neutron-induced autoradiography that produced fission fragment tracks in Lexan (Thrust Industries, Inc.) plastic detectors. Intact soil samples containing weapons-grade plutonium from the Rocky Flats Environmental Test Site and control samples from outside the site location were used in thin soil section preparation. The distribution of particles of actinides was observed and analyzed through the soil section depth profile from the surface to the 15-cm depth. The combination of the two autoradiographic methods made it possible to distinguish alpha-emitting particles of natural U, 239+240 Pu and non-fissile alpha-emitters. Locations of 990 alpha 'stars' caused by 239+240 Pu and 241 Am 'hot particles' were recorded, the particles were sized, and their size-frequency, depth and activity distributions were analyzed. Several large colloidal conglomerates of 239+240 Pu and 241 Am 'hot particles' were found in the soil profile. Their alpha and fission fragment 'star' images were microphotographed. (author)

  8. Rock sampling. [method for controlling particle size distribution

    Science.gov (United States)

    Blum, P. (Inventor)

    1971-01-01

    A method for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The method involves cutting grooves in the rock surface to provide a grouping of parallel ridges and subsequently machining the ridges to provide a powder specimen. The machining step may comprise milling, drilling, lathe cutting or the like; but a planing step is advantageous. Control of the particle size distribution is effected primarily by changing the height and width of these ridges. This control exceeds that obtainable by conventional grinding.

  9. Reference intervals for serum total cholesterol, HDL cholesterol and ...

    African Journals Online (AJOL)

    Reference intervals of total cholesterol, HDL cholesterol and non-HDL cholesterol concentrations were determined on 309 blood donors from an urban and peri-urban population of Botswana. Using non-parametric methods to establish 2.5th and 97.5th percentiles of the distribution, the intervals were: total cholesterol 2.16 ...
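
    The nonparametric procedure described is essentially a percentile computation; a sketch with simulated values (not the Botswana donor data) plus a bootstrap for the percentile uncertainty:

```python
# Nonparametric reference interval: the 2.5th and 97.5th percentiles bound the
# central 95% of values, with a simple bootstrap for their uncertainty.
import numpy as np

rng = np.random.default_rng(0)
chol = rng.normal(4.5, 1.0, size=309)          # stand-in for total cholesterol (mmol/L)

lo, hi = np.percentile(chol, [2.5, 97.5])
boots = np.array([np.percentile(rng.choice(chol, chol.size, replace=True), [2.5, 97.5])
                  for _ in range(2000)])
print(f"reference interval: {lo:.2f}-{hi:.2f} mmol/L")
print("90% bootstrap CIs:",
      np.percentile(boots[:, 0], [5, 95]).round(2),
      np.percentile(boots[:, 1], [5, 95]).round(2))
```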

  10. Method for adding nodes to a quantum key distribution system

    Science.gov (United States)

    Grice, Warren P

    2015-02-24

    An improved quantum key distribution (QKD) system and method are provided. The system and method introduce new clients at intermediate points along a quantum channel, where any two clients can establish a secret key without the need for a secret meeting between the clients. The new clients perform operations on photons as they pass through nodes in the quantum channel, and participate in a non-secret protocol that is amended to include the new clients. The system and method significantly increase the number of clients that can be supported by a conventional QKD system, with only a modest increase in cost. The system and method are compatible with a variety of QKD schemes, including polarization, time-bin, continuous variable and entanglement QKD.

  11. A method for statistically comparing spatial distribution maps

    Directory of Open Access Journals (Sweden)

    Reynolds Mary G

    2009-01-01

    Background: Ecological niche modeling is a method for estimation of species distributions based on certain ecological parameters. Thus far, empirical determination of significant differences between independently generated distribution maps for a single species (maps which are created through equivalent processes, but with different ecological input parameters) has been challenging. Results: We describe a method for comparing model outcomes, which allows a statistical evaluation of whether the strength of prediction and breadth of predicted areas is measurably different between projected distributions. To create ecological niche models for statistical comparison, we utilized GARP (Genetic Algorithm for Rule-Set Production) software to generate ecological niche models of human monkeypox in Africa. We created several models, keeping constant the case location input records for each model but varying the ecological input data. In order to assess the relative importance of each ecological parameter included in the development of the individual predicted distributions, we performed pixel-to-pixel comparisons between model outcomes and calculated the mean difference in pixel scores. We used a two-sample Student's t-test (assuming as null hypothesis that both maps were identical to each other regardless of which input parameters were used) to examine whether the mean difference in corresponding pixel scores from one map to another was greater than would be expected by chance alone. We also utilized weighted kappa statistics, frequency distributions, and percent difference to look at the disparities in pixel scores. Multiple independent statistical tests indicated precipitation as the single most important independent ecological parameter in the niche model for human monkeypox disease. Conclusion: In addition to improving our understanding of the natural factors influencing the distribution of human monkeypox disease, such pixel-to-pixel comparison

  12. Discrete method for design of flow distribution in manifolds

    International Nuclear Information System (INIS)

    Wang, Junye; Wang, Hualin

    2015-01-01

    Flow in manifold systems is encountered in the design of various industrial processes, such as fuel cells, microreactors, microchannels, plate heat exchangers, and radial flow reactors. The uniformity of flow distribution in a manifold is a key indicator of the performance of the process equipment. In this paper, a discrete method for a U-type arrangement was developed to evaluate the uniformity of the flow distribution and the pressure drop, and was then used for direct comparisons between the U-type and the Z-type. The uniformity of the U-type is generally better than that of the Z-type in most cases for small ζ and large M. The U-type and the Z-type approach each other as ζ increases or M decreases. However, the Z-type is more sensitive to structures than the U-type and approaches uniform flow distribution faster than the U-type as M decreases or ζ increases. This provides a simple yet powerful tool for designers to evaluate and select a flow arrangement and offers practical measures for industrial applications. - Highlights: • Discrete methodology of flow field designs in manifolds with U-type arrangements. • Quantitative comparison between U-type and Z-type arrangements. • Discrete solution of flow distribution with varying flow coefficients. • Practical measures and guideline for the design of manifold systems.

  13. The Emergent Capabilities of Distributed Satellites and Methods for Selecting Distributed Satellite Science Missions

    Science.gov (United States)

    Corbin, B. A.; Seager, S.; Ross, A.; Hoffman, J.

    2017-12-01

    Distributed satellite systems (DSS) have emerged as an effective and cheap way to conduct space science, thanks to advances in the small satellite industry. However, relatively few space science missions have utilized multiple assets to achieve their primary scientific goals. Previous research on methods for evaluating mission concept designs has shown that distributed systems are rarely competitive with monolithic systems, partially because it is difficult to quantify the added value of DSSs over monolithic systems. Comparatively little research has focused on how DSSs can be used to achieve new, fundamental space science goals that cannot be achieved with monolithic systems, or how to choose a design from a larger possible tradespace of options. There are seven emergent capabilities of distributed satellites: shared sampling, simultaneous sampling, self-sampling, census sampling, stacked sampling, staged sampling, and sacrifice sampling. These capabilities are either fundamentally, analytically, or operationally unique in their application to distributed science missions, and they can be leveraged to achieve science goals that are either impossible or difficult and costly to achieve with monolithic systems. The Responsive Systems Comparison (RSC) method combines Multi-Attribute Tradespace Exploration with Epoch-Era Analysis to examine benefits, costs, and flexible options in complex systems over the mission lifecycle. Modifications to the RSC method as it exists in previously published literature were made in order to more accurately characterize how value is derived from space science missions. New metrics help rank designs by the value derived over their entire mission lifecycle and show more accurate cumulative value distributions. The RSC method was applied to four case study science missions that leveraged the emergent capabilities of distributed satellites to achieve their primary science goals. In all four case studies, RSC showed how scientific value was

  14. Standard test method for distribution coefficients of inorganic species by the batch method

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This test method covers the determination of distribution coefficients of chemical species to quantify uptake onto solid materials by a batch sorption technique. It is a laboratory method primarily intended to assess the sorption of dissolved ionic species subject to migration through pores and interstices of site-specific geomedia. It may also be applied to other materials such as manufactured adsorption media and construction materials. Application of the results to long-term field behavior is not addressed in this method. Distribution coefficients for radionuclides in selected geomedia are commonly determined for the purpose of assessing the potential migratory behavior of contaminants in the subsurface of contaminated sites and waste disposal facilities. This test method is also applicable to parametric studies of the variables and mechanisms which contribute to the measured distribution coefficient. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement a...

  15. A Network Reconfiguration Method Considering Data Uncertainties in Smart Distribution Networks

    Directory of Open Access Journals (Sweden)

    Ke-yan Liu

    2017-05-01

    This work presents a method for distribution network reconfiguration with the simultaneous consideration of distributed generation (DG) allocation. The uncertainties of load fluctuation before the network reconfiguration are also considered. Three optimization objectives, including minimal line loss cost, minimum expected energy not supplied, and minimum switch operation cost, are investigated. The multi-objective optimization problem is further transformed into a single-objective optimization problem by utilizing weighting factors. The proposed network reconfiguration method includes two periods. The first period is to create a feasible topology network by using binary particle swarm optimization (BPSO). Then the DG allocation problem is solved by utilizing sensitivity analysis and a harmony search algorithm (HSA). Meanwhile, interval analysis is applied to deal with the uncertainties of load and device parameters. Test cases are studied using the standard IEEE 33-bus and PG&E 69-bus systems. Different scenarios and comparisons are analyzed in the experiments. The results show the applicability of the proposed method. The performance analysis of the proposed method is also investigated. The computational results indicate that the proposed network reconfiguration algorithm is feasible.
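
    The interval-analysis ingredient can be illustrated in isolation: represent each uncertain load as a [lo, hi] pair and propagate the bounds through a simple lumped loss formula. The feeder data, the +/-10% band and the loss constant below are made up for illustration; they are not the paper's test systems:

```python
# Toy interval arithmetic for load uncertainty: bounds on uncertain bus loads
# are propagated through an I^2R-style loss expression.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def sq(self):        # x**2 for a nonnegative interval
        return Interval(self.lo ** 2, self.hi ** 2)
    def scale(self, k):  # multiply by a nonnegative constant
        return Interval(k * self.lo, k * self.hi)

# Uncertain bus loads in MW, +/-10% around the forecast
loads = [Interval(0.9 * p, 1.1 * p) for p in (1.2, 0.8, 2.0)]

# Loss on one section carrying the sum of downstream loads, with the line
# resistance and voltage lumped into one illustrative constant k
total = loads[0] + loads[1] + loads[2]
loss = total.sq().scale(0.004)
print(f"total load in [{total.lo:.2f}, {total.hi:.2f}] MW, "
      f"loss in [{loss.lo:.4f}, {loss.hi:.4f}] MW")
```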

  16. Planning and Optimization Methods for Active Distribution Systems

    DEFF Research Database (Denmark)

    Abbey, Chad; Baitch, Alex; Bak-Jensen, Birgitte

    distribution planning. Active distribution networks (ADNs) have systems in place to control a combination of distributed energy resources (DERs), defined as generators, loads and storage. With these systems in place, the ADN becomes an Active Distribution System (ADS). Distribution system operators (DSOs) have...

  17. Data distribution method of workflow in the cloud environment

    Science.gov (United States)

    Wang, Yong; Wu, Junjuan; Wang, Ying

    2017-08-01

    Cloud computing for workflow applications provides the required high-efficiency computation and large storage capacity, but it also brings challenges to the protection of trade secrets and other private data. Because handling private data increases the data transmission time, this paper presents a new data allocation algorithm based on the degree of collaborative data damage, improving the existing data allocation strategy in which safety in the public cloud depends on the private cloud. In the initial stage, the static allocation method improves on the original by partitioning only the non-confidential data; in the operational phase, the data distribution scheme is dynamically adjusted as new data continue to be generated. The experimental results show that the improved method is effective in reducing the data transmission time.

  18. Combustor and method for distributing fuel in the combustor

    Science.gov (United States)

    Uhm, Jong Ho; Ziminsky, Willy Steve; Johnson, Thomas Edward; York, William David

    2016-04-26

    A combustor includes a tube bundle that extends radially across at least a portion of the combustor. The tube bundle includes an upstream surface axially separated from a downstream surface. A plurality of tubes extends from the upstream surface through the downstream surface, and each tube provides fluid communication through the tube bundle. A baffle extends axially inside the tube bundle between adjacent tubes. A method for distributing fuel in a combustor includes flowing a fuel into a fuel plenum defined at least in part by an upstream surface, a downstream surface, a shroud, and a plurality of tubes that extend from the upstream surface to the downstream surface. The method further includes impinging the fuel against a baffle that extends axially inside the fuel plenum between adjacent tubes.

  19. The synchronization method for distributed small satellite SAR

    Science.gov (United States)

    Xing, Lei; Gong, Xiaochun; Qiu, Wenxun; Sun, Zhaowei

    2007-11-01

    One critical requirement for distributed small-satellite SAR is the trigger time precision when all satellites turn on their radar payloads. This trigger operation is controlled by a dedicated communication tool or a GPS system. In this paper, a hardware platform is proposed that integrates the navigation, attitude control, and data handling systems. Based on it, a probabilistic synchronization method with a ring architecture is proposed to meet the SAR time precision requirement. To simplify the design of the transceiver, half-duplex communication is used in this method. Research shows that the time precision depends on the relative frequency drift rate, satellite number, retry times, read error and round delay length. Equipped with a crystal oscillator of short-term stability of order 10⁻¹¹, this platform can achieve and maintain a nanosecond-order time error in a typical three-satellite formation experiment during the whole operating process.

  20. A probabilistic approach for representation of interval uncertainty

    International Nuclear Information System (INIS)

    Zaman, Kais; Rangavajhala, Sirisha; McDonald, Mark P.; Mahadevan, Sankaran

    2011-01-01

    In this paper, we propose a probabilistic approach to represent interval data for input variables in reliability and uncertainty analysis problems, using flexible families of continuous Johnson distributions. Such a probabilistic representation of interval data facilitates a unified framework for handling aleatory and epistemic uncertainty. For fitting probability distributions, methods such as moment matching are commonly used in the literature. However, unlike point data where single estimates for the moments of data can be calculated, moments of interval data can only be computed in terms of upper and lower bounds. Finding bounds on the moments of interval data has been generally considered an NP-hard problem because it includes a search among the combinations of multiple values of the variables, including interval endpoints. In this paper, we present efficient algorithms based on continuous optimization to find the bounds on second and higher moments of interval data. With numerical examples, we show that the proposed bounding algorithms are scalable in polynomial time with respect to increasing number of intervals. Using the bounds on moments computed using the proposed approach, we fit a family of Johnson distributions to interval data. Furthermore, using an optimization approach based on percentiles, we find the bounding envelopes of the family of distributions, termed as a Johnson p-box. The idea of bounding envelopes for the family of Johnson distributions is analogous to the notion of empirical p-box in the literature. Several sets of interval data with different numbers of intervals and type of overlap are presented to demonstrate the proposed methods. As against the computationally expensive nested analysis that is typically required in the presence of interval variables, the proposed probabilistic representation enables inexpensive optimization-based strategies to estimate bounds on an output quantity of interest.
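
    The core bounding idea is easy to demonstrate on a toy data set. In the sketch below, the mean bounds are exact endpoint averages, the variance lower bound is found by convex optimization over the box of interval values, and the variance upper bound is taken over box vertices (valid because the variance is convex in the data). This illustrates optimization-based moment bounds in spirit, not the paper's exact algorithms:

```python
# Bounds on the mean and variance of interval data [lo_i, hi_i].
import itertools
import numpy as np
from scipy.optimize import minimize

lo = np.array([1.0, 2.0, 2.5, 4.0])
hi = np.array([1.5, 3.5, 3.0, 5.0])

mean_lb, mean_ub = lo.mean(), hi.mean()        # exact bounds on the mean

var = lambda x: np.var(x)                      # population variance, convex in x
x0 = (lo + hi) / 2
res = minimize(var, x0, bounds=list(zip(lo, hi)))   # convex: local min = global min
var_lb = res.fun

# Max of a convex function over a box is attained at a vertex; enumerate them
var_ub = max(var(np.array(v)) for v in itertools.product(*zip(lo, hi)))

print(f"mean in [{mean_lb:.3f}, {mean_ub:.3f}], "
      f"variance in [{var_lb:.4f}, {var_ub:.4f}]")
```

    Vertex enumeration is exponential in the number of intervals; the paper's contribution is precisely to replace such searches with scalable continuous optimization.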

  1. Assessing common classification methods for the identification of abnormal repolarization using indicators of T-wave morphology and QT interval

    DEFF Research Database (Denmark)

    Shakibfar, Saeed; Graff, Claus; Ehlers, Lars Holger

    2012-01-01

    Various parameters based on QTc and T-wave morphology have been shown to be useful discriminators for drug induced I(Kr)-blocking. Using different classification methods this study compares the potential of these two features for identifying abnormal repolarization on the ECG. A group of healthy ...... the approach has not been tested in this setting....

  2. Simple method of generating and distributing frequency-entangled qudits

    Science.gov (United States)

    Jin, Rui-Bo; Shimizu, Ryosuke; Fujiwara, Mikio; Takeoka, Masahiro; Wakabayashi, Ryota; Yamashita, Taro; Miki, Shigehito; Terai, Hirotaka; Gerrits, Thomas; Sasaki, Masahide

    2016-11-01

    High-dimensional, frequency-entangled photonic quantum bits (qudits for d dimensions) are promising resources for quantum information processing in an optical fiber network and can also be used to improve channel capacity and security for quantum communication. However, up to now, it has still been challenging to prepare high-dimensional frequency-entangled qudits in experiments, due to technical limitations. Here we propose and experimentally implement a novel method for simple generation of frequency-entangled qudits with d > 10 without the use of any spectral filters or cavities. The generated state is distributed over 15 km in total length. This scheme combines the technique of spectral engineering of biphotons generated by spontaneous parametric down-conversion and the technique of spectrally resolved Hong-Ou-Mandel interference. Our frequency-entangled qudits will enable quantum cryptographic experiments with enhanced performance. This distribution of distinct entangled frequency modes may also be useful for improved metrology, quantum remote synchronization, as well as for fundamental tests of stronger violation of local realism.

  3. Dirichlet and Related Distributions Theory, Methods and Applications

    CERN Document Server

    Ng, Kai Wang; Tang, Man-Lai

    2011-01-01

    The Dirichlet distribution appears in many areas of application, which include modelling of compositional data, Bayesian analysis, statistical genetics, and nonparametric inference. This book provides a comprehensive review of the Dirichlet distribution and two extended versions, the Grouped Dirichlet Distribution (GDD) and the Nested Dirichlet Distribution (NDD), arising from likelihood and Bayesian analysis of incomplete categorical data and survey data with non-response. The theoretical properties and applications are also reviewed in detail for other related distributions, such as the inve

  4. Electrocardiographic PR-interval duration and cardiovascular risk

    DEFF Research Database (Denmark)

    Rasmussen, Peter Vibe; Nielsen, Jonas Bille; Skov, Morten Wagner

    2017-01-01

    Background: Because of ambiguous reports in the literature, we aimed to investigate the association between PR interval and the risk of all-cause and cardiovascular death, heart failure, and pacemaker implantation, allowing for a nonlinear relationship. Methods: We included 293,111 individuals...... into 7 groups based on the population PR interval distribution. Cox models were used, with reference to a PR interval between 152 and 161 ms (40th to heart failure...... adjustment. A long PR interval conferred an increased risk of heart failure (> 200 ms; HR, 1.31; 95% CI, 1.22-1.42; P 200 ms (HR, 3...

  5. Robust Confidence Interval for a Ratio of Standard Deviations

    Science.gov (United States)

    Bonett, Douglas G.

    2006-01-01

    Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…
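
    For reference, the classic normal-theory interval that the abstract uses as its baseline (not Bonett's robust method) can be sketched as follows; the sample variance ratio divided by F quantiles bounds the population variance ratio, and square roots give the SD ratio:

```python
# Classic F-based 95% CI for sigma1/sigma2 under normality (illustrative data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x, y = rng.normal(0, 1.0, 40), rng.normal(0, 1.5, 50)
s1, s2, n1, n2 = x.std(ddof=1), y.std(ddof=1), len(x), len(y)

ratio2 = (s1 / s2) ** 2                        # sample variance ratio
lo = ratio2 / stats.f.ppf(0.975, n1 - 1, n2 - 1)
hi = ratio2 / stats.f.ppf(0.025, n1 - 1, n2 - 1)
print(f"sigma1/sigma2 in [{np.sqrt(lo):.3f}, {np.sqrt(hi):.3f}] (95% CI)")
```

    It is exactly this interval's fragility under nonnormal data that motivates the robust alternative the record describes.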

  6. A convenient method of obtaining percentile norms and accompanying interval estimates for self-report mood scales (DASS, DASS-21, HADS, PANAS, and sAD).

    Science.gov (United States)

    Crawford, John R; Garthwaite, Paul H; Lawrie, Caroline J; Henry, Julie D; MacDonald, Marie A; Sutherland, Jane; Sinha, Priyanka

    2009-06-01

    A series of recent papers have reported normative data from the general adult population for commonly used self-report mood scales. To bring together and supplement these data in order to provide a convenient means of obtaining percentile norms for the mood scales. A computer program was developed that provides point and interval estimates of the percentile rank corresponding to raw scores on the various self-report scales. The program can be used to obtain point and interval estimates of the percentile rank of an individual's raw scores on the DASS, DASS-21, HADS, PANAS, and sAD mood scales, based on normative sample sizes ranging from 758 to 3822. The interval estimates can be obtained using either classical or Bayesian methods as preferred. The computer program (which can be downloaded at www.abdn.ac.uk/~psy086/dept/MoodScore.htm) provides a convenient and reliable means of supplementing existing cut-off scores for self-report mood scales.
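
    The point-and-interval idea can be sketched independently of the authors' program: estimate the percentile rank from a normative sample and attach a classical Clopper-Pearson interval to the underlying proportion. The normative scores below are simulated, and the mid-probability tie handling is one common convention, not necessarily the one the program uses:

```python
# Percentile rank of a raw score with a classical 95% interval.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
norms = rng.normal(10, 4, size=1000).round()   # stand-in normative sample
raw = 16

# Count below the raw score, crediting half of the ties (mid-probability rule)
k = int(np.sum(norms < raw) + 0.5 * np.sum(norms == raw))
n = norms.size
point = 100.0 * k / n

# Clopper-Pearson bounds on the underlying proportion
lo = stats.beta.ppf(0.025, k, n - k + 1) if k > 0 else 0.0
hi = stats.beta.ppf(0.975, k + 1, n - k) if k < n else 1.0
print(f"percentile rank {point:.1f} (95% CI {100*lo:.1f}-{100*hi:.1f})")
```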

  7. An interval-possibilistic basic-flexible programming method for air quality management of municipal energy system through introducing electric vehicles.

    Science.gov (United States)

    Yu, L; Li, Y P; Huang, G H; Shan, B G

    2017-09-01

    The contradictions between sustainable transportation development and environmental issues have been aggravated significantly and have become one of the major concerns for energy systems planning and management. A heavy emphasis is placed on the stimulation of electric vehicles (EVs) to handle these problems, which are associated with various complexities and uncertainties in a municipal energy system (MES). In this study, an interval-possibilistic basic-flexible programming (IPBFP) method is proposed for planning the MES of Qingdao, where uncertainties expressed as interval-flexible variables and interval-possibilistic parameters can be effectively reflected. Support vector regression (SVR) is used for predicting the electricity demand of the city under various scenarios. Solutions of EV stimulation levels and satisfaction levels in association with flexible constraints and predetermined necessity degrees are analyzed, which can help identify optimized energy-supply patterns that support improvement of air quality and hedge against violation of soft constraints. Results disclose that largely developing EVs can help facilitate the city's energy system in an environmentally effective way. However, compared to the rapid growth of transportation, the EVs' contribution to improving the city's air quality is limited. It is desired that, to achieve an environmentally sustainable MES, more concern should be focused on the integration of increasing renewable energy resources, stimulating EVs, as well as improving energy transmission, transport and storage. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. 99Tc in the environment. Sources, distribution and methods

    International Nuclear Information System (INIS)

    Garcia-Leon, Manuel

    2005-01-01

    ⁹⁹Tc is a β-emitter, E_max = 294 keV, with a very long half-life (T₁/₂ = 2.11 × 10⁵ y). It is mainly produced in the fission of ²³⁵U and ²³⁹Pu at a rate of about 6%. This rate, together with its long half-life, makes it a significant nuclide in the whole nuclear fuel cycle, from which it can be introduced into the environment at different rates depending on the cycle step. A gross estimation shows that, adding all the possible sources, at least 2000 TBq had been released into the environment up to 2000 and that up to the middle of the nineties of the last century some 64000 TBq had been produced worldwide. Nuclear explosions have liberated some 160 TBq into the environment. In this work, the environmental distribution of ⁹⁹Tc as well as the methods for its determination will be discussed. Emphasis is put on the environmental relevance of ⁹⁹Tc, mainly with regard to the future committed radiation dose received by the population and to the problem of nuclear waste management. Its determination at environmental levels is a challenging task. For that, special mention is made of the mass spectrometric methods for its measurement. (author)

  9. Moisture distribution in sludges based on different testing methods

    Institute of Scientific and Technical Information of China (English)

    Wenyi Deng; Xiaodong Li; Jianhua Yan; Fei Wang; Yong Chi; Kefa Cen

    2011-01-01

    Moisture distributions in municipal sewage sludge, printing and dyeing sludge, and paper mill sludge were experimentally studied based on four different methods, i.e., a drying test, a thermogravimetric-differential thermal analysis (TG-DTA) test, a thermogravimetric-differential scanning calorimetry (TG-DSC) test and a water activity test. The results indicated that the moisture in the mechanically dewatered sludges consisted of interstitial water, surface water and bound water. The interstitial water accounted for more than 50% wet basis (wb) of the total moisture content. The bond strength of sludge moisture increased with decreasing moisture content, especially when the moisture content was lower than 50% wb. Furthermore, a comparison among the four different testing methods is presented. The drying test had the advantage of being able to quantify free water, interstitial water, surface water and bound water, while the TG-DSC test, TG-DTA test and water activity test were capable of determining the bond strength of moisture in sludge. It was found that the results from the TG-DSC and TG-DTA tests are more persuasive than those of the water activity test.

  10. Development of methods for DSM and distribution automation planning

    International Nuclear Information System (INIS)

    Kaerkkaeinen, S.; Kekkonen, V.; Rissanen, P.

    1998-01-01

    Demand-side management (DSM) is usually a utility (or sometimes governmental) activity designed to influence the energy demand of customers (both level and load variation). It includes basic options like strategic conservation or load growth, peak clipping, load shifting and fuel switching. Typical ways to realize DSM are direct load control, innovative tariffs, different types of campaigns, etc. The restructuring of utilities in Finland and increased competition in the electricity market have had a dramatic influence on DSM. Traditional ways are impossible due to the conflicting interests of the generation, network and supply businesses and increased competition between different actors in the market. The costs and benefits of DSM are divided among different companies, and different types of utilities are interested only in those activities which are beneficial to them. On the other hand, due to the increased competition, the suppliers are diversifying to different types of products, and an increasing number of customer services partly based on DSM are available. The aim of this project was to develop and assess methods for DSM and distribution automation planning from the utility point of view. The methods were also applied to case studies at utilities.

  12. Review of islanding detection methods for distributed generation

    DEFF Research Database (Denmark)

    Chen, Zhe; Mahat, Pukar; Bak-Jensen, Birgitte

    2008-01-01

    This paper presents an overview of power system islanding and islanding detection techniques. Islanding detection techniques for a distribution system with distributed generation (DG) can broadly be divided into remote and local techniques. A remote islanding detection technique is associated...

  13. Reinterpretation of the results of a pooled analysis of dietary carotenoid intake and breast cancer risk by using the interval collapsing method

    Directory of Open Access Journals (Sweden)

    Jong-Myon Bae

    2016-06-01

    OBJECTIVES: A pooled analysis of 18 prospective cohort studies, reported in 2012, evaluated carotenoid intakes and breast cancer risk defined by estrogen receptor (ER) and progesterone receptor (PR) statuses using the "highest versus lowest intake" method (HLM). By applying the interval collapsing method (ICM) to maximize the use of the estimated information, we reevaluated the results of the previous analysis in order to reinterpret the inferences made. METHODS: In order to estimate the summary effect size (sES) and its 95% confidence interval (CI), meta-analyses with the random-effects model were conducted for the adjusted relative risks and their 95% CIs from the second to the fifth interval according to five kinds of carotenoids and ER/PR status. RESULTS: The following new findings were identified: α-carotene and β-cryptoxanthin have protective effects on overall breast cancer. All five kinds of carotenoids showed protective effects on ER− breast cancer. β-Carotene level increased the risk of ER+ or ER+/PR+ breast cancer. α-Carotene, β-carotene, lutein/zeaxanthin, and lycopene showed a protective effect on ER−/PR+ or ER−/PR− breast cancer. CONCLUSIONS: The new facts support the hypothesis that carotenoids that show anticancer effects with anti-oxygen function might reduce the risk of ER− breast cancer. Based on the new facts, the modification of the effects of α-carotene, β-carotene, and β-cryptoxanthin should be evaluated according to PR and ER statuses.

  14. Fast crawling methods of exploring content distributed over large graphs

    KAUST Repository

    Wang, Pinghui

    2018-03-15

    Despite recent efforts to estimate topology characteristics of large graphs (e.g., online social networks and peer-to-peer networks), little attention has been given to developing a formal crawling methodology to characterize the vast amount of content distributed over these networks. Due to the large-scale nature of these networks and the limited query rate imposed by network service providers, exhaustively crawling and enumerating content maintained by each vertex is computationally prohibitive. In this paper, we show how one can obtain content properties by crawling only a small fraction of vertices and collecting their content. We first show that when sampling is naively applied, this can produce a huge bias in content statistics (i.e., average number of content replicas). To remove this bias, one may use maximum likelihood estimation to estimate content characteristics. However, our experimental results show that this straightforward method requires sampling most vertices to obtain accurate estimates. To address this challenge, we propose two efficient estimators: the special copy estimator (SCE) and the weighted copy estimator (WCE), to estimate content characteristics using available information in sampled content. SCE uses the special content copy indicator to compute the estimate, while WCE derives the estimate based on meta-information in sampled vertices. We conduct experiments on a variety of real-world and synthetic datasets, and the results show that WCE and SCE are cost effective and also "asymptotically unbiased". Our methodology provides a new tool for researchers to efficiently query content distributed in large-scale networks.

  15. Mathematical methods in physics distributions, Hilbert space operators, variational methods, and applications in quantum physics

    CERN Document Server

    Blanchard, Philippe

    2015-01-01

    The second edition of this textbook presents the basic mathematical knowledge and skills that are needed for courses on modern theoretical physics, such as those on quantum mechanics, classical and quantum field theory, and related areas.  The authors stress that learning mathematical physics is not a passive process and include numerous detailed proofs, examples, and over 200 exercises, as well as hints linking mathematical concepts and results to the relevant physical concepts and theories.  All of the material from the first edition has been updated, and five new chapters have been added on such topics as distributions, Hilbert space operators, and variational methods.   The text is divided into three main parts. Part I is a brief introduction to distribution theory, in which elements from the theories of ultradistributions and hyperfunctions are considered in addition to some deeper results for Schwartz distributions, thus providing a comprehensive introduction to the theory of generalized functions. P...

  16. Risk Assessment for Distribution Systems Using an Improved PEM-Based Method Considering Wind and Photovoltaic Power Distribution

    Directory of Open Access Journals (Sweden)

    Qingwu Gong

    2017-03-01

    The intermittency and variability of highly penetrated distributed generators (DGs) could cause many critical security and economy risks to distribution systems. This paper applied a certain mathematical distribution to imitate the output variability and uncertainty of DGs. Then, four risk indices, namely EENS (expected energy not supplied), PLC (probability of load curtailment), EFLC (expected frequency of load curtailment), and SI (severity index), were established to reflect the system risk level of the distribution system. For the given mathematical distribution of the DGs' output power, an improved PEM (point estimate method)-based method was proposed to calculate these four system risk indices. In this improved PEM-based method, an enumeration method was used to list the states of distribution systems, an improved PEM was developed to deal with the uncertainties of DGs, and the value of load curtailment in distribution systems was calculated by an optimal power flow algorithm. Finally, the effectiveness and advantages of this proposed PEM-based method for distribution system assessment were verified by testing a modified IEEE 30-bus system. Simulation results have shown that this proposed PEM-based method has high computational accuracy and greatly reduced computational costs compared with other risk assessment methods, and is very effective for risk assessments.

  17. The "Interval Walking in Colorectal Cancer" (I-WALK-CRC) study: Design, methods and recruitment results of a randomized controlled feasibility trial.

    Science.gov (United States)

    Banck-Petersen, Anna; Olsen, Cecilie K; Djurhuus, Sissal S; Herrstedt, Anita; Thorsen-Streit, Sarah; Ried-Larsen, Mathias; Østerlind, Kell; Osterkamp, Jens; Krarup, Peter-Martin; Vistisen, Kirsten; Mosgaard, Camilla S; Pedersen, Bente K; Højman, Pernille; Christensen, Jesper F

    2018-03-01

    Low physical activity level is associated with poor prognosis in patients with colorectal cancer (CRC). To increase physical activity, technology-based platforms are emerging and provide intriguing opportunities to prescribe and monitor active lifestyle interventions. The "Interval Walking in Colorectal Cancer" (I-WALK-CRC) study explores the feasibility and efficacy of a home-based interval-walking intervention delivered by a smartphone application in order to improve the cardio-metabolic health profile among CRC survivors. The aim of the present report is to describe the design, methods and recruitment results of the I-WALK-CRC study. Methods/Results: The I-WALK-CRC study is a randomized controlled trial designed to evaluate the feasibility and efficacy of a home-based interval walking intervention compared to a waiting-list control group for physiological and patient-reported outcomes. Patients who had completed surgery for local stage disease and patients who had completed surgery and any adjuvant chemotherapy for locally advanced stage disease were eligible for inclusion. Between October 1st, 2015, and February 1st, 2017, 136 inquiries were recorded; 83 patients were eligible for enrollment, and 42 patients accepted participation. Age and employment status were associated with participation, as participants were significantly younger than non-participants (60.5 vs 70.8 years). Recruitment of CRC survivors was feasible, but we aim to improve the recruitment rate in future studies. Further, the study clearly favored younger participants. The I-WALK-CRC study will provide important information regarding the feasibility and efficacy of a home-based walking exercise program in CRC survivors.

  18. Determination of blood circulation in oral formations using Rb86 distribution method and labelled micropearl method

    International Nuclear Information System (INIS)

    Fazekas, A.; Posch, E.; Harsing, L.

    1979-01-01

    The blood circulation of the incisors, dental pulp and tongue was determined using the measurement of ⁸⁶Rb distribution in rats. The results were compared with those obtained by a simultaneous micropearl method. It was found that 37 per cent of the ⁸⁶Rb in dental tissues is localized in the hard propiodentium, with a high proportion diffusing from the periodontium. The ⁸⁶Rb fraction localized in the tongue represents its blood circulation. (author)

  19. A Novel Method of Clock Synchronization in Distributed Systems

    Science.gov (United States)

    Li, Gun; Niu, Meng-jie; Chai, Yang-shun; Chen, Xin; Ren, Yan-qiu

    2017-04-01

    Time synchronization plays an important role in spacecraft formation flight and constellation autonomous navigation, among other applications. For clock synchronization in a network system, it is not always true that all the observed nodes in the network are interconnected; therefore, it is difficult to achieve high-precision time synchronization when a certain node can obtain clock measurement information only from a single neighboring node and not from other nodes. Aiming at this problem, a novel method of high-precision time synchronization in a network system is proposed. In this paper, each clock is regarded as a node in the network system, and based on the definition of different topological structures of a distributed system, three time-synchronization control algorithms are designed for the following three cases: without a master clock (reference clock), with a master clock (reference clock), and with a fixed communication delay in the network system. The validity of the designed clock synchronization protocol is proved by both stability analysis and numerical simulation.
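
    One standard way to synchronize clocks without a master, in the spirit of the first case the paper treats, is average consensus: each node repeatedly nudges its offset toward its neighbours'. The ring topology, gain and initial offsets below are illustrative only, not the paper's protocol:

```python
# Toy average-consensus clock synchronization on a ring of 6 nodes.
import numpy as np

n = 6
offsets = np.array([0.0, 3.2, -1.5, 0.7, 2.1, -2.4])  # initial clock offsets (ms)
ring = [(i, (i + 1) % n) for i in range(n)]           # each node linked to one neighbour
eps = 0.4  # consensus gain; stability needs eps < 1/max_degree = 0.5 on a ring

for _ in range(200):
    new = offsets.copy()
    for i, j in ring:                  # symmetric exchange preserves the average
        new[i] += eps * (offsets[j] - offsets[i])
        new[j] += eps * (offsets[i] - offsets[j])
    offsets = new

print(offsets.round(3))  # all entries converge to the initial average offset
```

    With a master clock, the same update is typically run with the reference node's offset held fixed, so all clocks converge to the reference instead of the average.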

  20. An experiment with content distribution methods in touchscreen mobile devices.

    Science.gov (United States)

    Garcia-Lopez, Eva; Garcia-Cabot, Antonio; de-Marcos, Luis

    2015-09-01

    This paper compares the usability of three different content distribution methods (scrolling, paging and internal links) in touchscreen mobile devices as means to display web documents. Usability is operationalized in terms of effectiveness, efficiency and user satisfaction. These dimensions are then measured in an experiment (N = 23) in which users are required to find words in regular-length web documents. Results suggest that scrolling is statistically better in terms of efficiency and user satisfaction. It is also found to be more effective but results were not significant. Our findings are also compared with existing literature to propose the following guideline: "try to use vertical scrolling in web pages for mobile devices instead of paging or internal links, except when the content is too large, then paging is recommended". With an ever increasing number of touchscreen web-enabled mobile devices, this new guideline can be relevant for content developers targeting the mobile web as well as institutions trying to improve the usability of their content for mobile platforms. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  1. Applications of interval computations

    CERN Document Server

    Kreinovich, Vladik

    1996-01-01

    Primary Audience for the Book • Specialists in numerical computations who are interested in algorithms with automatic result verification. • Engineers, scientists, and practitioners who desire results with automatic verification and who would therefore benefit from the experience of successful applications. • Students in applied mathematics and computer science who want to learn these methods. Goal of the Book This book contains surveys of applications of interval computations, i.e., applications of numerical methods with automatic result verification, that were presented at an international workshop on the subject in El Paso, Texas, February 23-25, 1995. The purpose of this book is to disseminate detailed and surveyed information about existing and potential applications of this new growing field. Brief Description of the Papers At the most fundamental level, interval arithmetic operations work with sets: The result of a single arithmetic operation is the set of all possible results as the o...

  2. Interval ridge regression (iRR) as a fast and robust method for quantitative prediction and variable selection applied to edible oil adulteration.

    Science.gov (United States)

    Jović, Ozren; Smrečki, Neven; Popović, Zora

    2016-04-01

    A novel quantitative prediction and variable selection method called interval ridge regression (iRR) is studied in this work. The method is tested on six data sets of FTIR, two data sets of UV-vis and one data set of DSC. The obtained results show that models built with ridge regression on optimal variables selected with iRR significantly outperform models built with ridge regression on all variables in both calibration (6 out of 9 cases) and validation (2 out of 9 cases). In this study, iRR is also compared with interval partial least squares regression (iPLS). iRR outperformed iPLS in validation (insignificantly in 6 out of 9 cases and significantly in 1 out of 9 cases). Hempseed oil, a well-known health-beneficial nutrient, is studied in this work by mixing it with cheap and widely used oils such as soybean (So) oil, rapeseed (R) oil and sunflower (Su) oil. Binary mixture sets of hempseed oil with these three oils (HSo, HR and HSu) and a ternary mixture set of H oil, R oil and Su oil (HRSu) were considered. The obtained accuracy indicates that, using iRR on FTIR and UV-vis data, each particular oil can be very successfully quantified (in all 8 cases R² > 0.99). Copyright © 2015 Elsevier B.V. All rights reserved.
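
    The interval idea behind iRR can be sketched roughly: split the predictor axis into contiguous windows, score each window with cross-validated ridge regression, and keep the best. The scoring and selection rules below are simplifications on simulated spectra, not the published iRR algorithm:

```python
# Interval-based variable selection with ridge regression (rough sketch).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.normal(size=(60, 200))                # stand-in FTIR spectra (60 samples)
y = X[:, 40:60].sum(axis=1) + 0.1 * rng.normal(size=60)  # signal in one window

n_intervals = 10
edges = np.linspace(0, X.shape[1], n_intervals + 1).astype(int)

scores = []
for a, b in zip(edges[:-1], edges[1:]):       # score each interval separately
    s = cross_val_score(Ridge(alpha=1.0), X[:, a:b], y, cv=5,
                        scoring='neg_root_mean_squared_error').mean()
    scores.append((s, a, b))

best = max(scores)                            # highest score = lowest CV RMSE
print(f"best interval: variables {best[1]}-{best[2]} (CV RMSE {-best[0]:.3f})")
```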

  3. Interval selection with machine-dependent intervals

    OpenAIRE

    Bohmova K.; Disser Y.; Mihalak M.; Widmayer P.

    2013-01-01

    We study an offline interval scheduling problem where every job has exactly one associated interval on every machine. To schedule a set of jobs, exactly one of the intervals associated with each job must be selected, and the intervals selected on the same machine must not intersect. We show that deciding whether all jobs can be scheduled is NP-complete already in various simple cases. In particular, by showing the NP-completeness for the case when all the intervals associated with the same job...

  4. Fitting statistical distributions the generalized lambda distribution and generalized bootstrap methods

    CERN Document Server

    Karian, Zaven A

    2000-01-01

    Throughout the physical and social sciences, researchers face the challenge of fitting statistical distributions to their data. Although the study of statistical modelling has made great strides in recent years, the number and variety of distributions to choose from-all with their own formulas, tables, diagrams, and general properties-continue to create problems. For a specific application, which of the dozens of distributions should one use? What if none of them fit well?Fitting Statistical Distributions helps answer those questions. Focusing on techniques used successfully across many fields, the authors present all of the relevant results related to the Generalized Lambda Distribution (GLD), the Generalized Bootstrap (GB), and Monte Carlo simulation (MC). They provide the tables, algorithms, and computer programs needed for fitting continuous probability distributions to data in a wide variety of circumstances-covering bivariate as well as univariate distributions, and including situations where moments do...

  5. Sparsity-weighted outlier FLOODing (OFLOOD) method: Efficient rare event sampling method using sparsity of distribution.

    Science.gov (United States)

    Harada, Ryuhei; Nakamura, Tomotake; Shigeta, Yasuteru

    2016-03-30

    As an extension of the Outlier FLOODing (OFLOOD) method [Harada et al., J. Comput. Chem. 2015, 36, 763], the sparsity of the outliers defined by a hierarchical clustering algorithm, FlexDice, was considered to achieve an efficient conformational search as sparsity-weighted "OFLOOD." In OFLOOD, FlexDice detects areas of sparse distribution as outliers. The outliers are regarded as candidates that have high potential to promote conformational transitions and are employed as initial structures for conformational resampling by restarting molecular dynamics simulations. When detecting outliers, FlexDice assigns each outlier a rank in the hierarchy, which relates to sparsity in the distribution. In this study, we define lower-rank (first-ranked), medium-rank (second-ranked), and highest-rank (third-ranked) outliers, respectively. For instance, the first-ranked outliers are located in a given conformational space away from the clusters (highly sparse distribution), whereas the third-ranked outliers are near the clusters (a moderately sparse distribution). To achieve the conformational search efficiently, resampling from the outliers with a given rank is performed. As demonstrations, this method was applied to several model systems: alanine dipeptide, Met-enkephalin, Trp-cage, T4 lysozyme, and glutamine binding protein. In each demonstration, the present method successfully reproduced transitions among metastable states. In particular, the first-ranked OFLOOD highly accelerated the exploration of conformational space by expanding the edges. In contrast, the third-ranked OFLOOD intensively reproduced local transitions among neighboring metastable states. For quantitative evaluation of the sampled snapshots, free energy calculations were performed with a combination of umbrella samplings, providing rigorous landscapes of the biomolecules. © 2015 Wiley Periodicals, Inc.

  6. Automatic NC-Data generation method for 5-axis cutting of turbine-blades by finding Safe heel-angles and adaptive path-intervals

    International Nuclear Information System (INIS)

    Piao, Cheng Dao; Lee, Cheol Soo; Cho, Kyu Zong; Park, Gwang Ryeol

    2004-01-01

    In this paper, an efficient method for generating 5-axis cutting data for a turbine blade is presented. The interference elimination of 5-axis cutting is currently very complicated and takes up a lot of time. The proposed method can generate an interference-free tool path within an allowance range. Generating the cutting data at points along the cutting process, and using it to obtain NC data by calculating the feed rate, allows us to maintain the proper feed rate of the 5-axis machine. This paper includes the algorithms for: (1) CL data generation by detecting an interference-free heel angle, (2) finding the optimal tool path interval considering the cusp height, (3) finding the adaptive feed rate values for each cutter path, and (4) the inverse kinematics, depending on the structure of the 5-axis machine, for generating the NC data

  7. Comparison of estimation methods for fitting Weibull distribution

    African Journals Online (AJOL)

    Tersor

    Tree diameter characterisation using probability distribution functions is essential for determining the structure of forest stands. This has been an intrinsic part of forest management planning, decision-making and research in recent times. The distribution of species and tree size in a forest area gives the structure of the stand.
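
    Two of the estimation methods commonly compared in such studies, maximum likelihood and the method of moments, can be sketched on simulated diameter data. The data, and any apparent ranking of the estimators, are illustrative only, not the study's results:

```python
# Fitting a Weibull distribution by MLE and by a naive method of moments.
import numpy as np
from scipy import stats
from scipy.optimize import brentq
from math import gamma

rng = np.random.default_rng(11)
d = stats.weibull_min.rvs(c=2.3, scale=18.0, size=500, random_state=rng)  # "diameters" (cm)

# Maximum likelihood (location fixed at 0)
c_mle, _, scale_mle = stats.weibull_min.fit(d, floc=0)

# Method of moments: match the coefficient of variation, then the mean
cv = d.std(ddof=1) / d.mean()
f = lambda c: np.sqrt(gamma(1 + 2 / c) / gamma(1 + 1 / c) ** 2 - 1) - cv
c_mom = brentq(f, 0.2, 20.0)                 # CV is monotone in the shape c
scale_mom = d.mean() / gamma(1 + 1 / c_mom)

print(f"MLE:     shape={c_mle:.2f}, scale={scale_mle:.2f}")
print(f"Moments: shape={c_mom:.2f}, scale={scale_mom:.2f}")
```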

  8. The efficiency of parameter estimation of latent path analysis using summated rating scale (SRS) and method of successive interval (MSI) for transformation of score to scale

    Science.gov (United States)

    Solimun, Fernandes, Adji Achmad Rinaldo; Arisoesilaningsih, Endang

    2017-12-01

    Research in various fields generally investigates systems and involves latent variables. One method to analyze a model representing such a system is path analysis. Data on latent variables measured using questionnaires with an attitude-scale model come in the form of scores, which should be transformed before analysis so that they become scale data. Path coefficients, the parameter estimators, are calculated from the scale data obtained using the method of successive intervals (MSI) and the summated rating scale (SRS). This research identifies which data transformation method is better. Path coefficients with smaller variances are said to be more efficient, so the transformation method that produces scale data which, when used in path analysis, yields path coefficients (parameter estimators) with smaller variances is said to be better. The result of the analysis using real data shows that, for the influence of the Attitude variable on Entrepreneurship Intention, the relative efficiency (ER) = 1, indicating that the analyses using the MSI and SRS data transformations are equally efficient. On the other hand, for simulated data with high correlation between items (0.7-0.9), the MSI method is 1.3 times more efficient than the SRS method.
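
    The MSI transformation itself, as it is usually described, can be sketched in a few lines: category proportions are converted to cumulative proportions, normal thresholds and densities, and the scale value of each category is the density drop divided by the category proportion. The response counts below are made up:

```python
# Method of Successive Intervals (MSI) score-to-scale transformation sketch.
import numpy as np
from scipy.stats import norm

counts = np.array([12, 35, 80, 95, 28])       # responses in categories 1..5
p = counts / counts.sum()                      # category proportions
cum = np.cumsum(p)                             # cumulative proportions

# Normal densities at the category upper thresholds z_k = Phi^-1(cum_k)
dens = norm.pdf(norm.ppf(np.clip(cum, 1e-12, 1 - 1e-12)))
dens[-1] = 0.0                                 # density at +infinity is zero
dens_prev = np.concatenate(([0.0], dens[:-1]))  # density at -infinity is zero

scale = (dens_prev - dens) / p                 # MSI scale value per category
scale = scale - scale.min() + 1.0              # shift so the smallest value is 1
print(scale.round(3))                          # monotone increasing scale values
```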

  9. Research on Feature Selection Method for Interval Sorting Decision

    Institute of Scientific and Technical Information of China (English)

    宋鹏; 梁吉业; 钱宇华; 李常洪

    2017-01-01

    In the field of multiple attribute decision making, sorting decision has become an important kind of issue and has been widely studied in many practical application areas. In the process of making sorting decisions, rational and effective feature selection methods can extract informative and pertinent attributes and thus improve the efficiency of decision making. The extant literature provides many valuable studies addressing this problem in the context of diverse data types, such as single values, null values and set values. However, very few studies focus on the sorting decision in terms of interval-valued data. The objective of this paper is to provide a new feature selection approach for interval sorting decisions by using the interval outranking relation. By integrating the rough set model and information entropy theory, a new measure called complementary condition entropy, which investigates the complementary nature of the relevant sets, is proposed for feature evaluation through analyzing the inherent implication of the correlation between the considered attributes in the problem of interval sorting decision. Furthermore, on the basis of differences in the values of complementary condition entropy, the representation of the indispensable attributes and the measurement of attribute importance are presented, and a heuristic feature selection algorithm is then proposed for interval sorting decisions. Finally, two illustrative applications, namely the issues of venture investment and portfolio selection, are employed to demonstrate the validity of the proposed method. For the problem of multi-stage venture investment decisions, investigating the competitiveness, development capacity and financial capability of 16 investment projects yields probabilistic decision rules with better generalization capability, which can be used to determine whether to perform further investment. As to the issue of portfolio selection, 91 stocks coming

  10. An Interactive Signed Distance Approach for Multiple Criteria Group Decision-Making Based on Simple Additive Weighting Method with Incomplete Preference Information Defined by Interval Type-2 Fuzzy Sets

    OpenAIRE

    Ting-Yu Chen

    2014-01-01

    Interval type-2 fuzzy sets (T2FSs) with interval membership grades are suitable for dealing with imprecision or uncertainty in many real-world problems. In the interval type-2 fuzzy context, the aim of this paper is to develop an interactive signed distance-based simple additive weighting (SAW) method for solving multiple criteria group decision-making problems with linguistic ratings and incomplete preference information. This paper first formulates a group decision-making problem with unc...

  11. Deterministic and probabilistic interval prediction for short-term wind power generation based on variational mode decomposition and machine learning methods

    International Nuclear Information System (INIS)

    Zhang, Yachao; Liu, Kaipei; Qin, Liang; An, Xueli

    2016-01-01

    Highlights: • Variational mode decomposition is adopted to process original wind power series. • A novel combined model based on machine learning methods is established. • An improved differential evolution algorithm is proposed for weight adjustment. • Probabilistic interval prediction is performed by quantile regression averaging. - Abstract: Due to the increasingly significant energy crisis nowadays, the exploitation and utilization of new clean energy gains more and more attention. As an important category of renewable energy, wind power generation has become the most rapidly growing renewable energy in China. However, the intermittency and volatility of wind power has restricted the large-scale integration of wind turbines into power systems. High-precision wind power forecasting is an effective measure to alleviate the negative influence of wind power generation on the power systems. In this paper, a novel combined model is proposed to improve the prediction performance for the short-term wind power forecasting. Variational mode decomposition is firstly adopted to handle the instability of the raw wind power series, and the subseries can be reconstructed by measuring sample entropy of the decomposed modes. Then the base models can be established for each subseries respectively. On this basis, the combined model is developed based on the optimal virtual prediction scheme, the weight matrix of which is dynamically adjusted by a self-adaptive multi-strategy differential evolution algorithm. Besides, a probabilistic interval prediction model based on quantile regression averaging and variational mode decomposition-based hybrid models is presented to quantify the potential risks of the wind power series. The simulation results indicate that: (1) the normalized mean absolute errors of the proposed combined model from one-step to three-step forecasting are 4.34%, 6.49% and 7.76%, respectively, which are much lower than those of the base models and the hybrid
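
    The quantile-regression-averaging step named in the highlights can be sketched on its own: several point forecasts are combined by one quantile regression per nominal level, yielding a predictive interval. The forecasts below are simulated, and VMD plus the base learners are out of scope here:

```python
# Quantile regression averaging (QRA) of three point forecasts (illustrative).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
actual = rng.uniform(0, 100, 300)                      # observed wind power (MW)
# three imperfect base forecasts of the same series
F = np.column_stack([actual + rng.normal(0, s, 300) for s in (5, 8, 12)])

X = sm.add_constant(F)
q10 = sm.QuantReg(actual, X).fit(q=0.10).predict(X)    # lower interval bound
q90 = sm.QuantReg(actual, X).fit(q=0.90).predict(X)    # upper interval bound
cover = np.mean((actual >= q10) & (actual <= q90))
print(f"empirical coverage of the 80% interval: {cover:.2f}")
```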

  12. The Comparison of Two Methods of Exercise (Intense Interval Training and Concurrent Resistance-Endurance Training) on Fasting Sugar, Insulin and Insulin Resistance in Women with Diabetes Mellitus

    Directory of Open Access Journals (Sweden)

    F Bazyar

    2016-05-01

    Full Text Available Background & aim: Exercise is an important component of health and an integral approach to the management of diabetes mellitus. The purpose of this study was to compare the effects of intense interval training and concurrent resistance-endurance training on fasting sugar, insulin and insulin resistance in women with diabetes mellitus. Methods: Fifty-two overweight female type 2 diabetic patients (aged 45-60 years, with fasting blood glucose ≥ 126 mg/dl) were selected to participate in the present study. Participants were assigned to an intense interval training group (N=17), a concurrent resistance-endurance training group (N=17) and a control group (N=18). The exercises incorporated 10 weeks of concurrent resistance-endurance training and intense interval training. Fasting blood sugar and serum insulin concentrations were measured. The concurrent training group trained for eight weeks, with three sessions of endurance training per week at 60% of maximum heart rate (MHR) and two resistance training sessions per week at 70% of one repetition maximum (1-RM). The intense interval training group trained for eight weeks, three sessions per week, performing 4 to 10 repeats of the 30-s Wingate test on the ergometer with maximum effort. The control group did no systematic exercise. At the end of the experiment, 42 subjects had completed the study period; 10 subjects were removed due to illness or absence from the exercise sessions. Fasting blood sugar and insulin levels were measured 24 hours before and 48 hours after the last training session. Results: The findings indicated that fasting blood sugar in the intense interval training group decreased markedly (p < 0.001), whereas the reduction in the concurrent resistance-endurance training group was not statistically significant (p = 0.062). The results showed no significant difference between the groups (p = 0.171). Fasting insulin (p < 0.001) and insulin resistance (p = 0.001) in the intense interval training group were

  13. Method of preparing mercury with an arbitrary isotopic distribution

    Science.gov (United States)

    Grossman, M.W.; George, W.A.

    1986-12-16

    This invention provides a process for preparing mercury with a predetermined, arbitrary, isotopic distribution. In one embodiment, different isotopic types of Hg₂Cl₂, corresponding to the predetermined isotopic distribution of Hg desired, are placed in an electrolyte solution of HCl and H₂O. The resulting mercurous ions are then electrolytically plated onto a cathode wire, producing mercury containing the predetermined isotopic distribution. In a similar fashion, Hg with a predetermined isotopic distribution is obtained from different isotopic types of HgO. In this embodiment, the HgO is dissolved in an electrolytic solution of glacial acetic acid and H₂O. The isotopically specific Hg is then electrolytically plated onto a cathode and then recovered. 1 fig.

  14. The study of distribution and forms of uranium occurrences in Lake Baikal sediments by the SSNTD method

    International Nuclear Information System (INIS)

    Zhmodik, S.M.; Verkhovtseva, N.V.; Soloboeva, E.V.; Mironov, A.G.; Nemirovskaya, N.A.; Ilic, R.; Khlystov, O.M.; Titov, A.T.

    2005-01-01

    Sediments of Lake Baikal drill core VER-96-1 St8 TW2 (53°32′15″N; 107°56′25″E), in the interval 181.8-235 cm from the sediment surface, were studied by means of SSNTD with the aim of defining the uranium occurrence in the sediments and the uranium concentration. The neutron-fission ((n,f)-autoradiographic) method allowed a detailed study of the uranium distribution of these Lake Baikal sediments within the Academicheskiy Ridge. Layered accumulations of uranium-bearing grained phosphorite, uranium-bearing particles of organic material, and abnormal uranium concentrations in diatomite of unknown origin were discovered

  15. Optimal reactive power and voltage control in distribution networks with distributed generators by fuzzy adaptive hybrid particle swarm optimisation method

    DEFF Research Database (Denmark)

    Chen, Shuheng; Hu, Weihao; Su, Chi

    2015-01-01

    A new and efficient methodology for optimal reactive power and voltage control of distribution networks with distributed generators, based on fuzzy adaptive hybrid PSO (FAHPSO), is proposed. The objective is to minimize comprehensive cost, consisting of power loss and the operation cost of transformers. The proposed algorithm is implemented in the VC++ 6.0 programming language, and the corresponding numerical experiments are performed on a modified version of the IEEE 33-node distribution system with two newly installed distributed generators and eight newly installed capacitor banks. The numerical results prove that the proposed method can find a more promising control schedule for all transformers, capacitors and distributed generators with less time consumption, compared with other listed artificial intelligence methods.

  16. A Capacity Dimensioning Method for Broadband Distribution Networks

    DEFF Research Database (Denmark)

    Shawky, Ahmed; Pedersen, Jens Myrup; Bergheim, Hans

    2010-01-01

    This paper presents capacity dimensioning for a hypothetical distribution network in the Danish municipality of Aalborg. The number of customers in need of a better service level and the continuous increase in network traffic make it harder for ISPs to deliver high levels of service to their customers. The paper starts by defining three levels of service, together with traffic demands based on research on traffic distribution and generation in networks. Network dimensions are then calculated. The results from the dimensioning are used to compare different network topologies...

  17. A new kind of droplet space distribution measuring method

    International Nuclear Information System (INIS)

    Ma Chao; Bo Hanliang

    2012-01-01

    A new kind of droplet space distribution measuring technique is introduced, together with an experimental device designed for measuring the space distribution and traces of the flying film droplets produced by bubbles breaking up near the free surface of water. The experiment used a kind of water-sensitive test paper (rice paper) that records the position and size of the coloured scattering droplets precisely. The rice papers were rolled into cylinders of different diameters using tools. The bubbles broke up exactly in the centre of the cylinder, and the space distribution and traces of the droplets were obtained by analysing the positions of the droplets produced by same-size bubbles on the rice papers. (authors)

  18. Power operation, measurement and methods of calculation of power distribution

    International Nuclear Information System (INIS)

    Lindahl, S.O.; Bernander, O.; Olsson, S.

    1982-01-01

    During the initial fuel loading of a BWR core, extensive checks and measurements of the fuel are performed. The measurements are designed to verify that the reactor can always be safely operated in compliance with the regulatory constraints. The power distribution within the reactor core is evaluated by means of instrumentation and elaborate computer calculations. The power distribution forms the basis for the evaluation of thermal limits. The behaviour of the reactor during ordinary modes of operation as well as during transients must be well understood, such that the integrity of the fuel and the reactor systems is always preserved. (author)

  19. A comparison of confidence/credible interval methods for the area under the ROC curve for continuous diagnostic tests with small sample size.

    Science.gov (United States)

    Feng, Dai; Cortese, Giuliana; Baumgartner, Richard

    2017-12-01

    The receiver operating characteristic (ROC) curve is frequently used as a measure of accuracy of continuous markers in diagnostic tests. The area under the ROC curve (AUC) is arguably the most widely used summary index for the ROC curve. Although the small sample size scenario is common in medical tests, a comprehensive study of small sample size properties of various methods for the construction of the confidence/credible interval (CI) for the AUC has been by and large missing in the literature. In this paper, we describe and compare 29 non-parametric and parametric methods for the construction of the CI for the AUC when the number of available observations is small. The methods considered include not only those that have been widely adopted, but also those that have been less frequently mentioned or, to our knowledge, never applied to the AUC context. To compare different methods, we carried out a simulation study with data generated from binormal models with equal and unequal variances and from exponential models with various parameters and with equal and unequal small sample sizes. We found that the larger the true AUC value and the smaller the sample size, the larger the discrepancy among the results of different approaches. When the model is correctly specified, the parametric approaches tend to outperform the non-parametric ones. Moreover, in the non-parametric domain, we found that a method based on the Mann-Whitney statistic is in general superior to the others. We further elucidate potential issues and provide possible solutions, along with general guidance on CI construction for the AUC when the sample size is small. Finally, we illustrate the utility of different methods through real-life examples.
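    (As a point of reference for the Mann-Whitney-based approach highlighted above, a common construction is the Wald interval built on the Mann-Whitney AUC estimate with the Hanley-McNeil variance approximation. The sketch below implements that textbook variant, not any specific one of the paper's 29 methods; the toy data are invented.)

```python
import numpy as np
from scipy.stats import norm

def auc_ci_hanley_mcneil(pos, neg, alpha=0.05):
    """Mann-Whitney AUC with a Wald CI using the Hanley-McNeil (1982)
    variance approximation."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    m, n = len(pos), len(neg)
    # Mann-Whitney estimate: P(X_pos > X_neg) + 0.5 * P(tie)
    diff = pos[:, None] - neg[None, :]
    auc = ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (m * n)
    q1 = auc / (2 - auc)           # P(two positives exceed one negative)
    q2 = 2 * auc**2 / (1 + auc)    # P(one positive exceeds two negatives)
    var = (auc * (1 - auc) + (m - 1) * (q1 - auc**2)
           + (n - 1) * (q2 - auc**2)) / (m * n)
    half = norm.ppf(1 - alpha / 2) * np.sqrt(var)
    return auc, max(0.0, auc - half), min(1.0, auc + half)

# toy small-sample example: 12 diseased vs 15 healthy marker values
rng = np.random.default_rng(1)
pos = rng.normal(1.0, 1.0, 12)
neg = rng.normal(0.0, 1.0, 15)
print(auc_ci_hanley_mcneil(pos, neg))
```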

  20. DISTRIBUTED ELECTRICAL POWER PRODUCTION SYSTEM AND METHOD OF CONTROL THEREOF

    DEFF Research Database (Denmark)

    2010-01-01

    The present invention relates to a distributed electrical power production system wherein two or more electrical power units comprise respective sets of power supply attributes. Each set of power supply attributes is associated with a dynamic operating state of a particular electrical power unit....

  1. The effects of different irrigation methods on root distribution ...

    African Journals Online (AJOL)

    drip, subsurface drip, surface and under-tree micro sprinkler) on the root distribution, intensity and effective root depth of “Williams Pride” and “Jersey Mac” apple cultivars budded on M9, rapidly grown in Isparta Region. The rootstocks were ...

  2. Chaos on the interval

    CERN Document Server

    Ruette, Sylvie

    2017-01-01

    The aim of this book is to survey the relations between the various kinds of chaos and related notions for continuous interval maps from a topological point of view. The papers on this topic are numerous and widely scattered in the literature; some of them are little known, difficult to find, or originally published in Russian, Ukrainian, or Chinese. Dynamical systems given by the iteration of a continuous map on an interval have been broadly studied because they are simple but nevertheless exhibit complex behaviors. They also allow numerical simulations, which enabled the discovery of some chaotic phenomena. Moreover, the "most interesting" part of some higher-dimensional systems can be of lower dimension, which allows, in some cases, boiling it down to systems in dimension one. Some of the more recent developments such as distributional chaos, the relation between entropy and Li-Yorke chaos, sequence entropy, and maps with infinitely many branches are presented in book form for the first time. The author gi...

  3. Comparing interval estimates for small sample ordinal CFA models.

    Science.gov (United States)

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased, which can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small-sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading

  4. Convex Interval Games

    NARCIS (Netherlands)

    Alparslan-Gok, S.Z.; Brânzei, R.; Tijs, S.H.

    2008-01-01

    In this paper, convex interval games are introduced and some characterizations are given. Some economic situations leading to convex interval games are discussed. The Weber set and the Shapley value are defined for a suitable class of interval games and their relations with the interval core for

  5. Bootstrap Prediction Intervals in Non-Parametric Regression with Applications to Anomaly Detection

    Science.gov (United States)

    Kumar, Sricharan; Srivistava, Ashok N.

    2012-01-01

    Prediction intervals provide a measure of the probable interval in which the outputs of a regression model can be expected to occur. Subsequently, these prediction intervals can be used to determine if the observed output is anomalous or not, conditioned on the input. In this paper, a procedure for determining prediction intervals for outputs of nonparametric regression models using bootstrap methods is proposed. Bootstrap methods allow for a non-parametric approach to computing prediction intervals with no specific assumptions about the sampling distribution of the noise or the data. The asymptotic fidelity of the proposed prediction intervals is theoretically proved. Subsequently, the validity of the bootstrap based prediction intervals is illustrated via simulations. Finally, the bootstrap prediction intervals are applied to the problem of anomaly detection on aviation data.
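    (A minimal residual-bootstrap sketch in the spirit of the procedure described above: fit a non-parametric smoother, resample residuals, refit, and take percentiles. The Nadaraya-Watson smoother, bandwidth and percentile rule are illustrative choices, not necessarily the paper's.)

```python
import numpy as np

def kernel_smooth(x_train, y_train, x_eval, h=0.3):
    """Nadaraya-Watson estimate with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
    return (w @ y_train) / w.sum(axis=1)

def bootstrap_pi(x, y, x_eval, n_boot=500, alpha=0.1, h=0.3):
    """Percentile prediction interval from residual resampling."""
    rng = np.random.default_rng(42)
    fit = kernel_smooth(x, y, x, h)
    resid = y - fit
    preds = np.empty((n_boot, len(x_eval)))
    for b in range(n_boot):
        y_star = fit + rng.choice(resid, size=len(y), replace=True)
        # refit on the resampled responses, then add fresh noise draws
        preds[b] = (kernel_smooth(x, y_star, x_eval, h)
                    + rng.choice(resid, size=len(x_eval), replace=True))
    lo = np.percentile(preds, 100 * alpha / 2, axis=0)
    hi = np.percentile(preds, 100 * (1 - alpha / 2), axis=0)
    return lo, hi

x = np.linspace(0, 3, 120)
y = np.sin(2 * x) + 0.2 * np.random.default_rng(7).standard_normal(120)
lo, hi = bootstrap_pi(x, y, x)
print(f"coverage on training grid: {np.mean((y >= lo) & (y <= hi)):.2f}")
```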

  6. Combination of the method of basic precipitation of lanthanons with the ion exchange distribution method by means of ammonium acetate

    International Nuclear Information System (INIS)

    Hubicki, W.; Hubicka, H.

    1980-01-01

    The method of basic precipitation of lanthanons was combined with the ion exchange distribution method using ammonium acetate. As a result of chromatogram development at 1:2, good separation of Sm and Nd was achieved, and fractions of 99.9% Nd₂O₃ and Pr₆O₁₁ and 99.5% La₂O₃ were obtained. It was found that the way the column was packed greatly influenced the efficiency of ion distribution. (author)

  7. A robust fusion method for multiview distributed video coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Ascenso, Joao; Brites, Catarina

    2014-01-01

    Distributed video coding (DVC) is a coding paradigm which exploits the redundancy of the source (video) at the decoder side, as opposed to predictive coding, where the encoder leverages the redundancy. To exploit the correlation between views, multiview predictive video codecs require the encoder to access the different views, whereas a multiview DVC decoder can combine several side information (SI) estimates with a robust fusion system able to improve the quality of the fused SI along the decoding process through a learning process using already decoded data. We here take the approach of fusing the estimated distributions of the SIs, as opposed to a conventional fusion algorithm based on the fusion of pixel values. The proposed solution is able to achieve gains of up to 0.9 dB in Bjøntegaard difference when compared with the best-performing (in an RD sense) single-SI DVC decoder, chosen as the better of an inter-view and a temporal SI-based decoder.

  8. Visual Method for Spectral Energy Distribution Calculation of ...

    Indian Academy of Sciences (India)

    Abstract. In this work, we propose to use 'The Geometer's Sketchpad' for fitting the spectral energy distribution of blazars, based on three effective spectral indices, αRO, αOX and αRX, and the flux density in the radio band. It allows us to see the fitting in detail, with both the peak frequency and peak luminosity given ...

  9. A fully distributed method for dynamic spectrum sharing in femtocells

    DEFF Research Database (Denmark)

    Da Costa, Gustavo Wagner Oliveira; Cattoni, Andrea Fabio; Kovacs, Istvan

    2012-01-01

    When such characteristics are combined, the traditional network planning and optimization of cellular networks fails to be cost effective. Therefore, a greater degree of automation is needed in femtocells. In particular, this paper proposes a novel method for autonomous selection of spectrum/channels in femtocells. This method effectively mitigates co-tier interference with no signaling at all across different femtocells. Still, the method has a remarkably simple implementation. The efficiency of the proposed method was evaluated by system-level simulations. The results show large throughput gains for the cells...

  10. Apparatus and method for data communication in an energy distribution network

    Science.gov (United States)

    Hussain, Mohsin; LaPorte, Brock; Uebel, Udo; Zia, Aftab

    2014-07-08

    A system for communicating information on an energy distribution network is disclosed. In one embodiment, the system includes a local supervisor on a communication network, wherein the local supervisor can collect data from one or more energy generation/monitoring devices. The system also includes a command center on the communication network, wherein the command center can generate one or more commands for controlling the one or more energy generation devices. The local supervisor can periodically transmit a data signal indicative of the data to the command center via a first channel of the communication network at a first interval. The local supervisor can also periodically transmit a request for a command to the command center via a second channel of the communication network at a second interval shorter than the first interval. This channel configuration provides effective data communication without a significant increase in the use of network resources.
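    (The two-channel, two-interval scheme of this record reduces to a pair of periodic tasks: one slow bulk data report and one fast lightweight command poll. The sketch below shows that timing pattern only; the intervals, function names and payloads are invented, not the patented system.)

```python
import threading
import time

DATA_INTERVAL_S = 300      # first channel: periodic data report (slow)
COMMAND_INTERVAL_S = 30    # second channel: command polling (fast)

def send_data_report():
    # placeholder for pushing collected generation data to the command center
    print(f"[{time.strftime('%X')}] data report sent on channel 1")

def poll_for_commands():
    # placeholder for the lightweight "any commands for me?" request
    print(f"[{time.strftime('%X')}] command poll sent on channel 2")

def run_periodically(task, interval_s, stop):
    while not stop.is_set():
        task()
        stop.wait(interval_s)   # sleep, but wake immediately on shutdown

stop = threading.Event()
threads = [threading.Thread(target=run_periodically,
                            args=(send_data_report, DATA_INTERVAL_S, stop)),
           threading.Thread(target=run_periodically,
                            args=(poll_for_commands, COMMAND_INTERVAL_S, stop))]
for t in threads:
    t.start()
time.sleep(2)      # let the loops run briefly in this demo
stop.set()
for t in threads:
    t.join()
```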

  11. Analysis of the structure of events by the method of rapidity intervals in K-p interactions at 32 GeV/c and pp interactions at 69 GeV/c

    International Nuclear Information System (INIS)

    Babintsev, V.V.; Bumazhnov, V.A.; Kruglov, N.A.; Moiseev, A.M.; Proskuryakov, A.S.; Smirnova, L.N.; Ukhanov, M.N.

    1981-01-01

    We present an analysis of the structure of distributions in the magnitude r^n_m of rapidity intervals containing m charged particles in events with n charged particles in K⁻p interactions at 32 GeV/c and pp interactions at 69 GeV/c. It is found that all distributions correspond to a smooth curve with a single maximum. A comparison is made between the shape of the experimental distributions for K⁻p interactions and the shape of the distributions for generated events corresponding to the multi-Regge model

  12. Congestion management of electric distribution networks through market based methods

    DEFF Research Database (Denmark)

    Huang, Shaojun

    Rapidly increasing shares of intermittent renewable energy production pose a great challenge to the management and operation of modern power systems. Deployment of a large number of flexible demands, such as electric vehicles (EVs) and heat pumps (HPs), is believed to be a promising solution; however, distribution networks may then face congestion caused by the EVs and HPs. Market-based congestion management methods are the focus of the thesis. They handle the potential congestion at the energy planning stage; therefore, the aggregators can optimally plan the energy consumption and have the least impact on the customers. After reviewing and identifying the shortcomings of the existing methods, the thesis fully studies and improves the dynamic tariff (DT) method, and proposes two new market-based congestion management methods, namely the dynamic subsidy (DS) method and the flexible demand swap method. The thesis improves the DT method from four aspects...

  13. Distributed Cooperation Solution Method of Complex System Based on MAS

    Science.gov (United States)

    Weijin, Jiang; Yuhui, Xu

    To adapt fault diagnosis models to dynamic environments and fully meet the needs of solving the tasks of complex systems, this paper introduces multi-agent and related technology to complicated fault diagnosis, and an integrated intelligent control system is studied. Based on the idea of hierarchical diagnostic decision structures in modeling and on a multi-layer decomposition strategy for the diagnosis task, a multi-agent synchronous diagnosis federation integrating different knowledge representation modes and inference mechanisms is presented. The functions of the management agent, diagnosis agent and decision agent are analyzed, the organization and evolution of agents in the system are proposed, and the corresponding conflict resolution algorithm is given. A layered structure of abstract agents with public attributes is built. The system architecture is realized based on a MAS distributed layered blackboard. A real-world application shows that the proposed control structure successfully solves the fault diagnosis problem of a complex plant, with particular advantages in the distributed domain.

  14. Distributed Research Project Scheduling Based on Multi-Agent Methods

    Directory of Open Access Journals (Sweden)

    Constanta Nicoleta Bodea

    2011-01-01

    Full Text Available Different project planning and scheduling approaches have been developed. Operational Research (OR) provides two major planning techniques: CPM (Critical Path Method) and PERT (Program Evaluation and Review Technique). Due to project complexity and the difficulty of using classical methods, new approaches were developed. Artificial Intelligence (AI) initially promoted the automatic planner concept, but model-based planning and scheduling methods emerged later on. The paper addresses the project scheduling optimization problem, where projects are seen as Complex Adaptive Systems (CAS). Taking into consideration two different approaches for project scheduling optimization, TCPSP (Time-Constrained Project Scheduling) and RCPSP (Resource-Constrained Project Scheduling), the paper focuses on a multi-agent implementation in MATLAB for TCPSP. Using a research project as a case study, the paper includes a comparison between two multi-agent methods: Genetic Algorithm (GA) and Ant Colony Optimization (ACO).

  15. Standardization of 32P activity determination method in soil-root cores for root distribution studies

    International Nuclear Information System (INIS)

    Sharma, R.B.; Ghildyal, B.P.

    1976-01-01

    The root distribution of wheat variety UP 301 was obtained by determining the 32P activity in soil-root cores by two methods, viz. ignition and triacid digestion. The root distribution obtained by these two methods was compared with that from the standard root-core washing procedure. The percent error in root distribution as determined by the triacid digestion method was within ±2.1 to ±9.0, as against ±5.5 to ±21.2 for the ignition method. Thus the triacid digestion method proved better than the ignition method. (author)

  16. Method for measuring the size distribution of airborne rhinovirus

    International Nuclear Information System (INIS)

    Russell, M.L.; Goth-Goldstein, R.; Apte, M.G.; Fisk, W.J.

    2002-01-01

    About 50% of viral-induced respiratory illnesses are caused by the human rhinovirus (HRV). Measurements of the concentrations and sizes of bioaerosols are critical for research on building characteristics, aerosol transport, and mitigation measures. We developed a quantitative reverse transcription-coupled polymerase chain reaction (RT-PCR) assay for HRV and verified that this assay detects HRV in nasal lavage samples. A quantitation standard was used to determine a detection limit of 5 fg of HRV RNA with a linear range over 1000-fold. To measure the size distribution of HRV aerosols, volunteers with a head cold spent two hours in a ventilated research chamber. Airborne particles from the chamber were collected using an Andersen Six-Stage Cascade Impactor. Each stage of the impactor was analyzed by quantitative RT-PCR for HRV. For the first two volunteers with confirmed HRV infection, but with mild symptoms, we were unable to detect HRV on any stage of the impactor

  17. Development of methods for DSM and distribution automation planning

    Energy Technology Data Exchange (ETDEWEB)

    Lehtonen, M.; Seppaelae, A.; Kekkonen, V.; Koreneff, G. [VTT Energy, Espoo (Finland)

    1996-12-31

    In the de-regulated electricity market, the power trading companies have to face new problems. The biggest challenges are caused by the uncertainty in the load magnitudes. In order to minimize the risks in power purchase and also in retail sales, the power traders should have as reliable and accurate estimates for hourly demands of their customers as possible. New tools have been developed for the distribution load estimation and for the management of energy balances of the trading companies. These tools are based on the flexible combination of the information available from several sources, like direct customer measurements, network measurements, load models and statistical data. These functions also serve as an information source for higher level activities of the electricity selling companies. These activities and the associated functions have been studied in the prototype system called DEM, which is now being developed for the operation of Finnish utilities in the newly de-regulated power market

  18. Development of methods for DSM and distribution automation planning

    International Nuclear Information System (INIS)

    Lehtonen, M.; Seppaelae, A.; Kekkonen, V.; Koreneff, G.

    1996-01-01

    In the de-regulated electricity market, the power trading companies have to face new problems. The biggest challenges are caused by the uncertainty in the load magnitudes. In order to minimize the risks in power purchase and also in retail sales, the power traders should have as reliable and accurate estimates for hourly demands of their customers as possible. New tools have been developed for the distribution load estimation and for the management of energy balances of the trading companies. These tools are based on the flexible combination of the information available from several sources, like direct customer measurements, network measurements, load models and statistical data. These functions also serve as an information source for higher level activities of the electricity selling companies. These activities and the associated functions have been studied in the prototype system called DEM, which is now being developed for the operation of Finnish utilities in the newly de-regulated power market

  19. Method for measuring the size distribution of airborne rhinovirus

    Energy Technology Data Exchange (ETDEWEB)

    Russell, M.L.; Goth-Goldstein, R.; Apte, M.G.; Fisk, W.J.

    2002-01-01

    About 50% of viral-induced respiratory illnesses are caused by the human rhinovirus (HRV). Measurements of the concentrations and sizes of bioaerosols are critical for research on building characteristics, aerosol transport, and mitigation measures. We developed a quantitative reverse transcription-coupled polymerase chain reaction (RT-PCR) assay for HRV and verified that this assay detects HRV in nasal lavage samples. A quantitation standard was used to determine a detection limit of 5 fg of HRV RNA with a linear range over 1000-fold. To measure the size distribution of HRV aerosols, volunteers with a head cold spent two hours in a ventilated research chamber. Airborne particles from the chamber were collected using an Andersen Six-Stage Cascade Impactor. Each stage of the impactor was analyzed by quantitative RT-PCR for HRV. For the first two volunteers with confirmed HRV infection, but with mild symptoms, we were unable to detect HRV on any stage of the impactor.

  20. Development of methods for DSM and distribution automation planning

    Energy Technology Data Exchange (ETDEWEB)

    Lehtonen, M; Seppaelae, A; Kekkonen, V; Koreneff, G [VTT Energy, Espoo (Finland)

    1997-12-31

    In the de-regulated electricity market, the power trading companies have to face new problems. The biggest challenges are caused by the uncertainty in the load magnitudes. In order to minimize the risks in power purchase and also in retail sales, the power traders should have as reliable and accurate estimates for hourly demands of their customers as possible. New tools have been developed for the distribution load estimation and for the management of energy balances of the trading companies. These tools are based on the flexible combination of the information available from several sources, like direct customer measurements, network measurements, load models and statistical data. These functions also serve as an information source for higher level activities of the electricity selling companies. These activities and the associated functions have been studied in the prototype system called DEM, which is now being developed for the operation of Finnish utilities in the newly de-regulated power market

  1. Iterative methods for distributed parameter estimation in parabolic PDE

    Energy Technology Data Exchange (ETDEWEB)

    Vogel, C.R. [Montana State Univ., Bozeman, MT (United States); Wade, J.G. [Bowling Green State Univ., OH (United States)

    1994-12-31

    The goal of the work presented is the development of effective iterative techniques for large-scale inverse or parameter estimation problems. In this extended abstract, a detailed description of the mathematical framework in which the authors view these problems is presented, followed by an outline of the ideas and algorithms developed. Distributed parameter estimation problems often arise in mathematical modeling with partial differential equations. They can be viewed as inverse problems; the 'forward problem' is that of using the fully specified model to predict the behavior of the system. The inverse or parameter estimation problem is: given the form of the model and some observed data from the system being modeled, determine the unknown parameters of the model. These problems are of great practical and mathematical interest, and the development of efficient computational algorithms is an active area of study.

  2. Load forecasting method considering temperature effect for distribution network

    Directory of Open Access Journals (Sweden)

    Meng Xiao Fang

    2016-01-01

    Full Text Available To improve the accuracy of load forecasting, the temperature factor is introduced into load forecasting in this paper. The paper analyzes the characteristics of power load variation and investigates how the load varies with temperature. Based on linear regression analysis, a mathematical model of load forecasting that considers the temperature effect is presented, and the steps of load forecasting are given. Using MATLAB, the temperature regression coefficient was calculated. Using the load forecasting model, full-day load forecasting and time-sharing load forecasting were carried out. Comparison and analysis of the forecast errors showed that the error of the time-sharing load forecasting method was small. The forecasting method is an effective way to improve the accuracy of load forecasting.
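    (The temperature-aware regression described above can be illustrated with an ordinary least-squares fit that includes a temperature column in the design matrix. The toy data and coefficients below are invented, not the paper's.)

```python
import numpy as np

# toy hourly observations: load (MW) follows a daily shape plus a temperature term
rng = np.random.default_rng(3)
temp = 20 + 10 * rng.random(96)                 # temperature (deg C)
hour = np.tile(np.arange(24), 4)                # hour of day, 4 days
base = 50 + 5 * np.sin(2 * np.pi * hour / 24)   # daily load shape
load = base + 1.8 * temp + 2 * rng.standard_normal(96)

# design matrix: intercept, daily-shape proxy, temperature
X = np.column_stack([np.ones_like(temp), np.sin(2 * np.pi * hour / 24), temp])
coef, *_ = np.linalg.lstsq(X, load, rcond=None)
print(f"temperature regression coefficient: {coef[2]:.2f} MW per deg C")

# forecast the 14:00 load given a temperature forecast of 28 deg C
x_new = np.array([1.0, np.sin(2 * np.pi * 14 / 24), 28.0])
print(f"forecast load: {x_new @ coef:.1f} MW")
```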

  3. A method for scientific code coupling in a distributed environment

    International Nuclear Information System (INIS)

    Caremoli, C.; Beaucourt, D.; Chen, O.; Nicolas, G.; Peniguel, C.; Rascle, P.; Richard, N.; Thai Van, D.; Yessayan, A.

    1994-12-01

    This guide book deals with the coupling of big scientific codes. First, the context is introduced: big scientific codes devoted to a specific discipline are coming to maturity, while there are more and more needs for multi-discipline studies. We then describe different kinds of code coupling and an example: the 3D thermal-hydraulic code THYC coupled with the 3D neutronics code COCCINELLE. With this example we identify the problems to be solved to realize a coupling. We present the different numerical methods usable for the resolution of coupling terms. This leads to two kinds of coupling: with weak coupling we can use explicit methods, and with strong coupling we need implicit methods. In both cases, we analyze the link with the way the code is parallelized. For the translation of data from one code to another, we define the notion of a Standard Coupling Interface based on a general structure for data. This general structure constitutes an intermediary between the codes, thus allowing a relative independence of the codes from a specific coupling. The proposed method for the implementation of a coupling leads to a simultaneous run of the different codes while they exchange data. Two kinds of data communication with message exchange are proposed: direct communication between codes using the PVM product (Parallel Virtual Machine), and indirect communication with a coupling tool. The second way, with a general code-coupling tool, is based on a coupling method, and we strongly recommend its use. This method is based on the two following principles: re-usability, which means few modifications on existing codes, and the definition of a code usable for coupling, which leads to separating the design of a code usable for coupling from the realization of a specific coupling. This coupling tool, available from the beginning of 1994, is described in general terms. (authors). figs., tabs

  4. Method of trial distribution function for quantum turbulence

    International Nuclear Information System (INIS)

    Nemirovskii, Sergey K.

    2012-01-01

    Studying quantum turbulence requires calculating various characteristics of the vortex tangle (VT). Some 'crude' quantities can be expressed directly via the total length of vortex lines per unit volume, the vortex line density L(t), and the structure parameters of the VT. Other, more 'subtle' quantities require knowledge of the vortex line configurations {s(xi,t)}. Usually, the corresponding calculations are carried out with the use of more or less truthful speculations concerning the arrangement of the VT. In this paper we review another way of solving this problem, based on a trial distribution functional (TDF) in the space of vortex loop configurations. The TDF is constructed on the basis of well-established properties of the vortex tangle and is designed to calculate various averages taken over stochastic vortex loop configurations. We also review several applications of this model to calculating some important characteristics of the vortex tangle. In particular, we discuss the average superfluid mass current J induced by vortices and its dynamics. We also describe diffusion-like processes in the nonuniform vortex tangle and the propagation of turbulent fronts.

  5. Distributed Coordinate Descent Method for Learning with Big Data

    OpenAIRE

    Richtárik, Peter; Takáč, Martin

    2013-01-01

    In this paper we develop and analyze Hydra: HYbriD cooRdinAte descent method for solving loss minimization problems with big data. We initially partition the coordinates (features) and assign each partition to a different node of a cluster. At every iteration, each node picks a random subset of the coordinates from those it owns, independently from the other computers, and in parallel computes and applies updates to the selected coordinates based on a simple closed-form formula. We give bound...

  6. Seasonal comparison of two spatially distributed evapotranspiration mapping methods

    Science.gov (United States)

    Kisfaludi, Balázs; Csáki, Péter; Péterfalvi, József; Primusz, Péter

    2017-04-01

    More rainfall is disposed of through evapotranspiration (ET) on a global scale than through runoff and storage combined. In Hungary, about 90% of the precipitation evapotranspirates from the land and only 10% goes to surface runoff and groundwater recharge. Therefore, evapotranspiration is a very important element of the water balance, so it is a suitable parameter for the calibration of hydrological models. Monthly ET values of two MODIS-data based ET products were compared for the area of Hungary and for the vegetation period of the year 2008. The differences were assessed by land cover types and by elevation zones. One ET map was the MOD16, aiming at global coverage and provided by the MODIS Global Evaporation Project. The other method, called CREMAP, was developed at the Budapest University of Technology and Economics for regional-scale ET mapping. CREMAP was validated for the area of Hungary with good results, but ET maps were produced only for the period of 2000-2008. The aim of this research was to evaluate the performance of the MOD16 product compared to the CREMAP method. The average difference between the two products was the highest during summer, CREMAP estimating higher ET values by about 25 mm/month. In the spring and autumn, MOD16 ET values were higher by an average of 6 mm/month. The differences by land cover types showed a similar seasonal pattern to the average differences, and they correlated strongly with each other. Practically the same difference values could be calculated for arable lands and forests that together cover nearly 75% of the area of the country. Therefore, it can be said that the seasonal changes had the same effect on the two methods' ET estimates in each land cover type area. The analysis by elevation zones showed that at elevations lower than 200 m AMSL the trends of the difference values were similar to the average differences. The correlation between the values of these elevation zones was also strong. However weaker

  7. Computerized method for X-ray angular distribution simulation in radiological systems

    International Nuclear Information System (INIS)

    Marques, Marcio A.; Oliveira, Henrique J.Q. de; Frere, Annie F.; Schiabel, Homero; Marques, Paulo M.A.

    1996-01-01

    A method to simulate the changes in X-ray angular distribution (the Heel effect) for radiologic imaging systems is presented. The simulation method is described so as to predict images for any exposure technique, considering that this distribution is the cause of the intensity variation along the radiation field

  8. Pseudospectral methods on a semi-infinite interval with application to the hydrogen atom: a comparison of the mapped Fourier-sine method with Laguerre series and rational Chebyshev expansions

    International Nuclear Information System (INIS)

    Boyd, John P.; Rangan, C.; Bucksbaum, P.H.

    2003-01-01

    The Fourier-sine-with-mapping pseudospectral algorithm of Fattal et al. [Phys. Rev. E 53 (1996) 1217] has been applied in several quantum physics problems. Here, we compare it with pseudospectral methods using Laguerre functions and rational Chebyshev functions. We show that Laguerre and Chebyshev expansions are better suited for solving problems on the interval r ∈ [0,∞) (for example, the Coulomb-Schroedinger equation) than the Fourier-sine-mapping scheme. All three methods give similar accuracy for the hydrogen atom when the scaling parameter L is optimum, but the Laguerre and Chebyshev methods are less sensitive to variations in L. We introduce a new variant of rational Chebyshev functions which has a more uniform spacing of grid points for large r, and gives somewhat better results than the rational Chebyshev functions of Boyd [J. Comp. Phys. 70 (1987) 63].

  9. Determination of reference intervals and comparison of venous blood gas parameters using standard and non-standard collection methods in 24 cats.

    Science.gov (United States)

    Bachmann, Karin; Kutter, Annette Pn; Schefer, Rahel Jud; Marly-Voquer, Charlotte; Sigrist, Nadja

    2017-08-01

    Objectives The aim of this study was to determine in-house reference intervals (RIs) for venous blood analysis with the RAPIDPoint 500 blood gas analyser using blood gas syringes (BGSs) and to determine whether immediate analysis of venous blood collected into lithium heparin (LH) tubes can replace anaerobic blood sampling into BGSs. Methods Venous blood was collected from 24 healthy cats and directly transferred into a BGS and an LH tube. The BGS was immediately analysed on the RAPIDPoint 500, followed by the LH tube. The BGSs and LH tubes were compared using the paired t-test or Wilcoxon matched-pairs signed-rank test, Bland-Altman and Passing-Bablok analysis. To assess clinical relevance, the bias or percentage bias between BGSs and LH tubes was compared with the allowable total error (TEa) recommended for the respective parameter. Results Based on the values obtained from the BGSs, RIs were calculated for the evaluated parameters, including blood gases, electrolytes, glucose and lactate. Values derived from LH tubes showed no significant difference for standard bicarbonate, whole blood base excess, haematocrit, total haemoglobin, sodium, potassium, chloride, glucose and lactate, while pH, partial pressure of carbon dioxide and oxygen, actual bicarbonate, extracellular base excess, ionised calcium and anion gap were significantly different from the samples collected in BGSs (P < 0.05). Clinical decisions on the parameters that did not differ, such as glucose and lactate, can be made based on blood collected in LH tubes and analysed within 5 mins. For pH, partial pressure of carbon dioxide and oxygen, extracellular base excess, anion gap and ionised calcium, the clinically relevant alterations have to be considered if analysed in LH tubes.
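    (The Bland-Altman comparison used in this study computes the mean difference between paired measurements, the bias, and its 95% limits of agreement. A generic sketch, with invented paired readings:)

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two paired methods."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)   # half-width of the limits of agreement
    return bias, bias - loa, bias + loa

# invented paired glucose readings (mmol/l): blood gas syringe vs LH tube
bgs = np.array([5.1, 6.3, 4.8, 7.2, 5.9, 6.6, 5.4, 4.9])
lh  = np.array([5.0, 6.4, 4.9, 7.0, 5.8, 6.7, 5.5, 4.8])
bias, lo, hi = bland_altman(bgs, lh)
print(f"bias = {bias:+.2f}, 95% limits of agreement: [{lo:+.2f}, {hi:+.2f}]")
```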

  10. The effect of changes in core body temperature on the QT interval in beagle dogs: a previously ignored phenomenon, with a method for correction.

    Science.gov (United States)

    van der Linde, H J; Van Deuren, B; Teisman, A; Towart, R; Gallacher, D J

    2008-08-01

    Body core temperature (Tc) changes affect the QT interval, but correction for this has not been systematically investigated. It may be important to correct QT intervals for drug-induced changes in Tc. Anaesthetized beagle dogs were artificially cooled (34.2 °C) or warmed (42.1 °C). The relationship between corrected QT intervals (QTcV; the QT interval corrected according to the Van de Water formula) and Tc was analysed. This relationship was also examined in conscious dogs, where Tc was increased by exercise. When QTcV intervals were plotted against changes in Tc, linear correlations were observed in all individual dogs. The slopes did not significantly differ between the cooling (-14.85 ± 2.08) and heating (-13.12 ± 3.46) protocols. We propose a correction formula to compensate for the influence of Tc changes and standardize the QTcV duration to 37.5 °C: QTcVcT (QTcV corrected for changes in core temperature) = QTcV - 14(37.5 - Tc). Furthermore, cooled dogs were re-warmed (from 34.2 to 40.0 °C) and marked QTcV shortening (-29%) was induced. After Tc correction using the above formula, this decrease was abolished. In these re-warmed dogs, we observed significant increases in T-wave amplitude and in serum [K(+)] levels. No arrhythmias or increases in pro-arrhythmic biomarkers were observed. In exercising dogs, the above formula completely compensated QTcV for the temperature increase. This study shows the importance of correcting QTcV intervals for changes in Tc, to avoid misleading interpretations of apparent QTcV interval changes. We recommend that all ICH S7A conscious-animal safety studies should routinely measure core body temperature and correct QTcV appropriately, if body temperature and heart rate changes are observed.
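    (The temperature correction is given explicitly in the abstract: QTcVcT = QTcV - 14(37.5 - Tc). The sketch below transcribes it directly; the accompanying Van de Water heart-rate correction, QTcV = QT - 87(RR - 1) with RR in seconds, is the commonly cited form and should be verified against the original paper. Example values are invented.)

```python
def qtcv_van_de_water(qt_ms: float, rr_s: float) -> float:
    """Heart-rate correction (Van de Water): QTcV = QT - 87 * (RR - 1),
    with QT in ms and the RR interval in seconds."""
    return qt_ms - 87.0 * (rr_s - 1.0)

def qtcv_temperature_corrected(qtcv_ms: float, tc_celsius: float) -> float:
    """Temperature correction from the abstract, standardized to 37.5 C:
    QTcVcT = QTcV - 14 * (37.5 - Tc)."""
    return qtcv_ms - 14.0 * (37.5 - tc_celsius)

# example: QT = 260 ms at RR = 0.5 s and a cooled core temperature of 35.0 C
qtcv = qtcv_van_de_water(260.0, 0.5)
print(f"QTcV   = {qtcv:.1f} ms")
print(f"QTcVcT = {qtcv_temperature_corrected(qtcv, 35.0):.1f} ms")
```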

  11. A Review of Distributed Parameter Groundwater Management Modeling Methods

    Science.gov (United States)

    Gorelick, Steven M.

    1983-04-01

    Models which solve the governing groundwater flow or solute transport equations in conjunction with optimization techniques, such as linear and quadratic programming, are powerful aquifer management tools. Groundwater management models fall into two general categories: hydraulics or policy evaluation and water allocation. Groundwater hydraulic management models enable the determination of optimal locations and pumping rates of numerous wells under a variety of restrictions placed upon local drawdown, hydraulic gradients, and water production targets. Groundwater policy evaluation and allocation models can be used to study the influence upon regional groundwater use of institutional policies such as taxes and quotas. Furthermore, fairly complex groundwater-surface water allocation problems can be handled using system decomposition and multilevel optimization. Experience from the few real-world applications of groundwater optimization-management techniques is summarized. Classified separately are methods for groundwater quality management aimed at optimal waste disposal in the subsurface. This classification is composed of steady-state and transient management models that determine disposal patterns in such a way that water quality is protected at supply locations. Classes of research missing from the literature are groundwater quality management models involving nonlinear constraints, models which join groundwater hydraulic and quality simulations with political-economic management considerations, and management models that include parameter uncertainty.

  12. Agent-based method for distributed clustering of textual information

    Science.gov (United States)

    Potok, Thomas E [Oak Ridge, TN; Reed, Joel W [Knoxville, TN; Elmore, Mark T [Oak Ridge, TN; Treadwell, Jim N [Louisville, TN

    2010-09-28

    A computer method and system for storing, retrieving and displaying information has a multiplexing agent (20) that calculates a new document vector (25) for a new document (21) to be added to the system and transmits the new document vector (25) to master cluster agents (22) and cluster agents (23) for evaluation. These agents (22, 23) perform the evaluation and return values upstream to the multiplexing agent (20) based on the similarity of the document to documents stored under their control. The multiplexing agent (20) then sends the document (21) and the document vector (25) to the master cluster agent (22), which then forwards it to a cluster agent (23) or creates a new cluster agent (23) to manage the document (21). The system also searches for stored documents according to a search query having at least one term and identifying the documents found in the search, and displays the documents in a clustering display (80) of similarity so as to indicate similarity of the documents to each other.
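    (The routing decision described in this patent abstract, comparing a new document vector against each cluster agent's holdings and either assigning the document or spawning a new agent, can be reduced to a cosine-similarity threshold test. The sketch below is a schematic reading, not the patented system; the threshold and data are invented.)

```python
import numpy as np

SIM_THRESHOLD = 0.6   # invented: minimum similarity to join an existing cluster

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

class ClusterAgent:
    """Manages one cluster; its centroid summarizes the held documents."""
    def __init__(self, vector):
        self.centroid = np.array(vector, float)
        self.count = 1
    def add(self, vector):
        # incremental centroid update
        self.count += 1
        self.centroid += (np.asarray(vector, float) - self.centroid) / self.count

def route_document(agents, doc_vector):
    """Send the vector to the most similar agent, or create a new one."""
    if agents:
        scores = [cosine(a.centroid, doc_vector) for a in agents]
        best = int(np.argmax(scores))
        if scores[best] >= SIM_THRESHOLD:
            agents[best].add(doc_vector)
            return best
    agents.append(ClusterAgent(doc_vector))
    return len(agents) - 1

agents = []
for vec in ([1, 0, 0], [0.9, 0.1, 0], [0, 1, 0]):   # toy document vectors
    print("routed to agent", route_document(agents, np.array(vec, float)))
```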

  13. PROGRAMMING OF METHODS FOR THE NEEDS OF LOGISTICS DISTRIBUTION SOLVING PROBLEMS

    Directory of Open Access Journals (Sweden)

    Andrea Štangová

    2014-06-01

    Full Text Available Logistics has become one of the dominant factors affecting successful management, competitiveness and the mentality of the global economy. Distribution logistics materializes the connection between production and the consumer market. It uses different methodologies and methods of multicriterial evaluation and allocation. This thesis addresses the problem of the costs of securing the distribution of a product. It was therefore relevant to design a software product that would be helpful in solving the problems related to distribution logistics. Elodis, an electronic distribution logistics program, was designed on the basis of a theoretical analysis of the issue of distribution logistics and an analysis of the software products market. The program uses multicriterial evaluation methods to determine the appropriate type, and mathematical and geometrical methods to determine an appropriate allocation of the distribution center, warehouse and company.

  14. Identification of reactor failure states using noise methods, and spatial power distribution

    International Nuclear Information System (INIS)

    Vavrin, J.; Blazek, J.

    1981-01-01

    A survey is given of the results achieved. Methodical means and programs were developed for the control computer which may be used in noise diagnostics and in the control of reactor power distribution. Statistical methods of processing the noise components of the signals of measured variables were used for identifying failures of reactors. The method of the synthesis of the neutron flux was used for modelling and evaluating the reactor power distribution. For monitoring and controlling the power distribution a mathematical model of the reactor was constructed suitable for control computers. The uses of noise analysis methods are recommended and directions of further development shown. (J.P.)

  15. Application of the photoelastic experimental hybrid method with new numerical method to the high stress distribution

    International Nuclear Information System (INIS)

    Hawong, Jai Sug; Lee, Dong Hun; Lee, Dong Ha; Tche, Konstantin

    2004-01-01

    In this research, a photoelastic experimental hybrid method with the Hooke-Jeeves numerical method has been developed. This method is more precise and stable than the photoelastic experimental hybrid method with the Newton-Raphson numerical method and Gaussian elimination. Using the photoelastic experimental hybrid method with the Hooke-Jeeves numerical method, stress components can be separated from isochromatics only, and stress intensity factors and stress concentration factors can be determined. The photoelastic experimental hybrid method with Hooke-Jeeves is better suited to full-field experiments than the photoelastic experimental hybrid method with Newton-Raphson and Gaussian elimination

  16. Non-iterative method to calculate the periodical distribution of temperature in reactors with thermal regeneration

    International Nuclear Information System (INIS)

    Sanchez de Alsina, O.L.; Scaricabarozzi, R.A.

    1982-01-01

    A matrix-based non-iterative method to calculate the periodic temperature distribution in reactors with thermal regeneration is presented. In the case of an exothermic reaction, a source term is included. A computer code was developed to calculate the final temperature distribution in the solids and the outlet temperatures of the gases. The results obtained from the calculation of ethane oxidation in air, using the Dietrich kinetic data, are presented. This method is more advantageous than iterative methods. (E.G.)

  17. Programming with Intervals

    Science.gov (United States)

    Matsakis, Nicholas D.; Gross, Thomas R.

    Intervals are a new, higher-level primitive for parallel programming with which programmers directly construct the program schedule. Programs using intervals can be statically analyzed to ensure that they do not deadlock or contain data races. In this paper, we demonstrate the flexibility of intervals by showing how to use them to emulate common parallel control-flow constructs like barriers and signals, as well as higher-level patterns such as bounded-buffer producer-consumer. We have implemented intervals as a publicly available library for Java and Scala.

  18. Test of methods for retrospective activity size distribution determination from filter samples

    International Nuclear Information System (INIS)

    Meisenberg, Oliver; Tschiersch, Jochen

    2015-01-01

    Determining the activity size distribution of radioactive aerosol particles requires sophisticated and heavy equipment, which makes measurements at a large number of sites difficult and expensive. Therefore, three methods for retrospective determination of size distributions from aerosol filter samples in the laboratory were tested for their applicability. Extraction into a carrier liquid with subsequent nebulisation showed size distributions with a slight but correctable bias towards larger diameters compared with the original size distribution. Yields on the order of magnitude of 1% could be achieved. Sonication-assisted extraction into a carrier liquid caused a coagulation mode to appear in the size distribution. Sonication-assisted extraction into the air did not show acceptable results due to small yields. The method of extraction into a carrier liquid without sonication was applied to aerosol samples from Chernobyl in order to calculate inhalation dose coefficients for 137Cs based on the individual size distribution. The effective dose coefficient is about half of that calculated with a default reference size distribution. - Highlights: • Activity size distributions can be recovered after aerosol sampling on filters. • Extraction into a carrier liquid and subsequent nebulisation is appropriate. • This facilitates the determination of activity size distributions for individuals. • Size distributions from this method can be used for individual dose coefficients. • Dose coefficients were calculated for the workers at the new Chernobyl shelter

  19. Sensitivity Analysis of Dynamic Tariff Method for Congestion Management in Distribution Networks

    DEFF Research Database (Denmark)

    Huang, Shaojun; Wu, Qiuwei; Liu, Zhaoxi

    2015-01-01

    The dynamic tariff (DT) method is designed for the distribution system operator (DSO) to alleviate congestions that might occur in a distribution network with high penetration of distributed energy resources (DERs). Sensitivity analysis of the DT method is crucial because of its decentralized control manner. The sensitivity analysis can obtain the changes of the optimal energy planning, and thereby the line loading profiles, over infinitely small changes of parameters by differentiating the KKT conditions of the convex quadratic programming over which the DT method is formed. Three case...
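    (Differentiating the KKT conditions of a convex quadratic program, as in the sensitivity analysis above, amounts to one extra solve of the fixed KKT system for the derivatives of the primal-dual solution. A minimal equality-constrained sketch follows, whereas the DT method's actual QP is larger and inequality-constrained; the numbers are invented.)

```python
import numpy as np

# equality-constrained QP: minimize 0.5 x'Qx + c'x  subject to  A x = b
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
c = np.array([-1.0, -1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

# KKT system: [Q A'; A 0] [x; lam] = [-c; b]
n, m = Q.shape[0], A.shape[0]
K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([-c, b]))
x, lam = sol[:n], sol[n:]

# sensitivity: differentiating the KKT conditions w.r.t. b gives
# K [dx/db; dlam/db] = [0; I], so one more solve yields the derivatives
rhs = np.vstack([np.zeros((n, m)), np.eye(m)])
dsol_db = np.linalg.solve(K, rhs)
print("x* =", x, " dx/db =", dsol_db[:n].ravel())
```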

  20. Nuclear method for determination of nitrogen depth distributions in single seeds. [¹⁴N tracer technique]

    Energy Technology Data Exchange (ETDEWEB)

    Sundqvist, B; Gonczi, L; Koersner, I; Bergman, R; Lindh, U

    1974-01-01

    (d,p) reactions in ¹⁴N were used for probing single kernels of seed for nitrogen content and nitrogen depth distributions. Comparison with the Kjeldahl method was made on individual peas and beans. The results were found to be strongly correlated. The technique to obtain depth distributions of nitrogen was also used on high- and low-lysine varieties of barley, for which large differences in nitrogen distributions were found.

  1. Size distributions of micro-bubbles generated by a pressurized dissolution method

    Science.gov (United States)

    Taya, C.; Maeda, Y.; Hosokawa, S.; Tomiyama, A.; Ito, Y.

    2012-03-01

    The size of micro-bubbles is widely distributed in the range of one to several hundred micrometers and depends on the generation method, flow conditions and elapsed time after bubble generation. Although the size distribution of micro-bubbles should be taken into account to improve accuracy in numerical simulations of flows with micro-bubbles, the variety of size distributions makes it difficult to introduce them into simulations. On the other hand, several models such as the Rosin-Rammler equation and the Nukiyama-Tanasawa equation have been proposed to represent the size distribution of particles or droplets. The applicability of these models to the size distribution of micro-bubbles has not been examined yet. In this study, we therefore measure the size distribution of micro-bubbles generated by a pressurized dissolution method using phase Doppler anemometry (PDA), and investigate the applicability of the available models to the size distributions of micro-bubbles. The experimental apparatus consists of a pressurized tank in which air is dissolved in liquid under high pressure, a decompression nozzle in which micro-bubbles are generated due to pressure reduction, a rectangular duct and an upper tank. Experiments are conducted for several liquid volumetric fluxes in the decompression nozzle. Measurements are carried out in the downstream region of the decompression nozzle and in the upper tank. The experimental results indicate that (1) the Nukiyama-Tanasawa equation well represents the size distribution of micro-bubbles generated by the pressurized dissolution method, whereas the Rosin-Rammler equation fails in the representation, (2) the bubble size distribution of micro-bubbles can be evaluated by using the Nukiyama-Tanasawa equation without individual bubble diameters, when the mean bubble diameter and the skewness of the bubble distribution are given, and (3) an evaluation method of visibility based on the bubble size distribution and bubble
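    (The two models named in this record have standard closed forms: the Rosin-Rammler cumulative distribution F(d) = 1 - exp(-(d/d_m)^n) and the Nukiyama-Tanasawa density f(d) = a d^p exp(-b d^q). The sketch below fits both to a synthetic, PDA-like histogram; all parameter values are invented, not the paper's measurements.)

```python
import numpy as np
from scipy.optimize import curve_fit

def rosin_rammler_cdf(d, d_mean, n):
    """F(d) = 1 - exp(-(d / d_mean)^n)"""
    return 1.0 - np.exp(-(d / d_mean) ** n)

def nukiyama_tanasawa_pdf(d, a, p, b, q):
    """f(d) = a * d^p * exp(-b * d^q)"""
    return a * d ** p * np.exp(-b * d ** q)

# synthetic micro-bubble diameters (micrometers), skewed like a PDA histogram
rng = np.random.default_rng(5)
diam = rng.gamma(shape=3.0, scale=15.0, size=5000)
hist, edges = np.histogram(diam, bins=40, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# fit the NT density to the histogram and the RR form to the empirical CDF
nt_params, _ = curve_fit(nukiyama_tanasawa_pdf, centers, hist,
                         p0=[1e-4, 2.0, 0.1, 1.0], maxfev=20000)
ecdf = np.cumsum(hist * np.diff(edges))
rr_params, _ = curve_fit(rosin_rammler_cdf, centers, ecdf, p0=[50.0, 2.0])
print("Nukiyama-Tanasawa (a, p, b, q):", np.round(nt_params, 4))
print("Rosin-Rammler (d_mean, n):", np.round(rr_params, 2))
```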

  2. Distribution of conductive minerals as associated with uranium minerals at Dendang Arai sector by induced polarization method

    International Nuclear Information System (INIS)

    Nurdin, M.; Nikijuluw, N.; Subardjo; Sudarto, S.

    2000-01-01

    Based on previous investigation results, a favourable zone 20-80 meters wide, 80-240 meters long and trending East-West to Northwest-Southeast was found. The favourable zone is a conductor associated with sulfides. The induced polarization method has been applied to find the vertical and horizontal sulfide distribution. The measurements were conducted perpendicular to the lateral direction of the conductive zone at intervals of 20 meters. The properties measured are apparent resistivity and chargeability. The measurement results indicated the presence of a sulfide zone with sub-vertical position and dip. Sulfide zones were found at the cross-points with faults, with directions East-West to East South East-West North West, the intersecting fault being North-South. These anomalies were then represented in a three-dimensional tomographic model. (author)

  3. Three-dimensional visualization and measurement of water distributions in PEFC by dynamic CT method on neutron radiography

    International Nuclear Information System (INIS)

    Hashimoto, Michinori; Murakawa, Hideki; Sugimoto, Katsumi; Asano, Hitoshi; Takenaka, Nobuyuki; Mochiki, Koh-ichi

    2011-01-01

    Visualization of dynamic three-dimensional water behavior in a PEFC stack was carried out by neutron CT to clarify the effects of water on the performance of a Polymer Electrolyte Fuel Cell (PEFC) stack. The neutron radiography system at JRR-3 in the Japan Atomic Energy Agency was used. An operating stack with three cells based on the Japan Automobile Research Institute standard was visualized. A consecutive CT reconstruction method, rotating the fuel stack continuously, was developed using a neutron image intensifier and a C-MOS high speed video camera. The dynamic water behavior in the channels of the operating PEFC stack was clearly visualized at 15 s intervals by the developed dynamic neutron CT system. From the CT reconstructed images, the water amount in each cell was evaluated. It was shown that the water distribution in each cell correlated well with the power generation characteristics of each cell. (author)

  4. A study of the up-and-down method for non-normal distribution functions

    DEFF Research Database (Denmark)

    Vibholm, Svend; Thyregod, Poul

    1988-01-01

    The assessment of breakdown probabilities is examined by the up-and-down method. The exact maximum-likelihood estimates for a number of response patterns are calculated for three different distribution functions and are compared with the estimates corresponding to the normal distribution. Estimates...

  5. Distributed AC power flow method for AC and AC-DC hybrid ...

    African Journals Online (AJOL)

    ... on voltage level and R/X ratio in the formulation itself. DPFM is applied on a 10-bus, low-voltage microgrid system, giving a better voltage profile. Keywords: Microgrid (MG), Distributed Energy Resources (DER), Particle Swarm Optimization (PSO), Time varying inertia weight (TVIW), Distributed power flow method (DPFM) ...

  6. Experimental comparison of phase retrieval methods which use intensity distribution at different planes

    International Nuclear Information System (INIS)

    Shevkunov, I A; Petrov, N V

    2014-01-01

    The performance of three phase retrieval methods that use spatial intensity distributions was investigated for the task of reconstructing the amplitude characteristics of a test object. These methods differ both in their mathematical models and in the order of iteration execution. The single-beam multiple-intensity reconstruction method showed the best efficiency in terms of reconstruction quality and time consumption.

  7. An improved in situ method for determining depth distributions of gamma-ray emitting radionuclides

    International Nuclear Information System (INIS)

    Benke, R.R.; Kearfott, K.J.

    2001-01-01

    In situ gamma-ray spectrometry determines the quantities of radionuclides in some medium with a portable detector. The main limitation of in situ gamma-ray spectrometry lies in determining the depth distribution of radionuclides. This limitation is addressed by developing an improved in situ method for determining the depth distributions of gamma-ray emitting radionuclides in large area sources. This paper implements a unique collimator design with conventional radiation detection equipment. Cylindrically symmetric collimators were fabricated to allow only those gamma-rays emitted from a selected range of polar angles (measured off the detector axis) to be detected. Positioned with its axis normal to the surface of the media, each collimator enables the detection of gamma-rays emitted from a different range of polar angles and preferential depths. Previous in situ methods require a priori knowledge of the depth distribution shape. However, the absolute method presented in this paper determines the depth distribution as a histogram and does not rely on such assumptions. Other advantages over previous in situ methods are that this method only requires a single gamma-ray emission, provides more detailed depth information, and offers a superior ability for characterizing complex depth distributions. Collimated spectrometer measurements of buried area sources demonstrated the ability of the method to yield accurate depth information. Based on the results of actual measurements, this method increases the potential of in situ gamma-ray spectrometry as an independent characterization tool in situations with unknown radionuclide depth distributions.
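
    Since the method recovers the depth distribution as a histogram from a set of collimated measurements, the underlying mathematics is a small linear unfolding problem. A minimal sketch under invented numbers (a real response matrix would come from calibration or transport calculations, not from this example):

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Hypothetical response matrix R[i, j]: net counts in collimator view i
    # per unit activity in depth bin j.
    R = np.array([[0.80, 0.15, 0.04],
                  [0.30, 0.50, 0.15],
                  [0.10, 0.30, 0.45],
                  [0.05, 0.15, 0.40]])

    counts = np.array([41.0, 33.5, 24.9, 17.3])   # measured net counts (invented)

    # Non-negative least squares keeps the unfolded activities physical
    activity, resid = nnls(R, counts)
    print("depth-bin activities:", activity, " residual norm:", resid)
    ```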

  8. Analytical method for reconstruction pin to pin of the nuclear power density distribution

    Energy Technology Data Exchange (ETDEWEB)

    Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S., E-mail: ppessoa@con.ufrj.br, E-mail: fernando@con.ufrj.br, E-mail: aquilino@imp.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil)

    2013-07-01

    An accurate and efficient method for pin-by-pin reconstruction of the nuclear power density distribution, involving the analytical solution of the two-dimensional, two-group neutron diffusion equation in homogeneous nodes, is presented. The boundary conditions used for the analytical solution are the four currents or fluxes on the surfaces of the node, which are obtained by the Nodal Expansion Method (NEM), and the four fluxes at the vertices of the node calculated using the finite difference method. The analytical solution found is the homogeneous distribution of the neutron flux. Detailed pin-by-pin distributions inside a fuel assembly are estimated by the product of the homogeneous flux distribution and a local heterogeneous form function; form functions of both flux and power are used. The results obtained with this method show good accuracy when compared with reference values. (author)

  9. Analytical method for reconstruction pin to pin of the nuclear power density distribution

    International Nuclear Information System (INIS)

    Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.

    2013-01-01

    An accurate and efficient method for pin-by-pin reconstruction of the nuclear power density distribution, involving the analytical solution of the two-dimensional, two-group neutron diffusion equation in homogeneous nodes, is presented. The boundary conditions used for the analytical solution are the four currents or fluxes on the surfaces of the node, which are obtained by the Nodal Expansion Method (NEM), and the four fluxes at the vertices of the node calculated using the finite difference method. The analytical solution found is the homogeneous distribution of the neutron flux. Detailed pin-by-pin distributions inside a fuel assembly are estimated by the product of the homogeneous flux distribution and a local heterogeneous form function; form functions of both flux and power are used. The results obtained with this method show good accuracy when compared with reference values. (author)

  10. Proposal for a new method of reactor neutron flux distribution determination

    Energy Technology Data Exchange (ETDEWEB)

    Popic, V R [Institute of nuclear sciences Boris Kidric, Vinca, Beograd (Serbia and Montenegro)

    1964-01-15

    A method, based on the measurements of the activity produced in a medium flowing with variable velocity through a reactor, for the determination of the neutron flux distribution inside a reactor is considered theoretically. (author)

  11. Advanced airflow distribution methods for reduction of personal exposure to indoor pollutants

    DEFF Research Database (Denmark)

    Cao, Guangyu; Kosonen, Risto; Melikov, Arsen

    2016-01-01

    The main objective of this study is to recognize possible airflow distribution methods to protect the occupants from exposure to various indoor pollutants. The fact of the increasing exposure of occupants to various indoor pollutants shows that there is an urgent need to develop advanced airflow distribution methods to reduce indoor exposure to various indoor pollutants. This article presents some of the latest developments in advanced airflow distribution methods for reducing indoor exposure in various types of buildings.

  12. A Method of Visualizing Three-Dimensional Distribution of Yeast in Bread Dough

    Science.gov (United States)

    Maeda, Tatsurou; Do, Gab-Soo; Sugiyama, Junichi; Oguchi, Kosei; Shiraga, Seizaburou; Ueda, Mitsuyoshi; Takeya, Koji; Endo, Shigeru

    A novel technique was developed to monitor the change in the three-dimensional (3D) distribution of yeast in frozen bread dough samples as the mixing process progresses. Application of a surface engineering technology allowed the identification of yeast in bread dough by bonding EGFP (Enhanced Green Fluorescent Protein) to the surface of yeast cells. The fluorescent yeast (a biomarker) was recognized as bright spots at the wavelength of 520 nm. A Micro-Slicer Image Processing System (MSIPS) with a fluorescence microscope was utilized to acquire cross-sectional images of frozen dough samples sliced at intervals of 1 μm. A set of successive two-dimensional images was reconstructed to analyze the 3D distribution of yeast. Samples were taken from each of four normal mixing stages (i.e., pick-up, clean-up, development, and final stages) and also from the over-mixing stage. In the pick-up stage, the yeast distribution was uneven, with local areas of dense yeast. As the mixing progressed from the clean-up to the final stage, the yeast became more evenly distributed throughout the dough sample. However, the uniformity in yeast distribution was lost in the over-mixing stage, possibly due to the breakdown of the gluten structure within the dough sample.

  13. Distribution Route Planning of Clean Coal Based on Nearest Insertion Method

    Science.gov (United States)

    Wang, Yunrui

    2018-01-01

    Clean coal technology has achieved considerable progress over several decades, but research on its distribution is very limited; distribution efficiency directly affects the comprehensive development of clean coal technology, and rational planning of distribution routes is the key to improving that efficiency. The object of this paper is a clean coal distribution system built in a county. A survey of customer demand, distribution routes and vehicle deployment in previous years found that vehicles had been deployed by experience only and that the number of vehicles used each day varied; this wasted transport capacity and increased energy consumption. Thus, a mathematical model taking the shortest path as the objective function was established, and the distribution route was re-planned using an improved nearest-insertion method. The results showed that the transportation distance was reduced by 37 km and the number of vehicles used each day decreased from a past average of 5 to a fixed 4, while the real loading of vehicles increased by 16.25% with the distribution volume staying the same. This realized efficient distribution of clean coal and achieved the purpose of saving energy and reducing consumption.
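
    For readers unfamiliar with the heuristic, here is a minimal sketch of the textbook nearest-insertion method (coordinates are invented; the record does not specify the author's improvement, so only the basic version is shown):

    ```python
    import math

    def nearest_insertion(points):
        """Build a closed tour by repeatedly inserting the point nearest to
        the current tour at the position that lengthens the tour least."""
        d = lambda a, b: math.dist(points[a], points[b])
        n = len(points)
        a, b = min(((i, j) for i in range(n) for j in range(i + 1, n)),
                   key=lambda p: d(*p))          # start from the closest pair
        tour, rest = [a, b], set(range(n)) - {a, b}
        while rest:
            k = min(rest, key=lambda u: min(d(u, t) for t in tour))
            pos = min(range(len(tour)),
                      key=lambda i: d(tour[i], k)
                                    + d(k, tour[(i + 1) % len(tour)])
                                    - d(tour[i], tour[(i + 1) % len(tour)]))
            tour.insert(pos + 1, k)
            rest.remove(k)
        return tour

    pts = [(0, 0), (2, 6), (5, 1), (6, 5), (8, 2), (1, 3)]  # hypothetical stops
    print(nearest_insertion(pts))
    ```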

  14. New method for exact measurement of thermal neutron distribution in elementary cell

    International Nuclear Information System (INIS)

    Takac, S.M.; Krcevinac, S.B.

    1966-06-01

    Exact measurement of the thermal neutron density distribution in an elementary cell necessitates knowledge of the perturbations introduced in the cell by the measuring device. A new method has been developed in which special emphasis is placed on evaluating these perturbations by measuring the response to perturbations introduced in the elementary cell. The unperturbed distribution was obtained by extrapolation to zero perturbation. The final distributions for different lattice pitches were compared with a THERMOS-type calculation. Very good agreement was reached, which resolves the long-standing disagreement between THERMOS calculations and measured density distributions. (author)

  15. Prediction method for thermal ratcheting of a cylinder subjected to axially moving temperature distribution

    International Nuclear Information System (INIS)

    Wada, Hiroshi; Igari, Toshihide; Kitade, Shoji.

    1989-01-01

    A prediction method was proposed for plastic ratcheting of a cylinder subjected to an axially moving temperature distribution without primary stress. First, a mechanism of this ratcheting was proposed which considers the movement of the temperature distribution as the driving force of the phenomenon. Predictive equations of the ratcheting strain for two representative temperature distributions were derived based on this mechanism, assuming elastic-perfectly-plastic material behavior. Secondly, an elastic-plastic analysis was made of a cylinder subjected to the two representative temperature distributions. The analytical results coincided well with the predicted results, and the applicability of the proposed equations was confirmed. (author)

  16. New method for exact measurement of thermal neutron distribution in elementary cell

    Energy Technology Data Exchange (ETDEWEB)

    Takac, S M; Krcevinac, S B [Institute of nuclear sciences Boris Kidric, Vinca, Beograd (Yugoslavia)

    1966-06-15

    Exact measurement of the thermal neutron density distribution in an elementary cell necessitates knowledge of the perturbations introduced in the cell by the measuring device. A new method has been developed in which special emphasis is placed on evaluating these perturbations by measuring the response to perturbations introduced in the elementary cell. The unperturbed distribution was obtained by extrapolation to zero perturbation. The final distributions for different lattice pitches were compared with a THERMOS-type calculation. Very good agreement was reached, which resolves the long-standing disagreement between THERMOS calculations and measured density distributions. (author)

  17. Confidence Limits for the Indirect Effect: Distribution of the Product and Resampling Methods

    Science.gov (United States)

    MacKinnon, David P.; Lockwood, Chondra M.; Williams, Jason

    2010-01-01

    The most commonly used method to test an indirect effect is to divide the estimate of the indirect effect by its standard error and compare the resulting z statistic with a critical value from the standard normal distribution. Confidence limits for the indirect effect are also typically based on critical values from the standard normal distribution. This article uses a simulation study to demonstrate that confidence limits are imbalanced because the distribution of the indirect effect is normal only in special cases. Two alternatives for improving the performance of confidence limits for the indirect effect are evaluated: (a) a method based on the distribution of the product of two normal random variables, and (b) resampling methods. In Study 1, confidence limits based on the distribution of the product are more accurate than methods based on an assumed normal distribution but confidence limits are still imbalanced. Study 2 demonstrates that more accurate confidence limits are obtained using resampling methods, with the bias-corrected bootstrap the best method overall. PMID:20157642
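
    As a brief illustration of the resampling alternative the abstract evaluates, a percentile-bootstrap confidence interval for the indirect effect a*b can be sketched as follows (simulated data; the bias-corrected variant favored in Study 2 would additionally adjust the percentiles using the proportion of bootstrap estimates below the point estimate):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    x = rng.normal(size=n)                 # predictor
    m = 0.4 * x + rng.normal(size=n)       # mediator (a = 0.4)
    y = 0.3 * m + rng.normal(size=n)       # outcome  (b = 0.3)

    def indirect(x, m, y):
        a = np.polyfit(x, m, 1)[0]                       # m regressed on x
        X = np.column_stack([np.ones(len(x)), m, x])     # y regressed on m and x
        b = np.linalg.lstsq(X, y, rcond=None)[0][1]
        return a * b

    boot = np.empty(2000)
    for i in range(boot.size):
        idx = rng.integers(0, n, n)                      # resample cases
        boot[i] = indirect(x[idx], m[idx], y[idx])

    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"ab = {indirect(x, m, y):.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
    ```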

  18. Estimating the Confidence Interval of Composite Reliability of a Multidimensional Test With the Delta Method

    Institute of Scientific and Technical Information of China (English)

    叶宝娟; 温忠麟

    2012-01-01

    Reliability is very important in evaluating the quality of a test. Based on confirmatory factor analysis, composite reliability is a good index for estimating test reliability in general applications. As is well known, a point estimate contains limited information about a population parameter and cannot indicate how far it may be from the population parameter. The confidence interval of the parameter can provide more information. In evaluating the quality of a test, the confidence interval of composite reliability has received attention in recent years. There are three approaches to estimating the confidence interval of composite reliability of a unidimensional test: the Bootstrap method, the Delta method, and the direct use of the standard error from a software output (e.g., LISREL). The Bootstrap method provides empirical results for the standard error and is the most credible method, but it needs data simulation techniques and its computation process is rather complex. The Delta method computes the standard error of composite reliability by approximate calculation; it is simpler than the Bootstrap method. The LISREL software can directly report the standard error, and it is the easiest of the three methods. A simulation study found that the interval estimates obtained by the Delta method and the Bootstrap method were almost identical, whereas the results obtained by LISREL and by the Bootstrap method were substantially different (Ye & Wen, 2011). The Delta method is recommended when the confidence interval of composite reliability of a unidimensional test is estimated, because the Delta method is simpler than the Bootstrap method. There was little research on how to compute the confidence interval of composite reliability of a multidimensional test. We deduced a formula, using the Delta method, for computing the standard error of composite reliability of a multidimensional test. Based on the standard error, the ...
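
    A hedged sketch of the delta-method computation the abstract describes, for the common single-factor composite reliability omega (the loadings, error variances, and the asymptotic covariance matrix below are invented; in practice they come from the CFA output):

    ```python
    import numpy as np

    lam = np.array([0.7, 0.6, 0.8, 0.5])        # standardized loadings (assumed)
    theta = np.array([0.51, 0.64, 0.36, 0.75])  # error variances (assumed)

    L, T = lam.sum(), theta.sum()
    omega = L**2 / (L**2 + T)                   # composite reliability

    # Gradient of omega w.r.t. (lam_1..4, theta_1..4):
    # d omega / d lam_i = 2 L T / (L^2 + T)^2,  d omega / d theta_j = -L^2 / (L^2 + T)^2
    grad = np.concatenate([np.full(4, 2 * L * T / (L**2 + T)**2),
                           np.full(4, -L**2 / (L**2 + T)**2)])

    # Assumed asymptotic covariance matrix of the 8 parameter estimates
    acov = np.diag(np.full(8, 0.002))

    se = np.sqrt(grad @ acov @ grad)            # delta-method standard error
    print(f"omega = {omega:.3f}, "
          f"95% CI = ({omega - 1.96*se:.3f}, {omega + 1.96*se:.3f})")
    ```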

  19. Calculating emittance for Gaussian and Non-Gaussian distributions by the method of correlations for slits

    International Nuclear Information System (INIS)

    Tan, Cheng-Yang; Fermilab

    2006-01-01

    One common way for measuring the emittance of an electron beam is with the slits method. The usual approach for analyzing the data is to calculate an emittance that is a subset of the parent emittance. This paper shows an alternative way by using the method of correlations which ties the parameters derived from the beamlets to the actual parameters of the parent emittance. For parent distributions that are Gaussian, this method yields exact results. For non-Gaussian beam distributions, this method yields an effective emittance that can serve as a yardstick for emittance comparisons
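
    The quantity being compared is the rms emittance. A short sketch of how it might be computed from phase-space samples (simulated correlated Gaussian data; the slit-by-slit reconstruction itself is not reproduced here):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Simulated parent phase space (x in mm, x' in mrad) with correlation
    cov = np.array([[4.0, 1.2],
                    [1.2, 1.0]])
    x, xp = rng.multivariate_normal([0.0, 0.0], cov, size=100_000).T

    # rms emittance: sqrt(<x^2><x'^2> - <x x'>^2)
    emit = np.sqrt(np.var(x) * np.var(xp) - np.cov(x, xp)[0, 1]**2)
    print(f"rms emittance ~ {emit:.3f} (exact {np.sqrt(4.0*1.0 - 1.2**2):.3f})")
    ```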

  20. Predictive Distribution of the Dirichlet Mixture Model by the Local Variational Inference Method

    DEFF Research Database (Denmark)

    Ma, Zhanyu; Leijon, Arne; Tan, Zheng-Hua

    2014-01-01

    ...the predictive likelihood of the new upcoming data, especially when the amount of training data is small. The Bayesian estimation of a Dirichlet mixture model (DMM) is, in general, not analytically tractable. In our previous work, we have proposed a global variational inference-based method for approximately calculating the posterior distributions of the parameters in the DMM analytically. In this paper, we extend our previous study of the DMM and propose an algorithm to calculate the predictive distribution of the DMM with the local variational inference (LVI) method. The true predictive distribution of the DMM is analytically intractable. By considering the concave property of the multivariate inverse beta function, we introduce an upper bound to the true predictive distribution. As the global minimum of this upper bound exists, the problem is reduced to seeking an approximation to the true predictive distribution ...

  1. Uncertainty Management of Dynamic Tariff Method for Congestion Management in Distribution Networks

    DEFF Research Database (Denmark)

    Huang, Shaojun; Wu, Qiuwei; Cheng, Lin

    2016-01-01

    The dynamic tariff (DT) method is designed for the distribution system operator (DSO) to alleviate congestions that might occur in a distribution network with high penetration of distributed energy resources (DERs). Uncertainty management is required for the decentralized DT method because the DT is determined based on optimal day-ahead energy planning with forecasted parameters, such as day-ahead energy prices and energy needs, which might be different from the parameters used by aggregators. The uncertainty management is to quantify and mitigate the risk of congestion when employing ...

  2. Transformation of an empirical distribution to normal distribution by the use of Johnson system of translation and symmetrical quantile method

    OpenAIRE

    Ludvík Friebel; Jana Friebelová

    2006-01-01

    This article deals with the approximation of an empirical distribution to the standard normal distribution using the Johnson transformation. This transformation enables us to approximate a wide spectrum of continuous distributions with a normal distribution. The estimation of the parameters of the transformation formulas is based on percentiles of the empirical distribution. Theoretical probability distribution functions of the random variable obtained by backward transformation of the standard normal ...

  3. Coupled double-distribution-function lattice Boltzmann method for the compressible Navier-Stokes equations.

    Science.gov (United States)

    Li, Q; He, Y L; Wang, Y; Tao, W Q

    2007-11-01

    A coupled double-distribution-function lattice Boltzmann method is developed for the compressible Navier-Stokes equations. Different from existing thermal lattice Boltzmann methods, this method can recover the compressible Navier-Stokes equations with a flexible specific-heat ratio and Prandtl number. In the method, a density distribution function based on a multispeed lattice is used to recover the compressible continuity and momentum equations, while the compressible energy equation is recovered by an energy distribution function. The energy distribution function is then coupled to the density distribution function via the thermal equation of state. In order to obtain an adjustable specific-heat ratio, a constant related to the specific-heat ratio is introduced into the equilibrium energy distribution function. Two different coupled double-distribution-function lattice Boltzmann models are also proposed in the paper. Numerical simulations are performed for the Riemann problem, the double-Mach-reflection problem, and the Couette flow with a range of specific-heat ratios and Prandtl numbers. The numerical results are found to be in excellent agreement with analytical and/or other solutions.

  4. New method for extracting tumors in PET/CT images based on the probability distribution

    International Nuclear Information System (INIS)

    Nitta, Shuhei; Hontani, Hidekata; Hukami, Tadanori

    2006-01-01

    In this report, we propose a method for extracting tumors from PET/CT images by referring to the probability distribution of pixel values in the PET image. In the proposed method, first, the organs that normally take up fluorodeoxyglucose (FDG) (e.g., the liver, kidneys, and brain) are extracted. Then, the tumors are extracted from the images. The distribution of pixel values in PET images differs in each region of the body. Therefore, the threshold for detecting tumors is adaptively determined by referring to the distribution. We applied the proposed method to 37 cases and evaluated its performance. This report also presents the results of experiments comparing the proposed method and another method in which the pixel values are normalized for extracting tumors. (author)

  5. Demonstration of a collimated in situ method for determining depth distributions using gamma-ray spectrometry

    CERN Document Server

    Benke, R R

    2002-01-01

    In situ gamma-ray spectrometry uses a portable detector to quantify radionuclides in materials. The main shortcoming of in situ gamma-ray spectrometry has been its inability to determine radionuclide depth distributions. Novel collimator designs were paired with a commercial in situ gamma-ray spectrometry system to overcome this limitation for large area sources. Positioned with their axes normal to the material surface, the cylindrically symmetric collimators limited the detection of unattenuated gamma-rays to a selected range of polar angles (measured off the detector axis). Although this approach does not alleviate the need for some knowledge of the gamma-ray attenuation characteristics of the materials being measured, the collimation method presented in this paper represents an absolute method that determines the depth distribution as a histogram, while other in situ methods require a priori knowledge of the depth distribution shape. Other advantages over previous in situ methods are that this method ...

  6. Assessing different parameters estimation methods of Weibull distribution to compute wind power density

    International Nuclear Information System (INIS)

    Mohammadi, Kasra; Alavi, Omid; Mostafaeipour, Ali; Goudarzi, Navid; Jalilvand, Mahdi

    2016-01-01

    Highlights: • The effectiveness of six numerical methods for determining wind power density is evaluated. • The more appropriate method for computing the daily wind power density is identified. • Four windy stations located in the southern part of Alberta, Canada are investigated. • The more appropriate parameter estimation method was not identical among all examined stations. - Abstract: In this study, the effectiveness of six numerical methods is evaluated to determine the shape (k) and scale (c) parameters of the Weibull distribution function for the purpose of calculating the wind power density. The selected methods are the graphical method (GP), empirical method of Justus (EMJ), empirical method of Lysen (EML), energy pattern factor method (EPF), maximum likelihood method (ML) and modified maximum likelihood method (MML). The purpose of this study is to identify the more appropriate method for computing the wind power density at four stations distributed in the Alberta province of Canada, namely Edmonton City Center Awos, Grande Prairie A, Lethbridge A and Waterton Park Gate. To provide a complete analysis, the evaluations are performed on both daily and monthly scales. The results indicate that the precision of the computed wind power density values changes when different parameter estimation methods are used to determine the k and c parameters. The four methods EMJ, EML, EPF and ML show very favorable efficiency, while the GP method shows weak ability for all stations. However, it is found that the most effective method is not the same at all stations, owing to differences in the wind characteristics.
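
    A compact sketch of one of the listed estimators, the empirical method of Justus (EMJ), together with the resulting wind power density (the wind speeds are simulated and the air density of 1.225 kg/m3 is an assumption):

    ```python
    import numpy as np
    from scipy.special import gamma

    rng = np.random.default_rng(3)
    v = 7.0 * rng.weibull(2.0, 10_000)      # simulated wind speeds, k=2, c=7 m/s

    # EMJ: shape from the coefficient of variation, scale from the mean
    k = (v.std(ddof=1) / v.mean()) ** -1.086
    c = v.mean() / gamma(1 + 1 / k)

    rho = 1.225                              # assumed air density (kg/m^3)
    wpd = 0.5 * rho * c**3 * gamma(1 + 3 / k)
    print(f"k = {k:.2f}, c = {c:.2f} m/s, wind power density = {wpd:.1f} W/m^2")
    ```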

  7. Uncertainty analysis of the radiological characteristics of radioactive waste using a method based on log-normal distributions

    International Nuclear Information System (INIS)

    Gigase, Yves

    2007-01-01

    Available in abstract form only. Full text of publication follows: The uncertainty on the characteristics of radioactive LILW waste packages is difficult to determine and often very large. This results from a lack of knowledge of the constitution of the waste package and of the composition of the radioactive sources inside. To calculate a quantitative estimate of the uncertainty on a characteristic of a waste package, one has to combine these various uncertainties. This paper discusses an approach to this problem, based on the use of the log-normal distribution, which is both elegant and easy to use. It can provide, for example, quantitative estimates of uncertainty intervals that 'make sense'. The purpose is to develop a pragmatic approach that can be integrated into existing characterization methods. In this paper we show how our method can be applied to the scaling factor method. We also explain how it can be used when estimating other, more complex characteristics, such as the total uncertainty of a collection of waste packages. This method could have applications in radioactive waste management, more particularly in those decision processes where the uncertainty on the amount of activity is considered important, such as probabilistic risk assessment or the definition of criteria for acceptance or categorization. (author)

  8. Neutron distribution modeling based on integro-probabilistic approach of discrete ordinates method

    International Nuclear Information System (INIS)

    Khromov, V.V.; Kryuchkov, E.F.; Tikhomirov, G.V.

    1992-01-01

    This paper describes a universal nodal method for calculating the neutron distribution in reactor and shielding problems, based on the use of influence functions and factors of locally integrated volume and surface neutron sources in phase subregions. The method makes it possible to avoid the limited capabilities of the collision-probability method concerning detailed calculation of the angular neutron flux dependence, scattering anisotropy and empty channels. The proposed method may be considered a modification of the Sn method with the advantage of eliminating ray effects. The theory and algorithm of the method are described, followed by examples of its application to the calculation of the neutron distribution in a three-dimensional model of a fusion reactor blanket and in a highly heterogeneous reactor with an empty channel

  9. Environmental DNA method for estimating salamander distribution in headwater streams, and a comparison of water sampling methods.

    Science.gov (United States)

    Katano, Izumi; Harada, Ken; Doi, Hideyuki; Souma, Rio; Minamoto, Toshifumi

    2017-01-01

    Environmental DNA (eDNA) has recently been used for detecting the distribution of macroorganisms in various aquatic habitats. In this study, we applied an eDNA method to estimate the distribution of the Japanese clawed salamander, Onychodactylus japonicus, in headwater streams. Additionally, we compared the detection of eDNA and hand-capturing methods used for determining the distribution of O. japonicus. For eDNA detection, we designed a qPCR primer/probe set for O. japonicus using the 12S rRNA region. We detected the eDNA of O. japonicus at all sites (with the exception of one), where we also observed them by hand-capturing. Additionally, we detected eDNA at two sites where we were unable to observe individuals using the hand-capturing method. Moreover, we found that eDNA concentrations and detection rates of the two water sampling areas (stream surface and under stones) were not significantly different, although the eDNA concentration in the water under stones was more varied than that on the surface. We, therefore, conclude that eDNA methods could be used to determine the distribution of macroorganisms inhabiting headwater systems by using samples collected from the surface of the water.

  10. Estimation of the distribution coefficient by combined application of two different methods

    International Nuclear Information System (INIS)

    Vogl, G.; Gerstenbrand, F.

    1982-01-01

    A simple, non-invasive method is presented which permits determination of the rBCF and, in addition, of the distribution coefficient of the grey matter. The latter, which is closely correlated with cerebral metabolism, has so far only been determined in vitro. The new method will be a means to check its accuracy. (orig.) [de

  11. Uniform distribution and quasi-Monte Carlo methods discrepancy, integration and applications

    CERN Document Server

    Kritzer, Peter; Pillichshammer, Friedrich; Winterhof, Arne

    2014-01-01

    The survey articles in this book focus on number theoretic point constructions, uniform distribution theory, and quasi-Monte Carlo methods. As deterministic versions of the Monte Carlo method, quasi-Monte Carlo rules enjoy increasing popularity, with many fruitful applications in mathematical practice, as for example in finance, computer graphics, and biology.

  12. A transmission probability method for calculation of neutron flux distributions in hexagonal geometry

    International Nuclear Information System (INIS)

    Wasastjerna, F.; Lux, I.

    1980-03-01

    A transmission probability method implemented in the program TPHEX is described. This program was developed for the calculation of neutron flux distributions in hexagonal light water reactor fuel assemblies. The accuracy appears to be superior to diffusion theory, and the computation time is shorter than that of the collision probability method. (author)

  13. Why liquid displacement methods are sometimes wrong in estimating the pore-size distribution

    NARCIS (Netherlands)

    Gijsbertsen-Abrahamse, A.J.; Boom, R.M.; Padt, van der A.

    2004-01-01

    The liquid displacement method is a commonly used method to determine the pore size distribution of micro- and ultrafiltration membranes. One of the assumptions for the calculation of the pore sizes is that the pores are parallel and thus are not interconnected. To show that the estimated pore size ...

  14. An optimized encoding method for secure key distribution by swapping quantum entanglement and its extension

    International Nuclear Information System (INIS)

    Gao Gan

    2015-01-01

    Song [Song D 2004 Phys. Rev. A 69 034301] first proposed two key distribution schemes with the symmetry feature. We find that, in these schemes, the private channels through which Alice and Bob publicly announce the initial Bell state or the measurement result are not needed for discovering the keys, and that Song's encoding methods are not optimal. Here, an optimized encoding method is given, by which the efficiencies of Song's schemes are improved by a factor of 7/3. Interestingly, this optimized encoding method can be extended to the key distribution scheme composed of generalized Bell states. (paper)

  15. Research on distributed optical fiber sensing data processing method based on LabVIEW

    Science.gov (United States)

    Li, Zhonghu; Yang, Meifang; Wang, Luling; Wang, Jinming; Yan, Junhong; Zuo, Jing

    2018-01-01

    The pipeline leak detection and leak location problems have received extensive attention in industry. In this paper, a distributed optical fiber sensing system is designed for a heat supply pipeline, and the data processing method for distributed optical fiber sensing based on LabVIEW is studied in detail. The hardware system includes a laser, sensing optical fiber, wavelength division multiplexer, photoelectric detector, data acquisition card, computer, etc. The software system, developed using LabVIEW, adopts a wavelet denoising method to process the temperature information, which improves the SNR. By extracting characteristic values of the fiber temperature information, the system realizes temperature measurement, leak location, and measurement signal storage and query. Compared with the traditional negative pressure wave or acoustic signal methods, the distributed optical fiber temperature measuring system can measure several temperatures in one measurement and locate the leak point accurately. It has broad application prospects.

  16. Effects of mixing methods on phase distribution in vertical bubble flow

    International Nuclear Information System (INIS)

    Monji, Hideaki; Matsui, Goichi; Sugiyama, Takayuki.

    1992-01-01

    The mechanism of phase distribution formation in a bubble flow is one of the most important problems in the control of two-phase flow systems. The effect of mixing methods on the phase distribution was experimentally investigated using upward nitrogen gas-water bubble flow under fixed flow rates. The experimental results show that the diameter of the gas injection hole influences the phase distribution through the bubble size. The location of the injection hole and the direction of injection do not influence the phase distribution of fully developed bubble flow. The transition equivalent bubble size from coring bubble flow to sliding bubble flow corresponds to the bubble shape transition. The analytical results show that the phase distribution may be predictable if the phase profile is judged from the bubble size. (author)

  17. Sediment spatial distribution evaluated by three methods and its relation to some soil properties

    Energy Technology Data Exchange (ETDEWEB)

    Bacchi, O.O.S. [Centro de Energia Nuclear na Agricultura-CENA/USP, Laboratorio de Fisica do Solo, Piracicaba, SP (Brazil)]; Reichardt, K. [Centro de Energia Nuclear na Agricultura-CENA/USP, Laboratorio de Fisica do Solo, Piracicaba, SP (Brazil); Departamento de Ciencias Exatas, Escola Superior de Agricultura 'Luiz de Queiroz' ESALQ/USP, Piracicaba, SP (Brazil)]; Sparovek, G. [Departamento de Solos e Nutricao de Plantas, Escola Superior de Agricultura 'Luiz de Queiroz' ESALQ/USP, Piracicaba, SP (Brazil)]

    2003-02-15

    An investigation of the rates and spatial distribution of sediments on an agricultural field cultivated with sugarcane was undertaken using the ¹³⁷Cs technique and the USLE and WEPP models. The study was carried out on the Ceveiro watershed of the Piracicaba river basin, state of Sao Paulo, Brazil, which is experiencing severe soil degradation due to erosion. The objectives of the study were to compare the spatial distribution of sediments evaluated by the three methods and its relation to some soil properties. The erosion and sedimentation rates and their spatial distributions estimated by the three methods were completely different. Although not able to show sediment deposition, the spatial distribution of erosion rates evaluated by USLE presented the best correlation with the other studied soil properties. (author)

  18. A Study of Economical Incentives for Voltage Profile Control Method in Future Distribution Network

    Science.gov (United States)

    Tsuji, Takao; Sato, Noriyuki; Hashiguchi, Takuhei; Goda, Tadahiro; Tange, Seiji; Nomura, Toshio

    In a future distribution network, it is difficult to maintain the system voltage because a large number of distributed generators are introduced to the system. The authors have proposed a "voltage profile control method" using power factor control of distributed generators in previous work. However, an economical disbenefit is caused by the decrease in active power when the power factor is controlled in order to increase the reactive power. Therefore, proper incentives must be given to the customers that cooperate with the voltage profile control method. Thus, in this paper, we develop new rules that can decide the economical incentives for these customers. The method is tested on a one-feeder distribution network model and its effectiveness is shown.

  19. Higher moments method for generalized Pareto distribution in flood frequency analysis

    Science.gov (United States)

    Zhou, C. R.; Chen, Y. F.; Huang, Q.; Gu, S. H.

    2017-08-01

    The generalized Pareto distribution (GPD) has proven to be the ideal distribution for fitting peak-over-threshold series in flood frequency analysis. Several moments-based estimators are applied to estimate the parameters of the GPD. Higher linear moments (LH moments) and higher probability weighted moments (HPWM) are linear combinations of probability weighted moments (PWM). In this study, the relationship between them is explored. A series of statistical experiments and a case study are used to compare their performances. The results show that if the same PWM are used in the LH moments and HPWM methods, the parameters estimated by these two methods are unbiased. In particular, when the same PWM are used, the PWM method (or the HPWM method when the order equals 0) gives identical parameter estimates to the linear moments (L-moments) method. Additionally, this phenomenon is significant when r ≥ 1 and the same-order PWM are used in the HPWM and LH moments methods.
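
    For orientation, a small sketch of the baseline L-moments estimate of the GPD parameters (simulated exceedances; Hosking's shape convention kappa is used, in which l1 = sigma/(1+kappa) and l2 = sigma/((1+kappa)(2+kappa))):

    ```python
    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(4)
    x = genpareto.rvs(c=0.1, scale=8.0, size=2000, random_state=rng)

    # Sample probability weighted moments b0, b1
    xs = np.sort(x)
    n = xs.size
    b0 = xs.mean()
    b1 = np.sum(np.arange(n) / (n - 1) * xs) / n

    l1, l2 = b0, 2 * b1 - b0                # first two sample L-moments

    # Invert the GPD L-moment relations (kappa = -c in scipy's convention)
    kappa = l1 / l2 - 2
    sigma = l1 * (1 + kappa)
    print(f"kappa = {kappa:.3f} (true {-0.1}), sigma = {sigma:.2f} (true 8.0)")
    ```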

  20. Projection methods for the analysis of molecular-frame photoelectron angular distributions

    International Nuclear Information System (INIS)

    Grum-Grzhimailo, A.N.; Lucchese, R.R.; Liu, X.-J.; Pruemper, G.; Morishita, Y.; Saito, N.; Ueda, K.

    2007-01-01

    A projection method is developed for extracting the nondipole contribution from the molecular-frame photoelectron angular distributions of linear molecules. A corresponding convenient parametric form for the angular distributions is derived. The analysis was performed for the N 1s photoionization of the NO molecule a few eV above the ionization threshold. No detectable nondipole contribution was found at the photon energy of 412 eV

  1. A method and programme (BREACH) for predicting the flow distribution in water cooled reactor cores

    International Nuclear Information System (INIS)

    Randles, J.; Roberts, H.A.

    1961-03-01

    The method presented here of evaluating the flow rate in individual reactor channels may be applied to any type of water cooled reactor in which boiling occurs. The flow distribution is calculated with the aid of a MERCURY autocode programme, BREACH, which is described in detail. This programme computes the steady state longitudinal void distribution and pressure drop in a single channel on the basis of the homogeneous model of two phase flow. (author)

  2. A method and programme (BREACH) for predicting the flow distribution in water cooled reactor cores

    Energy Technology Data Exchange (ETDEWEB)

    Randles, J; Roberts, H A [Technical Assessments and Services Division, Atomic Energy Establishment, Winfrith, Dorchester, Dorset (United Kingdom)

    1961-03-15

    The method presented here of evaluating the flow rate in individual reactor channels may be applied to any type of water cooled reactor in which boiling occurs. The flow distribution is calculated with the aid of a MERCURY autocode programme, BREACH, which is described in detail. This programme computes the steady state longitudinal void distribution and pressure drop in a single channel on the basis of the homogeneous model of two phase flow. (author)

  3. Leakage localisation method in a water distribution system based on sensitivity matrix: methodology and real test

    OpenAIRE

    Pascual Pañach, Josep

    2010-01-01

    Leaks are present in all water distribution systems. In this paper a method for leakage detection and localisation is presented. It uses pressure measurements and simulation models. The leakage localisation methodology is based on a pressure sensitivity matrix. The sensitivity is normalised and binarised using a common threshold for all nodes, so that a signatures matrix is obtained. A methodology for optimal distribution of pressure sensors is developed too, but it is not used in the real test. To validate this ...

  4. Evaluation of a post-analysis method for cumulative dose distribution in stereotactic body radiotherapy

    International Nuclear Information System (INIS)

    Imae, Toshikazu; Takenaka, Shigeharu; Saotome, Naoya

    2016-01-01

    The purpose of this study was to evaluate a post-analysis method for the cumulative dose distribution in stereotactic body radiotherapy (SBRT) using volumetric modulated arc therapy (VMAT). VMAT is capable of acquiring respiratory signals derived from projection images and machine parameters based on machine logs during VMAT delivery. Dose distributions were reconstructed from the respiratory signals and machine parameters for the conditions where the respiratory signals were used without division or divided into 4 and 10 phases. The dose distribution of each respiratory phase was calculated on the planned four-dimensional CT (4DCT). Summation of the dose distributions was carried out using deformable image registration (DIR), and the cumulative dose distributions were compared with those of the corresponding plans. Without division, dose differences between the cumulative distribution and the plan were not significant. Where the respiratory signals were divided, dose differences were observed, with overdosage in the cranial region and underdosage in the caudal region of the planning target volume (PTV). Differences between 4 and 10 phases were not significant. The present method was feasible for evaluating the cumulative dose distribution in VMAT-SBRT using 4DCT and DIR. (author)

  5. METHODS OF MANAGING TRAFFIC DISTRIBUTION IN INFORMATION AND COMMUNICATION NETWORKS OF CRITICAL INFRASTRUCTURE SYSTEMS

    OpenAIRE

    Kosenko, Viktor; Persiyanova, Elena; Belotskyy, Oleksiy; Malyeyeva, Olga

    2017-01-01

    The subject matter of the article is information and communication networks (ICN) of critical infrastructure systems (CIS). The goal of the work is to create methods for managing the data flows and resources of the ICN of CIS to improve the efficiency of information processing. The following tasks were solved in the article: the data flow model of multi-level ICN structure was developed, the method of adaptive distribution of data flows was developed, the method of network resource assignment...

  6. Method of determining local distribution of water or aqueous solutions penetrated into plastics

    International Nuclear Information System (INIS)

    Krejci, M.; Joks, Z.

    1983-01-01

    Penetrating water is labelled with tritium and its distribution is monitored autoradiographically. The novelty consists in cooling the plastic containing the penetrated water or aqueous solution with liquid nitrogen; under a stream of liquid nitrogen the plastic is cut and then exposed on autoradiographic film in a freezer at temperatures from -15 to -30 deg C. The autoradiogram shows the distribution of water over the whole area of the section. The described method may also be used to detect the water distribution in filled plastics. (J.P.)

  7. Finite difference applied to the reconstruction method of the nuclear power density distribution

    International Nuclear Information System (INIS)

    Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.

    2016-01-01

    Highlights: • A method for reconstruction of the power density distribution is presented. • The method uses discretization of the 2D neutron diffusion equation by finite differences. • The discretization is performed on homogeneous meshes with the dimensions of a fuel cell. • The discretization is combined with flux distributions on the four node surfaces. • The maximum errors in the reconstruction occur in the peripheral water region. - Abstract: In this reconstruction method the two-dimensional (2D) neutron diffusion equation is discretized by finite differences, employing two energy groups (2G) and meshes with fuel-pin cell dimensions. The Nodal Expansion Method (NEM) makes use of the surface discontinuity factors of the node and provides, for the reconstruction method, the effective multiplication factor of the problem and the four surface average fluxes in homogeneous nodes with the size of a fuel assembly (FA). The reconstruction process combines the 2D diffusion equation discretized by finite differences with the flux distributions on the four surfaces of the nodes. These distributions are obtained, for each surface, from a fourth-order one-dimensional (1D) polynomial expansion with five coefficients to be determined. The conditions necessary for determining the coefficients are three average fluxes on consecutive surfaces of three nodes and two fluxes at the corners between these three surface fluxes. The corner fluxes of the node are determined using a third-order 1D polynomial expansion with four coefficients. This reconstruction method uses heterogeneous nuclear parameters directly, providing the heterogeneous neutron flux distribution and the detailed nuclear power density distribution within the FAs. The results obtained with this method have good accuracy and efficiency when compared with reference values.

  8. Extraction and LOD control of colored interval volumes

    Science.gov (United States)

    Miyamura, Hiroko N.; Takeshima, Yuriko; Fujishiro, Issei; Saito, Takafumi

    2005-03-01

    Interval volume serves as a generalized isosurface and represents a three-dimensional subvolume for which the associated scalar filed values lie within a user-specified closed interval. In general, it is not an easy task for novices to specify the scalar field interval corresponding to their ROIs. In order to extract interval volumes from which desirable geometric features can be mined effectively, we propose a suggestive technique which extracts interval volumes automatically based on the global examination of the field contrast structure. Also proposed here is a simplification scheme for decimating resultant triangle patches to realize efficient transmission and rendition of large-scale interval volumes. Color distributions as well as geometric features are taken into account to select best edges to be collapsed. In addition, when a user wants to selectively display and analyze the original dataset, the simplified dataset is restructured to the original quality. Several simulated and acquired datasets are used to demonstrate the effectiveness of the present methods.

  9. Comparison of Two Methods Used to Model Shape Parameters of Pareto Distributions

    Science.gov (United States)

    Liu, C.; Charpentier, R.R.; Su, J.

    2011-01-01

    Two methods are compared for estimating the shape parameters of Pareto field-size (or pool-size) distributions for petroleum resource assessment. Both methods assume mature exploration in which most of the larger fields have been discovered. Both methods use the sizes of larger discovered fields to estimate the numbers and sizes of smaller fields: (1) the tail-truncated method uses a plot of field size versus size rank, and (2) the log-geometric method uses data binned in field-size classes and the ratios of adjacent bin counts. Simulation experiments were conducted using discovered oil and gas pool-size distributions from four petroleum systems in Alberta, Canada and using Pareto distributions generated by Monte Carlo simulation. The estimates of the shape parameters of the Pareto distributions, calculated by both the tail-truncated and log-geometric methods, generally stabilize where discovered pool numbers are greater than 100. However, with fewer than 100 discoveries, these estimates can vary greatly with each new discovery. The estimated shape parameters of the tail-truncated method are more stable and larger than those of the log-geometric method where the number of discovered pools is more than 100. Both methods, however, tend to underestimate the shape parameter. Monte Carlo simulation was also used to create sequences of discovered pool sizes by sampling from a Pareto distribution with a discovery process model using a defined exploration efficiency (in order to show how biased the sampling was in favor of larger fields being discovered first). A higher (more biased) exploration efficiency gives better estimates of the Pareto shape parameters. © 2011 International Association for Mathematical Geosciences.
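
    A sketch in the spirit of the first estimator, regressing log field size on log rank for the largest (i.e., "discovered") fields (the sizes here are simulated; a real analysis would also handle truncation more carefully):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    shape_true = 0.8
    sizes = 10.0 * (rng.pareto(shape_true, 500) + 1.0)   # simulated field sizes

    # Pareto tail: log(size) is linear in log(rank) with slope -1/shape
    top = np.sort(sizes)[::-1][:150]     # assume the 150 largest were discovered
    rank = np.arange(1, top.size + 1)
    slope = np.polyfit(np.log(rank), np.log(top), 1)[0]
    print(f"estimated shape = {-1.0/slope:.2f} (true {shape_true})")
    ```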

  10. A FPGA-based identity authority method in quantum key distribution system

    International Nuclear Information System (INIS)

    Cui Ke; Luo Chunli; Zhang Hongfei; Lin Shengzhao; Jin Ge; Wang Jian

    2012-01-01

    In this article, an identity authority method realized in hardware is developed for use in quantum key distribution (QKD) systems. This method is based on the LFSR-Toeplitz hashing matrix. Its benefits rely on its easy implementation in hardware and its high security coefficient. It can attain very high security by splitting off part of the final key generated by the QKD system as the seed required in the identity authority method. We propose a specific flow for the identity authority method according to the problems and features of the hardware. The proposed method can suit many kinds of QKD systems. (authors)
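
    For illustration, a minimal software sketch of Toeplitz-matrix hashing over GF(2), the primitive the FPGA method builds on (the sizes and bits are invented, and the LFSR-based generation of the Toeplitz diagonal is simplified to a random seed here):

    ```python
    import numpy as np

    def toeplitz_hash(msg_bits, seed_bits, out_len):
        """Authentication tag h = T m (mod 2); the Toeplitz matrix T is fully
        determined by its first row and column, i.e. by out_len + n - 1 seed bits."""
        n = len(msg_bits)
        assert len(seed_bits) == out_len + n - 1
        T = np.empty((out_len, n), dtype=np.uint8)
        for i in range(out_len):
            for j in range(n):
                T[i, j] = seed_bits[i - j + n - 1]   # constant along diagonals
        return T @ np.asarray(msg_bits, dtype=np.uint8) % 2

    rng = np.random.default_rng(6)
    msg = rng.integers(0, 2, 32)               # message bits to authenticate
    seed = rng.integers(0, 2, 16 + 32 - 1)     # shared secret (e.g. split-off key)
    print("tag:", toeplitz_hash(msg, seed, 16))
    ```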

  11. Overconfidence in Interval Estimates

    Science.gov (United States)

    Soll, Jack B.; Klayman, Joshua

    2004-01-01

    Judges were asked to make numerical estimates (e.g., "In what year was the first flight of a hot air balloon?"). Judges provided high and low estimates such that they were X% sure that the correct answer lay between them. They exhibited substantial overconfidence: The correct answer fell inside their intervals much less than X% of the time. This…

  12. ACORN—A new method for generating sequences of uniformly distributed Pseudo-random Numbers

    Science.gov (United States)

    Wikramaratna, R. S.

    1989-07-01

    A new family of pseudo-random number generators, the ACORN ( additive congruential random number) generators, is proposed. The resulting numbers are distributed uniformly in the interval [0, 1). The ACORN generators are defined recursively, and the ( k + 1)th order generator is easily derived from the kth order generator. Some theorems concerning the period length are presented and compared with existing results for linear congruential generators. A range of statistical tests are applied to the ACORN generators, and their performance is compared with that of the linear congruential generators and the Chebyshev generators. The tests show the ACORN generators to be statistically superior to the Chebyshev generators, while being statistically similar to the linear congruential generators. However, the ACORN generators execute faster than linear congruential generators for the same statistical faithfulness. The main advantages of the ACORN generator are speed of execution, long period length, and simplicity of coding.
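
    A compact sketch of a k-th order ACORN generator in integer arithmetic with modulus 2^60 (the seed and initial values are arbitrary choices for this example; the usual requirement is an odd base seed smaller than the modulus):

    ```python
    M = 1 << 60                    # modulus 2^60

    def acorn(order, seed, n):
        """k-th order ACORN: Y[m][i] = (Y[m-1][i] + Y[m][i-1]) mod M,
        with Y[0] constant; outputs X[i] = Y[order][i] / M in [0, 1)."""
        assert 0 < seed < M and seed % 2 == 1
        y = [seed] * (order + 1)   # current column Y[0..k]
        out = []
        for _ in range(n):
            for m in range(1, order + 1):
                y[m] = (y[m] + y[m - 1]) % M   # y[m-1] is already updated
            out.append(y[order] / M)
        return out

    print(acorn(order=10, seed=123456789012345, n=5))
    ```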

  13. A Study of Transmission Control Method for Distributed Parameters Measurement in Large Factories and Storehouses

    Directory of Open Access Journals (Sweden)

    Shujing Su

    2015-01-01

    For the dispersed parameters characteristic of large factories, storehouses, and other applications, a distributed parameter measurement system based on a ring network is designed. The structure of the system and the circuit design of the master and slave nodes are described briefly. The basic protocol architecture for transmission communication is introduced, and two kinds of distributed transmission control methods are proposed. Finally, the reliability, extendibility, and control characteristics of these two methods are tested through a series of experiments, and the measurement results are compared and discussed.

  14. Networked and Distributed Control Method with Optimal Power Dispatch for Islanded Microgrids

    DEFF Research Database (Denmark)

    Li, Qiang; Peng, Congbo; Chen, Minyou

    2017-01-01

    ...of controllable agents. The distributed control laws derived from the first subgraph guarantee the supply-demand balance, while further control laws from the second subgraph reassign the outputs of controllable distributed generators, which ensures that active and reactive power are dispatched optimally. However, ... according to our proposition. Finally, the method is evaluated over seven cases via simulation. The results show that the system performs as desired, even if environmental conditions and load demand fluctuate significantly. In summary, the method can rapidly respond to fluctuations, resulting in optimal ...

  15. An improved method for calculating force distributions in moment-stiff timber connections

    DEFF Research Database (Denmark)

    Ormarsson, Sigurdur; Blond, Mette

    2012-01-01

    An improved method for calculating force distributions in moment-stiff metal dowel-type timber connections is presented, a method based on use of three-dimensional finite element simulations of timber connections subjected to moment action. The study that was carried out aimed at determining how … the slip modulus varies with the angle between the direction of the dowel forces and the fibres in question, as well as how the orthotropic stiffness behaviour of the wood material affects the direction and the size of the forces. It was assumed that the force distribution generated by the moment action …
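
    For contrast with the improved FE-based approach, the conventional hand-calculation model that the paper refines assumes equal, direction-independent slip moduli, so that each dowel force is proportional to its distance from the rotation centre and acts perpendicular to the radius. A sketch of that baseline (coordinates and moment are hypothetical):

      import numpy as np

      def dowel_forces(coords, M):
          # Conventional model: F_i = M * r_i / sum(r_j^2), perpendicular to the radius
          # from the group centroid; ignores angle-dependent slip moduli and orthotropy.
          coords = np.asarray(coords, dtype=float)
          rel = coords - coords.mean(axis=0)          # centroid taken as rotation centre
          r = np.linalg.norm(rel, axis=1)
          F = M * r / np.sum(r**2)                    # force magnitudes
          perp = np.column_stack([-rel[:, 1], rel[:, 0]]) / r[:, None]
          return F[:, None] * perp                    # force vectors per dowel

      forces = dowel_forces([(0, 0), (100, 0), (0, 100), (100, 100)], M=2.0e6)  # mm, N*mm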

  16. Methods to determine fast-ion distribution functions from multi-diagnostic measurements

    DEFF Research Database (Denmark)

    Jacobsen, Asger Schou; Salewski, Mirko

    Understanding the behaviour of fast ions in a fusion plasma is very important, since the fusion-born alpha particles are expected to be the main source of heating in a fusion power plant. Preferably, the entire fast-ion velocity-space distribution function would be measured. However, no fast … With several fast-ion diagnostic views, it is possible to infer the distribution function using a tomography approach. Several inversion methods for solving this tomography problem in velocity space are implemented and compared. It is found that the best quality is obtained when using inversion methods which penalise steep …
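
    The tomography problem reduces to a linear system W f = s, with W built from the weight functions of the diagnostic views, f the velocity-space distribution and s the measured signals. A minimal sketch of first-order Tikhonov regularisation, one of the gradient-penalising inversions the abstract refers to (all matrices and the regularisation parameter are illustrative):

      import numpy as np

      def tikhonov_inversion(W, s, lam):
          # Solve min ||W f - s||^2 + lam^2 ||L f||^2 with L a first-difference
          # operator, so that steep gradients in f are penalised.
          n = W.shape[1]
          L = (np.eye(n) - np.eye(n, k=1))[:-1]
          A = np.vstack([W, lam * L])
          b = np.concatenate([s, np.zeros(L.shape[0])])
          f, *_ = np.linalg.lstsq(A, b, rcond=None)
          return f

      rng = np.random.default_rng(0)
      W = rng.random((40, 25))                        # 40 views x 25 velocity bins (hypothetical)
      f_true = np.exp(-np.linspace(0, 3, 25))
      s = W @ f_true + 0.01 * rng.standard_normal(40)
      f_est = tikhonov_inversion(W, s, lam=1.0)       # lam chosen e.g. via an L-curve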

  17. Distribution

    Science.gov (United States)

    John R. Jones

    1985-01-01

    Quaking aspen is the most widely distributed native North American tree species (Little 1971, Sargent 1890). It grows in a great diversity of regions, environments, and communities (Harshberger 1911). Only one deciduous tree species in the world, the closely related Eurasian aspen (Populus tremula), has a wider range (Weigle and Frothingham 1911)....

  18. A method to describe inelastic gamma field distribution in neutron gamma density logging.

    Science.gov (United States)

    Zhang, Feng; Zhang, Quanying; Liu, Juntao; Wang, Xinguang; Wu, He; Jia, Wenbao; Ti, Yongzhou; Qiu, Fei; Zhang, Xiaoyang

    2017-11-01

    Pulsed neutron gamma density logging (NGD) is of great significance for radioprotection and density measurement in LWD; however, current methods have difficulty with quantitative calculation and single-factor analysis of the inelastic gamma field distribution. In order to clarify the NGD mechanism, a new method is developed to describe the inelastic gamma field distribution. Based on fast-neutron scattering and gamma attenuation, the inelastic gamma field distribution is characterized by the inelastic scattering cross section, the fast-neutron scattering free path, the formation density and other parameters, and the contribution of each formation parameter to the field distribution is quantitatively analyzed. The results show that the contribution of density attenuation is opposite to that of the inelastic scattering cross section and the fast-neutron scattering free path, and that as the detector spacing increases, density attenuation gradually plays the dominant role in the gamma field distribution, which means a large detector spacing is more favorable for the density measurement. In addition, the relationship between density sensitivity and detector spacing was studied on the basis of this gamma field distribution, and the spacing of the near and far gamma-ray detectors was determined accordingly. The research provides theoretical guidance for tool parameter design and density determination in the pulsed neutron gamma density logging technique. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Design method of freeform light distribution lens for LED automotive headlamp based on DMD

    Science.gov (United States)

    Ma, Jianshe; Huang, Jianwei; Su, Ping; Cui, Yao

    2018-01-01

    We propose a new method to design a freeform light distribution lens for a light-emitting diode (LED) automotive headlamp based on a digital micromirror device (DMD). With the parallel optical path architecture, the exit pupil of the illuminating system is set at infinity, so the principal rays incident on the micromirrors of the DMD are parallel. The DMD is a high-speed digital optical reflection array; the function of the distribution lens is to distribute the parallel rays emerging from the DMD and obtain a lighting pattern that fully complies with the national regulation GB 25991-2010. We use a DLP 4500 to design the light distribution lens, mesh the target plane regulated by GB 25991-2010, and correlate the mesh grids with the active mirror array of the DLP 4500. With the mapping relations and the law of refraction, we build the mathematical model and obtain the parameters of the freeform light distribution lens. We then import the parameters into the three-dimensional (3D) software CATIA to construct the 3D model. Ray-tracing results using TracePro demonstrate that the illumination values on the target plane are easily adjustable by adjusting the exit brightness values of the DMD and fully comply with the requirements of GB 25991-2010. The theoretical optical efficiency of the light distribution lens designed with this method can be up to 92% without any auxiliary lens.

  20. Pediatric reference value distributions and covariate-stratified reference intervals for 29 endocrine and special chemistry biomarkers on the Beckman Coulter Immunoassay Systems: a CALIPER study of healthy community children.

    Science.gov (United States)

    Karbasy, Kimiya; Lin, Danny C C; Stoianov, Alexandra; Chan, Man Khun; Bevilacqua, Victoria; Chen, Yunqi; Adeli, Khosrow

    2016-04-01

    The CALIPER program is a national research initiative aimed at closing the gaps in pediatric reference intervals. CALIPER previously reported reference intervals for endocrine and special chemistry markers on Abbott immunoassays. We now report new pediatric reference intervals for immunoassays on the Beckman Coulter Immunoassay Systems and assess platform-specific differences in reference values. A total of 711 healthy children and adolescents from birth to … reference intervals calculated in accordance with Clinical and Laboratory Standards Institute (CLSI) EP28-A3c guidelines. Complex profiles were observed for all 29 analytes, necessitating unique age- and/or sex-specific partitions. Overall, the changes in analyte concentrations observed over the course of development were similar to trends previously reported and are consistent with the biochemical and physiological changes that occur during childhood. Marked differences were observed for some assays, including progesterone, luteinizing hormone and follicle-stimulating hormone, where reference intervals were higher than those reported on Abbott immunoassays, and parathyroid hormone, where intervals were lower. This study highlights the importance of determining reference intervals specific to each analytical platform. The CALIPER Pediatric Reference Interval database will enable accurate diagnosis and laboratory assessment of children monitored on Beckman Coulter Immunoassay Systems in health care institutions worldwide. These reference intervals must, however, be validated by individual laboratories for the local pediatric population, as recommended by CLSI.
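
    The CLSI EP28-A3c calculation mentioned above is, at its core, a rank-based estimate of the central 95% of the reference population within each partition. A minimal sketch (the data and partition are hypothetical; the guideline additionally recommends at least 120 reference subjects per partition and confidence limits on the interval bounds):

      import numpy as np

      def reference_interval(values, lower_pct=2.5, upper_pct=97.5):
          # Nonparametric reference interval: central 95% of the sorted reference values.
          v = np.sort(np.asarray(values, dtype=float))
          return np.percentile(v, lower_pct), np.percentile(v, upper_pct)

      rng = np.random.default_rng(0)
      tsh = rng.lognormal(mean=0.5, sigma=0.4, size=150)   # hypothetical one-partition data
      low, high = reference_interval(tsh)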

  1. Improving Distributed Denial of Service (DDOS Detection using Entropy Method in Software Defined Network (SDN

    Directory of Open Access Journals (Sweden)

    Maman Abdurohman

    2017-12-01

    This research proposes a new method to enhance Distributed Denial of Service (DDoS) attack detection in a Software Defined Network (SDN) environment. It utilizes the OpenFlow controller of the SDN for DDoS attack detection using a modified, entropy-based method. The method checks whether the traffic is normal traffic or a DDoS attack by measuring the randomness of the packets, and consists of two steps: detecting the attack and checking the entropy. The results show that the new method can reduce false positives when there is a temporary and sudden increase in normal traffic; the new method succeeds in not flagging this as a DDoS attack. Compared to previous methods, the proposed method enhances DDoS attack detection in an SDN environment.
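
    The randomness measure at the heart of such entropy-based detectors is usually the Shannon entropy of the destination addresses seen in a traffic window; a collapse of the entropy indicates traffic converging on a single victim. A minimal sketch (window size and threshold are hypothetical, not the paper's tuned values):

      import math
      from collections import Counter

      def normalized_entropy(dst_ips):
          # Shannon entropy of the destination-IP distribution, normalised to [0, 1].
          counts = Counter(dst_ips)
          n = sum(counts.values())
          H = -sum((c / n) * math.log2(c / n) for c in counts.values())
          return H / math.log2(len(counts)) if len(counts) > 1 else 0.0

      window = ["10.0.0.5"] * 45 + ["10.0.0.7", "10.0.0.9"] * 3   # hypothetical 51-packet window
      if normalized_entropy(window) < 0.5:                        # illustrative threshold
          print("possible DDoS: destination-IP entropy collapsed")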

  2. Thermodynamic method for generating random stress distributions on an earthquake fault

    Science.gov (United States)

    Barall, Michael; Harris, Ruth A.

    2012-01-01

    This report presents a new method for generating random stress distributions on an earthquake fault, suitable for use as initial conditions in a dynamic rupture simulation. The method employs concepts from thermodynamics and statistical mechanics. A pattern of fault slip is considered to be analogous to a micro-state of a thermodynamic system. The energy of the micro-state is taken to be the elastic energy stored in the surrounding medium. Then, the Boltzmann distribution gives the probability of a given pattern of fault slip and stress. We show how to decompose the system into independent degrees of freedom, which makes it computationally feasible to select a random state. However, due to the equipartition theorem, straightforward application of the Boltzmann distribution leads to a divergence which predicts infinite stress. To avoid equipartition, we show that the finite strength of the fault acts to restrict the possible states of the system. By analyzing a set of earthquake scaling relations, we derive a new formula for the expected power spectral density of the stress distribution, which allows us to construct a computer algorithm free of infinities. We then present a new technique for controlling the extent of the rupture by generating a random stress distribution thousands of times larger than the fault surface, and selecting a portion which, by chance, has a positive stress perturbation of the desired size. Finally, we present a new two-stage nucleation method that combines a small zone of forced rupture with a larger zone of reduced fracture energy.
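
    The report derives its own formula for the expected power spectral density of the stress field; that formula is not reproduced here. As a generic illustration of the final construction step, the sketch below synthesises a one-dimensional random field with a prescribed power-law spectral density by drawing random phases and inverse Fourier transforming (the exponent and length are placeholders):

      import numpy as np

      def random_field_from_psd(n, slope=-2.0, seed=0):
          # Random field whose PSD falls off as k**slope: amplitude = sqrt(PSD),
          # phases uniform in [0, 2*pi), then inverse real FFT.
          rng = np.random.default_rng(seed)
          k = np.fft.rfftfreq(n)
          amp = np.zeros_like(k)
          amp[1:] = k[1:] ** (slope / 2.0)
          spectrum = amp * np.exp(1j * rng.uniform(0, 2 * np.pi, size=k.size))
          field = np.fft.irfft(spectrum, n=n)
          return field / field.std()                 # zero-mean, unit-variance perturbation

      stress_perturbation = random_field_from_psd(4096)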

  3. Simulation by the method of inverse cumulative distribution function applied in optimising of foundry plant production

    Directory of Open Access Journals (Sweden)

    J. Szymszal

    2009-01-01

    The study discusses an application of computer simulation based on the method of the inverse cumulative distribution function. The simulation refers to an elementary static case, which can also be solved by physical experiment, consisting mainly of observations of foundry production in a selected foundry plant. For the simulation and forecasting of foundry production quality for a selected cast iron grade, the random number generator of an Excel spreadsheet was chosen. The very wide potential of this type of simulation, when applied to the evaluation of foundry production quality, was demonstrated: a uniformly distributed number generator is used to generate a variable of an arbitrary distribution, in particular a preset empirical distribution, without any need to fit smooth theoretical distributions to this variable.
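
    The core of the technique is inverse-transform sampling: a uniform variate is pushed through the inverse of the empirical cumulative distribution function. A minimal sketch in Python, mirroring what the spreadsheet generator does (the grades and probabilities are hypothetical):

      import numpy as np

      def sample_empirical(values, probabilities, size, seed=0):
          # Inverse-CDF sampling: find, for each uniform u, the first value whose
          # cumulative probability reaches u.
          cdf = np.cumsum(probabilities)
          u = np.random.default_rng(seed).uniform(size=size)
          return np.asarray(values)[np.searchsorted(cdf, u)]

      grades = [1, 2, 3, 4]                      # hypothetical quality classes of a cast iron grade
      probs = [0.10, 0.55, 0.30, 0.05]           # preset empirical distribution
      simulated = sample_empirical(grades, probs, size=1000)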

  4. ON ESTIMATION AND HYPOTHESIS TESTING OF THE GRAIN SIZE DISTRIBUTION BY THE SALTYKOV METHOD

    Directory of Open Access Journals (Sweden)

    Yuri Gulbin

    2011-05-01

    The paper considers the problem of the validity of unfolding the grain size distribution with the back-substitution method. Due to the ill-conditioned nature of unfolding matrices, it is necessary to evaluate the accuracy and precision of the parameter estimation and to verify the possibility of testing an expected grain size distribution on the basis of intersection size histogram data. In order to review these questions, computer modeling was used to compare size distributions obtained stereologically with those possessed by three-dimensional model aggregates of grains with a specified shape and random size. Results of the simulations are reported and ways of improving the conventional stereological techniques are suggested. It is shown that new improvements in the estimating and testing procedures enable grain size distributions to be unfolded more efficiently.

  5. Mathematical models and methods of assisting state subsidy distribution at the regional level

    Science.gov (United States)

    Bondarenko, Yu V.; Azarnova, T. V.; Kashirina, I. L.; Goroshko, I. V.

    2018-03-01

    One of the most common forms of state support in the world is subsidization. By providing direct financial support to businesses, local authorities get an opportunity to set certain performance targets. Successful achievement of such targets depends not only on the amount of the budgetary allocations, but also on the distribution mechanisms adopted by the regional authorities. Analysis of the existing mechanisms of subsidy distribution in Russian regions shows that in most cases the choice of the subsidy calculation formula and its parameters depends on the experts' subjective opinion. The authors offer a new approach to assisting subsidy distribution at the regional level, based on mathematical models and methods that make it possible to evaluate the influence of subsidy distribution on the region's social and economic development. The results of the calculations were discussed with representatives of the regional administration, who confirmed their significance for decision-making in the sphere of state control.

  6. Establishment of reference intervals for serum thyroid-stimulating hormone, free and total thyroxine, and free and total triiodothyronine for the Beckman Coulter DxI-800 analyzers by indirect method using data obtained from Chinese population in Zhejiang Province, China.

    Science.gov (United States)

    Wang, Yan; Zhang, Yu-Xia; Zhou, Yong-Lie; Xia, Jun

    2017-07-01

    In order to establish suitable reference intervals of thyroid-stimulating hormone (TSH), free (unbound) T4 (FT4), free triiodothyronine (FT3), total thyroxine (T4), and total triiodothyronine (T3) for patients in Zhejiang, China, an indirect method was developed using data from people presenting for routine health check-ups. Results from 15,956 persons were reviewed. Box-Cox or Case Rank transformation was used to bring the data to a normal distribution, and the Tukey and Box-Plot methods were used to exclude outliers. The nonparametric method was used to establish the reference intervals following the EP28-A3c guideline. Pearson correlation was used to evaluate the correlation between hormone levels and age, while the Mann-Whitney U test was employed to quantify concentration differences between people younger and older than 50 years. Reference intervals in males were 0.66-4.95 mIU/L (TSH), 8.97-14.71 pmol/L (FT4), 3.75-5.81 pmol/L (FT3), 73.45-138.93 nmol/L (total T4), and 1.24-2.18 nmol/L (total T3); reference intervals in females were 0.72-5.84 mIU/L (TSH), 8.62-14.35 pmol/L (FT4), 3.59-5.56 pmol/L (FT3), 73.45-138.93 nmol/L (total T4), and 1.20-2.10 nmol/L (total T3). FT4, FT3, and total T3 levels in males and FT4 levels in females were inversely correlated with age, while total T4 and TSH levels in females were directly correlated with age. Significant differences in these hormones were also found between those younger and older than 50 years, except for FT3 in females. The indirect method can be applied to establish reference intervals for TSH, FT4, FT3, total T4, and total T3. The resulting reference intervals are narrower than those previously established, and the age factor should also be considered. © 2016 Wiley Periodicals, Inc.
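
    The processing pipeline described above lends itself to a compact sketch: Box-Cox transformation toward normality, Tukey fences to exclude outliers on the transformed scale, then nonparametric percentiles on the retained data. The data below are simulated placeholders, not the study's check-up results:

      import numpy as np
      from scipy import stats

      def indirect_reference_interval(results):
          # Indirect method sketch: Box-Cox, Tukey fences (1.5 * IQR), then the
          # nonparametric 2.5th/97.5th percentiles on the original scale.
          x = np.asarray(results, dtype=float)
          x = x[x > 0]                               # Box-Cox requires positive data
          xt, lam = stats.boxcox(x)
          q1, q3 = np.percentile(xt, [25, 75])
          keep = (xt >= q1 - 1.5 * (q3 - q1)) & (xt <= q3 + 1.5 * (q3 - q1))
          return tuple(np.percentile(x[keep], [2.5, 97.5]))

      rng = np.random.default_rng(1)
      tsh_checkup = rng.lognormal(0.6, 0.5, size=5000)   # hypothetical routine check-up TSH values
      low, high = indirect_reference_interval(tsh_checkup)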

  7. A new evolutionary solution method for dynamic expansion planning of DG-integrated primary distribution networks

    International Nuclear Information System (INIS)

    Ahmadigorji, Masoud; Amjady, Nima

    2014-01-01

    Highlights: • A new dynamic distribution network expansion planning model is presented. • A Binary Enhanced Particle Swarm Optimization (BEPSO) algorithm is proposed. • A Modified Differential Evolution (MDE) algorithm is proposed. • A new bi-level optimization approach composed of BEPSO and MDE is presented. • The effectiveness of the proposed optimization approach is extensively illustrated. - Abstract: Restructuring of the power system and the emergence of new technologies for electrical energy generation have led to significant innovation in Distribution Network Expansion Planning (DNEP). Distributed Generation (DG) involves the application of small/medium generation units located in power distribution networks and/or near the load centers. Appropriate utilization of DG can affect various technical and operational indices of the distribution network, such as feeder loading, energy losses and the voltage profile. In addition, application of DG of proper size is an essential tool for achieving the maximum potential benefits of DG. In this paper, a time-based (dynamic) model for DNEP is proposed to determine the optimal size, location and installation year of DG in a distribution system. In this model, an Optimal Power Flow (OPF) is also exerted to determine the optimal generation of DGs for every potential solution, in order to minimize the investment and operation costs following the load growth over a specified planning period. Besides, the reinforcement requirements of existing distribution feeders are considered simultaneously. The proposed optimization problem is solved by the combination of the evolutionary methods of a new Binary Enhanced Particle Swarm Optimization (BEPSO) and a Modified Differential Evolution (MDE), to find the optimal expansion strategy and solve the OPF, respectively. The proposed planning approach is applied to two typical primary distribution networks and compared with several other methods. These comparisons illustrate the

  8. A method for the calculation of the cumulative failure probability distribution of complex repairable systems

    International Nuclear Information System (INIS)

    Caldarola, L.

    1976-01-01

    A method is proposed for the analytical evaluation of the cumulative failure probability distribution of complex repairable systems. The method is based on a set of integral equations, each one referring to a specific minimal cut set of the system. Each integral equation links the unavailability of a minimal cut set to its failure probability density distribution and to the probability that the minimal cut set is down at time t under the condition that it was down at time t' (t' ≤ t). The limitations on the applicability of the method are also discussed. It is concluded that the method is applicable if the process describing the failure of a minimal cut set is a 'delayed semi-regenerative process'. (Auth.)

  9. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition of generic run-time error types, design of methods of observing application software behavior during execution, and design of methods of evaluating run-time constraints. In the definition of error types, it is attempted to cover all relevant aspects of the application software behavior. Methods of observation and constraint evaluation are designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design …

  10. Prestressing force monitoring method for a box girder through distributed long-gauge FBG sensors

    Science.gov (United States)

    Chen, Shi-Zhi; Wu, Gang; Xing, Tuo; Feng, De-Cheng

    2018-01-01

    Monitoring prestressing forces is essential for prestressed concrete box girder bridges. However, current monitoring methods for prestressing force are not applicable to a box girder, either because the sensor setup is constrained or because the shear lag effect is not properly considered. Building on a previous analysis model of the shear lag effect in box girders, this paper proposes an indirect method for on-site determination of the prestressing force in a concrete box girder utilizing distributed long-gauge fiber Bragg grating sensors. The performance of the method was first verified by numerical simulation for three different distribution forms of prestressing tendons. An experiment involving two concrete box girders was then conducted as a preliminary study of the method's feasibility under different prestressing levels. The results of both the numerical simulation and the laboratory experiment validate the method's practicability for a box girder.

  11. Voltage Based Detection Method for High Impedance Fault in a Distribution System

    Science.gov (United States)

    Thomas, Mini Shaji; Bhaskar, Namrata; Prakash, Anupama

    2016-09-01

    High-impedance faults (HIFs) on distribution feeders cannot be detected by conventional protection schemes, as HIFs are characterized by their low fault current level and by waveform distortion due to the nonlinearity of the ground return path. This paper proposes a method to identify HIFs in a distribution system and isolate the faulty section, thereby reducing downtime. The method is based on voltage measurements along the distribution feeder and utilizes the sequence components of the voltages. Three models of high-impedance faults have been considered, and both source-side and load-side breaking of the conductor have been studied in order to capture a wide range of scenarios. The effect of the neutral grounding of the source-side transformer is also accounted for in this study. The results show that the algorithm detects HIFs accurately and rapidly, so the faulty section can be isolated and service restored to the rest of the consumers.
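
    The sequence components that the method relies on follow from the standard symmetrical-component transform of the three phase voltages. A minimal sketch (the phasors and the unbalance check are illustrative; the paper's actual detection logic and thresholds are not reproduced here):

      import numpy as np

      A = np.exp(2j * np.pi / 3)      # 120-degree rotation operator "a"

      def sequence_components(va, vb, vc):
          # Zero-, positive- and negative-sequence phasors from the phase voltages.
          v0 = (va + vb + vc) / 3
          v1 = (va + A * vb + A**2 * vc) / 3
          v2 = (va + A**2 * vb + A * vc) / 3
          return v0, v1, v2

      va = 1.00 * np.exp(1j * 0.00)                        # hypothetical per-unit phasors
      vb = 0.93 * np.exp(1j * (-2 * np.pi / 3 + 0.05))     # slight unbalance on phase b
      vc = 1.01 * np.exp(1j * (+2 * np.pi / 3))
      v0, v1, v2 = sequence_components(va, vb, vc)
      unbalance = abs(v2) / abs(v1)   # an elevated ratio along the feeder can indicate a HIF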

  12. A computed torque method based attitude control with optimal force distribution for articulated body mobile robots

    International Nuclear Information System (INIS)

    Fukushima, Edwardo F.; Hirose, Shigeo

    2000-01-01

    This paper introduces an attitude control scheme based on optimal force distribution using quadratic programming, which minimizes joint energy consumption. The method shares similarities with force distribution for multifingered hands, multiple coordinated manipulators and legged walking robots. In particular, an attitude control scheme was introduced inside the force distribution problem and successfully implemented for control of the articulated body mobile robot KR-II. This is an actual mobile robot composed of cylindrical segments linked in series by prismatic joints, with a long snake-like appearance. The prismatic joints are force controlled so that each segment's vertical motion can automatically follow terrain irregularities. Attitude control is necessary because the system acts like a train of wheeled inverted-pendulum carts connected in series, which is unstable by nature. The validity and effectiveness of the proposed method are verified by computer simulation and by experiments with the robot KR-II. (author)
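
    When the only constraint is that the joint forces produce the required net force and moment, the energy-minimising quadratic program has a closed-form least-norm solution via the pseudoinverse. The sketch below shows that reduced case with hypothetical numbers; the KR-II controller itself solves a full QP with additional limits:

      import numpy as np

      def distribute_forces(A, w):
          # Minimise ||f||^2 subject to A f = w: least-norm solution f = pinv(A) @ w.
          return np.linalg.pinv(A) @ w

      # Hypothetical planar example: rows give the net vertical force and pitching
      # moment as linear functions of four joint forces at positions -1.5 .. 1.5 m.
      A = np.array([[1.0, 1.0, 1.0, 1.0],
                    [-1.5, -0.5, 0.5, 1.5]])
      w = np.array([800.0, 20.0])      # required total force (N) and moment (N*m)
      f = distribute_forces(A, w)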

  13. Cellular Neural Network-Based Methods for Distributed Network Intrusion Detection

    Directory of Open Access Journals (Sweden)

    Kang Xie

    2015-01-01

    To address the problems of current distributed-architecture intrusion detection systems (DIDS), a new online distributed intrusion detection model based on cellular neural networks (CNN) is proposed, in which a discrete-time CNN (DTCNN) is used as the weak classifier in each local node and a state-controlled CNN (SCCNN) is used as the global detection method. We further propose a new method for designing the template parameters of the SCCNN by solving a linear matrix inequality. Experimental results based on the KDD CUP 99 dataset show the model's feasibility and effectiveness. Emerging evidence indicates that this new approach is amenable to parallelism and to analog very-large-scale integration (VLSI) implementation, which allows the distributed intrusion detection to be performed better.

  14. An analog computer method for solving flux distribution problems in multi region nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Radanovic, L; Bingulac, S; Lazarevic, B; Matausek, M [Boris Kidric Institute of Nuclear Sciences Vinca, Beograd (Yugoslavia)

    1963-04-15

    The paper describes a method developed for determining criticality conditions and plotting flux distribution curves in multi region nuclear reactors on a standard analog computer. The method, which is based on the one-dimensional two group treatment, avoids iterative procedures normally used for boundary value problems and is practically insensitive to errors in initial conditions. The amount of analog equipment required is reduced to a minimum and is independent of the number of core regions and reflectors. (author)

  15. Conventional and Alternative Disinfection Methods of Legionella in Water Distribution Systems – Review

    Directory of Open Access Journals (Sweden)

    Pūle Daina

    2016-12-01

    The prevalence of Legionella in drinking water distribution systems is a widespread problem. Outbreaks of diseases caused by Legionella occur even though various disinfectants are used to control it. Conventional methods such as thermal disinfection, silver/copper ionization, ultraviolet irradiation and chlorine-based disinfection have not been effective in the long term for the control of biofilm bacteria. Therefore, research to develop more effective disinfection methods is still necessary.

  16. A method for exploring the distribution of radioelements at depth using gamma-ray spectrometric data

    International Nuclear Information System (INIS)

    Li Qingyang

    1997-01-01

    Based on the inherent relation between radioelements and terrestrial heat flow, the paper theoretically shows the possibility of exploring the distribution of radioelements at depth using gamma-ray spectrometric data, and a data-processing and synthesizing method is adopted to derive the calculation formula. Practical application in the uranium mineralized area No. 2801 in Yunnan Province proves that this method is of practical value; it has been successfully applied to data processing and good results have been obtained.

  17. Methods to estimate distribution and range extent of grizzly bears in the Greater Yellowstone Ecosystem

    Science.gov (United States)

    Haroldson, Mark A.; Schwartz, Charles C.; Thompson, Daniel J.; Bjornlie, Daniel D.; Gunther, Kerry A.; Cain, Steven L.; Tyers, Daniel B.; Frey, Kevin L.; Aber, Bryan C.

    2014-01-01

    The distribution of the Greater Yellowstone Ecosystem grizzly bear (Ursus arctos) population has expanded into areas unoccupied since the early 20th century. Up-to-date information on the area and extent of this distribution is crucial for federal, state, and tribal wildlife and land managers to make informed decisions regarding grizzly bear management. The most recent estimate of grizzly bear distribution (2004) utilized fixed-kernel density estimators to describe distribution. This method was complex and computationally time consuming and excluded observations of unmarked bears. Our objective was to develop a technique to estimate grizzly bear distribution that would allow for the use of all verified grizzly bear location data, as well as provide the simplicity to be updated more frequently. We placed all verified grizzly bear locations from all sources from 1990 to 2004 and 1990 to 2010 onto a 3-km × 3-km grid and used zonal analysis and ordinary kriging to develop a predicted surface of grizzly bear distribution. We compared the area and extent of the 2004 kriging surface with the previous 2004 effort and evaluated changes in grizzly bear distribution from 2004 to 2010. The 2004 kriging surface was 2.4% smaller than the previous fixed-kernel estimate, but more closely represented the data. Grizzly bear distribution increased 38.3% from 2004 to 2010, with most expansion in the northern and southern regions of the range. This technique can be used to provide a current estimate of grizzly bear distribution for management and conservation applications.

  18. Multisite-multivariable sensitivity analysis of distributed watershed models: enhancing the perceptions from computationally frugal methods

    Science.gov (United States)

    This paper assesses the impact of different likelihood functions in identifying sensitive parameters of the highly parameterized, spatially distributed Soil and Water Assessment Tool (SWAT) watershed model for multiple variables at multiple sites. The global one-factor-at-a-time (OAT) method of Morr...

  19. Projection methods for the analysis of molecular-frame photoelectron angular distributions

    International Nuclear Information System (INIS)

    Lucchese, R.R.; Montuoro, R.; Grum-Grzhimailo, A.N.; Liu, X.-J.; Pruemper, G.; Morishita, Y.; Saito, N.; Ueda, K.

    2007-01-01

    The analysis of the molecular-frame photoelectron angular distributions (MFPADs) is discussed within the dipole approximation. The general expressions are reviewed and strategies for extracting the maximum amount of information from different types of experimental measurements are considered. The analysis of the N 1s photoionization of NO is given to illustrate the method

  20. Distributed Solutions for Loosely Coupled Feasibility Problems Using Proximal Splitting Methods

    DEFF Research Database (Denmark)

    Pakazad, Sina Khoshfetrat; Andersen, Martin Skovgaard; Hansson, Anders

    2014-01-01

    In this paper,we consider convex feasibility problems (CFPs) where the underlying sets are loosely coupled, and we propose several algorithms to solve such problems in a distributed manner. These algorithms are obtained by applying proximal splitting methods to convex minimization reformulations ...

  1. A simple nodal force distribution method in refined finite element meshes

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jai Hak [Chungbuk National University, Chungju (Korea, Republic of); Shin, Kyu In [Gentec Co., Daejeon (Korea, Republic of); Lee, Dong Won [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Cho, Seungyon [National Fusion Research Institute, Daejeon (Korea, Republic of)

    2017-05-15

    In finite element analyses, mesh refinement is frequently performed to obtain accurate stress or strain values or to define the geometry accurately. After mesh refinement, equivalent nodal forces should be calculated at the nodes of the refined mesh. If field variables and material properties are available at the integration points of each element, the equivalent nodal forces can be calculated accurately using an adequate numerical integration. However, in certain circumstances equivalent nodal forces cannot be calculated in this way because field variable data are not available. In this study, a very simple nodal force distribution method is proposed: nodal forces of the original finite element mesh are distributed to the nodes of the refined mesh so as to satisfy the equilibrium conditions, with the effect of element size also considered in determining the magnitudes of the distributed nodal forces. A program was developed based on the proposed method, and several example problems were solved to verify its accuracy and effectiveness. The results show that accurate stress fields can be obtained from refined meshes using the proposed nodal force distribution method. In the example problems, the difference between the obtained maximum stress and the target stress value was less than 6% in models with 8-node hexahedral elements and less than 1% in models with 20-node hexahedral elements or 10-node tetrahedral elements.

  2. The overlapping distribution method to compute chemical potentials of chain molecules

    NARCIS (Netherlands)

    Mooij, G.C.A.M.; Frenkel, D.

    1994-01-01

    The chemical potential of continuously deformable chain molecules can be estimated by measuring the average Rosenbluth weight associated with the virtual insertion of a molecule. We show how to generalize the overlapping-distribution method of Bennett to histograms of Rosenbluth weights. In this way

  3. Analysis of the distribution of X-ray characteristic production using the Monte Carlo methods

    International Nuclear Information System (INIS)

    Del Giorgio, Marcelo; Brizuela, Horacio; Riveros, J.A.

    1987-01-01

    The Monte Carlo method has been applied for the simulation of electron trajectories in a bulk sample, and therefore for the distribution of signals produced in an electron microprobe. Results for the function φ(ρz) are compared with experimental data. Some conclusions are drawn with respect to the parameters involved in the gaussian model. (Author) [es

  4. An evaluation of the methods of determining excited state population distributions from sputtering sources

    International Nuclear Information System (INIS)

    Snowdon, K.J.; Andresen, B.; Veje, E.

    1978-01-01

    The method of calculating relative initial level populations of excited states of sputtered atoms is developed in principle and compared with those in current use. The reason that the latter, although mathematically different, have generally led to similar population distributions is outlined. (Auth.)

  5. An Empirical Method to Fuse Partially Overlapping State Vectors for Distributed State Estimation

    NARCIS (Netherlands)

    Sijs, J.; Hanebeck, U.; Noack, B.

    2013-01-01

    State fusion is a method for merging multiple estimates of the same state into a single fused estimate. Dealing with multiple estimates is one of the main concerns in distributed state estimation, where an estimated value of the desired state vector is computed in each node of a networked system.

  6. Air method measurements of apple vessel length distributions with improved apparatus and theory

    Science.gov (United States)

    Shabtal Cohen; John Bennink; Mel Tyree

    2003-01-01

    Studies showing that rootstock dwarfing potential is related to plant hydraulic conductance led to the hypothesis that xylem properties are also related. Vessel length distribution and other properties of apple wood from a series of varieties were measured using the 'air method' in order to test this hypothesis. Apparatus was built to measure and monitor...

  7. A method to calculate flux distribution in reactor systems containing materials with grain structure

    International Nuclear Information System (INIS)

    Stepanek, J.

    1980-01-01

    A method is proposed to compute the spatial distribution of the neutron flux in slab, spherical or cylindrical systems containing zones with a close grain structure of the material. Several different types of uniformly distributed particles embedded in the matrix material are allowed in one or more zones, and the multi-energy-group structure of the flux is considered. The collision probability method is used to compute the fluxes in the grains and in an 'effective' part of the matrix material; the overall structure of the flux distribution in the zones with homogenized materials is then determined using the DPN 'surface flux' method. The two computations are coupled through the balance equation during the outer iterations. The proposed method is implemented in the code SURCU-DH. Two test cases are computed and discussed: the eigenvalue computation, in simplified slab geometry, of an LWR containing one zone with boral grains uniformly distributed in an aluminium matrix, and the eigenvalue computation, in spherical geometry, of an HTR pebble-bed cell with spherical particles embedded in a graphite matrix. The results are compared to those obtained by repeated use of the WIMS code. (author)

  8. Distance Determination Method for Normally Distributed Obstacle Avoidance of Mobile Robots in Stochastic Environments

    Directory of Open Access Journals (Sweden)

    Jinhong Noh

    2016-04-01

    Obstacle avoidance methods require knowledge of the distance between a mobile robot and the obstacles in its environment. In stochastic environments, however, distance determination is difficult because objects have position uncertainty. The purpose of this paper is to determine the distance between a robot and obstacles represented by probability distributions. Distance determination for obstacle avoidance should take into account position uncertainty, computational cost and collision probability; the proposed method considers all of these conditions, unlike conventional methods. It determines the obstacle region using a threshold on the collision probability density, defines a minimum distance function to the boundary of the obstacle region with a Lagrange multiplier method, and computes the distance numerically. Simulations were executed to compare the performance of distance determination methods, and our method demonstrated faster and more accurate performance than conventional methods. It may help overcome position uncertainty issues pertaining to obstacle avoidance, such as low-accuracy sensors, environments with poor visibility or unpredictable obstacle motion.
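
    Conceptually, the paper's distance function amounts to minimising the Euclidean distance subject to lying on the level set of the collision probability density, which is exactly a Lagrange-multiplier problem. A numerical sketch for a Gaussian obstacle, using a general constrained optimiser in place of the paper's dedicated solver (all positions, covariances and the threshold are hypothetical):

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import multivariate_normal

      mu = np.array([3.0, 2.0])                          # obstacle mean position
      cov = np.array([[0.30, 0.05], [0.05, 0.20]])       # position uncertainty
      pdf = multivariate_normal(mu, cov).pdf
      threshold = 0.05                                   # density level defining the obstacle region
      robot = np.array([0.0, 0.0])

      # Minimise the distance to the robot subject to staying on the density level
      # set; the optimiser enforces the Lagrangian stationarity conditions numerically.
      res = minimize(lambda x: np.linalg.norm(x - robot), x0=mu,
                     constraints=[{"type": "eq", "fun": lambda x: pdf(x) - threshold}])
      distance_to_obstacle = res.fun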

  9. A new hydraulic regulation method on district heating system with distributed variable-speed pumps

    International Nuclear Information System (INIS)

    Wang, Hai; Wang, Haiying; Zhu, Tong

    2017-01-01

    Highlights: • A hydraulic regulation method was presented for district heating with distributed variable-speed pumps. • Information and automation technologies were utilized to support the proposed method. • A new hydraulic model was developed for distributed variable-speed pumps. • A new optimization model was developed based on a genetic algorithm. • Two scenarios of a multi-source looped system are illustrated to validate the method. - Abstract: Compared with a hydraulic configuration based on a conventional central circulating pump, a district heating system with a distributed variable-speed-pump configuration can often save 30–50% of the power consumption of circulating pumps equipped with frequency inverters. However, hydraulic regulation of a distributed variable-speed-pump configuration can be more complicated than ever, since all the distributed pumps need to be adjusted to their designated flow rates. Especially in a multi-source looped heating network, where the distributed pumps have strongly coupled and severely non-linear hydraulic connections with each other, it is rather difficult to maintain the hydraulic balance during regulation. In this paper, with the help of advanced automation and information technologies, a new hydraulic regulation method is proposed to achieve on-site hydraulic balance for district heating systems with a distributed variable-speed-pump configuration. The proposed method comprises a new hydraulic model, developed to suit the distributed variable-speed-pump configuration, and a calibration model with a genetic algorithm. By carrying out the proposed method step by step, the flow rates of all distributed pumps can be progressively adjusted to their designated values. A hypothetical district heating system with 2 heat sources and 10 substations is taken as a case study to illustrate the feasibility of the proposed method. Two scenarios are investigated. In Scenario I, the

  10. Evaluation of the Environmental DNA Method for Estimating Distribution and Biomass of Submerged Aquatic Plants.

    Science.gov (United States)

    Matsuhashi, Saeko; Doi, Hideyuki; Fujiwara, Ayaka; Watanabe, Sonoko; Minamoto, Toshifumi

    2016-01-01

    The environmental DNA (eDNA) method has increasingly been recognized as a powerful tool for monitoring aquatic animal species; however, its application for monitoring aquatic plants is limited. To evaluate eDNA analysis for estimating the distribution of aquatic plants, we compared its estimated distributions with eDNA analysis, visual observation, and past distribution records for the submerged species Hydrilla verticillata. Moreover, we conducted aquarium experiments using H. verticillata and Egeria densa and analyzed the relationships between eDNA concentrations and plant biomass to investigate the potential for biomass estimation. The occurrences estimated by eDNA analysis closely corresponded to past distribution records, and eDNA detections were more frequent than visual observations, indicating that the method is potentially more sensitive. The results of the aquarium experiments showed a positive relationship between plant biomass and eDNA concentration; however, the relationship was not always significant. The eDNA concentration peaked within three days of the start of the experiment in most cases, suggesting that plants do not release constant amounts of DNA. These results showed that eDNA analysis can be used for distribution surveys, and has the potential to estimate the biomass of aquatic plants.

  11. Development of advanced methods for planning electric energy distribution systems. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Goenen, T.; Foote, B.L.; Thompson, J.C.; Fagan, J.E.

    1979-10-01

    An extensive search was made to identify and collect reports published in the open literature that describe distribution planning methods and techniques. In addition, a questionnaire was prepared and sent to a large number of electric power utility companies. Many of these companies were visited and/or their distribution planners interviewed to identify and describe the distribution system planning methods and techniques used by these electric power utility companies and other commercial entities. Distribution system planning models were reviewed, and a set of new mixed-integer programming models was developed for the optimal expansion of distribution systems. The models help the planner to select: (1) optimum substation locations; (2) optimum substation expansions; (3) optimum substation transformer sizes; (4) optimum load transfers between substations; (5) optimum feeder routes and sizes, subject to a set of specified constraints. The models permit following existing rights-of-way and avoid areas where feeders and substations cannot be constructed. The results of the computer runs were analyzed for adequacy in serving projected loads within regulation limits for both normal and emergency operation.

  12. Packing simulation code to calculate distribution function of hard spheres by Monte Carlo method : MCRDF

    International Nuclear Information System (INIS)

    Murata, Isao; Mori, Takamasa; Nakagawa, Masayuki; Shirai, Hiroshi.

    1996-03-01

    High Temperature Gas-cooled Reactors (HTGRs) employ spherical fuels named coated fuel particles (CFPs), consisting of a microsphere of low-enriched UO2 with coating layers to prevent FP release, and many such spherical fuels are distributed randomly in the cores. The nuclear design of HTGRs is therefore generally performed on the basis of the multigroup approximation using a diffusion code, an S N transport code or a group-wise Monte Carlo code. This report summarizes a Monte Carlo hard-sphere packing simulation code that simulates the packing of equal hard spheres and evaluates the probability distributions needed by the new Monte Carlo calculation method developed to treat randomly distributed spherical fuels with the continuous-energy Monte Carlo method. The code yields various statistical quantities, namely the radial distribution function (RDF), the nearest-neighbor distribution (NND), the 2-dimensional RDF and so on, for random packing as well as for the ordered close packings FCC and BCC. (author)
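
    The central quantity such a code evaluates, the radial distribution function g(r), can be estimated from sphere-centre coordinates by histogramming pair distances and normalising each shell by its ideal-gas expectation. A simplified sketch that ignores boundary effects (box size, bin width and the random centres are placeholders):

      import numpy as np

      def radial_distribution(centres, box, dr=0.05, r_max=2.0):
          # g(r): pair-distance histogram divided by the ideal-gas count per shell.
          centres = np.asarray(centres)
          n = len(centres)
          rho = n / box**3
          d = np.linalg.norm(centres[:, None, :] - centres[None, :, :], axis=-1)
          d = d[np.triu_indices(n, k=1)]                 # each pair counted once
          hist, edges = np.histogram(d, bins=np.arange(dr, r_max + dr, dr))
          shell = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
          g = hist / (0.5 * n * rho * shell)
          return 0.5 * (edges[1:] + edges[:-1]), g

      rng = np.random.default_rng(2)
      r, g = radial_distribution(rng.uniform(0, 10, size=(500, 3)), box=10.0)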

  13. Interval Forecast for Smooth Transition Autoregressive Model ...

    African Journals Online (AJOL)

    In this paper, we propose a simple method for constructing interval forecasts for the smooth transition autoregressive (STAR) model. The interval forecast is based on bootstrapping the residual errors of the estimated STAR model for each forecast horizon and computing various Akaike information criterion (AIC) functions. This new ...
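
    The bootstrap scheme itself is generic: fit the model, resample its residuals, simulate forward, and read off percentile bounds at each horizon. The sketch below uses a plain AR(1) stand-in for the estimated STAR model, so the fitting step is illustrative only:

      import numpy as np

      def bootstrap_interval(y, horizon=5, B=1000, seed=0):
          # Residual bootstrap: fit y_t = phi*y_(t-1) + c, resample the residuals,
          # simulate B future paths, return 95% percentile intervals per horizon.
          rng = np.random.default_rng(seed)
          y = np.asarray(y, dtype=float)
          phi, c = np.polyfit(y[:-1], y[1:], 1)
          resid = y[1:] - (phi * y[:-1] + c)
          paths = np.empty((B, horizon))
          for b in range(B):
              x = y[-1]
              for h in range(horizon):
                  x = phi * x + c + rng.choice(resid)
                  paths[b, h] = x
          return np.percentile(paths, [2.5, 97.5], axis=0)

      series = 0.1 * np.cumsum(np.random.default_rng(3).standard_normal(200))
      lower, upper = bootstrap_interval(series)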

  14. Confidence interval procedures for Monte Carlo transport simulations

    International Nuclear Information System (INIS)

    Pederson, S.P.

    1997-01-01

    The problem of obtaining valid confidence intervals based on estimates from sampled distributions using Monte Carlo particle transport simulation codes such as MCNP is examined. Such intervals can cover the true parameter of interest at a lower than nominal rate if the sampled distribution is extremely right-skewed by large tallies. Modifications to the standard theory of confidence intervals are discussed and compared with some existing heuristics, including batched means normality tests. Two new types of diagnostics are introduced to assess whether the conditions of central limit theorem-type results are satisfied: the relative variance of the variance determines whether the sample size is sufficiently large, and estimators of the slope of the right tail of the distribution are used to indicate the number of moments that exist. A simulation study is conducted to quantify the relationship between various diagnostics and coverage rates and to find sample-based quantities useful in indicating when intervals are expected to be valid. Simulated tally distributions are chosen to emulate behavior seen in difficult particle transport problems. Measures of variation in the sample variance s 2 are found to be much more effective than existing methods in predicting when coverage will be near nominal rates. Batched means tests are found to be overly conservative in this regard. A simple but pathological MCNP problem is presented as an example of false convergence using existing heuristics. The new methods readily detect the false convergence and show that the results of the problem, which are a factor of 4 too small, should not be used. Recommendations are made for applying these techniques in practice, using the statistical output currently produced by MCNP
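
    One of the diagnostics named above, the relative variance of the variance (VOV), has a simple sample estimator; large values warn that the variance estimate behind the interval is itself unreliable. A sketch with a deliberately right-skewed sample (the warning level shown is a commonly quoted rule of thumb, not a strict bound):

      import numpy as np

      def variance_of_variance(x):
          # VOV = sum((x - mean)^4) / (sum((x - mean)^2))^2 - 1/N.
          x = np.asarray(x, dtype=float)
          d = x - x.mean()
          return np.sum(d**4) / np.sum(d**2) ** 2 - 1.0 / x.size

      tallies = np.random.default_rng(4).pareto(2.2, size=100_000)  # heavy right tail
      if variance_of_variance(tallies) > 0.1:                       # illustrative rule of thumb
          print("variance estimate unreliable; interval coverage suspect")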

  15. Correct Bayesian and frequentist intervals are similar

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1986-01-01

    This paper argues that Bayesians and frequentists will normally reach numerically similar conclusions when dealing with vague or sparse data. It is shown that both statistical methodologies can deal reasonably with vague data. With sparse data, in many important practical cases Bayesian interval estimates and frequentist confidence intervals are approximately equal, although with discrete data the frequentist intervals are somewhat longer. This is not to say that the two methodologies are equally easy to use: the construction of a frequentist confidence interval may require new theoretical development, while Bayesian methods typically require numerical integration, perhaps over many variables. Also, Bayesians can easily fall into the trap of over-optimism about their amount of prior knowledge. But in cases where both intervals are found correctly, the two intervals are usually not very different. (orig.)

  16. S-curve networks and an approximate method for estimating degree distributions of complex networks

    International Nuclear Information System (INIS)

    Guo Jin-Li

    2010-01-01

    In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. According to statistics from the China Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model by using S curve (logistic curve). The growing trend of IPv4 addresses in China is forecasted. There are some reference values for optimizing the distribution of IPv4 address resource and the development of IPv6. Based on the laws of IPv4 growth, that is, the bulk growth and the finitely growing limit, it proposes a finite network model with a bulk growth. The model is said to be an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., Barabási-Albert method) is not suitable for the network. It develops an approximate method to predict the growth dynamics of the individual nodes, and uses this to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees with the simulation well, obeying an approximately power-law form. This method can overcome a shortcoming of Barabási-Albert method commonly used in current network research. (general)

  17. S-curve networks and an approximate method for estimating degree distributions of complex networks

    Science.gov (United States)

    Guo, Jin-Li

    2010-12-01

    In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. According to statistics from the China Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model by using S curve (logistic curve). The growing trend of IPv4 addresses in China is forecasted. There are some reference values for optimizing the distribution of IPv4 address resource and the development of IPv6. Based on the laws of IPv4 growth, that is, the bulk growth and the finitely growing limit, it proposes a finite network model with a bulk growth. The model is said to be an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., Barabási-Albert method) is not suitable for the network. It develops an approximate method to predict the growth dynamics of the individual nodes, and uses this to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees with the simulation well, obeying an approximately power-law form. This method can overcome a shortcoming of Barabási-Albert method commonly used in current network research.
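
    Fitting the S curve to cumulative counts is a small nonlinear least-squares problem. A sketch with hypothetical yearly totals standing in for the IPv4 address statistics (the three parameters are the saturation level K, growth rate r and midpoint t0):

      import numpy as np
      from scipy.optimize import curve_fit

      def logistic(t, K, r, t0):
          # S curve: saturation level K, growth rate r, midpoint t0.
          return K / (1.0 + np.exp(-r * (t - t0)))

      t = np.arange(12, dtype=float)                     # hypothetical years since first record
      y = np.array([2, 3, 5, 9, 15, 26, 40, 55, 68, 76, 81, 83], dtype=float)
      (K, r, t0), _ = curve_fit(logistic, t, y, p0=[y.max() * 1.2, 0.5, t.mean()])
      forecast = logistic(15.0, K, r, t0)                # extrapolated future value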

  18. Extension of the Accurate Voltage-Sag Fault Location Method in Electrical Power Distribution Systems

    Directory of Open Access Journals (Sweden)

    Youssef Menchafou

    2016-03-01

    Accurate fault location in an Electric Power Distribution System (EPDS) is important for maintaining system reliability. Several methods have been proposed in the past; however, their performance is either inefficient or a function of the fault type (fault classification), because they require the use of an appropriate algorithm for each fault type. In contrast to traditional approaches, an accurate impedance-based fault location (FL) method is presented in this paper. It is based on the voltage-sag calculation between two measurement points chosen carefully from the available strategic measurement points of the line, the network topology, and current measurements at the substation. The effectiveness and accuracy of the proposed technique are demonstrated for different fault types using a radial power flow system, with test results obtained from numerical simulation using the data of a distribution line known from the literature.

  19. a New Method for Calculating Fractal Dimensions of Porous Media Based on Pore Size Distribution

    Science.gov (United States)

    Xia, Yuxuan; Cai, Jianchao; Wei, Wei; Hu, Xiangyun; Wang, Xin; Ge, Xinmin

    Fractal theory has been widely applied to the petrophysical properties of porous rocks over several decades, and the determination of fractal dimensions is a continuing focus of research and applications of fractal-based methods. In this work, a new method for calculating the pore-space fractal dimension and the tortuosity fractal dimension of porous media is derived based on the fractal capillary model assumption. The work establishes a relationship between the fractal dimensions and the pore size distribution, which can be used directly to calculate the fractal dimensions. Published pore size distribution data for eight sandstone samples are used to calculate the fractal dimensions and are simultaneously compared with prediction results from the analytical expression. In addition, the proposed fractal dimension method is tested on micro-CT images of three sandstone cores, and the results are compared with fractal dimensions obtained by a box-counting algorithm. The test results also demonstrate a self-similar fractal range in sandstone when smaller pores are excluded.
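
    Under the fractal capillary model the cumulative pore count obeys N(>r) ∝ (r_max / r)^Df, so the fractal dimension follows directly from the slope of log N against log r, which is how a pore size distribution can be used without box counting. A sketch with hypothetical data (the paper's own closed-form expression is not reproduced here):

      import numpy as np

      def fractal_dimension(radii, counts):
          # Fit N(>r) ~ (r_max / r)**Df: Df is minus the slope of log N vs log r.
          slope, _ = np.polyfit(np.log(radii), np.log(counts), 1)
          return -slope

      r = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0])             # pore radius, um (hypothetical)
      N = np.array([1.0e6, 1.8e5, 1.8e4, 3.2e3, 5.6e2, 57.0, 10.0])  # count of pores larger than r
      Df = fractal_dimension(r, N)     # typically between 2 and 3 for sandstone pore space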

  20. Monte-Carlo Method Python Library for dose distribution Calculation in Brachytherapy

    Energy Technology Data Exchange (ETDEWEB)

    Randriantsizafy, R D; Ramanandraibe, M J [Madagascar Institut National des Sciences et Techniques Nucleaires, Antananarivo (Madagascar); Raboanary, R [Institut of astro and High-Energy Physics Madagascar, University of Antananarivo, Antananarivo (Madagascar)

    2007-07-01

    Cs-137 brachytherapy treatment has been performed in Madagascar since 2005. The treatment time calculation for a prescribed dose is made manually. A Monte Carlo method Python library written at the Madagascar INSTN is used experimentally to calculate the dose distribution on the tumour and around it. A first validation of the code was done by comparing the library's curves with the Nucletron company curves. To reduce the duration of the calculation, a grid of PCs was set up with a listener patch running on each PC. The library will be used to model the dose distribution on the patient's CT scan image, for individualized and more accurate treatment time calculation for a prescribed dose.

  1. Monte-Carlo Method Python Library for dose distribution Calculation in Brachytherapy

    International Nuclear Information System (INIS)

    Randriantsizafy, R.D.; Ramanandraibe, M.J.; Raboanary, R.

    2007-01-01

    Cs-137 brachytherapy treatment has been performed in Madagascar since 2005. The treatment time calculation for a prescribed dose is made manually. A Monte Carlo method Python library written at the Madagascar INSTN is used experimentally to calculate the dose distribution on the tumour and around it. A first validation of the code was done by comparing the library's curves with the Nucletron company curves. To reduce the duration of the calculation, a grid of PCs was set up with a listener patch running on each PC. The library will be used to model the dose distribution on the patient's CT scan image, for individualized and more accurate treatment time calculation for a prescribed dose.

  2. A practical method for in-situ thickness determination using energy distribution of beta particles

    International Nuclear Information System (INIS)

    Yalcin, S.; Gurler, O.; Gundogdu, O.; Bradley, D.A.

    2012-01-01

    This paper discusses a method to determine the thickness of an absorber using the energy distribution of beta particles. An empirical relationship was obtained between the absorber thickness and the energy distribution of the beta particles transmitted through it. The thickness of a polyethylene radioactive source cover was determined by exploiting this relationship, which has largely been left unexploited; it allows the in-situ cover thickness of beta sources to be determined in a fast, cheap and non-destructive way. - Highlights: ► A practical, in-situ determination of unknown cover thickness. ► Cheap and readily available compared to other techniques. ► Uses the beta energy spectrum.

  3. Application of automatic change of interval to de Vogelaere's method of the solution of the differential equation y'' = f (x, y)

    International Nuclear Information System (INIS)

    Rogers, M.H.

    1960-11-01

    The paper gives an extension to de Vogelaere's method for the solution of systems of second order differential equations from which first derivatives are absent. The extension is a description of the way in which automatic change in step-length can be made to give a prescribed accuracy at each step. (author)

  4. [Abdomen specific bioelectrical impedance analysis (BIA) methods for evaluation of abdominal fat distribution].

    Science.gov (United States)

    Ida, Midori; Hirata, Masakazu; Hosoda, Kiminori; Nakao, Kazuwa

    2013-02-01

    Two novel bioelectrical impedance analysis (BIA) methods have recently been developed for the evaluation of intra-abdominal fat accumulation. Both methods use electrodes placed on the abdominal wall and allow easy evaluation of the intra-abdominal fat area (IAFA) without radiation exposure. Of these, the 'abdominal BIA' method measures the impedance distribution along the abdominal anterior-posterior axis, and the IAFA by BIA (BIA-IAFA) is calculated from the waist circumference and the voltage occurring at the flank. The dual BIA method measures the impedance of the trunk and of the body surface at the abdominal level and calculates BIA-IAFA from the transverse and antero-posterior diameters of the abdomen and the impedance of the trunk and abdominal surface. BIA-IAFA by these two BIA methods correlated well with the IAFA measured by abdominal CT (CT-IAFA), with a correlation coefficient of 0.88 (n = 91, p …) … abdominal adiposity in clinical studies and in the routine clinical practice of metabolic syndrome and obesity.

  5. Conditional maximum-entropy method for selecting prior distributions in Bayesian statistics

    Science.gov (United States)

    Abe, Sumiyoshi

    2014-11-01

    The conditional maximum-entropy method (abbreviated here as C-MaxEnt) is formulated for selecting prior probability distributions in Bayesian statistics for parameter estimation. This method is inspired by a statistical-mechanical approach to systems governed by dynamics with largely separated time scales and is based on three key concepts: conjugate pairs of variables, dimensionless integration measures with coarse-graining factors and partial maximization of the joint entropy. The method enables one to calculate a prior purely from a likelihood in a simple way. It is shown, in particular, how it not only yields Jeffreys's rules but also reveals new structures hidden behind them.

  6. Concise method for evaluating the probability distribution of the marginal cost of power generation

    International Nuclear Information System (INIS)

    Zhang, S.H.; Li, Y.Z.

    2000-01-01

    In the developing electricity market, many questions on electricity pricing and the risk modelling of forward contracts require the evaluation of the expected value and probability distribution of the short-run marginal cost of power generation at any given time. A concise forecasting method is provided, which is consistent with the definitions of marginal costs and the techniques of probabilistic production costing. The method embodies clear physical concepts, so that it can be easily understood theoretically and computationally realised. A numerical example has been used to test the proposed method. (author)

  7. On-line reconstruction of in-core power distribution by harmonics expansion method

    International Nuclear Information System (INIS)

    Wang Changhui; Wu Hongchun; Cao Liangzhi; Yang Ping

    2011-01-01

    Highlights: → A harmonics expansion method for the on-line in-core power reconstruction is proposed. → A harmonics data library is pre-generated off-line and a code named COMS is developed. → Numerical results show that the maximum relative error of the reconstruction is less than 5.5%. → This method has a high computational speed compared to traditional methods. - Abstract: Fixed in-core detectors are most suitable in real-time response to in-core power distributions in pressurized water reactors (PWRs). In this paper, a harmonics expansion method is used to reconstruct the in-core power distribution of a PWR on-line. In this method, the in-core power distribution is expanded by the harmonics of one reference case. The expansion coefficients are calculated using signals provided by fixed in-core detectors. To conserve computing time and improve reconstruction precision, a harmonics data library containing the harmonics of different reference cases is constructed. Upon reconstruction of the in-core power distribution on-line, the two closest reference cases are searched from the harmonics data library to produce expanded harmonics by interpolation. The Unit 1 reactor of DayaBay Nuclear Power Plant (DayaBay NPP) in China is considered for verification. The maximum relative error between the measurement and reconstruction results is less than 5.5%, and the computing time is about 0.53 s for a single reconstruction, indicating that this method is suitable for the on-line monitoring of PWRs.
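
    A toy 1D version of the reconstruction step described above, with sine modes standing in for the pre-computed harmonics library and hypothetical detector positions; the expansion coefficients follow from the fixed-detector signals by least squares.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy 1D core: "harmonics" of a reference case taken as sine modes over the core height.
    n_nodes, n_modes, n_detectors = 100, 4, 12
    z = np.linspace(0.0, 1.0, n_nodes)
    harmonics = np.stack([np.sin((k + 1) * np.pi * z) for k in range(n_modes)], axis=1)

    # A synthetic "true" power shape and noisy fixed in-core detector signals.
    c_true = np.array([1.0, 0.15, -0.08, 0.03])
    power_true = harmonics @ c_true
    det_idx = rng.choice(n_nodes, n_detectors, replace=False)
    signals = power_true[det_idx] * (1.0 + 0.01 * rng.standard_normal(n_detectors))

    # Expansion coefficients from the detector readings (least squares), then reconstruction.
    coef, *_ = np.linalg.lstsq(harmonics[det_idx], signals, rcond=None)
    power_rec = harmonics @ coef
    print("max relative error:", np.max(np.abs(power_rec - power_true) / power_true.max()))
    ```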

  8. Critical review and hydrologic application of threshold detection methods for the generalized Pareto (GP) distribution

    Science.gov (United States)

    Mamalakis, Antonios; Langousis, Andreas; Deidda, Roberto

    2016-04-01

    Estimation of extreme rainfall from data constitutes one of the most important issues in statistical hydrology, as it is associated with the design of hydraulic structures and flood water management. To that extent, based on asymptotic arguments from Extreme Excess (EE) theory, several studies have focused on developing new, or improving existing methods to fit a generalized Pareto (GP) distribution model to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches, such as non-parametric methods that are intended to locate the changing point between extreme and non-extreme regions of the data, graphical methods where one studies the dependence of GP distribution parameters (or related metrics) on the threshold level u, and Goodness of Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u that a GP distribution model is applicable. In this work, we review representative methods for GP threshold detection, discuss fundamental differences in their theoretical bases, and apply them to 1714 daily rainfall records from the NOAA-NCDC open-access database, with more than 110 years of data. We find that non-parametric methods that are intended to locate the changing point between extreme and non-extreme regions of the data are generally not reliable, while methods that are based on asymptotic properties of the upper distribution tail lead to unrealistically high threshold and shape parameter estimates. The latter is justified by theoretical arguments, and it is especially the case in rainfall applications, where the shape parameter of the GP distribution is low; i.e. on the order of 0.1 ÷ 0.2. Better performance is demonstrated by graphical methods and GoF metrics that rely on pre-asymptotic properties of the GP distribution. For daily rainfall, we find that GP threshold estimates range between 2÷12 mm/d with a mean value of 6.5 mm/d, while the existence of quantization in the
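
    A minimal sketch of the GP fitting step on synthetic data (the rainfall model, threshold value and return-level calculation are illustrative assumptions, not the paper's procedure):

    ```python
    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(2)

    # Synthetic daily rainfall (mm/d): many dry/low days plus a heavy tail.
    rain = rng.gamma(0.4, 8.0, size=40_000)

    u = 6.5                                  # candidate threshold (mm/d), cf. the mean estimate above
    excess = rain[rain > u] - u
    xi, loc, sigma = genpareto.fit(excess, floc=0.0)   # fix location at zero for excesses
    print(f"shape xi = {xi:.3f}, scale sigma = {sigma:.2f}")

    # 100-year daily maximum under the fitted GP tail (lam = exceedances per year).
    lam = excess.size / (rain.size / 365.25)
    q100 = u + genpareto.ppf(1.0 - 1.0 / (100.0 * lam), xi, loc=0.0, scale=sigma)
    print(f"100-year daily rainfall ~ {q100:.1f} mm/d")
    ```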

  9. Assessing protein conformational sampling methods based on bivariate lag-distributions of backbone angles

    KAUST Repository

    Maadooliat, Mehdi; Gao, Xin; Huang, Jianhua Z.

    2012-01-01

    Despite considerable progress in the past decades, protein structure prediction remains one of the major unsolved problems in computational biology. Angular-sampling-based methods have been extensively studied recently due to their ability to capture the continuous conformational space of protein structures. The literature has focused on using a variety of parametric models of the sequential dependencies between angle pairs along the protein chains. In this article, we present a thorough review of angular-sampling-based methods by assessing three main questions: What is the best distribution type to model the protein angles? What is a reasonable number of components in a mixture model that should be considered to accurately parameterize the joint distribution of the angles? and What is the order of the local sequence-structure dependency that should be considered by a prediction method? We assess the model fits for different methods using bivariate lag-distributions of the dihedral/planar angles. Moreover, the main information across the lags can be extracted using a technique called Lag singular value decomposition (LagSVD), which considers the joint distribution of the dihedral/planar angles over different lags using a nonparametric approach and monitors the behavior of the lag-distribution of the angles using singular value decomposition. As a result, we developed graphical tools and numerical measurements to compare and evaluate the performance of different model fits. Furthermore, we developed a web-tool (http://www.stat.tamu. edu/~madoliat/LagSVD) that can be used to produce informative animations. © The Author 2012. Published by Oxford University Press.

  11. Surveillance test interval optimization

    International Nuclear Information System (INIS)

    Cepin, M.; Mavko, B.

    1995-01-01

    Technical specifications have been developed on the basis of deterministic analyses, engineering judgment, and expert opinion. This paper introduces our risk-based approach to surveillance test interval (STI) optimization. This approach consists of three main levels. The first level is the component level, which serves as a rough estimate of the optimal STI and can be calculated analytically by differentiating an equation for the mean unavailability. The second and third levels give more representative results. They take into account the results of probabilistic risk assessment (PRA) calculated by a personal computer (PC) based code and are based on system unavailability at the system level and on core damage frequency at the plant level
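
    A sketch of the first (component) level under a common textbook unavailability model, q(T) ≈ λT/2 + τ/T, which may differ from the paper's exact expression; setting dq/dT = 0 gives T* = √(2τ/λ).

    ```python
    import math

    # Mean unavailability of a periodically tested standby component:
    # q(T) ~ lam*T/2 + tau/T, with lam the standby failure rate and tau the
    # per-test downtime. Both parameter values below are assumptions.
    lam = 1.0e-5   # failures per hour (assumed)
    tau = 2.0      # hours of test-caused downtime per test (assumed)

    T_opt = math.sqrt(2.0 * tau / lam)          # dq/dT = 0
    q_opt = lam * T_opt / 2.0 + tau / T_opt
    print(f"optimal STI ~ {T_opt:.0f} h ({T_opt / 24 / 30.4:.1f} months), q(T*) = {q_opt:.2e}")
    ```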

  12. PROCESS CAPABILITY ESTIMATION FOR NON-NORMALLY DISTRIBUTED DATA USING ROBUST METHODS - A COMPARATIVE STUDY

    Directory of Open Access Journals (Sweden)

    Yerriswamy Wooluru

    2016-06-01

    Full Text Available Process capability indices are very important process quality assessment tools in the automotive industry. The common process capability indices (PCIs) Cp, Cpk and Cpm are widely used in practice. The use of these PCIs is based on the assumption that the process is in control and its output is normally distributed. In practice, normality is not always fulfilled, and indices developed under the normality assumption are very sensitive to non-normal processes. When the distribution of a product quality characteristic is non-normal, Cp and Cpk indices calculated using conventional methods often lead to erroneous interpretation of process capability. In the literature, various methods have been proposed as surrogate process capability indices under non-normality, but few literature sources offer a comprehensive evaluation and comparison of their ability to capture the true capability in non-normal situations. In this paper, five methods are reviewed and a capability evaluation is carried out for data pertaining to the resistivity of silicon wafers. The final results reveal that the Burr-based percentile method is better than Clements' method. Modelling of non-normal data and the Box-Cox transformation method using statistical software (Minitab 14) provide reasonably good results, as they are very promising methods for non-normal and moderately skewed data (skewness <= 1.5).
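
    A sketch of the Box-Cox route alone, on synthetic skewed data (the spec limit and distribution are assumptions): the spec limit is transformed with the fitted λ and a one-sided index is computed on the transformed scale.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Skewed (lognormal) quality-characteristic data with a one-sided upper spec limit.
    x = rng.lognormal(mean=2.0, sigma=0.35, size=500)
    USL = 18.0

    # Transform data and spec limit with the fitted Box-Cox lambda, then compute a
    # one-sided capability index on the (approximately normal) transformed scale.
    xt, lmbda = stats.boxcox(x)
    usl_t = stats.boxcox(np.array([USL]), lmbda=lmbda)[0]
    cpu = (usl_t - xt.mean()) / (3.0 * xt.std(ddof=1))
    print(f"lambda = {lmbda:.3f}, Cpu on transformed scale = {cpu:.3f}")
    ```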

  13. Bayesian analysis of general failure data from an ageing distribution: advances in numerical methods

    International Nuclear Information System (INIS)

    Procaccia, H.; Villain, B.; Clarotti, C.A.

    1996-01-01

    EDF and ENEA carried out a joint research program for developing the numerical methods and computer codes needed for Bayesian analysis of component lives in the case of ageing. Early results of this study were presented at ESREL'94. Since then the following further steps have been taken: input data have been generalized to the case where observed lives are censored both on the right and on the left; allowable life distributions are Weibull and gamma - their parameters are both unknown and can be statistically dependent; allowable priors are histograms relative to different parametrizations of the life distribution of concern; first- and second-order moments of the posterior distributions can be computed. In particular, the covariance gives some important information about the degree of statistical dependence between the parameters of interest. An application of the code to the appearance of stress corrosion cracking in a tube of a PWR steam generator system is presented. (authors)

  14. Methods to Regulate Unbundled Transmission and Distribution Business on Electricity Markets

    International Nuclear Information System (INIS)

    Forsberg, Kaj; Fritz, Peter

    2003-11-01

    The regulation of distribution utilities is evolving from the traditional approach based on a cost of service or rate of return remuneration, to ways of regulation more specifically focused on providing incentives for improving efficiency, known as performance-based regulation or ratemaking. Modern regulation systems are also, to a higher degree than previously, intended to simulate competitive market conditions. The Market Design 2003-conference gathered people from 18 countries to discuss 'Methods to regulate unbundled transmission and distribution business on electricity markets'. Speakers from nine different countries and backgrounds (academics, industry and regulatory) presented their experiences and most recent works on how to make the regulation of unbundled distribution business as accurate as possible. This paper does not claim to be a fully representative summary of everything that was presented or discussed during the conference. Rather, it is a purposely restricted document where we focus on a few central themes and experiences from different countries

  15. A method for atomic-level noncontact thermometry with electron energy distribution

    Science.gov (United States)

    Kinoshita, Ikuo; Tsukada, Chiharu; Ouchi, Kohei; Kobayashi, Eiichi; Ishii, Juntaro

    2017-04-01

    We devised a new method of determining the temperatures of materials with their electron-energy distributions. The Fermi-Dirac distribution convoluted with a linear combination of Gaussian and Lorentzian distributions was fitted to the photoelectron spectrum measured for the Au(110) single-crystal surface at liquid N2-cooled temperature. The fitting successfully determined the surface-local thermodynamic temperature and the energy resolution simultaneously from the photoelectron spectrum, without any preliminary results of other measurements. The determined thermodynamic temperature was 99 ± 2.1 K, which was in good agreement with the reference temperature of 98.5 ± 0.5 K measured using a silicon diode sensor attached to the sample holder.
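
    A simplified version of the fit on synthetic data, using Gaussian-only broadening where the paper uses a Gaussian/Lorentzian mixture; all numerical values are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    KB = 8.617333e-5   # Boltzmann constant, eV/K

    def broadened_fd(E, T, EF, sigma, amp, bg):
        """Fermi-Dirac edge convolved with a Gaussian resolution function."""
        dE = E[1] - E[0]
        kx = np.arange(-5.0 * sigma, 5.0 * sigma + dE, dE)
        kern = np.exp(-0.5 * (kx / sigma) ** 2)
        kern /= kern.sum()
        fd = 1.0 / (np.exp(np.clip((E - EF) / (KB * T), -60.0, 60.0)) + 1.0)
        return amp * np.convolve(fd, kern, mode="same") + bg

    # Synthetic "photoelectron spectrum" near E_F at ~99 K with 30 meV resolution.
    rng = np.random.default_rng(4)
    E = np.linspace(-0.3, 0.3, 600)
    data = broadened_fd(E, 99.0, 0.0, 0.030, 1.0, 0.02) + 0.005 * rng.standard_normal(E.size)

    # Fit temperature and energy resolution simultaneously, as in the paper.
    popt, _ = curve_fit(broadened_fd, E, data,
                        p0=[150.0, 0.01, 0.02, 0.9, 0.0],
                        bounds=([10.0, -0.1, 1e-3, 0.0, -1.0],
                                [500.0, 0.1, 0.2, 10.0, 1.0]))
    print(f"fitted T = {popt[0]:.1f} K, sigma = {1e3 * popt[2]:.1f} meV")
    ```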

  16. Methods to Regulate Unbundled Transmission and Distribution Business on Electricity Markets

    Energy Technology Data Exchange (ETDEWEB)

    Forsberg, Kaj; Fritz, Peter

    2003-11-01

    The regulation of distribution utilities is evolving from the traditional approach based on a cost of service or rate of return remuneration, to ways of regulation more specifically focused on providing incentives for improving efficiency, known as performance-based regulation or ratemaking. Modern regulation systems are also, to a higher degree than previously, intended to simulate competitive market conditions. The Market Design 2003-conference gathered people from 18 countries to discuss 'Methods to regulate unbundled transmission and distribution business on electricity markets'. Speakers from nine different countries and backgrounds (academics, industry and regulatory) presented their experiences and most recent works on how to make the regulation of unbundled distribution business as accurate as possible. This paper does not claim to be a fully representative summary of everything that was presented or discussed during the conference. Rather, it is a purposely restricted document where we focus on a few central themes and experiences from different countries.

  17. The Optimal Confidence Intervals for Agricultural Products’ Price Forecasts Based on Hierarchical Historical Errors

    Directory of Open Access Journals (Sweden)

    Yi Wang

    2016-12-01

    Full Text Available With the levels of confidence and system complexity, interval forecasts and entropy analysis can deliver more information than point forecasts. In this paper, we take receivers' demands as our starting point, use the trade-off model between accuracy and informativeness as the criterion to construct the optimal confidence interval, derive the theoretical formula of the optimal confidence interval and propose a practical and efficient algorithm based on entropy theory and complexity theory. In order to improve the estimation precision of the error distribution, the point prediction errors are stratified according to prices and the complexity of the system; the corresponding prediction error samples are obtained by the price stratification; and the error distributions are estimated by the kernel function method and the stability of the system. In a stable and orderly environment for price forecasting, we obtain point prediction error samples by the weighted local region and RBF (radial basis function) neural network methods, forecast the intervals of the soybean meal and non-GMO (genetically modified organism) soybean continuous futures closing prices and implement unconditional coverage, independence and conditional coverage tests for the simulation results. The empirical results are compared across various interval evaluation indicators, different levels of noise, several target confidence levels and different point prediction methods. The analysis shows that the optimal interval construction method is better than the equal probability method and the shortest interval method and has good anti-noise ability with the reduction of system entropy; the hierarchical error estimation method can obtain higher accuracy and better interval estimation than the non-hierarchical method in a stable system.

  18. A New Method for the 2D DOA Estimation of Coherently Distributed Sources

    Directory of Open Access Journals (Sweden)

    Liang Zhou

    2014-03-01

    Full Text Available The purpose of this paper is to develop a new technique for estimating the two-dimensional (2D) directions of arrival (DOAs) of coherently distributed (CD) sources, one that can effectively estimate the central azimuth and central elevation of CD sources at a lower computational cost. Using a special L-shaped array, a new approach for parametric estimation of CD sources is proposed. The proposed method is based on two rotational invariance relations under a small angular approximation, and estimates the two rotational matrices which describe these relations using the propagator technique. The central DOA estimates are then obtained from the principal diagonal elements of the two rotational matrices. Simulation results indicate that the proposed method exhibits good performance under small angular spread and can be applied to the multisource scenario where different sources may have different angular distribution shapes. Without any peak-finding search or eigendecomposition of the high-dimensional sample covariance matrix, the proposed method has significantly reduced computational cost compared with existing methods, and is thus beneficial to real-time processing and engineering realization. In addition, our approach is a robust estimator which does not depend on the angular distribution shape of the CD sources.

  19. Mutual trust method for forwarding information in wireless sensor networks using random secret pre-distribution

    Directory of Open Access Journals (Sweden)

    Chih-Hsueh Lin

    2016-04-01

    Full Text Available In wireless sensor networks, sensing information must be transmitted from sensor nodes to the base station by multiple hops. Every sensor node is both a sender and a relay node that forwards the sensing information sent by other nodes. Under attack, the sensing information may be intercepted, modified, interrupted, or fabricated during transmission. Accordingly, developing mutual trust to enable a secure path to be established for forwarding information is an important issue. Random key pre-distribution has been proposed to establish mutual trust among sensor nodes. This article modifies random key pre-distribution into a random secret pre-distribution and incorporates identity-based cryptography to establish an effective method of building mutual trust in a wireless sensor network. In the proposed method, the base station assigns an identity and embeds n secrets into the private secret keys of every sensor node. Based on the identity and private secret keys, the mutual trust method is utilized to explore the types of trust among neighboring sensor nodes. The novel method can resist malicious attacks and satisfy the requirements of wireless sensor networks: resistance to compromise attacks, masquerading attacks, forgery attacks and replay attacks, authentication of forwarded messages, and security of the sensing information.

  20. Linearized image reconstruction method for ultrasound modulated electrical impedance tomography based on power density distribution

    International Nuclear Information System (INIS)

    Song, Xizi; Xu, Yanbin; Dong, Feng

    2017-01-01

    Electrical resistance tomography (ERT) is a promising measurement technique with important industrial and clinical applications. However, with limited effective measurements, it suffers from poor spatial resolution due to the ill-posedness of the inverse problem. Recently, there has been an increasing research interest in hybrid imaging techniques, utilizing couplings of physical modalities, because these techniques obtain much more effective measurement information and promise high resolution. Ultrasound modulated electrical impedance tomography (UMEIT) is one of the newly developed hybrid imaging techniques, which combines electric and acoustic modalities. A linearized image reconstruction method based on power density is proposed for UMEIT. The interior data, power density distribution, is adopted to reconstruct the conductivity distribution with the proposed image reconstruction method. At the same time, relating the power density change to the change in conductivity, the Jacobian matrix is employed to make the nonlinear problem into a linear one. The analytic formulation of this Jacobian matrix is derived and its effectiveness is also verified. In addition, different excitation patterns are tested and analyzed, and opposite excitation provides the best performance with the proposed method. Also, multiple power density distributions are combined to implement image reconstruction. Finally, image reconstruction is implemented with the linear back-projection (LBP) algorithm. Compared with ERT, with the proposed image reconstruction method, UMEIT can produce reconstructed images with higher quality and better quantitative evaluation results. (paper)
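
    A toy linearized reconstruction in the spirit described, with a random matrix standing in for the power-density Jacobian and ridge (Tikhonov) regularization in place of the paper's LBP algorithm:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Linearized model: delta_p ~ J @ delta_sigma, with J the Jacobian of power
    # density w.r.t. conductivity. A random J stands in for the derived one here.
    n_meas, n_pix = 200, 400
    J = rng.standard_normal((n_meas, n_pix))
    delta_sigma_true = np.zeros(n_pix)
    delta_sigma_true[150:170] = 1.0                       # a small inclusion
    delta_p = J @ delta_sigma_true + 0.01 * rng.standard_normal(n_meas)

    # Ridge-regularized inverse of the linearized problem (illustrative only).
    alpha = 1.0
    A = J.T @ J + alpha * np.eye(n_pix)
    delta_sigma = np.linalg.solve(A, J.T @ delta_p)
    print("recovered inclusion mean:", delta_sigma[150:170].mean())
    ```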

  1. A simulation training evaluation method for distribution network fault based on radar chart

    Directory of Open Access Journals (Sweden)

    Yuhang Xu

    2018-01-01

    Full Text Available In order to solve the problem of automatic evaluation of dispatcher fault simulation training in distribution network, a simulation training evaluation method based on radar chart for distribution network fault is proposed. The fault handling information matrix is established to record the dispatcher fault handling operation sequence and operation information. The four situations of the dispatcher fault isolation operation are analyzed. The fault handling anti-misoperation rule set is established to describe the rules prohibiting dispatcher operation. Based on the idea of artificial intelligence reasoning, the feasibility of dispatcher fault handling is described by the feasibility index. The relevant factors and evaluation methods are discussed from the three aspects of the fault handling result feasibility, the anti-misoperation correctness and the operation process conciseness. The detailed calculation formula is given. Combining the independence and correlation between the three evaluation angles, a comprehensive evaluation method of distribution network fault simulation training based on radar chart is proposed. The method can comprehensively reflect the fault handling process of dispatchers, and comprehensively evaluate the fault handling process from various angles, which has good practical value.

  2. Study (Prediction of Main Pipes Break Rates in Water Distribution Systems Using Intelligent and Regression Methods

    Directory of Open Access Journals (Sweden)

    Massoud Tabesh

    2011-07-01

    Full Text Available Optimum operation of water distribution networks is one of the priorities of sustainable development of water resources, considering the issues of increasing efficiency and decreasing water losses. One of the key subjects in optimum operational management of water distribution systems is preparing rehabilitation and replacement schemes, predicting pipe break rates and evaluating their reliability. Several approaches have been presented in recent years regarding the prediction of pipe failure rates, each of which requires particular data sets. Deterministic models based on age and deterministic multi-variable and stochastic group modeling are examples of the solutions which relate pipe break rates to parameters like age, material and diameter. In this paper, besides the mentioned parameters, more factors such as pipe depth and hydraulic pressure are considered as well. Then, using the multi-variable regression method, intelligent approaches (artificial neural network and neuro-fuzzy models) and the evolutionary polynomial regression (EPR) method, pipe burst rates are predicted. To evaluate the results of the different approaches, a case study is carried out in a part of the Mashhad water distribution network. The results show the capability and advantages of the ANN and EPR methods for predicting pipe break rates, in comparison with the neuro-fuzzy and multi-variable regression methods.

  3. Method for Determining the Activation Energy Distribution Function of Complex Reactions by Sieving and Thermogravimetric Measurements.

    Science.gov (United States)

    Bufalo, Gennaro; Ambrosone, Luigi

    2016-01-14

    A method for studying the kinetics of thermal degradation of complex compounds is suggested. Although the method is applicable to any matrix whose grain size can be measured, herein we focus our investigation on thermogravimetric analysis, under a nitrogen atmosphere, of ground soft wheat and ground maize. The thermogravimetric curves reveal that there are two well-distinct jumps of mass loss. They correspond to volatilization, which is in the temperature range 298-433 K, and decomposition regions go from 450 to 1073 K. Thermal degradation is schematized as a reaction in the solid state whose kinetics is analyzed separately in each of the two regions. By means of a sieving analysis different size fractions of the material are separated and studied. A quasi-Newton fitting algorithm is used to obtain the grain size distribution as best fit to experimental data. The individual fractions are thermogravimetrically analyzed for deriving the functional relationship between activation energy of the degradation reactions and the particle size. Such functional relationship turns out to be crucial to evaluate the moments of the activation energy distribution, which is unknown in terms of the distribution calculated by sieve analysis. From the knowledge of moments one can reconstruct the reaction conversion. The method is applied first to the volatilization region, then to the decomposition region. The comparison with the experimental data reveals that the method reproduces the experimental conversion with an accuracy of 5-10% in the volatilization region and of 3-5% in the decomposition region.

  4. Simple method for highlighting the temperature distribution into a liquid sample heated by microwave power field

    International Nuclear Information System (INIS)

    Surducan, V.; Surducan, E.; Dadarlat, D.

    2013-01-01

    Microwave-induced heating is widely used in medical treatments and in scientific and industrial applications. The temperature field inside a microwave-heated sample is often inhomogeneous, therefore multiple temperature sensors are required for an accurate result. Nowadays, non-contact (infrared thermography or microwave radiometry) or direct-contact temperature measurement methods (expensive and sophisticated fiber-optic temperature sensors transparent to microwave radiation) are mainly used. IR thermography gives only the surface temperature and cannot be used for measuring temperature distributions in cross sections of a sample. In this paper we present a very simple experimental method for highlighting the temperature distribution inside a cross section of a liquid sample heated by microwave radiation through a coaxial applicator. The proposed method is able to offer qualitative information about the heating distribution, using a temperature-sensitive liquid crystal sheet. Inhomogeneities as small as 1-2 °C produced by the symmetry irregularities of the microwave applicator can be easily detected by visual inspection or by computer-assisted color-to-temperature conversion. The microwave applicator is therefore tuned and verified with the described method until the temperature inhomogeneities are resolved

  5. A "total parameter estimation" method in the varification of distributed hydrological models

    Science.gov (United States)

    Wang, M.; Qin, D.; Wang, H.

    2011-12-01

    Conventionally, hydrological models are used for runoff or flood forecasting, and model parameters are commonly estimated from discharge measurements at the catchment outlets. With the advancement in hydrological sciences and computer technology, distributed hydrological models based on physical mechanisms, such as SWAT, MIKE SHE, and WEP, have gradually become the mainstream models in the hydrological sciences. However, the assessment of distributed hydrological models and model parameter determination still rely on runoff and, occasionally, groundwater level measurements. It is essential in many countries, including China, to understand the local and regional water cycle: not only do we need to simulate the runoff generation process for flood forecasting in wet areas, we also need to grasp the water cycle pathways and the consumption and transformation processes in arid and semi-arid regions for conservation and integrated water resources management. As a distributed hydrological model can simulate the physical processes within a catchment, we can get a more realistic representation of the actual water cycle within the simulation model. Runoff is the combined result of various hydrological processes, so using runoff for parameter estimation alone is inherently problematic and its accuracy is difficult to assess. In particular, in arid areas such as the Haihe River Basin in China, runoff accounts for only 17% of the rainfall and is very concentrated during the rainy season from June to August each year. During other months, many of the perennial rivers within the river basin dry up. Thus a single runoff simulation does not fully utilize the distributed hydrological model in arid and semi-arid regions. This paper proposes a "total parameter estimation" method to verify distributed hydrological models across various water cycle processes, including runoff, evapotranspiration, groundwater, and soil water, and applies it to the Haihe river basin in

  6. Method of Fusion Diagnosis for Dam Service Status Based on Joint Distribution Function of Multiple Points

    Directory of Open Access Journals (Sweden)

    Zhenxiang Jiang

    2016-01-01

    Full Text Available The traditional methods of diagnosing dam service status are generally suited to a single measuring point. Such methods reflect only the local status of a dam, without merging multisource data effectively, and are therefore not suitable for diagnosing the overall service status. This study proposes a new method involving multiple points to diagnose dam service status based on a joint distribution function. The function, covering monitoring data from multiple points, can be established with the t-copula function. Therefore the possibility, which is an important fused value over different measuring combinations, can be calculated, and the corresponding diagnostic criterion is established with typical small-probability theory. An engineering case study indicates that the fusion diagnosis method can be conducted in real time and that abnormal points can be detected, thereby providing a new early-warning method for engineering safety.
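
    The paper fits a t-copula; as a model-free stand-in, the sketch below uses the empirical copula (rank pseudo-observations) to estimate the probability that two measuring points are simultaneously abnormal, the kind of fused value the diagnosis builds on.

    ```python
    import numpy as np
    from scipy.stats import rankdata

    rng = np.random.default_rng(6)

    # Two correlated monitoring series from different measuring points (synthetic).
    n = 2000
    z = rng.multivariate_normal([0, 0], [[1.0, 0.7], [0.7, 1.0]], size=n)
    x, y = z[:, 0], np.expm1(z[:, 1])     # deliberately different marginals

    # Pseudo-observations (empirical marginal CDF values) -- the copula's domain.
    u = rankdata(x) / (n + 1)
    v = rankdata(y) / (n + 1)

    # Empirical joint exceedance probability for a "both points abnormal" event;
    # compare with independence to see the dependence the copula captures.
    q = 0.95
    p_joint = np.mean((u > q) & (v > q))
    print(f"P(both > {q:.2f} quantile): {p_joint:.4f}  (independence: {(1 - q) ** 2:.4f})")
    ```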

  7. Calculations of Neutron Flux Distributions by Means of Integral Transport Methods

    Energy Technology Data Exchange (ETDEWEB)

    Carlvik, I

    1967-05-15

    Flux distributions have been calculated, mainly in one energy group, for a number of systems representing geometries of interest for reactor calculations. Integral transport methods of two kinds were utilised: collision probabilities (CP) and the discrete method (DIT). The geometries considered comprise the three one-dimensional geometries - planar, spherical and annular - and further a square cell with a circular fuel rod and a rod cluster cell with a circular outer boundary. For the annular cells both methods (CP and DIT) were used and the results were compared. The purpose of the work is twofold, firstly to demonstrate the versatility and efficacy of integral transport methods and secondly to serve as a guide for anybody who wants to use the methods.

  8. The experimental method of measurement for spatial distribution of full aperture backscatter light by circular PIN-array

    International Nuclear Information System (INIS)

    Zhao Xuefeng; Wang Chuanke; Hu Feng; Kuang Longyu; Wang Zhebin; Li Sanwei; Liu Shengye; Jiang Gang

    2011-01-01

    The spatial distribution of backscatter light is very important for understanding the production of backscatter light. The experimental method for measuring the spatial distribution of full-aperture backscatter light is based on a circular PIN array composed of concentric circular multi-PIN detectors. An image of the spatial distribution of full-aperture SBS backscatter light was obtained by measuring the spatial distribution of full-aperture backscatter light with this method in laser-hohlraum interaction experiments at 'Shenguang II'. A preliminary method to measure the spatial distribution of full-aperture backscatter light is thus established. (authors)

  9. Simulation of product distribution at PT Anugrah Citra Boga by using capacitated vehicle routing problem method

    Science.gov (United States)

    Lamdjaya, T.; Jobiliong, E.

    2017-01-01

    PT Anugrah Citra Boga is a food processing company that produces meatballs as its main product. The distribution system for the products must be considered, because it needs to be more efficient in order to reduce shipment costs. The purpose of this research is to optimize the distribution time by simulating the distribution channels with the capacitated vehicle routing problem method. Firstly, the distribution routes are observed in order to calculate the average speed, time capacity and shipping costs. Then the model is built using the AIMMS software. A few things that are required to simulate the model are customer locations, distances, and process times. Finally, the total distribution cost obtained by the simulation is compared with the historical data. It is concluded that the company can reduce shipping costs by around 4.1%, or Rp 529,800 per month. By using this model, the utilization rate can be made more optimal: the current value for the first vehicle is 104.6% and after the simulation it becomes 88.6%, while the utilization rate of the second vehicle increases from 59.8% to 74.1%. The simulation model is able to produce the optimal shipping route under time restrictions, vehicle capacity, and number of vehicles.
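
    The AIMMS model itself is not given in the record; as a stand-in, here is a greedy nearest-neighbour CVRP construction heuristic on hypothetical customer data.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Depot at index 0, ten customers with demands; one vehicle capacity of 100.
    coords = rng.uniform(0, 50, size=(11, 2))
    demand = np.r_[0, rng.integers(10, 35, size=10)]
    CAP = 100

    def dist(i, j):
        return float(np.hypot(*(coords[i] - coords[j])))

    # Greedy nearest-neighbour construction: start a new route whenever the
    # remaining capacity cannot serve any unvisited customer.
    unserved, routes = set(range(1, 11)), []
    while unserved:
        route, load, here = [0], 0, 0
        while True:
            feas = [c for c in unserved if load + demand[c] <= CAP]
            if not feas:
                break
            nxt = min(feas, key=lambda c: dist(here, c))
            route.append(nxt); load += demand[nxt]; here = nxt
            unserved.remove(nxt)
        routes.append(route + [0])

    total = sum(dist(r[i], r[i + 1]) for r in routes for i in range(len(r) - 1))
    print(routes, f"total distance = {total:.1f}")
    ```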

  10. The interval high rate discharge behavior of Li3V2(PO4)3/C cathode based on in situ polymerization method

    International Nuclear Information System (INIS)

    Mao, Wen-feng; Yan, Ji; Xie, Hui; Tang, Zhi-yuan; Xu, Qiang

    2013-01-01

    An in situ polymerization-assisted fast sol-gel method was introduced to synthesize high-performance Li3V2(PO4)3/C (LVP/C) cathode material. The crystal structure, surface morphology and electrochemical performance of the LVP/C samples sintered at different temperatures were investigated. The composite sintered at 750 °C exhibits the highest specific discharge capacity of 119.02 mAh g⁻¹ (440.35 Wh kg⁻¹) at 10 C rate. The Li⁺ diffusion coefficient ranges from 10⁻⁶ to 10⁻⁸ cm² s⁻¹ based on different scanning rates, and the electronic conductivity is about 10⁻⁵ S cm⁻¹. For comparison, an ex situ polymerization method was also employed to obtain the LVP/C composite. A novel charge/discharge testing mode was designed to investigate the electrochemical behavior of the as-prepared LVP/C composite for practical application in electric vehicle cells. The obtained high power density and the special testing mode prove that the LVP/C composite would be a promising candidate for electric vehicle applications and deserves further investigation

  11. Quantitative SPECT reconstruction for brain distribution with a non-uniform attenuation using a regularizing method

    International Nuclear Information System (INIS)

    Soussaline, F.; Bidaut, L.; Raynaud, C.; Le Coq, G.

    1983-06-01

    An analytical solution to the SPECT reconstruction problem, where the actual attenuation effect can be included, was developed using a regularizing iterative method (RIM). The potential of this approach in quantitative brain studies using a tracer for cerebrovascular disorders is now under evaluation. Mathematical simulations of a distributed activity in the brain surrounded by the skull, and physical phantom studies, were performed using a rotating-camera-based SPECT system, allowing calibration of the system and evaluation of the adapted method. In the simulation studies, the contrast obtained along a profile was less than 5%, the standard deviation 8% and the quantitative accuracy 13%, for a uniform emission distribution of mean = 100 per pixel and a double attenuation coefficient of μ = 0.115 cm⁻¹ and 0.5 cm⁻¹. Clinical data obtained after injection of ¹²³I (AMPI) were reconstructed using the RIM with and without cerebrovascular diseases or lesion defects. Contour-finding techniques were used for the delineation of the brain and the skull, and measured attenuation coefficients were assumed within these two regions. Using volumes of interest selected on homogeneous regions of a hemisphere and reported symmetrically, the statistical uncertainty for 300 k events in the tomogram was found to be 12%, and the index of symmetry was 4% for a normal distribution. These results suggest that quantitative SPECT reconstruction of brain distributions is feasible, and that, combined with an adapted tracer and an adequate model, physiopathological parameters could be extracted

  12. Leontief Input-Output Method for The Fresh Milk Distribution Linkage Analysis

    Directory of Open Access Journals (Sweden)

    Riski Nur Istiqomah

    2016-11-01

    Full Text Available This research discusses linkage analysis and identifies the key sector in fresh milk distribution using the Leontief input-output method. This method is one of the applications of mathematics in economics. The current fresh milk distribution system runs from dairy farmers → collectors → fresh milk processing industries → processed milk distributors → consumers. The distribution is then merged between the collectors' activity and the fresh milk processing industry. The data used are primary and secondary data taken in June 2016 in Kecamatan Jabung, Kabupaten Malang. The collected data are then analysed using the Leontief input-output matrix and the Python (PyIO 2.1) software. The result is that merging the collectors' and the fresh milk processing industry's activities shows high indices of forward and backward linkages. This shows that the merger of the two activities is the key sector, which has an important role in developing the whole set of activities in the fresh milk distribution.
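
    A toy three-sector version of the computation (coefficient values are illustrative, not the study's data): the Leontief inverse L = (I - A)^-1 gives total output requirements, and its column and row sums yield backward and forward linkage indices.

    ```python
    import numpy as np

    # Toy sectors for the milk chain: dairy farming, processing (collectors merged
    # with the processing industry, as in the paper), and distribution.
    A = np.array([[0.10, 0.30, 0.05],
                  [0.20, 0.10, 0.25],
                  [0.05, 0.15, 0.10]])      # technical coefficients (illustrative)
    final_demand = np.array([50.0, 120.0, 80.0])

    # Leontief inverse: total (direct + indirect) output per unit of final demand.
    L = np.linalg.inv(np.eye(3) - A)
    output = L @ final_demand

    backward = L.sum(axis=0) * 3 / L.sum()   # normalized backward linkage indices
    forward = L.sum(axis=1) * 3 / L.sum()    # normalized forward linkage indices
    print("output:", output.round(1))
    print("backward:", backward.round(2), "forward:", forward.round(2))
    ```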

  13. Problem-Solving Methods for the Prospective Development of Urban Power Distribution Network

    Directory of Open Access Journals (Sweden)

    A. P. Karpenko

    2014-01-01

    Full Text Available This article continues A. P. Karpenko's and A. I. Kuzmina's publication titled "A mathematical model of urban distribution electro-network considering its future development" (electronic scientific and technical magazine "Science and education" No. 5, 2014). The article offers a model of an urban power distribution network as a set of transformer and distribution substations and cable lines. All elements of the network and new consumers are described by vectors of parameters associated with them. The problem of urban power distribution network design, taking into account the prospective development of the city, is presented as a problem of discrete programming. It consists in deciding on the optimal option to connect new consumers to the power supply network, on the number of new substations and the sites at which to build them, and on the option to include them in the power supply network. Two methods, namely a reduction method to a set of nested global minimization tasks and a decomposition method, are offered to solve the problem. In the reduction method the problem of prospective development of the power supply network breaks into three subtasks of smaller dimension: a subtask to define the number and sites of new transformer and distribution substations, a subtask to define the option to connect new consumers to the power supply network, and a subtask to include new substations in the power supply network. The vector of the varied parameters is broken into three subvectors consistent with the subtasks. Each subtask is solved using an area of admissible vector values of the varied parameters at the fixed components of the subvectors obtained when solving the higher subtasks. In the decomposition method the task is presented as a set of three subtasks, similar to the reduction method, and a problem of coordination. The problem of coordination specifies the sequence in which the subtasks are solved and defines the moment of calculation termination. Coordination is realized by

  14. Cargo flows distribution over the loading sites of enterprises by using methods of artificial intelligence

    Directory of Open Access Journals (Sweden)

    Олександр Павлович Кіркін

    2017-06-01

    Full Text Available The development of information technologies and market requirements for effective control over cargo flows forces enterprises to look for new ways and methods of automated control over technological operations. For rail transportation, one of the most complicated automation tasks is the distribution of cargo flows over the sites of loading and unloading. In this article a solution using one of the methods of artificial intelligence - fuzzy inference - is proposed. Analysis of recent publications showed that the fuzzy inference method is effective for the solution of similar tasks; it makes it possible to accumulate experience and is robust to transient environmental conditions. The existing methods of distributing cargo flows over the sites of loading and unloading are too simplified and can lead to incorrect decisions. The purpose of the article is to create a distribution model of enterprises' cargo flows over the sites of loading and unloading, based on the fuzzy inference method, and to automate the control. To achieve the objective, a mathematical model of the cargo flow distribution over the sites of loading and unloading has been built using fuzzy logic. The key input parameters of the model are "number of loading sites", "arrival of the next set of cars" and "availability of additional operations"; the output parameter is "variety of set of cars". Application of the fuzzy inference method made it possible to reduce loading time by 15% and to reduce the costs of preparatory operations before loading by 20%. This method is thus an effective means, and holds great promise, for increasing railway competitiveness. Interaction between different types of transportation and their influence on the cargo flow distribution over the sites of loading and unloading has not been considered; these sites may be busy with transshipment at the very same time, which is characteristic of large enterprises

  15. Comparing performances of Clements, Box-Cox and Johnson methods with Weibull distributions for assessing process capability

    Energy Technology Data Exchange (ETDEWEB)

    Senvar, O.; Sennaroglu, B.

    2016-07-01

    This study examines the Clements approach (CA), Box-Cox transformation (BCT), and Johnson transformation (JT) methods for process capability assessments through Weibull-distributed data with different parameters, to figure out the effects of tail behaviour on process capability, and compares their estimation performances in terms of accuracy and precision. Design/methodology/approach: The process performance index (PPI) Ppu is used for the process capability analysis (PCA), because the comparisons are performed on generated Weibull data without subgroups. Box plots, descriptive statistics, the root-mean-square deviation (RMSD), which is used as a measure of error, and a radar chart are utilized together for evaluating the performance of the methods. In addition, the bias of the estimated values is as important as the efficiency measured by the mean square error; in this regard, the relative bias (RB) and the relative root mean square error (RRMSE) are also considered. Findings: The results reveal that the performance of a method depends on its capability to fit the tail behaviour of the Weibull distribution and on the targeted values of the PPIs. It is observed that the effect of tail behaviour is more significant when the process is more capable. Research limitations/implications: Some other methods, such as the weighted variance method, which also give good results, were also considered. However, we later realized that it would be confusing in terms of comparison issues between the methods for consistent interpretations... (Author)

  16. Magnetic Resonance Fingerprinting with short relaxation intervals.

    Science.gov (United States)

    Amthor, Thomas; Doneva, Mariya; Koken, Peter; Sommer, Karsten; Meineke, Jakob; Börnert, Peter

    2017-09-01

    The aim of this study was to investigate a technique for improving the performance of Magnetic Resonance Fingerprinting (MRF) in repetitive sampling schemes, in particular for 3D MRF acquisition, by shortening relaxation intervals between MRF pulse train repetitions. A calculation method for MRF dictionaries adapted to short relaxation intervals and non-relaxed initial spin states is presented, based on the concept of stationary fingerprints. The method is applicable to many different k-space sampling schemes in 2D and 3D. For accuracy analysis, T1 and T2 values of a phantom are determined by single-slice Cartesian MRF for different relaxation intervals and are compared with quantitative reference measurements. The relevance of slice profile effects is also investigated in this case. To further illustrate the capabilities of the method, an application to in-vivo spiral 3D MRF measurements is demonstrated. The proposed computation method enables accurate parameter estimation even for the shortest relaxation intervals, as investigated for different sampling patterns in 2D and 3D. In 2D Cartesian measurements, we achieved a scan acceleration of more than a factor of two, while maintaining acceptable accuracy: the largest T1 values of a sample set deviated from their reference values by 0.3% (longest relaxation interval) and 2.4% (shortest relaxation interval). The largest T2 values showed systematic deviations of up to 10% for all relaxation intervals, which is discussed. The influence of slice profile effects for multislice acquisition is shown to become increasingly relevant for short relaxation intervals. In 3D spiral measurements, a scan time reduction of 36% was achieved, maintaining the quality of in-vivo T1 and T2 maps. Reducing the relaxation interval between MRF sequence repetitions using stationary fingerprint dictionaries is a feasible method to improve the scan efficiency of MRF sequences. The method enables fast implementations of 3D spatially
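
    The dictionary entries in MRF come from Bloch/EPG simulations (with the stationary-fingerprint correction for short relaxation intervals described above); the matching step itself reduces to a maximum inner product, sketched here with random vectors standing in for simulated fingerprints.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Dictionary of fingerprints: one normalized signal evolution per (T1, T2) pair.
    # Random vectors stand in for Bloch/EPG-simulated evolutions here.
    n_entries, n_timepoints = 5000, 400
    dictionary = rng.standard_normal((n_entries, n_timepoints))
    dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)
    t1_t2 = np.column_stack([rng.uniform(100, 3000, n_entries),   # T1 grid, ms
                             rng.uniform(10, 300, n_entries)])    # T2 grid, ms

    # A measured voxel fingerprint = scaled dictionary entry + noise.
    truth = 1234
    signal = 3.0 * dictionary[truth] + 0.05 * rng.standard_normal(n_timepoints)

    # MRF matching: maximum (absolute) inner product with the normalized dictionary.
    match = int(np.argmax(np.abs(dictionary @ signal)))
    print("matched entry:", match, "T1/T2 =", t1_t2[match].round(1))
    ```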

  17. Perturbation method for experimental determination of neutron spatial distribution in the reactor cell

    International Nuclear Information System (INIS)

    Takac, S.M.

    1972-01-01

    The method is based on perturbation of the reactor cell by from a few up to a few tens of percent. Measurements were performed for square lattice cells of the zero-power reactors Anna, NORA and RB, with metal uranium and uranium oxide fuel elements and water, heavy water and graphite moderators. The character and functional dependence of the perturbations were obtained from the experimental results. Zero perturbation was determined by extrapolation, thus obtaining the real physical neutron flux distribution in the reactor cell. Simple diffusion theory for partial plate cell perturbation was developed for verification of the perturbation method. The results of these calculations proved that introducing the perturbation sample into the fuel results in flattening of the thermal neutron density, dependent on the amplitude of the applied perturbation. The extrapolation applied to perturbed distributions was found to be justified

  18. Feature Extraction Method for High Impedance Ground Fault Localization in Radial Power Distribution Networks

    DEFF Research Database (Denmark)

    Jensen, Kåre Jean; Munk, Steen M.; Sørensen, John Aasted

    1998-01-01

    A new approach to the localization of high impedance ground faults in compensated radial power distribution networks is presented. The total size of such networks is often very large and a major part of the monitoring of these is carried out manually. The increasing complexity of industrial processes and communication systems leads to demands for improved monitoring of power distribution networks so that the quality of power delivery can be kept at a controlled level. The ground fault localization method for each feeder in a network is based on the centralized frequency broadband measurement of three phase voltages and currents. The method consists of a feature extractor, based on a grid description of the feeder by impulse responses, and a neural network for ground fault localization. The emphasis of this paper is the feature extractor, and the detection of the time instance of a ground fault

  19. Method for distributed agent-based non-expert simulation of manufacturing process behavior

    Science.gov (United States)

    Ivezic, Nenad; Potok, Thomas E.

    2004-11-30

    A method for distributed agent based non-expert simulation of manufacturing process behavior on a single-processor computer comprises the steps of: object modeling a manufacturing technique having a plurality of processes; associating a distributed agent with each the process; and, programming each the agent to respond to discrete events corresponding to the manufacturing technique, wherein each discrete event triggers a programmed response. The method can further comprise the step of transmitting the discrete events to each agent in a message loop. In addition, the programming step comprises the step of conditioning each agent to respond to a discrete event selected from the group consisting of a clock tick message, a resources received message, and a request for output production message.
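
    A minimal single-processor sketch of the described scheme; the event names follow the abstract, while the agent state and payloads are assumptions.

    ```python
    from dataclasses import dataclass

    @dataclass
    class ProcessAgent:
        """An agent associated with one manufacturing process (state is assumed)."""
        name: str
        inventory: int = 0
        produced: int = 0

        def handle(self, event, payload=None):
            # Programmed responses to the discrete events named in the abstract.
            if event == "clock_tick":
                pass                              # time-based behaviour could go here
            elif event == "resources_received":
                self.inventory += payload
            elif event == "request_output" and self.inventory > 0:
                self.inventory -= 1
                self.produced += 1

    agents = [ProcessAgent("casting"), ProcessAgent("machining")]
    message_loop = [("resources_received", 3), ("clock_tick", None),
                    ("request_output", None), ("request_output", None)]

    for event, payload in message_loop:          # transmit each event to every agent
        for agent in agents:
            agent.handle(event, payload)

    for a in agents:
        print(a.name, "inventory:", a.inventory, "produced:", a.produced)
    ```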

  20. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    In this thesis, methods for run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach, in which application software is monitored by system software during the entire execution. The thesis includes definition and constraint evaluation designed for the most interesting error types. These include: a) semantic errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design of the error detection methods includes a high-level software specification, with the purpose of illustrating that the design can be used in practice.

  1. Confidence intervals for distinguishing ordinal and disordinal interactions in multiple regression.

    Science.gov (United States)

    Lee, Sunbok; Lei, Man-Kit; Brody, Gene H

    2015-06-01

    Distinguishing between ordinal and disordinal interaction in multiple regression is useful in testing many interesting theoretical hypotheses. Because the distinction is made based on the location of the crossover point of two simple regression lines, confidence intervals for the crossover point can be used to distinguish ordinal and disordinal interactions. This study examined two factors that need to be considered in constructing confidence intervals for the crossover point: (a) the assumption about the sampling distribution of the crossover point, and (b) the possibility of abnormally wide confidence intervals for the crossover point. A Monte Carlo simulation study was conducted to compare six different methods for constructing confidence intervals for the crossover point in terms of the coverage rate, the proportion of true values that fall to the left or right of the confidence intervals, and the average width of the confidence intervals. The methods include the reparameterization, delta, Fieller, basic bootstrap, percentile bootstrap, and bias-corrected accelerated bootstrap methods. The results of our Monte Carlo simulation study suggest that statistical inference using confidence intervals to distinguish ordinal and disordinal interaction requires sample sizes of more than 500 to provide sufficiently narrow confidence intervals to identify the location of the crossover point. (c) 2015 APA, all rights reserved.
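
    A sketch of one of the six compared methods (percentile bootstrap) on simulated data; with a binary moderator z, the two simple regression lines cross at x* = -b2/b3.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # Simulate y = b0 + b1*x + b2*z + b3*x*z + e with a binary moderator z.
    n = 600
    x = rng.normal(0, 1, n)
    z = rng.integers(0, 2, n)
    y = 1.0 + 0.5 * x - 0.8 * z + 0.6 * x * z + rng.normal(0, 1, n)
    X = np.column_stack([np.ones(n), x, z, x * z])

    def crossover(Xm, ym):
        b = np.linalg.lstsq(Xm, ym, rcond=None)[0]
        return -b[2] / b[3]          # the two simple regression lines cross here

    # Percentile bootstrap confidence interval for the crossover point.
    boot = np.empty(2000)
    for i in range(boot.size):
        idx = rng.integers(0, n, n)
        boot[i] = crossover(X[idx], y[idx])
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"crossover = {crossover(X, y):.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
    ```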

  2. On Bayesian treatment of systematic uncertainties in confidence interval calculation

    CERN Document Server

    Tegenfeldt, Fredrik

    2005-01-01

    In high energy physics, a widely used method to treat systematic uncertainties in confidence interval calculations is based on combining a frequentist construction of confidence belts with a Bayesian treatment of systematic uncertainties. In this note we present a study of the coverage of this method for the standard likelihood ratio (aka Feldman & Cousins) construction for a Poisson process with known background and Gaussian or log-normal distributed uncertainties in the background or signal efficiency. For uncertainties in the signal efficiency of up to 40% we find over-coverage on the level of 2 to 4%, depending on the size of the uncertainties and the region in signal space. Uncertainties in the background generally have a smaller effect on the coverage. A considerable smoothing of the coverage curves is observed. A software package is presented which allows fast calculation of the confidence intervals for a variety of assumptions on the shape and size of systematic uncertainties for different nuisance paramete...

  3. Evaluating Bayesian spatial methods for modelling species distributions with clumped and restricted occurrence data.

    Directory of Open Access Journals (Sweden)

    David W Redding

    Full Text Available Statistical approaches for inferring the spatial distribution of taxa (Species Distribution Models, SDMs) commonly rely on available occurrence data, which is often clumped and geographically restricted. Although available SDM methods address some of these factors, they could be more directly and accurately modelled using a spatially-explicit approach. Software to fit models with spatial autocorrelation parameters in SDMs are now widely available, but whether such approaches for inferring SDMs aid predictions compared to other methodologies is unknown. Here, within a simulated environment using 1000 generated species' ranges, we compared the performance of two commonly used non-spatial SDM methods (Maximum Entropy Modelling, MAXENT and boosted regression trees, BRT), to a spatial Bayesian SDM method (fitted using R-INLA), when the underlying data exhibit varying combinations of clumping and geographic restriction. Finally, we tested how any recommended methodological settings designed to account for spatially non-random patterns in the data impact inference. Spatial Bayesian SDM method was the most consistently accurate method, being in the top 2 most accurate methods in 7 out of 8 data sampling scenarios. Within high-coverage sample datasets, all methods performed fairly similarly. When sampling points were randomly spread, BRT had a 1-3% greater accuracy over the other methods and when samples were clumped, the spatial Bayesian SDM method had a 4%-8% better AUC score. Alternatively, when sampling points were restricted to a small section of the true range all methods were on average 10-12% less accurate, with greater variation among the methods. Model inference under the recommended settings to account for autocorrelation was not impacted by clumping or restriction of data, except for the complexity of the spatial regression term in the spatial Bayesian model. Methods, such as those made available by R-INLA, can be successfully used to account

  4. Evaluating Bayesian spatial methods for modelling species distributions with clumped and restricted occurrence data.

    Science.gov (United States)

    Redding, David W; Lucas, Tim C D; Blackburn, Tim M; Jones, Kate E

    2017-01-01

    Statistical approaches for inferring the spatial distribution of taxa (Species Distribution Models, SDMs) commonly rely on available occurrence data, which is often clumped and geographically restricted. Although available SDM methods address some of these factors, they could be more directly and accurately modelled using a spatially-explicit approach. Software to fit models with spatial autocorrelation parameters in SDMs are now widely available, but whether such approaches for inferring SDMs aid predictions compared to other methodologies is unknown. Here, within a simulated environment using 1000 generated species' ranges, we compared the performance of two commonly used non-spatial SDM methods (Maximum Entropy Modelling, MAXENT and boosted regression trees, BRT), to a spatial Bayesian SDM method (fitted using R-INLA), when the underlying data exhibit varying combinations of clumping and geographic restriction. Finally, we tested how any recommended methodological settings designed to account for spatially non-random patterns in the data impact inference. Spatial Bayesian SDM method was the most consistently accurate method, being in the top 2 most accurate methods in 7 out of 8 data sampling scenarios. Within high-coverage sample datasets, all methods performed fairly similarly. When sampling points were randomly spread, BRT had a 1-3% greater accuracy over the other methods and when samples were clumped, the spatial Bayesian SDM method had a 4%-8% better AUC score. Alternatively, when sampling points were restricted to a small section of the true range all methods were on average 10-12% less accurate, with greater variation among the methods. Model inference under the recommended settings to account for autocorrelation was not impacted by clumping or restriction of data, except for the complexity of the spatial regression term in the spatial Bayesian model. Methods, such as those made available by R-INLA, can be successfully used to account for spatial

  5. Optimization of hot water transport and distribution networks by analytical method: OPTAL program

    International Nuclear Information System (INIS)

    Barreau, Alain; Caizergues, Robert; Moret-Bailly, Jean

    1977-06-01

    This report presents optimization studies of hot water transport and distribution networks by minimizing operating cost. Analytical optimization is used: Lagrange's method of undetermined multipliers. The optimum diameter of each pipe is calculated for minimum network operating cost. The characteristics of the computer program used for the calculations, OPTAL, are given in this report. An example of a network is calculated and described: 52 branches and 27 customers. Results are discussed. [fr]

  6. Method of measuring the current density distribution and emittance of pulsed electron beams

    International Nuclear Information System (INIS)

    Schilling, H.B.

    1979-07-01

    This method of current density measurement employs an array of many Faraday cups, each cup being terminated by an integrating capacitor. The voltages of the capacitors are subsequently displayed on a scope, thus giving the complete current density distribution with one shot. In the case of emittance measurements, a moveable small-diameter aperture is inserted at some distance in front of the cup array. Typical results with a two-cathode, two-energy electron source are presented. (orig.)

  7. Catalytic Enzyme-Based Methods for Water Treatment and Water Distribution System Decontamination. 1. Literature Survey

    Science.gov (United States)

    2006-06-01

    One of the best examples of this is glucose isomerase, which has been used in the commercial production of high fructose corn syrup (HFCS) since 1967. ... (Report ECBC-TR-489: Catalytic Enzyme-Based Methods for Water Treatment and Water Distribution System Decontamination. 1. Literature Survey; Joseph J. DeFrank, Research and Technology Directorate, Edgewood Chemical Biological Center, U.S. Army Research, Development and Engineering Command, June 2006)

  8. Jaws calibration method to get a homogeneous distribution of dose in the junction of hemi fields

    International Nuclear Information System (INIS)

    Cenizo de Castro, E.; Garcia Pareja, S.; Moreno Saiz, C.; Hernandez Rodriguez, R.; Bodineau Gil, C.; Martin-Viera Cueto, J. A.

    2011-01-01

    Hemi-field treatments are widely used in radiotherapy. Because the tolerance established for the positioning of each jaw is 1 mm, there may be cases of overlap or separation of up to 2 mm. This implies dose heterogeneity of up to 40% in the junction area. This paper presents an accurate method for calibrating the jaws so as to obtain homogeneous dose distributions when using this type of treatment. (Author)

  9. Examination of measurement and its method of compensation of the sensitivity distribution using phased array coil for body scan

    CERN Document Server

    Kimura, T; Iizuka, A; Taniguchi, Y; Ishikuro, A; Hongo, T; Inoue, H; Ogura, A

    2003-01-01

    The influence on image quality of the measurement of the sensitivity distribution and the use of a sensitivity compensation filter was considered using an opposite-type phased array coil and a volume-type phased array coil. With the opposite-type phased array coil, the relation between coil interval and filter was investigated for the image intensity correction (IIC) filter, the surface coil intensity correction (SCIC) filter (GE), and the Normalize filter (SIEMENS). With the SCIC and Normalize filters, a dependence of signal-to-noise ratio (SNR) and uniformity on the coil interval was observed, and the existence of an optimal coil interval was suggested. Moreover, with the IIC filter, the dependence on the coil interval was small, but the decrease in contrast with its use was marked. On the other hand, with the volume-type phased array coil, the overlap of the array elements was investigated to determine the influence it had on the sensitivity distribution. Although the value stabilized in t...

  10. Multichannel interval timer

    International Nuclear Information System (INIS)

    Turko, B.T.

    1983-10-01

    A CAMAC based modular multichannel interval timer is described. The timer comprises twelve high resolution time digitizers with a common start enabling twelve independent stop inputs. Ten time ranges from 2.5 μs to 1.3 ms can be preset. Time can be read out in twelve 24-bit words either via the CAMAC Crate Controller or an external FIFO register. The LSB time calibration is 78.125 ps. An additional word reads out the operational status of the twelve stop channels. The system consists of two modules. The analog module contains a reference clock and 13 analog time stretchers. The digital module contains counters, logic and interface circuits. The timer has excellent differential linearity, thermal stability and crosstalk-free performance

  11. Experimenting with musical intervals

    Science.gov (United States)

    Lo Presto, Michael C.

    2003-07-01

    When two tuning forks of different frequency are sounded simultaneously the result is a complex wave with a repetition frequency that is the fundamental of the harmonic series to which both frequencies belong. The ear perceives this 'musical interval' as a single musical pitch with a sound quality produced by the harmonic spectrum responsible for the waveform. This waveform can be captured and displayed with data collection hardware and software. The fundamental frequency can then be calculated and compared with what would be expected from the frequencies of the tuning forks. Also, graphing software can be used to determine equations for the waveforms and predict their shapes. This experiment could be used in an introductory physics or musical acoustics course as a practical lesson in superposition of waves, basic Fourier series and the relationship between some of the ear's subjective perceptions of sound and the physical properties of the waves that cause them.
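
    The repetition frequency described here is the greatest common divisor of the two (integer) fork frequencies; a quick check, with fork values chosen for illustration:

    ```python
    from math import gcd

    def interval_fundamental(f1_hz: int, f2_hz: int) -> int:
        """Two simultaneous tones with commensurate frequencies repeat at the
        greatest common divisor of the frequencies, which is the fundamental of
        the harmonic series both tones belong to."""
        return gcd(f1_hz, f2_hz)

    # A perfect fifth from forks at 330 Hz and 220 Hz: both are harmonics of 110 Hz
    print(interval_fundamental(330, 220))   # -> 110
    # A perfect fourth (440:330 = 4:3) shares the same 110 Hz fundamental
    print(interval_fundamental(440, 330))   # -> 110
    ```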

  12. A diffusion-theoretical method to calculate the neutron flux distribution in multisphere configurations

    International Nuclear Information System (INIS)

    Schuerrer, F.

    1980-01-01

    For characterizing heterogeneous configurations of pebble-bed reactors, the fine structure of the flux distribution as well as the determination of the macroscopic neutron-physical quantities are of interest. When calculating system parameters of Wigner-Seitz cells, the usual codes for neutron spectrum calculation always neglect the modulation of the neutron flux by the influence of neighbouring spheres. To judge the error arising from that procedure it is necessary to determine the flux distribution in the surroundings of a spherical fuel element. In the present paper an approximation method to calculate the flux distribution in the two-sphere model is developed. This method is based on the exactly solvable problem of determining the flux of a point source of neutrons in an infinite medium which contains a spherical perturbation zone eccentric to the point source. An iteration method, by superposing secondary fields and alternately satisfying the continuity conditions on the surface of each of the two fuel elements, allows advancing to continually improving approximations. (orig.) [de]

  13. Quantification of the spatial strain distribution of scoliosis using a thin-plate spline method.

    Science.gov (United States)

    Kiriyama, Yoshimori; Watanabe, Kota; Matsumoto, Morio; Toyama, Yoshiaki; Nagura, Takeo

    2014-01-03

    The objective of this study was to quantify the three-dimensional spatial strain distribution of a scoliotic spine by nonhomogeneous transformation without using a statistically averaged reference spine. The shape of the scoliotic spine was determined from computed tomography images from a female patient with adolescent idiopathic scoliosis. The shape of the scoliotic spine was enclosed in a rectangular grid, and symmetrized using a thin-plate spline method according to the node positions of the grid. The node positions of the grid were determined by numerical optimization to satisfy symmetry. The obtained symmetric spinal shape was enclosed within a new rectangular grid and distorted back to the original scoliotic shape using a thin-plate spline method. The distorted grid was compared to the rectangular grid that surrounded the symmetrical spine. Cobb's angle was reduced from 35° in the scoliotic spine to 7° in the symmetrized spine, and the scoliotic shape was almost fully symmetrized. The scoliotic spine showed a complex Green-Lagrange strain distribution in three dimensions. The vertical and transverse compressive/tensile strains in the frontal plane were consistent with the major scoliotic deformation. The compressive, tensile and shear strains on the convex side of the apical vertebra were opposite to those on the concave side. These results indicate that the proposed method can be used to quantify the three-dimensional spatial strain distribution of a scoliotic spine, and may be useful in quantifying the deformity of scoliosis. © 2013 Elsevier Ltd. All rights reserved.

  14. Distributed Cooperative Search Control Method of Multiple UAVs for Moving Target

    Directory of Open Access Journals (Sweden)

    Chang-jian Ru

    2015-01-01

    Full Text Available To reduce the impact of uncertainties caused by unknown motion parameters on the search plan for moving targets and to improve the efficiency of UAV searching, a novel distributed multi-UAV cooperative search control method for moving targets is proposed in this paper. Based on the detection results of onboard sensors, the target probability map is updated using Bayesian theory. A Gaussian distribution of the target transition probability density function is introduced to calculate the prediction probability of moving target existence, and then the target probability map can be further updated in real time. A performance index function combining target cost, environment cost, and cooperative cost is constructed, and the cooperative search problem can be transformed into a central optimization problem. To improve computational efficiency, the distributed model predictive control method is presented, and thus the control command of each UAV can be obtained. The simulation results have verified that the proposed method can reduce the blindness of UAV searching and effectively improve the overall efficiency of the team.
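
    A minimal sketch of the probability-map bookkeeping described above: prediction by a Gaussian transition kernel, then a Bayes update for a detection or a miss. The detection probability and kernel width are assumptions, and false alarms are ignored; this is not the paper's full model.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def update_probability_map(p, obs_cell, detected, p_d=0.9, sigma_move=1.0):
        """One search cycle: (1) spread probability with a Gaussian transition
        kernel to model target motion, (2) apply Bayes' rule for the sensor
        observation at obs_cell."""
        p = gaussian_filter(p, sigma=sigma_move, mode="constant")  # prediction
        p /= p.sum()
        like = np.ones_like(p)                 # likelihood of the observation
        if detected:
            like[:] = 1e-6                     # detection elsewhere near-impossible
            like[obs_cell] = p_d
        else:
            like[obs_cell] = 1.0 - p_d         # a miss lowers belief in that cell
        p = p * like
        return p / p.sum()

    # 20x20 map, uniform prior; two misses shift probability away from scanned cells
    p = np.full((20, 20), 1.0 / 400)
    p = update_probability_map(p, (5, 5), detected=False)
    p = update_probability_map(p, (5, 6), detected=False)
    print(p[5, 5] < 1.0 / 400)   # True
    ```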

  15. Joint distribution of temperature and precipitation in the Mediterranean, using the Copula method

    Science.gov (United States)

    Lazoglou, Georgia; Anagnostopoulou, Christina

    2018-03-01

    This study analyses the temperature and precipitation dependence among stations in the Mediterranean. The first station group is located in the eastern Mediterranean (EM) and includes two stations, Athens and Thessaloniki, while the western (WM) one includes Malaga and Barcelona. The data was organized in two time periods, the hot-dry period and the cold-wet one, each composed of 5 months. The analysis is based on a statistical technique new to climatology: the Copula method. Firstly, the calculation of the Kendall tau correlation index showed that temperatures among stations are dependent during both time periods, whereas precipitation presents dependency only between the stations located in the EM or WM, and only during the cold-wet period. Accordingly, the marginal distributions were calculated for each studied station, as they are further used by the copula method. Finally, several copula families, both Archimedean and Elliptical, were tested in order to choose the most appropriate one to model the relation of the studied data sets. Consequently, this study succeeds in modelling the dependence of the main climate parameters (temperature and precipitation) with the Copula method. The Frank copula was identified as the best family to describe the joint distribution of temperature for the majority of station groups. For precipitation, the best copula families are BB1 and Survival Gumbel. Using the probability distribution diagrams, the probability of a combination of temperature and precipitation values between stations is estimated.
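
    Fitting the Frank copula mentioned above can be sketched by inverting the known relation between its parameter and Kendall's tau (valid here for positive dependence). The data below are simulated, not the station series.

    ```python
    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import brentq
    from scipy.stats import kendalltau

    def frank_tau(theta):
        """Kendall's tau of a Frank copula: tau = 1 - (4/theta) * (1 - D1(theta)),
        where D1(t) = (1/t) * integral_0^t x / (e^x - 1) dx is the Debye function."""
        d1 = quad(lambda x: x / np.expm1(x), 0.0, theta)[0] / theta
        return 1.0 - 4.0 / theta * (1.0 - d1)

    def fit_frank_by_tau(u, v):
        """Moment-style fit: solve tau(theta) = sample tau (positive tau assumed)."""
        tau = kendalltau(u, v)[0]
        return brentq(lambda t: frank_tau(t) - tau, 1e-6, 100.0)

    # Illustrative positively dependent sample
    rng = np.random.default_rng(2)
    x = rng.normal(size=500)
    y = 0.7 * x + rng.normal(size=500)
    print("fitted Frank theta:", fit_frank_by_tau(x, y))
    ```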

  16. An analytical transport theory method for calculating flux distribution in slab cells

    International Nuclear Information System (INIS)

    Abdel Krim, M.S.

    2001-01-01

    A transport theory method for calculating flux distributions in slab fuel cells is described. Two coupled integral equations for the flux in fuel and moderator are obtained, assuming partial reflection at the moderator external boundaries. The Galerkin technique is used to solve these equations. Numerical results for the average fluxes in fuel and moderator and for the disadvantage factor are given. Comparison with exact numerical methods, that is, for totally reflecting moderator outer boundaries, shows that the Galerkin technique gives accurate results for the disadvantage factor and the average fluxes. (orig.)

  17. The compaction of a random distribution of metal cylinders by the discrete element method

    DEFF Research Database (Denmark)

    Redanz, Pia; Fleck, N. A.

    2001-01-01

    The cold compaction of a 2D random distribution of metal circular cylinders has been investigated numerically by the discrete element method. Each cylindrical particle is located by a node at its centre and the plastic indentation of the contacts between neighbouring particles is represented by non-linear springs. The initial packing of the particles is generated by the ballistic deposition method. Salient micromechanical features of closed die and isostatic powder compaction are elucidated for both frictionless and sticking contacts. It is found that substantial rearrangement of frictionless particles......

  18. Standardization of a method to study the distribution of Americium in purex process

    International Nuclear Information System (INIS)

    Dapolikar, T.T.; Pant, D.K.; Kapur, H.N.; Kumar, Rajendra; Dubey, K.

    2017-01-01

    In the present work the distribution of americium in the PUREX process is investigated in various process streams. For this purpose a method has been standardized for the determination of Am in process samples. The method involves extraction of Am with associated actinides using 30% TRPO-NPH at 0.3M HNO3, followed by selective stripping of Am from the organic phase into the aqueous phase at 6M HNO3. The assay of the aqueous phase for Am content is carried out by alpha radiometry. The investigation has revealed that 100% of the Am follows the HLLW route. (author)

  19. Dynamic modeling method of the bolted joint with uneven distribution of joint surface pressure

    Science.gov (United States)

    Li, Shichao; Gao, Hongli; Liu, Qi; Liu, Bokai

    2018-03-01

    The dynamic characteristics of bolted joints have a significant influence on the dynamic characteristics of the machine tool. Therefore, establishing a reasonable bolted joint dynamics model is helpful for improving the accuracy of the machine tool dynamics model. Because the pressure distribution on the joint surface is uneven under the concentrated force of the bolts, a dynamic modeling method based on the uneven pressure distribution of the joint surface is presented in this paper to improve the dynamic modeling accuracy of the machine tool. The analytic formulas between the normal and tangential stiffness per unit area and the surface pressure on the joint surface can be deduced based on the Hertz contact theory, and the pressure distribution on the joint surface can be obtained by finite element software. Furthermore, the normal and tangential stiffness distributions on the joint surface can be obtained from the analytic formulas and the pressure distribution, and assigned to the finite element model of the joint. The theoretical mode shapes were compared qualitatively with the experimental mode shapes, and the theoretical modal frequencies were compared quantitatively with the experimental modal frequencies. The comparison results show that the relative error between the first four theoretical modal frequencies and the first four experimental modal frequencies is 0.2% to 4.2%. Besides, the first four theoretical mode shapes and the first four experimental mode shapes are similar and in one-to-one correspondence. Therefore, the validity of the theoretical model is verified. The dynamic modeling method proposed in this paper can provide a theoretical basis for the accurate dynamic modeling of bolted joints in machine tools.

  20. Distribution functions of magnetic nanoparticles determined by a numerical inversion method

    International Nuclear Information System (INIS)

    Bender, P; Balceris, C; Ludwig, F; Posth, O; Bogart, L K; Szczerba, W; Castro, A; Nilsson, L; Costo, R; Gavilán, H; González-Alonso, D; Pedro, I de; Barquín, L Fernández; Johansson, C

    2017-01-01

    In the present study, we applied a regularized inversion method to extract the particle size, magnetic moment and relaxation-time distribution of magnetic nanoparticles from small-angle x-ray scattering (SAXS), DC magnetization (DCM) and AC susceptibility (ACS) measurements. For the measurements the particles were colloidally dispersed in water. To a first approximation, the particles could be assumed to be spherically shaped and homogeneously magnetized single-domain particles. As model functions for the inversion, we used the particle form factor of a sphere (SAXS), the Langevin function (DCM) and the Debye model (ACS). The extracted distributions exhibited features/peaks that could be distinctly attributed to the individually dispersed and non-interacting nanoparticles. Further analysis of these peaks enabled, in combination with a prior characterization of the particle ensemble by electron microscopy and dynamic light scattering, a detailed structural and magnetic characterization of the particles. Additionally, all three extracted distributions featured peaks, which indicated deviations of the scattering (SAXS), magnetization (DCM) or relaxation (ACS) behavior from the one expected for individually dispersed, homogeneously magnetized nanoparticles. These deviations could be mainly attributed to partial agglomeration (SAXS, DCM, ACS), uncorrelated surface spins (DCM) and/or intra-well relaxation processes (ACS). The main advantage of the numerical inversion method is that no ad hoc assumptions regarding the line shape of the extracted distribution functions are required, which enabled the detection of these contributions. We highlighted this by comparing the results with the results obtained by standard model fits, where the functional form of the distributions was a priori assumed to be log-normal shaped. (paper)
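
    The flavor of the regularized inversion can be sketched for the DCM case, where the model function is the Langevin kernel. The grids, regularization weight and synthetic bimodal distribution below are all illustrative, not the paper's data or code.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    kB, T = 1.380649e-23, 300.0
    B = np.linspace(0.001, 1.0, 60)            # applied field (T), illustrative
    mu = np.logspace(-20, -18, 80)             # trial magnetic moments (A m^2)

    def langevin(x):
        # all x > 1e-3 on these grids, so the naive form is numerically safe
        return 1.0 / np.tanh(x) - 1.0 / x

    K = langevin(np.outer(B, mu) / (kB * T))   # kernel matrix, shape (nB, nmu)

    # Synthetic "measurement" from a bimodal moment distribution plus noise
    logmu = np.log10(mu)
    p_true = np.exp(-0.5 * ((logmu + 19.5) / 0.1) ** 2) \
           + 0.5 * np.exp(-0.5 * ((logmu + 18.8) / 0.1) ** 2)
    m_meas = K @ p_true + 0.01 * np.random.default_rng(3).normal(size=B.size)

    # Tikhonov-regularized non-negative least squares:
    # minimize ||K p - m||^2 + lam^2 ||p||^2  subject to p >= 0,
    # solved by stacking the regularization block under the kernel.
    lam = 0.1
    A = np.vstack([K, lam * np.eye(mu.size)])
    rhs = np.concatenate([m_meas, np.zeros(mu.size)])
    p_est, _ = nnls(A, rhs)
    print("recovered peaks near log10(mu):", logmu[p_est > 0.5 * p_est.max()])
    ```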

  1. Reviewing interval cancers: Time well spent?

    International Nuclear Information System (INIS)

    Gower-Thomas, Kate; Fielder, Hilary M.P.; Branston, Lucy; Greening, Sarah; Beer, Helen; Rogers, Cerilan

    2002-01-01

    OBJECTIVES: To categorize interval cancers, and thus identify false-negatives, following prevalent and incident screens in the Welsh breast screening programme. SETTING: Breast Test Wales (BTW) Llandudno, Cardiff and Swansea breast screening units. METHODS: Five hundred and sixty interval breast cancers identified following negative mammographic screening between 1989 and 1997 were reviewed by eight screening radiologists. The blind review was achieved by mixing the screening films of women who subsequently developed an interval cancer with screen-negative films of women who did not develop cancer, in a ratio of 4:1. Another radiologist used patients' symptomatic films to record a reference against which the reviewers' reports of the screening films were compared. Interval cancers were categorized as 'true', 'occult', 'false-negative' or 'unclassified' interval cancers, or interval cancers with minimal signs, based on the National Health Service breast screening programme (NHSBSP) guidelines. RESULTS: Of the classifiable interval films, 32% were false-negatives, 55% were true intervals and 12% occult. The proportion of false-negatives following incident screens was half that following prevalent screens (P = 0.004). Forty percent of the seed films were recalled by the panel. CONCLUSIONS: Low false-negative interval cancer rates following incident screens (18%) versus prevalent screens (36%) suggest that lower cancer detection rates at incident screens may have resulted from fewer cancers than expected being present, rather than from a failure to detect tumours. The panel method for categorizing interval cancers has significant flaws, as the results vary markedly with different protocols, and it is no more accurate than other, quicker and more timely methods.

  2. Maximum Likelihood-Based Methods for Target Velocity Estimation with Distributed MIMO Radar

    Directory of Open Access Journals (Sweden)

    Zhenxin Cao

    2018-02-01

    Full Text Available The estimation problem for target velocity is addressed in this paper in the scenario of a distributed multi-input multi-output (MIMO) radar system. A maximum likelihood (ML)-based estimation method is derived with the knowledge of target position. Then, in the scenario without the knowledge of target position, an iterative method is proposed to estimate the target velocity by updating the position information iteratively. Moreover, the Cramér-Rao Lower Bounds (CRLBs) for both scenarios are derived, and the performance degradation of velocity estimation without the position information is also expressed. Simulation results show that the proposed estimation methods can approach the CRLBs, and the velocity estimation performance can be further improved by increasing either the number of radar antennas or the information accuracy of the target position. Furthermore, compared with the existing methods, a better estimation performance can be achieved.
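
    The position-known case reduces to a linear problem: each transmit-receive pair measures a bistatic Doppler shift f_i = v . (u_tx_i + u_rx_i) / lambda, linear in the velocity v, so the ML estimate under Gaussian noise is a least-squares solve. The geometry, wavelength and noise level below are illustrative, and the paper's iterative position-unknown method is not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    lam = 0.03                                   # wavelength (m), ~10 GHz
    target = np.array([500.0, 300.0])            # known target position (m)
    tx = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 800.0]])
    rx = np.array([[1000.0, 800.0], [0.0, 800.0], [500.0, 900.0]])

    def unit(p):                                 # unit vectors target -> antennas
        d = p - target
        return d / np.linalg.norm(d, axis=1, keepdims=True)

    A = (unit(tx) + unit(rx)) / lam              # maps v to Doppler, one row per pair
    v_true = np.array([15.0, -8.0])
    f_meas = A @ v_true + rng.normal(0.0, 1.0, size=A.shape[0])  # noisy Dopplers (Hz)
    v_hat, *_ = np.linalg.lstsq(A, f_meas, rcond=None)
    print(np.round(v_hat, 2))                    # close to [15, -8]
    ```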

  3. Gravimetric water distribution assessment from geoelectrical methods (ERT and EMI) in municipal solid waste landfill.

    Science.gov (United States)

    Dumont, Gaël; Pilawski, Tamara; Dzaomuho-Lenieregue, Phidias; Hiligsmann, Serge; Delvigne, Frank; Thonart, Philippe; Robert, Tanguy; Nguyen, Frédéric; Hermans, Thomas

    2016-09-01

    The gravimetric water content of the waste material is a key parameter in waste biodegradation. Previous studies suggest a correlation between changes in water content and modification of electrical resistivity. This study, based on field work in Mont-Saint-Guibert landfill (Belgium), aimed, on one hand, at characterizing the relationship between gravimetric water content and electrical resistivity and on the other hand, at assessing geoelectrical methods as tools to characterize the gravimetric water distribution in a landfill. Using excavated waste samples obtained after drilling, we investigated the influences of the temperature, the liquid phase conductivity, the compaction and the water content on the electrical resistivity. Our results demonstrate that Archie's law and Campbell's law accurately describe these relationships in municipal solid waste (MSW). Next, we conducted a geophysical survey in situ using two techniques: borehole electromagnetics (EM) and electrical resistivity tomography (ERT). First, in order to validate the use of EM, EM values obtained in situ were compared to electrical resistivity of excavated waste samples from corresponding depths. The petrophysical laws were used to account for the change of environmental parameters (temperature and compaction). A rather good correlation was obtained between direct measurement on waste samples and borehole electromagnetic data. Second, ERT and EM were used to acquire a spatial distribution of the electrical resistivity. Then, using the petrophysical laws, this information was used to estimate the water content distribution. In summary, our results demonstrate that geoelectrical methods represent a pertinent approach to characterize spatial distribution of water content in municipal landfills when properly interpreted using ground truth data. These methods might therefore prove to be valuable tools in waste biodegradation optimization projects. Copyright © 2016 Elsevier Ltd. All rights reserved.
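
    The two laws named above can be used as a small forward/inverse pair. The constants below are generic textbook values, not the fitted parameters from the Mont-Saint-Guibert samples, and water saturation stands in for the gravimetric water content (converting between them requires densities).

    ```python
    def resistivity_at_25C(rho_t, t_celsius, alpha=0.02):
        """Campbell-type temperature correction: refer a field resistivity measured
        at temperature t to 25 degC, rho_25 = rho_t * (1 + alpha * (t - 25))."""
        return rho_t * (1.0 + alpha * (t_celsius - 25.0))

    def saturation_from_archie(rho, rho_w, phi, a=1.0, m=1.5, n=2.0):
        """Invert Archie's law rho = a * rho_w * phi**(-m) * S**(-n) for the water
        saturation S; a, m, n are illustrative, material-dependent constants."""
        return (a * rho_w / (rho * phi ** m)) ** (1.0 / n)

    # Example: 15 ohm.m measured at 35 degC in waste with porosity 0.45 and
    # leachate resistivity 1.2 ohm.m
    rho25 = resistivity_at_25C(15.0, 35.0)
    S = saturation_from_archie(rho25, rho_w=1.2, phi=0.45)
    print(f"corrected resistivity {rho25:.1f} ohm.m, water saturation {S:.2f}")
    ```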

  4. Sensitivity analysis and parameter estimation for distributed hydrological modeling: potential of variational methods

    Directory of Open Access Journals (Sweden)

    W. Castaings

    2009-04-01

    Full Text Available Variational methods are widely used for the analysis and control of computationally intensive spatially distributed systems. In particular, the adjoint state method enables a very efficient calculation of the derivatives of an objective function (response function to be analysed or cost function to be optimised with respect to model inputs.

    In this contribution, it is shown that the potential of variational methods for distributed catchment scale hydrology should be considered. A distributed flash flood model, coupling kinematic wave overland flow and Green Ampt infiltration, is applied to a small catchment of the Thoré basin and used as a relatively simple (synthetic observations but didactic application case.

    It is shown that forward and adjoint sensitivity analysis provide a local but extensive insight on the relation between the assigned model parameters and the simulated hydrological response. Spatially distributed parameter sensitivities can be obtained for a very modest calculation effort (~6 times the computing time of a single model run), and the singular value decomposition (SVD) of the Jacobian matrix provides an interesting perspective for the analysis of the rainfall-runoff relation.

    For the estimation of model parameters, adjoint-based derivatives were found exceedingly efficient in driving a bound-constrained quasi-Newton algorithm. The reference parameter set is retrieved independently from the optimization initial condition when the very common dimension reduction strategy (i.e. scalar multipliers) is adopted.

    Furthermore, the sensitivity analysis results suggest that most of the variability in this high-dimensional parameter space can be captured with a few orthogonal directions. A parametrization based on the SVD leading singular vectors was found very promising but should be combined with another regularization strategy in order to prevent overfitting.
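
    The efficiency of the adjoint state method invoked above can be illustrated with the smallest possible discrete adjoint: an explicit-Euler decay model standing in for the flow model (everything below is a toy, not the paper's code). One backward sweep yields the gradient at roughly the cost of a second forward run, which a finite-difference check confirms.

    ```python
    import numpy as np

    def forward(p, y0, dt, nsteps):
        """Explicit Euler for dy/dt = -p*y, a toy stand-in for the model."""
        y = np.empty(nsteps + 1)
        y[0] = y0
        for k in range(nsteps):
            y[k + 1] = (1.0 - dt * p) * y[k]
        return y

    def cost_and_adjoint_grad(p, y0, dt, data):
        """J(p) = 0.5 * sum_k (y_k - d_k)^2 over k = 1..N, with dJ/dp from one
        backward (adjoint) sweep through the time steps."""
        nsteps = len(data)
        y = forward(p, y0, dt, nsteps)
        J = 0.5 * np.sum((y[1:] - data) ** 2)
        ybar, pbar = 0.0, 0.0
        for k in range(nsteps - 1, -1, -1):
            ybar += y[k + 1] - data[k]        # residual enters the adjoint at k+1
            pbar += ybar * (-dt * y[k])       # d y_{k+1} / d p contribution
            ybar *= (1.0 - dt * p)            # propagate the adjoint to step k
        return J, pbar

    dt, y0, nsteps = 0.01, 1.0, 200
    data = forward(0.8, y0, dt, nsteps)[1:]   # synthetic observations, true p = 0.8
    J, g = cost_and_adjoint_grad(0.5, y0, dt, data)
    eps = 1e-6                                # central finite-difference check
    Jp = cost_and_adjoint_grad(0.5 + eps, y0, dt, data)[0]
    Jm = cost_and_adjoint_grad(0.5 - eps, y0, dt, data)[0]
    print(g, (Jp - Jm) / (2 * eps))           # should agree to ~6 digits
    ```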

  5. Multimodal Dispersion of Nanoparticles: A Comprehensive Evaluation of Size Distribution with 9 Size Measurement Methods.

    Science.gov (United States)

    Varenne, Fanny; Makky, Ali; Gaucher-Delmas, Mireille; Violleau, Frédéric; Vauthier, Christine

    2016-05-01

    Evaluation of the particle size distribution (PSD) of a multimodal dispersion of nanoparticles is a difficult task due to inherent limitations of size measurement methods. The present work reports the evaluation of the PSD of a dispersion of poly(isobutylcyanoacrylate) nanoparticles decorated with dextran, known to be multimodal and developed as a nanomedicine. The nine methods used were classified as batch methods (Static Light Scattering (SLS) and Dynamic Light Scattering (DLS)), single-particle methods (Electron Microscopy (EM), Atomic Force Microscopy (AFM), Tunable Resistive Pulse Sensing (TRPS) and Nanoparticle Tracking Analysis (NTA)), and separative methods (Asymmetrical Flow Field-Flow Fractionation coupled with DLS (AsFlFFF)). The multimodal dispersion was identified using AFM, TRPS and NTA, and the results were consistent with those provided by the method based on a separation step prior to on-line size measurements. None of the light scattering batch methods could reveal the complexity of the PSD of the dispersion. Differences between the PSDs obtained from all the size measurement methods tested suggested that studying the PSD of a multimodal dispersion requires analyzing samples by at least one single-particle size measurement method or a method that uses a separation step prior to the PSD measurement.

  6. Optimal Planning Method of On-load Capacity Regulating Distribution Transformers in Urban Distribution Networks after Electric Energy Replacement Considering Uncertainties

    Directory of Open Access Journals (Sweden)

    Yu Su

    2018-06-01

    Full Text Available Electric energy replacement is the umbrella term for the use of electric energy to replace oil (e.g., electric automobiles), coal (e.g., electric heating), and gas (e.g., electric cooking appliances), which increases the electrical load peak, causing greater valley/peak differences. On-load capacity regulating distribution transformers have been used to deal with loads with great valley/peak differences, so reasonably replacing conventional distribution transformers with on-load capacity regulating distribution transformers can effectively cope with load changes after electric energy replacement and reduce the no-load losses of distribution transformers. Before planning for on-load capacity regulating distribution transformers, the nodal effective load considering uncertainties within the life cycle after electric energy replacement was obtained by a Monte Carlo method. Then, according to the loss relation between on-load capacity regulating distribution transformers and conventional distribution transformers, three characteristic indexes of the annual continuous apparent power curve and replacement criteria for on-load capacity regulating distribution transformers were put forward in this paper, and a set of distribution transformer replaceable points was obtained. Next, based on cost benefit analysis, a planning model of on-load capacity regulating distribution transformers which consists of an investment profitability index within the life cycle, an investment cost recouping index and a capacity regulating cost index was put forward. The branch and bound method was used to solve the planning model within the replaceable point set to obtain an upgrading and reconstruction scheme of distribution transformers under a certain investment. Finally, planning analysis of on-load capacity regulating distribution transformers was carried out for electric energy replacement points in one urban distribution network under three scenarios: certain load, uncertain load and nodal

  7. Distribution of uranium in dental porcelains by means of the fission track method

    International Nuclear Information System (INIS)

    Shimizu, Masami; Noguchi, Kunikazu; Moriwaki, Kazunari; Sairenji, Eiko

    1980-01-01

    Porcelain teeth, some of which contain uranium compounds for aesthetic purposes, have been widely used in dental clinics. Hazardous effects due to uranium radiation have been suggested by recent publications. In a previous study, the authors reported the uranium content of porcelain teeth and the radiation dose due to it. In this study, using the fission track method, the authors examined the spatial distribution of uranium in dental porcelain teeth (4 brands) which were marketed in Japan. From each sample of a porcelain tooth, a 1-mm-thick specimen was sliced, and the uranium content was measured every 0.19 mm from the labial side to the lingual side to make a uranium distribution chart. Higher uranium concentrations were found in Trubyte Bioblend porcelain teeth (USA), which showed an almost uniform distribution of uranium, while the three Japanese brands showed, in most cases, comparatively lower concentrations and non-uniform distributions. The ranges of uranium concentration in these brands were N.D. -- 5.2 ppm (Shofu-Ace), N.D. -- 342 ppm (Shofu-Real), N.D. -- 47 ppm (G.C. Livdent) and N.D. -- 235 ppm (Trubyte Bioblend), respectively. (author)

  8. A method for ion distribution function evaluation using escaping neutral atom kinetic energy samples

    International Nuclear Information System (INIS)

    Goncharov, P.R.; Ozaki, T.; Veshchev, E.A.; Sudo, S.

    2008-01-01

    A reliable method to evaluate the probability density function for escaping atom kinetic energies is required for the analysis of neutral particle diagnostic data used to study the fast ion distribution function in fusion plasmas. Digital processing of solid state detector signals is proposed in this paper as an improvement of the simple histogram approach. The probability density function for kinetic energies of neutral particles escaping from the plasma has been derived in a general form taking into account the plasma ion energy distribution, electron capture and loss rates, superposition along the diagnostic sight line and the magnetic surface geometry. A pseudorandom number generator has been realized that enables a sample of escaping neutral particle energies to be simulated for given plasma parameters and experimental conditions. An empirical probability density estimation code has been developed and tested to reconstruct the probability density function from simulated samples, assuming Maxwellian and classical slowing-down plasma ion energy distribution shapes for different temperatures and different slowing-down times. The application of the developed probability density estimation code to the analysis of experimental data obtained by the novel Angular-Resolved Multi-Sightline Neutral Particle Analyzer has been studied to obtain the suprathermal particle distributions. The optimum bandwidth parameter selection algorithm has also been realized. (author)
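
    The density-estimation idea can be sketched with a Gaussian kernel density estimator; Silverman's rule of thumb stands in for the optimal bandwidth selection the paper implements, and the sample and units are illustrative.

    ```python
    import numpy as np

    def kde(samples, grid, bandwidth=None):
        """Gaussian kernel density estimate of the energy probability density.
        If no bandwidth is given, use Silverman's rule of thumb."""
        samples = np.asarray(samples, dtype=float)
        n = samples.size
        if bandwidth is None:
            bandwidth = 1.06 * samples.std(ddof=1) * n ** (-1.0 / 5.0)
        u = (grid[:, None] - samples[None, :]) / bandwidth
        return np.exp(-0.5 * u ** 2).sum(axis=1) / (n * bandwidth * np.sqrt(2 * np.pi))

    # Illustrative escaping-neutral energies: a Maxwellian kinetic-energy sample,
    # which is exactly Gamma(3/2, kT) -- here with kT = 2 keV
    rng = np.random.default_rng(4)
    E = rng.gamma(shape=1.5, scale=2.0, size=5000)
    grid = np.linspace(0.0, 20.0, 400)
    pdf = kde(E, grid)
    print("density integrates to ~1:", pdf.sum() * (grid[1] - grid[0]))
    ```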

  9. Method of estimating thermal power distribution of core of BWR type reactor

    International Nuclear Information System (INIS)

    Sekimizu, Koichi

    1982-01-01

    Purpose: To accurately and rapidly predict the thermal power distribution of the core of a BWR type reactor during load follow-up operation. Method: A parameter value corrected by a correction coefficient deciding unit and a xenon density distribution value predicted and calculated by a xenon density distributor are inputted to a thermal power distribution predicting device, and the status quantities, such as coolant flow rate, predetermined for the current and the next high-power operation are substituted into a physical model to predict and calculate the thermal power distribution. The status quantities of the reactor during the previous high-power operation corresponding to the next high-power operation to be predicted are read from the reactor status quantities stored in a time-series manner in a reactor core status memory, and the physical model used in the prediction and calculation of the thermal power distribution for the next high-power operation is corrected. (Sikiya, K.)

  10. A method for evaluating basement exhumation histories from closure age distributions of detrital minerals

    International Nuclear Information System (INIS)

    Lovera, Oscar M.; Grove, Marty; Kimbrough, David L.; Abbott, Patrick L.

    1999-01-01

    We have developed a two-dimensional, thermokinetic model that predicts the closure age distributions of detrital minerals from pervasively intruded and differentially exhumed basement. Using this model, we outline a method to determine the denudation history of orogenic regions on the basis of closure age distributions in synorogenic to postorogenic forearc strata. At relatively high mean denudation rates of 0.5 km m.y.-1 sustained over millions of years, magmatic heating events have minimal influence upon the age distributions of detrital minerals such as K-feldspar that are moderately retentive of radiogenic Ar. At lower rates, however, the effects of batholith emplacement may be substantial. We have applied the approach to detrital K-feldspars from forearc strata derived from the deeply denuded Peninsular Ranges batholith (PRB). Agreement of the denudation history deduced from the detrital K-feldspar data with thermochronologic constraints from exposed PRB basement lead us to conclude that exhumation histories of magmatic arcs should be decipherable solely from closure age distributions of detrital minerals whose depositional age is known. (c) 1999 American Geophysical Union

  11. a Landmark Extraction Method Associated with Geometric Features and Location Distribution

    Science.gov (United States)

    Zhang, W.; Li, J.; Wang, Y.; Xiao, Y.; Liu, P.; Zhang, S.

    2018-04-01

    Landmarks play an important role in spatial cognition and spatial knowledge organization. The significance measuring model is the main method of landmark extraction. It is difficult for it to take account of the spatial distribution pattern of landmarks, because the significance of a landmark is built in one-dimensional space. In this paper, starting from the geometric features of ground objects, an extraction method based on target height, target gap and field of view is proposed. According to the influence region of the Voronoi diagram, a description of the target gap is established as a geometric representation of the distribution of adjacent targets. Then, a segmentation process of the visual domain of Voronoi k-order adjacency is given to set up the target view under multiple views; finally, through three kinds of weighted geometric features, the landmarks are identified. Comparative experiments show that this method agrees to a certain degree with the results of the traditional significance measuring model, which verifies the effectiveness and reliability of the method and reduces the complexity of the landmark extraction process without losing the reference value of landmarks.

  12. A non-conventional watershed partitioning method for semi-distributed hydrological modelling: the package ALADHYN

    Science.gov (United States)

    Menduni, Giovanni; Pagani, Alessandro; Rulli, Maria Cristina; Rosso, Renzo

    2002-02-01

    The extraction of the river network from a digital elevation model (DEM) plays a fundamental role in modelling spatially distributed hydrological processes. The present paper deals with a new two-step procedure based on the preliminary identification of an ideal drainage network (IDN) from contour lines through a variable mesh size, and the further extraction of the actual drainage network (ADN) from the IDN using land morphology. The steepest downslope direction search is used to identify individual channels, which are further merged into a network path draining to a given node of the IDN. The contributing area, peaks and saddles are determined by means of a steepest upslope direction search. The basin area is thus partitioned into physically based finite elements enclosed by irregular polygons. Different methods, i.e. the constant and variable threshold area methods, the contour line curvature method, and a topologic method descending from the Hortonian ordering scheme, are used to extract the ADN from the IDN. The contour line curvature method is shown to provide the most appropriate method from a comparison with field surveys. Using the ADN one can model the hydrological response of any sub-basin using a semi-distributed approach. The model presented here combines storm abstraction by the SCS-CN method with surface runoff routing as a geomorphological dispersion process. This is modelled using the gamma instantaneous unit hydrograph as parameterized by river geomorphology. The results are implemented using a project-oriented software facility for the Analysis of LAnd Digital HYdrological Networks (ALADHYN).
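
    The steepest-downslope-direction search used for channel identification can be sketched as a D8-style scan over a regular grid; the package's variable-mesh, contour-based IDN construction is considerably more elaborate than this toy.

    ```python
    import numpy as np

    def d8_flow_directions(dem, cell=1.0):
        """For every interior cell, return the (di, dj) offset of the neighbour
        with the largest downhill drop per unit distance, or (0, 0) for pits."""
        nrows, ncols = dem.shape
        offsets = [(-1,-1), (-1,0), (-1,1), (0,-1), (0,1), (1,-1), (1,0), (1,1)]
        flow = np.zeros((nrows, ncols, 2), dtype=int)
        for i in range(1, nrows - 1):
            for j in range(1, ncols - 1):
                best, best_off = 0.0, (0, 0)
                for di, dj in offsets:
                    dist = cell * (2 ** 0.5 if di and dj else 1.0)
                    slope = (dem[i, j] - dem[i + di, j + dj]) / dist
                    if slope > best:
                        best, best_off = slope, (di, dj)
                flow[i, j] = best_off
        return flow

    # Tilted plane: elevation rises with +x and +y, so flow points to (-1, -1)
    y, x = np.mgrid[0:6, 0:6]
    dem = 0.5 * x + 0.8 * y
    print(d8_flow_directions(dem)[3, 3])   # -> [-1 -1]
    ```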

  13. Confidence Intervals from Normalized Data: A correction to Cousineau (2005)

    Directory of Open Access Journals (Sweden)

    Richard D. Morey

    2008-09-01

    Full Text Available Presenting confidence intervals around means is a common method of expressing uncertainty in data. Loftus and Masson (1994) describe confidence intervals for means in within-subjects designs. These confidence intervals are based on the ANOVA mean squared error. Cousineau (2005) presents an alternative to the Loftus and Masson method, but his method produces confidence intervals that are smaller than those of Loftus and Masson. I show why this is the case and offer a simple correction that makes the expected size of Cousineau confidence intervals the same as that of Loftus and Masson confidence intervals.
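
    A compact implementation of the corrected procedure: Cousineau normalization removes subject means, and Morey's published correction rescales the per-condition variances by J/(J-1). The data below are simulated for illustration.

    ```python
    import numpy as np
    from scipy import stats

    def within_subject_ci(data, conf=0.95):
        """Within-subject confidence intervals for an (n subjects x J conditions)
        array: normalize out subject means (Cousineau), then apply the J/(J-1)
        variance correction (Morey, 2008)."""
        n, J = data.shape
        norm = data - data.mean(axis=1, keepdims=True) + data.mean()
        var = norm.var(axis=0, ddof=1) * J / (J - 1.0)   # Morey correction
        half = stats.t.ppf(0.5 + conf / 2.0, n - 1) * np.sqrt(var / n)
        return data.mean(axis=0), half

    # 20 subjects x 4 conditions with large between-subject offsets, which
    # ordinary between-subject CIs would absorb
    rng = np.random.default_rng(5)
    subj = rng.normal(0.0, 10.0, size=(20, 1))           # subject baselines
    cond = np.array([0.0, 0.3, 0.6, 0.9])                # condition effects
    data = subj + cond + rng.normal(0.0, 0.5, size=(20, 4))
    means, half = within_subject_ci(data)
    print(np.round(means, 2), np.round(half, 2))
    ```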

  14. A nodal method of calculating power distributions for LWR-type reactors with square fuel lattices

    International Nuclear Information System (INIS)

    Hoeglund, Randolph.

    1980-06-01

    A nodal model is developed for calculating the power distribution in the core of a light water reactor with a square fuel lattice. The reactor core is divided into a number of more or less cubic nodes and a nodal coupling equation, which gives the thermal power density in one node as a function of the power densities in the neighbour nodes, is derived from the neutron diffusion equations for two energy groups. The three-dimensional power distribution can be computed iteratively using this coupling equation, for example following the point Jacobi, the Gauss-Seidel or the point successive overrelaxation scheme. The method has been included as the neutronic model in a reactor core simulation computer code BOREAS, where it is combined with a thermal-hydraulic model in order to make a simultaneous computation of the interdependent power and void distributions in a boiling water reactor possible. Also described in this report are a method for temporary one-dimensional iteration, developed in order to accelerate the iterative solution of the problem, and the Haling principle, which is widely used in the planning of reloading operations for BWR reactors. (author)
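
    A toy 1D analogue of the iterative schemes named above: each node's power is a weighted combination of its neighbours' powers, and the distribution is iterated point-Jacobi style with renormalization until it converges to the fundamental mode. The coupling weight and boundary treatment are illustrative, not the report's two-group coupling coefficients.

    ```python
    import numpy as np

    def nodal_power_distribution(n=20, coupling=0.25, tol=1e-10):
        """Point-Jacobi iteration of p_i = (1-2c)*p_i + c*(p_{i-1} + p_{i+1}),
        with zero power assumed outside the core, normalized each sweep."""
        p = np.ones(n)
        while True:
            p_new = np.empty(n)
            for i in range(n):
                left = p[i - 1] if i > 0 else 0.0
                right = p[i + 1] if i < n - 1 else 0.0
                p_new[i] = (1.0 - 2.0 * coupling) * p[i] + coupling * (left + right)
            p_new /= p_new.max()                 # renormalize each sweep
            if np.abs(p_new - p).max() < tol:
                return p_new
            p = p_new

    p = nodal_power_distribution()
    print(np.round(p, 3))   # cosine-like shape peaked at the core centre
    ```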

  15. Testing methods for using high-resolution satellite imagery to monitor polar bear abundance and distribution

    Science.gov (United States)

    LaRue, Michelle A.; Stapleton, Seth P.; Porter, Claire; Atkinson, Stephen N.; Atwood, Todd C.; Dyck, Markus; Lecomte, Nicolas

    2015-01-01

    High-resolution satellite imagery is a promising tool for providing coarse information about polar species abundance and distribution, but current applications are limited. With polar bears (Ursus maritimus), the technique has only proven effective on landscapes with little topographic relief that are devoid of snow and ice, and time-consuming manual review of imagery is required to identify bears. Here, we evaluated mechanisms to further develop methods for satellite imagery by examining data from Rowley Island, Canada. We attempted to automate and expedite detection via a supervised spectral classification and image differencing to expedite image review. We also assessed what proportion of a region should be sampled to obtain reliable estimates of density and abundance. Although the spectral signature of polar bears differed from nontarget objects, these differences were insufficient to yield useful results via a supervised classification process. Conversely, automated image differencing—or subtracting one image from another—correctly identified nearly 90% of polar bear locations. This technique, however, also yielded false positives, suggesting that manual review will still be required to confirm polar bear locations. On Rowley Island, bear distribution approximated a Poisson distribution across a range of plot sizes, and resampling suggests that sampling >50% of the site facilitates reliable estimation of density (CV …) in certain areas, but large-scale applications remain limited because of the challenges in automation and the limited environments in which the method can be effectively applied. Improvements in resolution may expand opportunities for its future uses.
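
    The image-differencing step can be sketched as a threshold plus connected-component pass: subtract a bear-free reference image from the survey image and return candidate locations for manual review. The threshold, blob size and synthetic scene are illustrative.

    ```python
    import numpy as np
    from scipy import ndimage

    def difference_candidates(img_ref, img_survey, threshold=0.2, min_pixels=2):
        """Threshold the absolute difference of two co-registered images and
        return centroids of connected bright patches as candidate detections."""
        diff = np.abs(img_survey.astype(float) - img_ref.astype(float))
        mask = diff > threshold
        labels, nfound = ndimage.label(mask)
        sizes = ndimage.sum_labels(mask, labels, index=range(1, nfound + 1))
        keep = [i + 1 for i, s in enumerate(sizes) if s >= min_pixels]
        return ndimage.center_of_mass(mask, labels, keep)

    # Synthetic scene: flat sea ice plus two bright "bear-sized" blobs
    rng = np.random.default_rng(6)
    base = rng.normal(0.5, 0.02, size=(100, 100))
    scene = base.copy()
    scene[40:42, 60:62] += 0.4
    scene[70:72, 15:17] += 0.4
    print(difference_candidates(base, scene))   # ~[(40.5, 60.5), (70.5, 15.5)]
    ```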

  16. An Analytical Method for Determining the Load Distribution of Single-Column Multibolt Connection

    Directory of Open Access Journals (Sweden)

    Nirut Konkong

    2017-01-01

    Full Text Available The purpose of this research was to investigate the effect of geometric variables on the bolt load distributions of a cold-formed steel bolt connection. The study was conducted using an experimental test, finite element analysis, and an analytical method. The experimental study was performed using single-lap shear testing of a concentrically loaded bolt connection fabricated from G550 cold-formed steel. Finite element analysis with shell elements was used to model the cold-formed steel plate while solid elements were used to model the bolt fastener for the purpose of studying the structural behavior of the bolt connections. Material nonlinearities, contact problems, and a geometric nonlinearity procedure were used to predict the failure behavior of the bolt connections. The analytical method was generated using the spring model. The bolt-plate interaction stiffness was newly proposed, which was verified by the experiment and the finite element model. It was applied to examine the effect of geometric variables on the single-column multibolt connection. The effects of varying bolt diameter, plate thickness, and the plate thickness ratio (t2/t1) on the bolt load distribution were studied. The results of the parametric study showed that the t2/t1 ratio controlled the efficiency of the bolt load distribution more than the other parameters studied.
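
    A generic spring-network version of the kind of analytical model described above can be sketched as follows: plate segments between bolt rows are axial springs, each bolt is a shear spring linking the two plates. The stiffness values and load path are illustrative, and the paper's newly proposed bolt-plate interaction stiffness is not reproduced.

    ```python
    import numpy as np

    def bolt_loads(n_bolts, k_bolt, k_top, k_bottom, P=1.0):
        """Load P enters the top plate at bolt row 1 and leaves the bottom plate
        at row n; returns the shear load carried by each bolt (they sum to P)."""
        n = n_bolts
        ndof = 2 * n                    # top-plate nodes 0..n-1, bottom n..2n-1
        K = np.zeros((ndof, ndof))
        def add_spring(a, b, k):
            K[a, a] += k; K[b, b] += k; K[a, b] -= k; K[b, a] -= k
        for i in range(n - 1):
            add_spring(i, i + 1, k_top)              # top plate segments
            add_spring(n + i, n + i + 1, k_bottom)   # bottom plate segments
        for i in range(n):
            add_spring(i, n + i, k_bolt)             # bolt shear springs
        f = np.zeros(ndof)
        f[0] = P                                     # applied load, top plate
        fixed = ndof - 1                             # ground far end, bottom plate
        free = [d for d in range(ndof) if d != fixed]
        u = np.zeros(ndof)
        u[free] = np.linalg.solve(K[np.ix_(free, free)], f[free])
        return k_bolt * (u[:n] - u[n:])

    loads = bolt_loads(n_bolts=4, k_bolt=5.0, k_top=10.0, k_bottom=10.0)
    print(np.round(loads, 3), loads.sum())   # end bolts carry the most; total = P
    ```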

  17. Testing methods for using high-resolution satellite imagery to monitor polar bear abundance and distribution

    Science.gov (United States)

    LaRue, Michelle A.; Stapleton, Seth P.; Porter, Claire; Atkinson, Stephen N.; Atwood, Todd C.; Dyck, Markus; Lecomte, Nicolas

    2015-01-01

    High-resolution satellite imagery is a promising tool for providing coarse information about polar species abundance and distribution, but current applications are limited. With polar bears (Ursus maritimus), the technique has only proven effective on landscapes with little topographic relief that are devoid of snow and ice, and time-consuming manual review of imagery is required to identify bears. Here, we evaluated mechanisms to further develop methods for satellite imagery by examining data from Rowley Island, Canada. We attempted to automate and expedite detection via a supervised spectral classification and image differencing to expedite image review. We also assessed what proportion of a region should be sampled to obtain reliable estimates of density and abundance. Although the spectral signature of polar bears differed from nontarget objects, these differences were insufficient to yield useful results via a supervised classification process. Conversely, automated image differencing—or subtracting one image from another—correctly identified nearly 90% of polar bear locations. This technique, however, also yielded false positives, suggesting that manual review will still be required to confirm polar bear locations. On Rowley Island, bear distribution approximated a Poisson distribution across a range of plot sizes, and resampling suggests that sampling >50% of the site facilitates reliable estimation of density (CV …), but large-scale applications remain limited because of the challenges in automation and the limited environments in which the method can be effectively applied. Improvements in resolution may expand opportunities for its future uses.

  18. Study on the method of ranking in group decision making based on ordinal interval preference information

    Institute of Scientific and Technical Information of China (English)

    陈侠; 陈岩

    2011-01-01

    It is a new and important research topic to study the problem of ranking in group decision making based on ordinal interval preference information. In this paper, an analytic method is proposed to solve the ranking problem based on ordinal interval preference information. Firstly, some concepts and properties of ordinal interval preference information are introduced. Then, after introducing the concepts of possibility degree and possibility matrix, the conclusion is obtained that the possibility matrices of all experts are fuzzy reciprocal matrices and that they are weakly consistent. Furthermore, an optimization model of group consensus is constructed to calculate the optimal weight vector, and an analysis method for ranking in group decision making based on ordinal interval preference information is proposed. Finally, a numerical example is given to illustrate the use of the proposed method.
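
    A small numeric sketch of the possibility-matrix construction mentioned above, using a common interval-comparison formula (the paper's exact definition may differ). The check at the end confirms the fuzzy reciprocal property p_ij + p_ji = 1.

    ```python
    import numpy as np

    def possibility(a, b):
        """Possibility degree p(a >= b) for intervals a = [a1, a2], b = [b1, b2]:
        p = max(0, min(1, (a2 - b1) / ((a2 - a1) + (b2 - b1))))."""
        a1, a2 = a
        b1, b2 = b
        denom = (a2 - a1) + (b2 - b1)
        if denom == 0.0:                       # two degenerate (point) intervals
            return 0.5 if a1 == b1 else float(a1 > b1)
        return max(0.0, min(1.0, (a2 - b1) / denom))

    # Ordinal intervals given by one expert for three alternatives (rank 1 = best);
    # "i better than j" means rank_i <= rank_j, i.e. possibility(rank_j >= rank_i)
    ranks = [(1, 2), (2, 4), (3, 4)]
    n = len(ranks)
    Pm = np.array([[possibility(ranks[j], ranks[i]) for j in range(n)]
                   for i in range(n)])
    print(np.round(Pm, 3))
    print("fuzzy reciprocal (p_ij + p_ji = 1):", np.allclose(Pm + Pm.T, 1.0))
    ```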

  19. Multi-Attribute Decision-Making Method with Three-Parameter Interval Grey Number

    Institute of Scientific and Technical Information of China (English)

    朱山丽; 肖美丹; 李晔

    2016-01-01

    A grey target decision-making model based on three-parameter interval grey numbers is proposed for multi-attribute decision-making problems with uncertain decision information. Firstly, a new distance measure for three-parameter interval grey numbers is given, based on the importance of the "center of gravity" point, to determine the positive and negative clouts. The kernel and a ranking method for three-parameter interval grey numbers are defined, and a new comprehensive off-target distance is proposed, which integrates the distances from the different attributes to the positive and negative clouts. Attribute weights are determined by minimizing the comprehensive off-target distance and maximizing the grey entropy. An example is presented to illustrate the usefulness and effectiveness of the proposed method.

  20. Optimization of axial enrichment distribution for BWR fuels using scoping libraries and block coordinate descent method

    Energy Technology Data Exchange (ETDEWEB)

    Tung, Wu-Hsiung, E-mail: wstong@iner.gov.tw; Lee, Tien-Tso; Kuo, Weng-Sheng; Yaur, Shung-Jung

    2017-03-15

    Highlights:
    • An optimization method for axial enrichment distribution in a BWR fuel was developed.
    • The block coordinate descent method is employed to search for the optimal solution.
    • Scoping libraries are used to reduce computational effort.
    • The optimization search space consists of enrichment difference parameters.
    • The capability of the method to find the optimal solution is demonstrated.

    Abstract: An optimization method has been developed to search for the optimal axial enrichment distribution in a fuel assembly for a boiling water reactor core. The optimization method features: (1) employing the block coordinate descent method to find the optimal solution in the space of enrichment difference parameters, (2) using scoping libraries to reduce the amount of CASMO-4 calculation, and (3) integrating a core critical constraint into the objective function that is used to quantify the quality of an axial enrichment design. The objective function consists of the weighted sum of core parameters such as shutdown margin and critical power ratio. The core parameters are evaluated by using SIMULATE-3, and the cross section data required for the SIMULATE-3 calculation are generated by using CASMO-4 and scoping libraries. The application of the method to a 4-segment fuel design (with the highest allowable segment enrichment relaxed to 5%) demonstrated that the method can obtain an axial enrichment design with improved thermal limit ratios and objective function value while satisfying the core design constraints and core critical requirement through the use of an objective function. The use of scoping libraries effectively reduced the number of CASMO-4 calculations, from 85 to 24, in the 4-segment optimization case. An exhaustive search was performed to examine the capability of the method in finding the optimal solution for a 4-segment fuel design. The results show that the method found a solution very close to the optimum obtained by the exhaustive search. The number of
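
    Block coordinate descent itself is easy to sketch: cycle through the parameter blocks, minimizing the objective over one block at a time with the others fixed. Here a cheap analytic function stands in for a SIMULATE-3 evaluation, and the scalar blocks stand in for enrichment difference parameters.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def block_coordinate_descent(f, x0, blocks, sweeps=20):
        """Cyclic block coordinate descent over scalar blocks: each sweep
        minimizes f along one coordinate at a time, holding the rest fixed."""
        x = np.array(x0, dtype=float)
        for _ in range(sweeps):
            for i in blocks:
                def f_i(t, i=i):
                    xt = x.copy()
                    xt[i] = t
                    return f(xt)
                x[i] = minimize_scalar(f_i).x
        return x

    # Illustrative coupled objective standing in for the core-quality function
    f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2 + 0.5 * x[0] * x[1]
    x_opt = block_coordinate_descent(f, x0=[0.0, 0.0], blocks=[0, 1])
    print(np.round(x_opt, 4))   # converges to the stationary point (1.6, -2.4)
    ```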