WorldWideScience

Sample records for maximum standard deviation

  1. The Distance Standard Deviation

    OpenAIRE

    Edelmann, Dominic; Richards, Donald; Vogel, Daniel

    2017-01-01

    The distance standard deviation, which arises in distance correlation analysis of multivariate data, is studied as a measure of spread. New representations for the distance standard deviation are obtained in terms of Gini's mean difference and in terms of the moments of spacings of order statistics. Inequalities for the distance variance are derived, proving that the distance standard deviation is bounded above by the classical standard deviation and by Gini's mean difference. Further, it is ...

  2. The reinterpretation of standard deviation concept

    OpenAIRE

    Ye, Xiaoming

    2017-01-01

Existing mathematical theory interprets the concept of standard deviation as a degree of dispersion. Therefore, in measurement theory, both the uncertainty concept and the precision concept, which are expressed with the standard deviation or multiples of the standard deviation, are also defined as the dispersion of the measurement result, so that the concept logic is tangled. Through comparative analysis of the standard deviation concept and re-interpreting the measurement error evaluation principle, this paper points o...

  3. The Standard Deviation of Launch Vehicle Environments

    Science.gov (United States)

    Yunis, Isam

    2005-01-01

Statistical analysis is used in the development of the launch vehicle environments of acoustics, vibrations, and shock. The standard deviation of these environments is critical to accurate statistical extrema. However, often very little data exists to define the standard deviation, and it is better to use a typical standard deviation than one derived from a few measurements. This paper uses Space Shuttle and expendable launch vehicle flight data to define a typical standard deviation for acoustics and vibrations. The results suggest that 3 dB is a conservative and reasonable standard deviation for both the source environment and the payload environment.

  4. Visualizing the Sample Standard Deviation

    Science.gov (United States)

    Sarkar, Jyotirmoy; Rashid, Mamunur

    2017-01-01

    The standard deviation (SD) of a random sample is defined as the square-root of the sample variance, which is the "mean" squared deviation of the sample observations from the sample mean. Here, we interpret the sample SD as the square-root of twice the mean square of all pairwise half deviations between any two sample observations. This…
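
The pairwise identity described in this record is easy to check numerically. The sketch below (function names are mine) compares the classic sample SD with the square root of twice the mean square of all pairwise half-deviations:

```python
import itertools
import math

def sd_classic(xs):
    """Sample SD: square root of the sample variance (n - 1 denominator)."""
    n = len(xs)
    mean = sum(xs) / n
    return math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))

def sd_pairwise(xs):
    """Sample SD as the square root of twice the mean square of all
    pairwise half-deviations (x_i - x_j)/2 over ordered pairs i != j."""
    n = len(xs)
    mean_sq = sum(((x - y) / 2) ** 2
                  for x, y in itertools.permutations(xs, 2)) / (n * (n - 1))
    return math.sqrt(2 * mean_sq)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(sd_classic(data), sd_pairwise(data))  # identical, ≈ 2.1381
```

Both functions agree to floating-point precision, confirming the identity quoted in the abstract.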

  5. Comparing Standard Deviation Effects across Contexts

    Science.gov (United States)

    Ost, Ben; Gangopadhyaya, Anuj; Schiman, Jeffrey C.

    2017-01-01

    Studies using tests scores as the dependent variable often report point estimates in student standard deviation units. We note that a standard deviation is not a standard unit of measurement since the distribution of test scores can vary across contexts. As such, researchers should be cautious when interpreting differences in the numerical size of…

  6. Beat the Deviations in Estimating Maximum Power of Thermoelectric Modules

    DEFF Research Database (Denmark)

    Gao, Junling; Chen, Min

    2013-01-01

Under a certain temperature difference, the maximum power of a thermoelectric module can be estimated by the open-circuit voltage and the short-circuit current. In practical measurement, there exist two switch modes, either from open to short or from short to open, but the two modes can give different estimations on the maximum power. Using TEG-127-2.8-3.5-250 and TEG-127-1.4-1.6-250 as two examples, the difference is about 10%, leading to some deviations with the temperature change. This paper analyzes such differences by means of a nonlinear numerical model of thermoelectricity, and finds out that the main cause is the influence of various currents on the produced electromotive potential. A simple and effective calibration method is proposed to minimize the deviations in specifying the maximum power. Experimental results validate the method with improved estimation accuracy.

  7. FINDING STANDARD DEVIATION OF A FUZZY NUMBER

    OpenAIRE

    Fokrul Alom Mazarbhuiya

    2017-01-01

Two probability laws can be the root of a possibility law. Considering two probability densities over two disjoint ranges, we can define the fuzzy standard deviation of a fuzzy variable with the help of the standard deviations of two random variables in two disjoint spaces.

  8. 7 CFR 400.204 - Notification of deviation from standards.

    Science.gov (United States)

    2010-01-01

Contract-Standards for Approval, § 400.204 Notification of deviation from standards: A Contractor shall advise the Corporation immediately if the Contractor deviates from the requirements of these standards...

  9. A Note on Standard Deviation and Standard Error

    Science.gov (United States)

    Hassani, Hossein; Ghodsi, Mansoureh; Howell, Gareth

    2010-01-01

    Many students confuse the standard deviation and standard error of the mean and are unsure which, if either, to use in presenting data. In this article, we endeavour to address these questions and cover some related ambiguities about these quantities.

  10. Exploring Students' Conceptions of the Standard Deviation

    Science.gov (United States)

    delMas, Robert; Liu, Yan

    2005-01-01

    This study investigated introductory statistics students' conceptual understanding of the standard deviation. A computer environment was designed to promote students' ability to coordinate characteristics of variation of values about the mean with the size of the standard deviation as a measure of that variation. Twelve students participated in an…

  11. [Roaming through methodology. XXXVIII. Common misconceptions involving standard deviation and standard error]

    NARCIS (Netherlands)

    Mokkink, H.G.A.

    2002-01-01

    Standard deviation and standard error have a clear mutual relationship, but at the same time they differ strongly in the type of information they supply. This can lead to confusion and misunderstandings. Standard deviation describes the variability in a sample of measures of a variable, for instance

  12. Hearing protector performance and standard deviation.

    Science.gov (United States)

    Williams, W; Dillon, H

    2005-01-01

    The attenuation performance of a hearing protector is used to estimate the protected exposure level of the user. The aim is to reduce the exposed level to an acceptable value. Users should expect the attenuation to fall within a reasonable range of values around a norm. However, an analysis of extensive test data indicates that there is a negative relationship between attenuation performance and the standard deviation. This result is deduced using a variation in the method of calculating a single number rating of attenuation that is more amenable to drawing statistical inferences. As performance is typically specified as a function of the mean attenuation minus one or two standard deviations from the mean to ensure that greater than 50% of the wearer population are well protected, the implication of increasing standard deviation with decreasing attenuation found in this study means that a significant number of users are, in fact, experiencing over-protection. These users may be disinclined to use their hearing protectors because of an increased feeling of acoustic isolation. This problem is exacerbated in areas with lower noise levels.
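
The rating convention mentioned here, crediting a protector with its mean attenuation minus one or two standard deviations, can be illustrated with hypothetical numbers (the figures below are not from the study):

```python
def protected_level(ambient_db, mean_att_db, sd_att_db, k):
    """Protected exposure level when the protector is credited with its
    mean attenuation minus k standard deviations."""
    return ambient_db - (mean_att_db - k * sd_att_db)

# Hypothetical figures (not from the study): 100 dB ambient noise,
# 30 dB mean attenuation, 5 dB SD across wearers.
for k in (0, 1, 2):
    print(f"mean - {k} SD: protected level {protected_level(100, 30, 5, k)} dB")
```

Crediting mean minus two SDs assumes the least attenuation, so the design is conservative; the study's point is that larger SDs tend to accompany lower attenuation, leaving many wearers over-protected.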

  13. A Visual Model for the Variance and Standard Deviation

    Science.gov (United States)

    Orris, J. B.

    2011-01-01

    This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays show how the standard deviation is the size of the average square.

  14. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range

    OpenAIRE

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-01-01

    Background In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. Methods In this paper, we propose to improve the existing literature in ...

  15. 7 CFR 400.174 - Notification of deviation from financial standards.

    Science.gov (United States)

    2010-01-01

Reinsurance Agreement-Standards for Approval; Regulations for the 1997 and Subsequent Reinsurance Years, § 400.174 Notification of deviation from financial standards: An insurer must immediately advise FCIC if it deviates from...

  16. SAMPLE STANDARD DEVIATION(s) CHART UNDER THE ASSUMPTION OF MODERATENESS AND ITS PERFORMANCE ANALYSIS

    OpenAIRE

    Kalpesh S. Tailor

    2017-01-01

The moderate distribution proposed by V.D. Naik and J.M. Desai is a sound alternative to the normal distribution, which has mean and mean deviation as pivotal parameters and which has properties similar to those of the normal distribution. Mean deviation (δ) is a very good alternative to standard deviation (σ), as mean deviation is considered the most intuitively and rationally defined measure of dispersion. This fact can be very useful in the field of quality control to construct the control limits of the c...

  17. Standard Deviation for Small Samples

    Science.gov (United States)

    Joarder, Anwar H.; Latif, Raja M.

    2006-01-01

    Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…

  18. Standard deviation of wind direction as a function of time; three hours to five hundred seventy-six hours

    International Nuclear Information System (INIS)

    Culkowski, W.M.

    1976-01-01

The standard deviation of horizontal wind direction σ_θ increases with time of averaging up to a maximum value of 104°. The average standard deviations of horizontal wind direction over periods of 3, 5, 10, 16, 24, 36, 48, 72, 144, 288, and 576 hours were calculated from wind data obtained from a 100 meter tower in the Oak Ridge area. For periods up to 100 hours, σ_θ varies as t^0.28; after 100 hours, σ_θ varies as 6.5 ln t.
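
The two fitted growth laws can be sketched as follows; the proportionality constant is my assumption (chosen so the two branches meet at 100 hours), since the abstract does not give one:

```python
import math

def sigma_theta(t_hours):
    """Standard deviation of horizontal wind direction (degrees) as a
    function of averaging time: proportional to t^0.28 up to ~100 h,
    then 6.5*ln(t). The constant c is an assumption chosen so the two
    branches meet at t = 100 h; the abstract gives no constant."""
    c = 6.5 * math.log(100.0) / 100.0 ** 0.28
    if t_hours <= 100.0:
        return c * t_hours ** 0.28
    return 6.5 * math.log(t_hours)

for t in (3, 24, 100, 576):
    print(t, round(sigma_theta(t), 1))
```

Even at 576 hours the fit stays well below the 104° ceiling, which the abstract presents as the asymptotic maximum.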

  19. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range.

    Science.gov (United States)

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-12-19

In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spreadsheet including all formulas) that serves as comprehensive guidance for performing meta-analysis in different...
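
For the min/median/max and quartile scenarios, the estimators commonly attributed to Wan et al. (2014) can be sketched as below; the exact constants should be checked against the paper before relying on them:

```python
from statistics import NormalDist

def mean_sd_from_min_med_max(a, m, b, n):
    """Estimate the sample mean and SD from the minimum a, median m,
    maximum b, and sample size n (the min/median/max scenario)."""
    mean = (a + 2 * m + b) / 4
    xi = 2 * NormalDist().inv_cdf((n - 0.375) / (n + 0.25))
    return mean, (b - a) / xi

def mean_sd_from_quartiles(q1, m, q3, n):
    """Estimate from the first quartile, median, and third quartile."""
    mean = (q1 + m + q3) / 3
    eta = 2 * NormalDist().inv_cdf((0.75 * n - 0.125) / (n + 0.25))
    return mean, (q3 - q1) / eta

print(mean_sd_from_min_med_max(10, 50, 90, 25))  # mean 50.0, SD ≈ 20.4
```

Both estimators divide the reported range (or IQR) by the expected range of the corresponding normal order statistics, which is how the sample size enters the SD estimate.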

  20. Simulation-based estimation of mean and standard deviation for meta-analysis via Approximate Bayesian Computation (ABC).

    Science.gov (United States)

    Kwon, Deukwoo; Reis, Isildinha M

    2015-08-12

    When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC performs well. However, the Wan et al. method is best for estimating standard deviation under normal distribution. In the estimation of the mean, our ABC method is best regardless of assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics such as the posterior mean and 95 % credible interval when Bayesian analysis has been employed.
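
A toy version of the ABC idea, accepting candidate parameters whose simulated summary statistics fall closest to the observed ones, can be sketched as follows; this is an illustration under assumed priors and summaries, not the authors' implementation:

```python
import random
import statistics

def abc_mean_sd(obs, n, n_sims=10000, keep=100, seed=1):
    """Toy ABC: draw candidate (mu, sigma) from wide uniform priors,
    simulate a normal sample of size n, keep the candidates whose
    simulated (median, min, max) lie closest to the observed summary
    obs, and average the kept candidates."""
    rng = random.Random(seed)
    scored = []
    for _ in range(n_sims):
        mu = rng.uniform(0.0, 100.0)
        sigma = rng.uniform(0.1, 50.0)
        sample = [rng.gauss(mu, sigma) for _ in range(n)]
        sim = (statistics.median(sample), min(sample), max(sample))
        dist = sum((s - o) ** 2 for s, o in zip(sim, obs))
        scored.append((dist, mu, sigma))
    best = sorted(scored)[:keep]
    return (statistics.fmean(m for _, m, _ in best),
            statistics.fmean(s for _, _, s in best))

# Hypothetical trial reporting median 50, min 20, max 80 with n = 50
mu_hat, sd_hat = abc_mean_sd((50.0, 20.0, 80.0), n=50)
print(mu_hat, sd_hat)
```

The appeal of this approach is that any reported summary (quartiles, credible intervals) can be plugged into the distance without new closed-form formulas, at the cost of simulation time.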

  1. A robust standard deviation control chart

    NARCIS (Netherlands)

    Schoonhoven, M.; Does, R.J.M.M.

    2012-01-01

    This article studies the robustness of Phase I estimators for the standard deviation control chart. A Phase I estimator should be efficient in the absence of contaminations and resistant to disturbances. Most of the robust estimators proposed in the literature are robust against either diffuse

  2. Statistics as Unbiased Estimators: Exploring the Teaching of Standard Deviation

    Science.gov (United States)

    Wasserman, Nicholas H.; Casey, Stephanie; Champion, Joe; Huey, Maryann

    2017-01-01

    This manuscript presents findings from a study about the knowledge for and planned teaching of standard deviation. We investigate how understanding variance as an unbiased (inferential) estimator--not just a descriptive statistic for the variation (spread) in data--is related to teachers' instruction regarding standard deviation, particularly…

  3. A better norm-referenced grading using the standard deviation criterion.

    Science.gov (United States)

    Chan, Wing-shing

    2014-01-01

The commonly used norm-referenced grading assigns grades to rank-ordered students in fixed percentiles. It has the disadvantage of ignoring the actual distance of scores among students. A simple norm-referenced grading via standard deviation is suggested for routine educational grading. The number of standard deviations of a student's score from the class mean was used as the common yardstick to measure achievement level. The cumulative probability of a normal distribution was referenced to help decide the number of students included within a grade. Results from the foremost 12 students in a medical examination were used to illustrate this grading method. Grading by standard deviation seemed to produce better cutoffs, allocating grades to students more according to their differential achievements, and had less chance of creating arbitrary cutoffs between two similarly scored students than grading by fixed percentile. Grading by standard deviation has more advantages and is more flexible than grading by fixed percentile for norm-referenced grading.
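
A minimal sketch of grading by standard deviation, with hypothetical z-score cutoffs (the article's actual cutoffs and grade bands may differ):

```python
from statistics import NormalDist, fmean, stdev

def grade_by_sd(scores, cutoffs=(1.0, 0.0, -1.0)):
    """Grade each score by how many SDs it lies from the class mean.
    The z cutoffs are hypothetical: A above +1 SD, B down to the mean,
    C down to -1 SD, D below that."""
    mu, sd = fmean(scores), stdev(scores)
    def letter(s):
        z = (s - mu) / sd
        for g, cut in zip("ABC", cutoffs):
            if z >= cut:
                return g
        return "D"
    return [(s, letter(s)) for s in scores]

print(grade_by_sd([90, 70, 50, 30]))  # grades A, B, C, D in order

# Normal cumulative probabilities say roughly 16/34/34/16% of a class
# falls in those four bands:
nd = NormalDist()
print([round(nd.cdf(b) - nd.cdf(a), 2)
       for a, b in [(1, 9), (0, 1), (-1, 0), (-9, -1)]])
```

Unlike fixed percentiles, two students separated by a tiny score difference land in the same band unless they actually straddle a z cutoff.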

  4. Does standard deviation matter? Using "standard deviation" to quantify security of multistage testing.

    Science.gov (United States)

    Wang, Chun; Zheng, Yi; Chang, Hua-Hua

    2014-01-01

With the advent of web-based technology, online testing is becoming a mainstream mode in large-scale educational assessments. Most online tests are administered continuously in a testing window, which may pose test security problems because examinees who take the test earlier may share information with those who take the test later. Researchers have proposed various statistical indices to assess the test security, and one of the most often used indices is the average test-overlap rate, which was further generalized to the item pooling index (Chang & Zhang, 2002, 2003). These indices, however, are all defined as means (that is, the expected proportion of common items among examinees) and they were originally proposed for computerized adaptive testing (CAT). Recently, multistage testing (MST) has become a popular alternative to CAT. The unique features of MST make it important to report not only the mean, but also the standard deviation (SD) of test overlap rate, as we advocate in this paper. The standard deviation of test overlap rate adds important information to the test security profile, because for the same mean, a large SD reflects that certain groups of examinees share more common items than other groups. In this study, we analytically derived the lower bounds of the SD under MST, with the results under CAT as a benchmark. It is shown that when the mean overlap rate is the same between MST and CAT, the SD of test overlap tends to be larger in MST. A simulation study was conducted to provide empirical evidence. We also compared the security of MST under the single-pool versus the multiple-pool designs; both analytical and simulation studies show that the non-overlapping multiple-pool design will slightly increase the security risk.

  5. Precision analysis for standard deviation measurements of immobile single fluorescent molecule images.

    Science.gov (United States)

    DeSantis, Michael C; DeCenzo, Shawn H; Li, Je-Luen; Wang, Y M

    2010-03-29

Standard deviation measurements of intensity profiles of stationary single fluorescent molecules are useful for studying axial localization, molecular orientation, and a fluorescence imaging system's spatial resolution. Here we report on the analysis of the precision of standard deviation measurements of intensity profiles of single fluorescent molecules imaged using an EMCCD camera. We have developed an analytical expression for the standard deviation measurement error of a single image which is a function of the total number of detected photons, the background photon noise, and the camera pixel size. The theoretical results agree well with the experimental, simulation, and numerical integration results. Using this expression, we show that single-molecule standard deviation measurements offer nanometer precision for a large range of experimental parameters.

  6. 1 CFR 21.14 - Deviations from standard organization of the Code of Federal Regulations.

    Science.gov (United States)

    2010-01-01

General Numbering, § 21.14 Deviations from standard organization of the Code of Federal Regulations: (a) Any deviation from standard Code of Federal Regulations designations must be approved in advance...

  7. Design and analysis of control charts for standard deviation with estimated parameters

    NARCIS (Netherlands)

    Schoonhoven, M.; Riaz, M.; Does, R.J.M.M.

    2011-01-01

    This paper concerns the design and analysis of the standard deviation control chart with estimated limits. We consider an extensive range of statistics to estimate the in-control standard deviation (Phase I) and design the control chart for real-time process monitoring (Phase II) by determining the

  8. Standard deviation index for stimulated Brillouin scattering suppression with different homogeneities.

    Science.gov (United States)

    Ran, Yang; Su, Rongtao; Ma, Pengfei; Wang, Xiaolin; Zhou, Pu; Si, Lei

    2016-05-10

We present a new quantitative index of standard deviation to measure the homogeneity of spectral lines in a fiber amplifier system so as to find the relation between the stimulated Brillouin scattering (SBS) threshold and the homogeneity of the corresponding spectral lines. A theoretical model is built and a simulation framework has been established to estimate the SBS threshold when input spectra with different homogeneities are set. In our experiment, by setting the phase modulation voltage to a constant value and the modulation frequency to different values, spectral lines with different homogeneities can be obtained. The experimental results show that the SBS threshold increases as the standard deviation of the modulated spectrum decreases, which is in good agreement with the theoretical results. When the phase modulation voltage is confined to 10 V and the modulation frequency is set to 80 MHz, the standard deviation of the modulated spectrum equals 0.0051, which is the lowest value in our experiment. Thus, at this time, the highest SBS threshold has been achieved. This standard deviation can be a good quantitative index in evaluating the power scaling potential in a fiber amplifier system, and also serves as a design guideline for better SBS suppression.

  9. Wavelength selection method with standard deviation: application to pulse oximetry.

    Science.gov (United States)

    Vazquez-Jaccaud, Camille; Paez, Gonzalo; Strojnik, Marija

    2011-07-01

Near-infrared spectroscopy provides useful biological information after the radiation has penetrated through the tissue, within the therapeutic window. One of the significant shortcomings of the current applications of spectroscopic techniques to a live subject is that the subject may be uncooperative and the sample undergoes significant temporal variations due to the subject's health status, which, from a radiometric point of view, introduce measurement noise. We describe a novel wavelength selection method for monitoring, based on a standard deviation map, that allows low-noise sensitivity. It may be used with spectral transillumination, transmission, or reflection signals, including those corrupted by noise and unavoidable temporal effects. We apply it to the selection of two wavelengths for the case of pulse oximetry. Using spectroscopic data, we generate a map of standard deviation that we propose as a figure of merit in the presence of the noise introduced by the living subject. Even in the presence of diverse sources of noise, we identify four wavelength domains whose standard deviation is minimally sensitive to temporal noise, and two wavelength domains with low sensitivity to temporal noise.
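
The standard-deviation-map idea can be sketched on synthetic data; the spectral model, noise structure, and grid below are illustrative assumptions, not the authors' algorithm:

```python
import random
import statistics

random.seed(0)
wavelengths = list(range(600, 1001, 10))  # nm, illustrative grid

def noisy_signal(wl):
    """Synthetic spectrum: smooth baseline plus temporal noise that is
    deliberately large in the 700-750 nm band."""
    base = 1.0 + (wl - 800) ** 2 / 1e5
    noise_scale = 0.2 if 700 <= wl <= 750 else 0.004
    return base + noise_scale * random.gauss(0.0, 1.0)

# Standard deviation map: temporal SD at each wavelength over 200 frames
sd_map = {wl: statistics.pstdev([noisy_signal(wl) for _ in range(200)])
          for wl in wavelengths}

# Select the two wavelengths least sensitive to temporal noise
stable = sorted(wavelengths, key=sd_map.get)[:2]
print("two most stable wavelengths:", stable)
```

The selection correctly avoids the noisy 700-750 nm band, mirroring the paper's use of the map as a figure of merit for picking the two oximetry wavelengths.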

  10. Computation of standard deviations in eigenvalue calculations

    International Nuclear Information System (INIS)

    Gelbard, E.M.; Prael, R.

    1990-01-01

In Brissenden and Garlick (1985), the authors propose a modified Monte Carlo method for eigenvalue calculations, designed to decrease particle transport biases in the flux and eigenvalue estimates, and in corresponding estimates of standard deviations. Apparently a very similar method has been used by Soviet Monte Carlo specialists. The proposed method is based on the generation of "superhistories", chains of histories run in sequence without intervening renormalization of the fission source. This method appears to have some disadvantages, discussed elsewhere. Earlier numerical experiments suggest that biases in fluxes and eigenvalues are negligibly small, even for very small numbers of histories per generation. Now more recent experiments, run on the CRAY-XMP, tend to confirm these earlier conclusions. The new experiments, discussed in this paper, involve the solution of one-group 1D diffusion theory eigenvalue problems, in difference form, via Monte Carlo. Experiments covered a range of dominance ratios from ∼0.75 to ∼0.985. In all cases flux and eigenvalue biases were substantially smaller than one standard deviation. The conclusion that, in practice, the eigenvalue bias is negligible has strong theoretical support. (author)

  11. The standard deviation method: data analysis by classical means and by neural networks

    International Nuclear Information System (INIS)

    Bugmann, G.; Stockar, U. von; Lister, J.B.

    1989-08-01

    The Standard Deviation Method is a method for determining particle size which can be used, for instance, to determine air-bubble sizes in a fermentation bio-reactor. The transmission coefficient of an ultrasound beam through a gassy liquid is measured repetitively. Due to the displacements and random positions of the bubbles, the measurements show a scatter whose standard deviation is dependent on the bubble-size. The precise relationship between the measured standard deviation, the transmission and the particle size has been obtained from a set of computer-simulated data. (author) 9 figs., 5 refs

  12. Linear Estimation of Standard Deviation of Logistic Distribution ...

    African Journals Online (AJOL)

The paper presents a theoretical method based on order statistics and a FORTRAN program for computing the variance and relative efficiencies of the standard deviation of the logistic population with respect to the Cramer-Rao lower variance bound and the best linear unbiased estimators (BLUEs) when the mean is ...

  13. Standard deviation and standard error of the mean.

    Science.gov (United States)

    Lee, Dong Kyu; In, Junyong; Lee, Sangseok

    2015-06-01

In most clinical and experimental studies, the standard deviation (SD) and the estimated standard error of the mean (SEM) are used to present the characteristics of sample data and to explain statistical analysis results. However, some authors occasionally muddle the distinctive usage between the SD and SEM in medical literature. Because the processes of calculating the SD and SEM involve different statistical inferences, each of them has its own meaning. SD is the dispersion of data in a normal distribution. In other words, SD indicates how accurately the mean represents sample data. However, the meaning of SEM includes statistical inference based on the sampling distribution. SEM is the SD of the theoretical distribution of the sample means (the sampling distribution). While either the SD or the SEM can be applied to describe data and statistical results, one should be aware of reasonable methods with which to use SD and SEM. We aim to elucidate the distinctions between SD and SEM and to provide proper usage guidelines for both, which summarize data and describe statistical results.
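
The SD/SEM distinction reduces to one line of arithmetic, SEM = SD/√n. A minimal sketch with made-up data:

```python
import math
from statistics import fmean, stdev

data = [4.1, 5.2, 6.0, 5.5, 4.8, 5.9, 5.1, 4.6]  # made-up sample
n = len(data)
sd = stdev(data)             # spread of the individual observations
sem = sd / math.sqrt(n)      # SD of the sampling distribution of the mean
print(f"mean={fmean(data):.2f}  SD={sd:.3f}  SEM={sem:.3f}")
# mean=5.15  SD=0.648  SEM=0.229
```

Because SEM shrinks with √n while SD does not, reporting SEM for a large sample can make data look far less variable than it is, which is the confusion the article addresses.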

  14. Standard Test Method for Measuring Optical Angular Deviation of Transparent Parts

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1996-01-01

    1.1 This test method covers measuring the angular deviation of a light ray imposed by transparent parts such as aircraft windscreens and canopies. The results are uncontaminated by the effects of lateral displacement, and the procedure may be performed in a relatively short optical path length. This is not intended as a referee standard. It is one convenient method for measuring angular deviations through transparent windows. 1.2 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  15. Semiparametric Bernstein–von Mises for the error standard deviation

    NARCIS (Netherlands)

    Jonge, de R.; Zanten, van J.H.

    2013-01-01

    We study Bayes procedures for nonparametric regression problems with Gaussian errors, giving conditions under which a Bernstein–von Mises result holds for the marginal posterior distribution of the error standard deviation. We apply our general results to show that a single Bayes procedure using a

  16. Semiparametric Bernstein-von Mises for the error standard deviation

    NARCIS (Netherlands)

    de Jonge, R.; van Zanten, H.

    2013-01-01

    We study Bayes procedures for nonparametric regression problems with Gaussian errors, giving conditions under which a Bernstein-von Mises result holds for the marginal posterior distribution of the error standard deviation. We apply our general results to show that a single Bayes procedure using a

  17. Robust Confidence Interval for a Ratio of Standard Deviations

    Science.gov (United States)

    Bonett, Douglas G.

    2006-01-01

    Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…

  18. Improvement of least-squares collocation error estimates using local GOCE Tzz signal standard deviations

    DEFF Research Database (Denmark)

    Tscherning, Carl Christian

    2015-01-01

outside the data area. On the other hand, a comparison of predicted quantities with observed values shows that the error also varies depending on the local data standard deviation. This quantity may be (and has been) estimated using the GOCE second order vertical derivative, Tzz, in the area covered by the satellite. The ratio between the nearly constant standard deviations of a predicted quantity (e.g. in a 25° × 25° area) and the standard deviations of Tzz in smaller cells (e.g., 1° × 1°) has been used as a scale factor in order to obtain more realistic error estimates. This procedure has been applied...

  19. 7 CFR 1724.52 - Permitted deviations from RUS construction standards.

    Science.gov (United States)

    2010-01-01

RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE; ELECTRIC ENGINEERING, ARCHITECTURAL SERVICES AND DESIGN POLICIES AND PROCEDURES; Electric System Design, § 1724.52 Permitted deviations from RUS construction standards: ... neutrals to provide the required electric service to a consumer, the RUS standard transformer secondary...

  20. Variation in the standard deviation of the lure rating distribution: Implications for estimates of recollection probability.

    Science.gov (United States)

    Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin

    2017-10-01

    In word recognition, semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters, the Dual Process Signal Detection (DPSD) model could only accommodate the results by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection. With an additional parameter for the probability of false (lure) recollection, the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately, the size of a given lure confidence/evidence standard deviation relative to other possible lure confidence/evidence standard deviations is often unspecified by context. Hence the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.

  1. Standard deviation of scatterometer measurements from space.

    Science.gov (United States)

    Fischer, R. E.

    1972-01-01

    The standard deviation of scatterometer measurements has been derived under assumptions applicable to spaceborne scatterometers. Numerical results are presented which show that, with sufficiently long integration times, input signal-to-noise ratios below unity do not cause excessive degradation of measurement accuracy. The effects on measurement accuracy due to varying integration times and changing the ratio of signal bandwidth to IF filter-noise bandwidth are also plotted. The results of the analysis may resolve a controversy by showing that in fact statistically useful scatterometer measurements can be made from space using a 20-W transmitter, such as will be used on the S-193 experiment for Skylab-A.

  2. Size-dependent standard deviation for growth rates: empirical results and theoretical modeling.

    Science.gov (United States)

    Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H Eugene; Grosse, I

    2008-05-01

    We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation sigma(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation sigma(R) on the average value of the wages with a scaling exponent beta approximately 0.14 close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation sigma(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of sigma(R) on the average payroll with a scaling exponent beta approximately -0.08 . Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable.
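    The scaling relation sigma(R) ~ S^(-beta) described above is typically estimated as minus the slope of a log-log regression of the growth-rate standard deviation on average size. A sketch on synthetic data (the sizes, noise level, and prefactor are illustrative, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "economic units": average size S spanning six decades, with
# growth-rate SD following sigma(R) = c * S**(-beta), beta = 0.14,
# perturbed by multiplicative noise.
beta_true = 0.14
sizes = 10 ** rng.uniform(2, 8, 500)
sigmas = 0.3 * sizes ** (-beta_true) * np.exp(rng.normal(0.0, 0.05, 500))

# Estimate beta as minus the slope of log(sigma) vs log(size).
slope, intercept = np.polyfit(np.log(sizes), np.log(sigmas), 1)
beta_hat = -slope
```

    On real data the same regression is usually done on binned averages to tame the heavy tails the abstract describes.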


  4. 75 FR 383 - Canned Pacific Salmon Deviating From Identity Standard; Extension of Temporary Permit for Market...

    Science.gov (United States)

    2010-01-05

    ...] Canned Pacific Salmon Deviating From Identity Standard; Extension of Temporary Permit for Market Testing... test products designated as ``skinless and boneless sockeye salmon'' that deviate from the U.S. standard of identity for canned Pacific salmon. The extension will allow the permit holder to continue to...

  5. Closed-form confidence intervals for functions of the normal mean and standard deviation.

    Science.gov (United States)

    Donner, Allan; Zou, G Y

    2012-08-01

    Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.
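    The idea of recovering variance estimates from separately computed confidence limits can be sketched for one such function, the normal percentile mu + z_p * sigma, using a MOVER-style combination. This is an illustrative reading of the approach, not the authors' exact formulas; the function name and data are hypothetical:

```python
import numpy as np
from scipy import stats

def percentile_ci(x, p=0.95, alpha=0.05):
    """MOVER-style CI for the normal percentile mu + z_p * sigma (sketch)."""
    x = np.asarray(x, float)
    n, m, s = len(x), x.mean(), x.std(ddof=1)
    z = stats.norm.ppf(p)  # assumes an upper percentile, z > 0
    # Separate exact intervals: t-based for mu, chi-square-based for sigma.
    tq = stats.t.ppf(1 - alpha / 2, n - 1)
    l1, u1 = m - tq * s / np.sqrt(n), m + tq * s / np.sqrt(n)
    l2 = s * np.sqrt((n - 1) / stats.chi2.ppf(1 - alpha / 2, n - 1))
    u2 = s * np.sqrt((n - 1) / stats.chi2.ppf(alpha / 2, n - 1))
    # Recover variance estimates from the limits and combine in closed form.
    est = m + z * s
    L = est - np.sqrt((m - l1) ** 2 + (z * (s - l2)) ** 2)
    U = est + np.sqrt((u1 - m) ** 2 + (z * (u2 - s)) ** 2)
    return L, est, U

rng = np.random.default_rng(0)
L, est, U = percentile_ci(rng.normal(10.0, 2.0, 200))
```

    The same recovery trick extends to the other functions mentioned (limits of agreement, coefficient of variation), each time yielding a closed-form interval.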

  6. Semiparametric Bernstein–von Mises for the error standard deviation

    OpenAIRE

    Jonge, de, R.; Zanten, van, J.H.

    2013-01-01

    We study Bayes procedures for nonparametric regression problems with Gaussian errors, giving conditions under which a Bernstein–von Mises result holds for the marginal posterior distribution of the error standard deviation. We apply our general results to show that a single Bayes procedure using a hierarchical spline-based prior on the regression function and an independent prior on the error variance, can simultaneously achieve adaptive, rate-optimal estimation of a smooth, multivariate regr...

  7. Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates

    International Nuclear Information System (INIS)

    Laurence, T.; Chromy, B.

    2010-01-01

    Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the use of the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice - which requires a large number of events. It has been well-known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper providing extensive characterization of these biases in exponential fitting is given. 
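    As a sketch of the underlying objective (not the authors' modified Levenberg-Marquardt update), the Poisson MLE for an event-counting histogram can be obtained by minimizing the Poisson negative log-likelihood directly. The decay model and parameter values below are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Simulated fluorescence-decay histogram: bin counts ~ Poisson(A * exp(-t/tau)).
t = np.arange(0.0, 10.0, 0.1)
counts = rng.poisson(200.0 * np.exp(-t / 2.0))

def poisson_nll(params):
    """Poisson negative log-likelihood, up to the constant log(k!) term."""
    A, tau = params
    if A <= 0 or tau <= 0:
        return np.inf  # keep the model rates positive
    mu = A * np.exp(-t / tau)
    return float(np.sum(mu - counts * np.log(np.maximum(mu, 1e-300))))

res = minimize(poisson_nll, x0=[100.0, 1.0], method="Nelder-Mead")
A_hat, tau_hat = res.x
```

    A least-squares fit of the same histogram would be biased in the low-count tail bins, which is exactly the regime the abstract argues the MLE handles correctly.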

  8. Dynamics of the standard deviations of three wind velocity components from the data of acoustic sounding

    Science.gov (United States)

    Krasnenko, N. P.; Kapegesheva, O. F.; Shamanaeva, L. G.

    2017-11-01

    Spatiotemporal dynamics of the standard deviations of three wind velocity components measured with a mini-sodar in the atmospheric boundary layer is analyzed. During the day on September 16 and at night on September 12, the standard deviations of the x- and y-components ranged from 0.5 to 4 m/s, and that of the z-component from 0.2 to 1.2 m/s. An analysis of the vertical profiles of the standard deviations of the three wind velocity components for a 6-day measurement period has shown that the increase of σx and σy with altitude is well described by a power-law dependence with an exponent changing from 0.22 to 1.3 depending on the time of day, while σz depends linearly on altitude. The approximation constants have been found and their errors estimated. The established physical regularities and the approximation constants describe the spatiotemporal dynamics of the standard deviations of the three wind velocity components in the atmospheric boundary layer and can be recommended for application in ABL models.

  9. Deviating from the standard: effects on labor continuity and career patterns

    NARCIS (Netherlands)

    Roman, A.A.

    2006-01-01

    Deviating from a standard career path is increasingly becoming an option for individuals to combine paid labor with other important life domains. These career detours emerge in diverse labor forms such as part-time jobs, temporary working hour reductions, and labor force time-outs, used to alleviate

  10. What to use to express the variability of data: Standard deviation or standard error of mean?

    OpenAIRE

    Barde, Mohini P.; Barde, Prajakt J.

    2012-01-01

    Statistics plays a vital role in biomedical research. It helps present data precisely and draw meaningful conclusions. While presenting data, one should be aware of using adequate statistical measures. In biomedical journals, the Standard Error of the Mean (SEM) and Standard Deviation (SD) are used interchangeably to express variability, though they measure different parameters. SEM quantifies uncertainty in the estimate of the mean whereas SD indicates dispersion of the data from the mean. As reade...

  11. Phase-I monitoring of standard deviations in multistage linear profiles

    Science.gov (United States)

    Kalaei, Mahdiyeh; Soleimani, Paria; Niaki, Seyed Taghi Akhavan; Atashgar, Karim

    2018-03-01

    In most modern manufacturing systems, products are often the output of some multistage processes. In these processes, the stages are dependent on each other, where the output quality of each stage depends also on the output quality of the previous stages. This property is called the cascade property. Although there are many studies in multistage process monitoring, there are fewer works on profile monitoring in multistage processes, especially on the variability monitoring of a multistage profile in Phase I, for which no research is found in the literature. In this paper, a new methodology is proposed to monitor the standard deviation involved in a simple linear profile designed in Phase I to monitor multistage processes with the cascade property. To this aim, an autoregressive correlation model between the stages is considered first. Then, the effect of the cascade property on the performances of three types of T² control charts in Phase I with shifts in standard deviation is investigated. As we show that this effect is significant, a U statistic is next used to remove the cascade effect, based on which the investigated control charts are modified. Simulation studies reveal good performances of the modified control charts.

  12. Is standard deviation of daily PM2.5 concentration associated with respiratory mortality?

    Science.gov (United States)

    Lin, Hualiang; Ma, Wenjun; Qiu, Hong; Vaughn, Michael G; Nelson, Erik J; Qian, Zhengmin; Tian, Linwei

    2016-09-01

    Studies on health effects of air pollution often use daily mean concentration to estimate exposure while ignoring daily variations. This study examined the health effects of daily variation of PM2.5. We calculated daily mean and standard deviations of PM2.5 in Hong Kong between 1998 and 2011. We used a generalized additive model to estimate the association between respiratory mortality and daily mean and variation of PM2.5, as well as their interaction. We controlled for potential confounders, including temporal trends, day of the week, meteorological factors, and gaseous air pollutants. Both daily mean and standard deviation of PM2.5 were significantly associated with mortalities from overall respiratory diseases and pneumonia. Each 10 μg/m(3) increment in daily mean concentration at lag 2 day was associated with a 0.61% (95% CI: 0.19%, 1.03%) increase in overall respiratory mortality and a 0.67% (95% CI: 0.14%, 1.21%) increase in pneumonia mortality. And a 10 μg/m(3) increase in standard deviation at lag 1 day corresponded to a 1.40% (95% CI: 0.35%, 2.46%) increase in overall respiratory mortality, and a 1.80% (95% CI: 0.46%, 3.16%) increase in pneumonia mortality. We also observed a positive but non-significant synergistic interaction between daily mean and variation on respiratory mortality and pneumonia mortality. However, we did not find any significant association with mortality from chronic obstructive pulmonary diseases. Our study suggests that, besides mean concentration, the standard deviation of PM2.5 might be one potential predictor of respiratory mortality in Hong Kong, and should be considered when assessing the respiratory effects of PM2.5. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Segmentation Using Symmetry Deviation

    DEFF Research Database (Denmark)

    Hollensen, Christian; Højgaard, L.; Specht, L.

    2011-01-01

    of the CT-scans into a single atlas. Afterwards the standard deviation of anatomical symmetry for the 20 normal patients was evaluated using non-rigid registration and registered onto the atlas to create an atlas for normal anatomical symmetry deviation. The same non-rigid registration was used on the 10...... hypopharyngeal cancer patients to find anatomical symmetry and evaluate it against the standard deviation of the normal patients to locate pathologic volumes. Combining the information with an absolute PET threshold of 3 Standard uptake value (SUV) a volume was automatically delineated. The overlap of automated....... The standard deviation of the anatomical symmetry, seen in figure for one patient along CT and PET, was extracted for normal patients and compared with the deviation from cancer patients giving a new way of determining cancer pathology location. Using the novel method an overlap concordance index...

  14. Myocardial infarct sizing by late gadolinium-enhanced MRI: Comparison of manual, full-width at half-maximum, and n-standard deviation methods.

    Science.gov (United States)

    Zhang, Lin; Huttin, Olivier; Marie, Pierre-Yves; Felblinger, Jacques; Beaumont, Marine; Chillou, Christian DE; Girerd, Nicolas; Mandry, Damien

    2016-11-01

    To compare three widely used methods for myocardial infarct (MI) sizing on late gadolinium-enhanced (LGE) magnetic resonance (MR) images: manual delineation and two semiautomated techniques (full-width at half-maximum [FWHM] and n-standard deviation [SD]). 3T phase-sensitive inversion-recovery (PSIR) LGE images of 114 patients after an acute MI (2-4 days and 6 months) were analyzed by two independent observers to determine both total and core infarct sizes (TIS/CIS). Manual delineation served as the reference for determination of optimal thresholds for semiautomated methods after thresholding at multiple values. Reproducibility and accuracy were expressed as overall bias ± 95% limits of agreement. Mean infarct sizes by manual methods were 39.0%/24.4% for the acute MI group (TIS/CIS) and 29.7%/17.3% for the chronic MI group. The optimal thresholds (ie, providing the closest mean value to the manual method) were FWHM30% and 3SD for the TIS measurement and FWHM45% and 6SD for the CIS measurement (paired t-test; all P > 0.05). The best reproducibility was obtained using FWHM. For TIS measurement in the acute MI group, intra-/interobserver agreements, from Bland-Altman analysis, with FWHM30%, 3SD, and manual were -0.02 ± 7.74%/-0.74 ± 5.52%, 0.31 ± 9.78%/2.96 ± 16.62% and -2.12 ± 8.86%/0.18 ± 16.12, respectively; in the chronic MI group, the corresponding values were 0.23 ± 3.5%/-2.28 ± 15.06, -0.29 ± 10.46%/3.12 ± 13.06% and 1.68 ± 6.52%/-2.88 ± 9.62%, respectively. A similar trend for reproducibility was obtained for CIS measurement. However, semiautomated methods produced inconsistent results (variabilities of 24-46%) compared to manual delineation. The FWHM technique was the most reproducible method for infarct sizing both in acute and chronic MI. However, both FWHM and n-SD methods showed limited accuracy compared to manual delineation. J. Magn. Reson. Imaging 2016;44:1206-1217. © 2016 International Society for Magnetic Resonance in Medicine.
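    The two semiautomated thresholds compared above can be illustrated on a toy intensity sample. All intensity values, region sizes, and the choice n = 5 below are hypothetical, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy LGE intensity values: remote (healthy) myocardium ~ N(100, 10),
# infarct ~ N(400, 30); 500 and 100 voxels respectively (illustrative).
remote = rng.normal(100.0, 10.0, 500)
infarct = rng.normal(400.0, 30.0, 100)
myocardium = np.concatenate([remote, infarct])

# n-SD method: hyperenhanced if intensity > mean(remote) + n * SD(remote).
n = 5
thr_nsd = remote.mean() + n * remote.std(ddof=1)

# FWHM method: hyperenhanced if intensity > 50% of the maximum intensity.
thr_fwhm = 0.5 * myocardium.max()

size_nsd = np.mean(myocardium > thr_nsd)    # infarct fraction, n-SD method
size_fwhm = np.mean(myocardium > thr_fwhm)  # infarct fraction, FWHM method
```

    With well-separated intensity distributions the two thresholds agree; the study's point is that on real images, with partial-volume and noise effects, the chosen threshold (FWHM30%, FWHM45%, 3SD, 6SD) materially changes the measured infarct size.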

  15. Determination of the relations governing the evolution of the standard deviations of the distribution of pollution

    International Nuclear Information System (INIS)

    Crabol, B.

    1985-04-01

    An original concept concerning the different behaviour of high-frequency (small-scale) and low-frequency (large-scale) atmospheric turbulence relative to the mean wind speed is introduced. Through a dimensional analysis based on Taylor's formulation, it is shown that the governing parameter of the atmospheric dispersion standard deviations is the travel distance near the source and the travel time far from the source. Using hypotheses on the energy spectrum in the atmosphere, a numerical application has made it possible to quantify the evolution of the horizontal standard deviation for mean wind speeds between 0.2 and 10 m/s. The areas of validity of each parameter (travel distance or travel time) are clearly shown: the first is confined to the near field and shrinks as the wind speed decreases. For t > 5000 s, the dependence of the horizontal standard deviation on the wind speed, when expressed as a function of travel time, becomes insignificant; the horizontal standard deviation is then a function of travel time only. Results are compared with experimental data obtained in the atmosphere. The similar evolution of the calculated and experimental curves confirms the validity of the hypotheses and input data of the calculation. This study can be applied to radioactive effluent transport in the atmosphere.

  16. Standard Practice for Optical Distortion and Deviation of Transparent Parts Using the Double-Exposure Method

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2009-01-01

    1.1 This photographic practice determines the optical distortion and deviation of a line of sight through a simple transparent part, such as a commercial aircraft windshield or a cabin window. This practice applies to essentially flat or nearly flat parts and may not be suitable for highly curved materials. 1.2 Test Method F 801 addresses optical deviation (angluar deviation) and Test Method F 2156 addresses optical distortion using grid line slope. These test methods should be used instead of Practice F 733 whenever practical. 1.3 This standard does not purport to address the safety concerns associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  17. Quantitative evaluation of standard deviations of group velocity dispersion in optical fibre using parametric amplification

    DEFF Research Database (Denmark)

    Rishøj, Lars Søgaard; Svane, Ask Sebastian; Lund-Hansen, Toke

    2014-01-01

    A numerical model for parametric amplifiers, which includes stochastic variations of the group velocity dispersion (GVD), is presented. The impact on the gain is investigated, both with respect to the magnitude of the variations and with respect to the effect caused by changing the wavelength of the pump. It is demonstrated that the described model is able to predict the experimental results and thereby provide a quantitative evaluation of the standard deviation of the GVD. For the investigated fibre, a standard deviation of 0.01 ps/(nm km) was found.

  18. More recent robust methods for the estimation of mean and standard deviation of data

    International Nuclear Information System (INIS)

    Kanisch, G.

    2003-01-01

    Outliers in a data set result in biased values of mean and standard deviation. One way to improve the estimation of a mean is to apply tests to identify outliers and to exclude them from the calculations. Tests according to Grubbs or to Dixon, which are frequently used in practice, especially within laboratory intercomparisons, are not very efficient in identifying outliers. For more than ten years now, so-called robust methods have been used more and more; these determine mean and standard deviation by iteration, down-weighting values far from the mean and thereby diminishing the impact of outliers. In 1989 the Analytical Methods Committee of the Royal Society of Chemistry (UK) published such a robust method. Since 1993 the US Environmental Protection Agency has published a more efficient and quite versatile method, in which mean and standard deviation are calculated by iteration and application of a special weight function for down-weighting outlier candidates. In 2000, W. Cofino et al. published a very efficient robust method which works quite differently from the others. It applies methods taken from the basics of quantum mechanics, such as "wave functions" associated with each laboratory mean value, and matrix algebra (solving eigenvalue problems). In contrast to the others, this method includes the individual measurement uncertainties. (orig.)
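    A minimal sketch of the iterate-and-down-weight idea, modelled on the winsorizing iteration of ISO 5725-5 Algorithm A rather than on the specific AMC, EPA, or Cofino algorithms the abstract discusses (the factor 1.134 is the consistency correction for the cutoff c = 1.5; data values are illustrative):

```python
import numpy as np

def robust_mean_sd(x, c=1.5, n_iter=50):
    """Robust mean/SD by iterative winsorization (Algorithm A style sketch)."""
    x = np.asarray(x, float)
    m = np.median(x)
    s = 1.4826 * np.median(np.abs(x - m))  # MAD-based starting scale
    for _ in range(n_iter):
        z = np.clip(x, m - c * s, m + c * s)  # pull outliers to the fences
        m = z.mean()
        s = 1.134 * z.std(ddof=1)             # consistency factor for c = 1.5
    return m, s

rng = np.random.default_rng(4)
data = np.concatenate([rng.normal(10.0, 1.0, 100), [50.0, 60.0]])
m, s = robust_mean_sd(data)
```

    On this sample the plain mean is dragged toward the two gross outliers, while the robust estimates stay near the bulk of the data without any explicit outlier rejection test.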

  19. Distribution of Standard deviation of an observable among superposed states

    OpenAIRE

    Yu, Chang-shui; Shao, Ting-ting; Li, Dong-mo

    2016-01-01

    The standard deviation (SD) quantifies the spread of the observed values on a measurement of an observable. In this paper, we study the distribution of SD among the different components of a superposition state. It is found that the SD of an observable on a superposition state can be well bounded by the SDs of the superposed states. We also show that the bounds also serve as good bounds on coherence of a superposition state. As a further generalization, we give an alternative definition of in...

  20. Estimating maize water stress by standard deviation of canopy temperature in thermal imagery

    Science.gov (United States)

    A new crop water stress index using standard deviation of canopy temperature as an input was developed to monitor crop water status. In this study, thermal imagery was taken from maize under various levels of deficit irrigation treatments in different crop growing stages. The Expectation-Maximizatio...

  1. WASP (Write a Scientific Paper) using Excel -5: Quartiles and standard deviation.

    Science.gov (United States)

    Grech, Victor

    2018-03-01

    The almost inevitable descriptive statistics exercise that is undergone once data collection is complete, prior to inferential statistics, requires the acquisition of basic descriptors which may include standard deviation and quartiles. This paper provides pointers as to how to do this in Microsoft Excel™ and explains the relationship between the two. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. An estimator for the standard deviation of a natural frequency. I.

    Science.gov (United States)

    Schiff, A. J.; Bogdanoff, J. L.

    1971-01-01

    A brief review of mean-square approximate systems is given. The case in which the masses are deterministic is considered first in the derivation of an estimator for the upper bound of the standard deviation of a natural frequency. Two examples presented include a two-degree-of-freedom system and a case in which the disorder in the springs is perfectly correlated. For purposes of comparison, a Monte Carlo simulation was done on a digital computer.

  3. The gait standard deviation, a single measure of kinematic variability.

    Science.gov (United States)

    Sangeux, Morgan; Passmore, Elyse; Graham, H Kerr; Tirosh, Oren

    2016-05-01

    Measurement of gait kinematic variability provides relevant clinical information in certain conditions affecting the neuromotor control of movement. In this article, we present a measure of overall gait kinematic variability, GaitSD, based on a combination of waveform standard deviations. The waveform standard deviation is the common numerator in established indices of variability such as Kadaba's coefficient of multiple correlation or Winter's waveform coefficient of variation. Gait data were collected on typically developing children aged 6-17 years. A large number of strides was captured for each child, on average 45 (SD: 11) for kinematics and 19 (SD: 5) for kinetics. We used a bootstrap procedure to determine the precision of GaitSD as a function of the number of strides processed. We compared the within-subject (stride-to-stride) variability with the between-subject variability of the normative pattern. Finally, we investigated the correlation between age and gait kinematic, kinetic and spatio-temporal variability. In typically developing children, the relative precision of GaitSD was 10% as soon as 6 strides were captured. As a comparison, spatio-temporal parameters required 30 strides to reach the same relative precision. The ratio of stride-to-stride to normative pattern variability was smaller in kinematic variables (the smallest for pelvic tilt, 28%) than in kinetic and spatio-temporal variables (the largest for normalised stride length, 95%). GaitSD had a strong, negative correlation with age. We show that gait consistency may stabilise only at, or after, skeletal maturity. Copyright © 2016 Elsevier B.V. All rights reserved.
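    One natural way to combine waveform standard deviations, sketched below, is the root mean square over the gait cycle of the pointwise stride-to-stride standard deviation. The exact combination across kinematic variables in the paper may differ; the waveform model and noise level are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def gait_sd(strides):
    """GaitSD-style summary for one kinematic variable (illustrative sketch):
    RMS over the gait cycle of the stride-to-stride standard deviation."""
    strides = np.asarray(strides, float)        # shape (n_strides, n_points)
    pointwise_sd = strides.std(axis=0, ddof=1)  # SD across strides, per % cycle
    return float(np.sqrt(np.mean(pointwise_sd ** 2)))

# 45 simulated knee-flexion-like waveforms sampled at 0-100% of the gait
# cycle, with 2 degrees of stride-to-stride noise.
cycle = np.linspace(0.0, 2.0 * np.pi, 101)
base = 30.0 + 25.0 * np.sin(cycle)
strides = base + rng.normal(0.0, 2.0, (45, 101))
g = gait_sd(strides)
```

    Because the pointwise SDs are squared before averaging, phases of the cycle with large variability dominate the summary, which is usually the clinically interesting behaviour.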

  4. A standard deviation selection in evolutionary algorithm for grouper fish feed formulation

    Science.gov (United States)

    Cai-Juan, Soong; Ramli, Razamin; Rahman, Rosshairy Abdul

    2016-10-01

    Malaysia is one of the major producer countries for fishery production due to its location in the equatorial environment. Grouper fish is one of the potential markets contributing to the income of the country due to its desirable taste, high demand and high price. However, the supply of grouper fish from the wild catch is still insufficient to meet demand, so there is a need to farm grouper fish. Farming grouper fish requires prior knowledge of the proper nutrients needed, because no exact data are available. Therefore, in this study, primary and secondary data are collected, despite the limited number of related papers, and 30 samples are investigated using standard deviation selection in an evolutionary algorithm. This study would thus unlock frontiers for extensive research on grouper fish feed formulation. Results show that standard deviation selection in an evolutionary algorithm is applicable: feasible, low-fitness solutions can be obtained quickly. These fitness values can further be used to minimize the cost of farming grouper fish.

  5. What to use to express the variability of data: Standard deviation or standard error of mean?

    Science.gov (United States)

    Barde, Mohini P; Barde, Prajakt J

    2012-07-01

    Statistics plays a vital role in biomedical research. It helps present data precisely and draw meaningful conclusions. While presenting data, one should be aware of using adequate statistical measures. In biomedical journals, the Standard Error of the Mean (SEM) and Standard Deviation (SD) are used interchangeably to express variability, though they measure different parameters. SEM quantifies uncertainty in the estimate of the mean, whereas SD indicates dispersion of the data from the mean. As readers are generally interested in knowing the variability within the sample, descriptive data should be summarized with the SD. Use of the SEM should be limited to computing confidence intervals (CI), which measure the precision of the population estimate. Journals can avoid such errors by requiring authors to adhere to their guidelines.
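    The distinction reduces to one line of arithmetic: SEM = SD / sqrt(n), and it is the SEM, not the SD, that enters a confidence interval for the mean. A small stdlib sketch (the data values are illustrative):

```python
import math
import statistics

data = [4.2, 5.1, 3.8, 5.6, 4.9, 4.4, 5.0, 4.7]

sd = statistics.stdev(data)        # dispersion of individual observations
sem = sd / math.sqrt(len(data))    # precision of the estimated mean

# Approximate 95% CI for the mean is built from the SEM, not the SD
# (1.96 is the normal approximation; small samples should use t quantiles).
mean = statistics.mean(data)
ci = (mean - 1.96 * sem, mean + 1.96 * sem)
```

    Because SEM shrinks with sample size while SD does not, reporting SEM as "variability" makes data look misleadingly tight for large n.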

  6. Maximum drawdown and the allocation to real estate

    NARCIS (Netherlands)

    Hamelink, F.; Hoesli, M.

    2004-01-01

    The role of real estate in a mixed-asset portfolio is investigated when the maximum drawdown (hereafter MaxDD), rather than the standard deviation, is used as the measure of risk. In particular, it is analysed whether the discrepancy between the optimal allocation to real estate and the actual
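    Maximum drawdown, the risk measure substituted for the standard deviation above, is the largest peak-to-trough decline of a value series relative to its running peak. A minimal sketch (the portfolio values are illustrative):

```python
def max_drawdown(values):
    """Maximum drawdown: largest peak-to-trough decline, as a fraction of
    the running peak, over a series of portfolio values."""
    peak = values[0]
    mdd = 0.0
    for v in values:
        peak = max(peak, v)                 # track the running peak
        mdd = max(mdd, (peak - v) / peak)   # decline relative to that peak
    return mdd

# Peak 120, trough 80: drawdown (120 - 80) / 120 = 1/3.
mdd = max_drawdown([100.0, 120.0, 90.0, 110.0, 80.0])
```

    Unlike the standard deviation, MaxDD penalizes only downside moves and depends on their ordering, which is why it can shift the optimal allocation.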

  7. A stochastic model for the derivation of economic values and their standard deviations for production and functional traits in dairy cattle

    DEFF Research Database (Denmark)

    Nielsen, Hanne-Marie; Groen, A F; Østergaard, Søren

    2006-01-01

    The objective of this paper was to present a model of a dairy cattle production system for the derivation of economic values and their standard deviations for both production and functional traits under Danish production circumstances. The stochastic model used is dynamic, and simulates production...... was -0.94 €/day per cow-year. Standard deviations of economic values expressing variation in realised profit of a farm before and after a genetic change were computed using a linear Taylor series expansion. Expressed as coefficient of variation, standard deviations of economic values based on 1000...

  8. 14 CFR 21.609 - Approval for deviation.

    Science.gov (United States)

    2010-01-01

    § 21.609 Approval for deviation. (a) Each manufacturer who requests approval to deviate from any performance standard of a TSO shall show that the standards from which a deviation is requested are compensated for by factors or... (14 CFR 21.609, revised as of 2010-01-01)

  9. Distribution of standard deviation of an observable among superposed states

    International Nuclear Information System (INIS)

    Yu, Chang-shui; Shao, Ting-ting; Li, Dong-mo

    2016-01-01

    The standard deviation (SD) quantifies the spread of the observed values on a measurement of an observable. In this paper, we study the distribution of SD among the different components of a superposition state. It is found that the SD of an observable on a superposition state can be well bounded by the SDs of the superposed states. We also show that the bounds also serve as good bounds on coherence of a superposition state. As a further generalization, we give an alternative definition of incompatibility of two observables subject to a given state and show how the incompatibility subject to a superposition state is distributed.
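    The SD of an observable on a pure state, the quantity whose distribution the paper studies, can be sketched numerically (a generic textbook computation, not the paper's bounds):

    ```python
    import numpy as np

    def observable_sd(A, psi):
        """Standard deviation of observable A in (normalized) state psi:
        sqrt(<A^2> - <A>^2)."""
        psi = psi / np.linalg.norm(psi)
        exp_A = np.vdot(psi, A @ psi).real
        exp_A2 = np.vdot(psi, A @ A @ psi).real
        return np.sqrt(max(exp_A2 - exp_A**2, 0.0))

    # Pauli-Z on an eigenstate vs. an equal superposition of its eigenstates
    Z = np.array([[1.0, 0.0], [0.0, -1.0]])
    up = np.array([1.0, 0.0])
    plus = np.array([1.0, 1.0]) / np.sqrt(2)

    print(observable_sd(Z, up))    # ≈ 0: eigenstate, no spread
    print(observable_sd(Z, plus))  # ≈ 1: maximal spread for Z
    ```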

  10. Distribution of standard deviation of an observable among superposed states

    Science.gov (United States)

    Yu, Chang-shui; Shao, Ting-ting; Li, Dong-mo

    2016-10-01

    The standard deviation (SD) quantifies the spread of the observed values on a measurement of an observable. In this paper, we study the distribution of SD among the different components of a superposition state. It is found that the SD of an observable on a superposition state can be well bounded by the SDs of the superposed states. We also show that the bounds also serve as good bounds on coherence of a superposition state. As a further generalization, we give an alternative definition of incompatibility of two observables subject to a given state and show how the incompatibility subject to a superposition state is distributed.

  11. Depth (Standard Deviation) Layer used to identify, delineate and classify moderate-depth benthic habitats around St. John, USVI

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Standard deviation of depth was calculated from the bathymetry surface for each cell using the ArcGIS Spatial Analyst Focal Statistics "STD" parameter. Standard...
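    A minimal sketch of what the Focal Statistics "STD" operation computes, using a plain Python loop instead of ArcGIS (the 3×3 window and sample grid are assumptions for illustration):

    ```python
    import numpy as np

    def focal_std(grid, radius=1):
        """Per-cell standard deviation over a square moving window,
        analogous to ArcGIS Focal Statistics with the STD option
        (edge cells use only the neighbors that fall inside the grid)."""
        grid = np.asarray(grid, dtype=float)
        out = np.empty_like(grid)
        rows, cols = grid.shape
        for i in range(rows):
            for j in range(cols):
                window = grid[max(i - radius, 0):i + radius + 1,
                              max(j - radius, 0):j + radius + 1]
                out[i, j] = window.std()  # population SD (ddof=0)
        return out

    # Hypothetical bathymetry values (depth in meters)
    depth = np.array([[10.0, 10.0, 10.0],
                      [10.0, 20.0, 10.0],
                      [10.0, 10.0, 10.0]])
    print(focal_std(depth))
    ```

    High focal SD marks rugged, structurally complex seafloor, which is why it is useful for delineating benthic habitat.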

  12. Use of Standard Deviations as Predictors in Models Using Large-Scale International Data Sets

    Science.gov (United States)

    Austin, Bruce; French, Brian; Adesope, Olusola; Gotch, Chad

    2017-01-01

    Measures of variability are successfully used in predictive modeling in research areas outside of education. This study examined how standard deviations can be used to address research questions not easily addressed using traditional measures such as group means based on index variables. Student survey data were obtained from the Organisation for…

  13. MAXIMUM-LIKELIHOOD-ESTIMATION OF THE ENTROPY OF AN ATTRACTOR

    NARCIS (Netherlands)

    SCHOUTEN, JC; TAKENS, F; VANDENBLEEK, CM

    In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it is dependent on the entropy and on the number of samples used in the

  14. The two errors of using the within-subject standard deviation (WSD) as the standard error of a reliable change index.

    Science.gov (United States)

    Maassen, Gerard H

    2010-08-01

    In this Journal, Lewis and colleagues introduced a new Reliable Change Index (RCI(WSD)), which incorporated the within-subject standard deviation (WSD) of a repeated measurement design as the standard error. In this note, two opposite errors in using WSD this way are demonstrated. First, being the standard error of measurement of only a single assessment makes WSD too small when practice effects are absent. Then, too many individuals will be designated reliably changed. Second, WSD can grow unlimitedly to the extent that differential practice effects occur. This can even make RCI(WSD) unable to detect any reliable change.

  15. Muon’s (g-2): the obstinate deviation from the Standard Model

    CERN Multimedia

    Antonella Del Rosso

    2011-01-01

    It’s been 50 years since a small group at CERN measured the muon (g-2) for the first time. Several other experiments have followed over the years. The latest measurement at Brookhaven (2004) gave a value that obstinately remains about 3 standard deviations away from the prediction of the Standard Model. Francis Farley, one of the fathers of the (g-2) experiments, argues that a statement such as “everything we observe is accounted for by the Standard Model” is not acceptable.   Francis J. M. Farley. Francis J. M. Farley, Fellow of the Royal Society since 1972 and the 1980 winner of the Hughes Medal "for his ultra-precise measurements of the muon magnetic moment, a severe test of quantum electrodynamics and of the nature of the muon", is among the scientists who still look at the (g-2) anomaly as one of the first proofs of the existence of new physics. “Although it seems to be generally believed that all experiments agree with the Stan...

  16. Differential standard deviation of log-scale intensity based optical coherence tomography angiography.

    Science.gov (United States)

    Shi, Weisong; Gao, Wanrong; Chen, Chaoliang; Yang, Victor X D

    2017-12-01

    In this paper, a differential standard deviation of log-scale intensity (DSDLI) based optical coherence tomography angiography (OCTA) method is presented for calculating microvascular images of human skin. The DSDLI algorithm calculates the variance between difference images of two consecutive log-scale intensity based structural images from the same position along the depth direction to contrast blood flow. The en face microvascular images are then generated by calculating the standard deviation of the differential log-scale intensities within a specific depth range, resulting in improved spatial resolution and SNR in microvascular images compared to speckle variance OCT and the power intensity differential method. The performance of DSDLI was verified by both phantom and in vivo experiments. In the in vivo experiments, a self-adaptive sub-pixel image registration algorithm was performed to remove bulk motion noise, where a 2D Fourier transform was utilized to generate new images with a spatial interval equal to half the distance between two pixels in both the fast-scanning and depth directions. The SNRs of signals of flowing particles were improved by 7.3 dB and 6.8 dB on average in phantom and in vivo experiments, respectively, while the average spatial resolution of images of in vivo blood vessels was increased by 21%. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Maximum likelihood positioning for gamma-ray imaging detectors with depth of interaction measurement

    International Nuclear Information System (INIS)

    Lerche, Ch.W.; Ros, A.; Monzo, J.M.; Aliaga, R.J.; Ferrando, N.; Martinez, J.D.; Herrero, V.; Esteve, R.; Gadea, R.; Colom, R.J.; Toledo, J.; Mateo, F.; Sebastia, A.; Sanchez, F.; Benlloch, J.M.

    2009-01-01

    The center of gravity algorithm leads to strong artifacts for gamma-ray imaging detectors that are based on monolithic scintillation crystals and position sensitive photo-detectors. This is a consequence of using the centroids as position estimates. The fact that charge division circuits can also be used to compute the standard deviation of the scintillation light distribution opens a way out of this drawback. We studied the feasibility of maximum likelihood estimation for computing the true gamma-ray photo-conversion position from the centroids and the standard deviation of the light distribution. The method was evaluated on a test detector that consists of the position sensitive photomultiplier tube H8500 and a monolithic LSO crystal (42mmx42mmx10mm). Spatial resolution was measured for the centroids and the maximum likelihood estimates. The results suggest that the maximum likelihood positioning is feasible and partially removes the strong artifacts of the center of gravity algorithm.

  18. Maximum likelihood positioning for gamma-ray imaging detectors with depth of interaction measurement

    Energy Technology Data Exchange (ETDEWEB)

    Lerche, Ch.W. [Grupo de Sistemas Digitales, ITACA, Universidad Politecnica de Valencia, 46022 Valencia (Spain)], E-mail: lerche@ific.uv.es; Ros, A. [Grupo de Fisica Medica Nuclear, IFIC, Universidad de Valencia-Consejo Superior de Investigaciones Cientificas, 46980 Paterna (Spain); Monzo, J.M.; Aliaga, R.J.; Ferrando, N.; Martinez, J.D.; Herrero, V.; Esteve, R.; Gadea, R.; Colom, R.J.; Toledo, J.; Mateo, F.; Sebastia, A. [Grupo de Sistemas Digitales, ITACA, Universidad Politecnica de Valencia, 46022 Valencia (Spain); Sanchez, F.; Benlloch, J.M. [Grupo de Fisica Medica Nuclear, IFIC, Universidad de Valencia-Consejo Superior de Investigaciones Cientificas, 46980 Paterna (Spain)

    2009-06-01

    The center of gravity algorithm leads to strong artifacts for gamma-ray imaging detectors that are based on monolithic scintillation crystals and position sensitive photo-detectors. This is a consequence of using the centroids as position estimates. The fact that charge division circuits can also be used to compute the standard deviation of the scintillation light distribution opens a way out of this drawback. We studied the feasibility of maximum likelihood estimation for computing the true gamma-ray photo-conversion position from the centroids and the standard deviation of the light distribution. The method was evaluated on a test detector that consists of the position sensitive photomultiplier tube H8500 and a monolithic LSO crystal (42mmx42mmx10mm). Spatial resolution was measured for the centroids and the maximum likelihood estimates. The results suggest that the maximum likelihood positioning is feasible and partially removes the strong artifacts of the center of gravity algorithm.

  19. Multiplicative surrogate standard deviation: a group metric for the glycemic variability of individual hospitalized patients.

    Science.gov (United States)

    Braithwaite, Susan S; Umpierrez, Guillermo E; Chase, J Geoffrey

    2013-09-01

    Group metrics are described to quantify blood glucose (BG) variability of hospitalized patients. The "multiplicative surrogate standard deviation" (MSSD) is the reverse-transformed group mean of the standard deviations (SDs) of the logarithmically transformed BG data set of each patient. The "geometric group mean" (GGM) is the reverse-transformed group mean of the means of the logarithmically transformed BG data set of each patient. Before reverse transformation is performed, the mean of means and mean of SDs each has its own SD, which becomes a multiplicative standard deviation (MSD) after reverse transformation. Statistical predictions and comparisons of parametric or nonparametric tests remain valid after reverse transformation. A subset of a previously published BG data set of 20 critically ill patients from the first 72 h of treatment under the SPRINT protocol was transformed logarithmically. After rank ordering according to the SD of the logarithmically transformed BG data of each patient, the cohort was divided into two equal groups, those having lower or higher variability. For the entire cohort, the GGM was 106 (÷/× 1.07) mg/dl, and MSSD was 1.24 (÷/× 1.07). For the subgroups having lower and higher variability, respectively, the GGM did not differ, 104 (÷/× 1.07) versus 109 (÷/× 1.07) mg/dl, but the MSSD differed, 1.17 (÷/× 1.03) versus 1.31 (÷/× 1.05), p = .00004. By using the MSSD with its MSD, groups can be characterized and compared according to glycemic variability of individual patient members. © 2013 Diabetes Technology Society.
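    A minimal sketch of the two group metrics as defined in the abstract, using hypothetical glucose readings:

    ```python
    import math
    import statistics

    # Hypothetical blood-glucose series (mg/dl) for three patients
    patients = [
        [110, 95, 130, 120, 105],
        [90, 140, 80, 160, 100],
        [100, 102, 98, 101, 99],
    ]

    logs = [[math.log(v) for v in bg] for bg in patients]

    # Geometric group mean: reverse-transformed mean of per-patient log-means
    ggm = math.exp(statistics.mean(statistics.mean(p) for p in logs))

    # Multiplicative surrogate SD: reverse-transformed mean of per-patient log-SDs
    mssd = math.exp(statistics.mean(statistics.stdev(p) for p in logs))

    print(f"GGM  = {ggm:.1f} mg/dl")  # central glucose level for the group
    print(f"MSSD = {mssd:.3f}")       # multiplicative spread factor (>= 1)
    ```

    Working on the log scale makes the spread multiplicative (÷/×), which is why the abstract reports values such as 1.24 (÷/× 1.07).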

  20. Age-independent anti-Müllerian hormone (AMH) standard deviation scores to estimate ovarian function.

    Science.gov (United States)

    Helden, Josef van; Weiskirchen, Ralf

    2017-06-01

    To determine single-year age-specific anti-Müllerian hormone (AMH) standard deviation scores (SDS) for women, associated with normal ovarian function and with different ovarian disorders resulting in sub- or infertility. Determination of single-year median and mean AMH values with standard deviations (SD), and calculation of age-independent cut-off SDS for the discrimination between normal ovarian function and ovarian disorders. Single-year-specific median, mean, and SD values have been evaluated for the Beckman Access AMH immunoassay. While the decrease of both median and mean AMH values is strongly correlated with increasing age, the calculated SDS values have been shown to be age independent, differentiating between normal ovarian function (measured as occurred ovulation with sufficient luteal activity) and either hyperandrogenemic cycle disorders or anovulation associated with high AMH values, or reduced ovarian activity or insufficiency associated with low AMH, respectively. These results will be helpful for the treatment of patients and the evaluation of the different reproductive options. Copyright © 2017 Elsevier B.V. All rights reserved.
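    The SDS itself is an ordinary z-score against age-specific reference values; a sketch with made-up reference numbers (not the assay data from the study):

    ```python
    def sds(value, age_mean, age_sd):
        """Age-specific standard deviation score (z-score):
        how many SDs a measurement lies from the mean for that age."""
        return (value - age_mean) / age_sd

    # Hypothetical single-year reference values:
    # say mean AMH 3.0 ng/ml with SD 1.5 at age 30
    print(sds(1.2, age_mean=3.0, age_sd=1.5))  # -1.2: well below the age mean
    ```

    Because the age trend is absorbed into the reference mean and SD, the resulting score can be compared across ages with a single cut-off.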

  1. Testing of Software Routine to Determine Deviate and Cumulative Probability: ModStandardNormal Version 1.0

    International Nuclear Information System (INIS)

    A.H. Monib

    1999-01-01

    The purpose of this calculation is to document that the software routine ModStandardNormal Version 1.0, which is a Visual Fortran 5.0 module, provides correct results for a normal distribution up to five significant figures (three significant figures at the function tails) for a specified range of input parameters. The software routine may be used for quality affecting work. Two types of output are generated in ModStandardNormal: a deviate, x, given a cumulative probability, p, between 0 and 1; and a cumulative probability, p, given a deviate, x, between -8 and 8. This calculation supports Performance Assessment, under Technical Product Development Plan, TDP-EBS-MD-000006 (Attachment I, DIRS 3) and is written in accordance with the AP-3.12Q Calculations procedure (Attachment I, DIRS 4)
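    The two output types can be reproduced with Python's standard library (a sketch using `statistics.NormalDist`, not the Visual Fortran routine itself):

    ```python
    from statistics import NormalDist

    std_normal = NormalDist(mu=0.0, sigma=1.0)

    # Cumulative probability p for a given deviate x ...
    p = std_normal.cdf(1.96)
    # ... and the deviate x for a given cumulative probability p
    x = std_normal.inv_cdf(0.975)

    print(f"p(x <= 1.96) = {p:.5f}")   # ~0.97500
    print(f"x at p=0.975 = {x:.5f}")   # ~1.95996
    ```

    The two functions are inverses of one another, which is exactly the round-trip property such a verification calculation would check.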

  2. Image contrast enhancement based on a local standard deviation model

    International Nuclear Information System (INIS)

    Chang, Dah-Chung; Wu, Wen-Rong

    1996-01-01

    The adaptive contrast enhancement (ACE) algorithm is a widely used image enhancement method, which needs a contrast gain to adjust high frequency components of an image. In the literature, the gain is usually inversely proportional to the local standard deviation (LSD) or is a constant. But these cause two problems in practical applications, i.e., noise overenhancement and ringing artifact. In this paper a new gain is developed based on Hunt's Gaussian image model to prevent the two defects. The new gain is a nonlinear function of LSD and has the desired characteristic emphasizing the LSD regions in which details are concentrated. We have applied the new ACE algorithm to chest x-ray images and the simulations show the effectiveness of the proposed algorithm

  3. MUSiC - Model-independent search for deviations from Standard Model predictions in CMS

    Science.gov (United States)

    Pieta, Holger

    2010-02-01

    We present an approach for a model-independent search in CMS. By systematically scanning the data for deviations from the Standard Model Monte Carlo expectations, such an analysis can help to understand the detector and tune event generators. By minimizing the theoretical bias, the analysis is furthermore sensitive to a wide range of models for new physics, including the countless models not yet thought of. After sorting the events into classes defined by their particle content (leptons, photons, jets and missing transverse energy), a minimally prejudiced scan is performed on a number of distributions. Advanced statistical methods are used to determine the significance of the deviating regions, rigorously taking systematic uncertainties into account. A number of benchmark scenarios, including common models of new physics and possible detector effects, have been used to gauge the power of such a method.

  4. Final height in survivors of childhood cancer compared with Height Standard Deviation Scores at diagnosis

    NARCIS (Netherlands)

    Knijnenburg, S. L.; Raemaekers, S.; van den Berg, H.; van Dijk, I. W. E. M.; Lieverst, J. A.; van der Pal, H. J.; Jaspers, M. W. M.; Caron, H. N.; Kremer, L. C.; van Santen, H. M.

    2013-01-01

    Our study aimed to evaluate final height in a cohort of Dutch childhood cancer survivors (CCS) and assess possible determinants of final height, including height at diagnosis. We calculated standard deviation scores (SDS) for height at initial cancer diagnosis and height in adulthood in a cohort of

  5. Standard deviation analysis of the mastoid fossa temperature differential reading: a potential model for objective chiropractic assessment.

    Science.gov (United States)

    Hart, John

    2011-03-01

    This study describes a model for statistically analyzing follow-up numeric-based chiropractic spinal assessments for an individual patient based on his or her own baseline. Ten mastoid fossa temperature differential readings (MFTD) obtained from a chiropractic patient were used in the study. The first eight readings served as baseline and were compared to post-adjustment readings. One of the two post-adjustment MFTD readings fell outside two standard deviations of the baseline mean and therefore theoretically represents improvement according to pattern analysis theory. This study showed how standard deviation analysis may be used to identify future outliers for an individual patient based on his or her own baseline data. Copyright © 2011 National University of Health Sciences. Published by Elsevier Inc. All rights reserved.
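    The pattern-analysis rule described here — flagging a follow-up reading that falls outside two standard deviations of the patient's own baseline — can be sketched as follows (hypothetical readings, not the study's data):

    ```python
    import statistics

    # Eight hypothetical baseline MFTD readings and two follow-up readings
    baseline = [0.8, 1.1, 0.9, 1.0, 1.2, 0.9, 1.1, 1.0]
    follow_up = [1.05, 0.35]

    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    low, high = mean - 2 * sd, mean + 2 * sd

    for reading in follow_up:
        outlier = not (low <= reading <= high)
        print(f"{reading}: {'outside' if outlier else 'within'} 2 SD of baseline")
    ```

    Each patient serves as their own reference distribution, so the cut-offs adapt to individual baseline variability.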

  6. Standard deviation of luminance distribution affects lightness and pupillary response.

    Science.gov (United States)

    Kanari, Kei; Kaneko, Hirohiko

    2014-12-01

    We examined whether the standard deviation (SD) of luminance distribution serves as information of illumination. We measured the lightness of a patch presented in the center of a scrambled-dot pattern while manipulating the SD of the luminance distribution. Results showed that lightness decreased as the SD of the surround stimulus increased. We also measured pupil diameter while viewing a similar stimulus. The pupil diameter decreased as the SD of luminance distribution of the stimuli increased. We confirmed that these results were not obtained because of the increase of the highest luminance in the stimulus. Furthermore, results of field measurements revealed a correlation between the SD of luminance distribution and illuminance in natural scenes. These results indicated that the visual system refers to the SD of the luminance distribution in the visual stimulus to estimate the scene illumination.

  7. Quantitative angle-insensitive flow measurement using relative standard deviation OCT.

    Science.gov (United States)

    Zhu, Jiang; Zhang, Buyun; Qi, Li; Wang, Ling; Yang, Qiang; Zhu, Zhuqing; Huo, Tiancheng; Chen, Zhongping

    2017-10-30

    Incorporating different data processing methods, optical coherence tomography (OCT) has the ability for high-resolution angiography and quantitative flow velocity measurements. However, OCT angiography cannot provide quantitative information of flow velocities, and the velocity measurement based on Doppler OCT requires the determination of Doppler angles, which is a challenge in a complex vascular network. In this study, we report on a relative standard deviation OCT (RSD-OCT) method which provides both vascular network mapping and quantitative information for flow velocities within a wide range of Doppler angles. The RSD values are angle-insensitive within a wide range of angles, and a nearly linear relationship was found between the RSD values and the flow velocities. The RSD-OCT measurement in a rat cortex shows that it can quantify the blood flow velocities as well as map the vascular network in vivo .
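    The core quantity is the coefficient of variation of intensity over repeated scans; a sketch with hypothetical voxel intensities (not the paper's OCT pipeline):

    ```python
    import statistics

    def relative_standard_deviation(intensities):
        """RSD (coefficient of variation) of an intensity time series:
        std / mean, which the paper reports as nearly linear in flow velocity
        over a wide range of Doppler angles."""
        return statistics.stdev(intensities) / statistics.mean(intensities)

    # Hypothetical intensity samples at one voxel over repeated B-scans
    static_voxel = [100, 101, 99, 100, 100]
    flow_voxel = [80, 130, 60, 140, 90]

    print(relative_standard_deviation(static_voxel))  # small: little temporal change
    print(relative_standard_deviation(flow_voxel))    # larger: flowing scatterers
    ```

    Normalizing by the mean is what removes the dependence on absolute signal strength, in contrast to plain speckle variance.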

  8. Simple standard problem for the Preisach moving model

    International Nuclear Information System (INIS)

    Morentin, F.J.; Alejos, O.; Francisco, C. de; Munoz, J.M.; Hernandez-Gomez, P.; Torres, C.

    2004-01-01

    The present work proposes a simple magnetic system as a candidate for a standard problem for Preisach-based models. The system consists of a regular square array of magnetic particles fully oriented along the direction of the applied external magnetic field. The behavior of such a system was numerically simulated for different values of the interaction between particles and of the standard deviation of the critical fields of the particles. The characteristic parameters of the Preisach moving model, i.e., the mean value and the standard deviation of the interaction field, were worked out during the simulations. For this system, results reveal that the mean interaction field depends linearly on the system magnetization, as the Preisach moving model predicts. Nevertheless, the standard deviation cannot be considered independent of the magnetization. In fact, the standard deviation shows a maximum at demagnetization and two minima at magnetization saturation. Furthermore, not all demagnetization states are equivalent: the plot of standard deviation vs. magnetization is a multi-valued curve when the system undergoes an AC demagnetization procedure

  9. Analyzing Vegetation Change in an Elephant-Impacted Landscape Using the Moving Standard Deviation Index

    Directory of Open Access Journals (Sweden)

    Timothy J. Fullman

    2014-01-01

    Northern Botswana is influenced by various socio-ecological drivers of landscape change. The African elephant (Loxodonta africana is one of the leading sources of landscape shifts in this region. Developing the ability to assess elephant impacts on savanna vegetation is important to promote effective management strategies. The Moving Standard Deviation Index (MSDI applies a standard deviation calculation to remote sensing imagery to assess degradation of vegetation. Used previously for assessing impacts of livestock on rangelands, we evaluate the ability of the MSDI to detect elephant-modified vegetation along the Chobe riverfront in Botswana, a heavily elephant-impacted landscape. At broad scales, MSDI values are positively related to elephant utilization. At finer scales, using data from 257 sites along the riverfront, MSDI values show a consistent negative relationship with intensity of elephant utilization. We suggest that these differences are due to varying effects of elephants across scales. Elephant utilization of vegetation may increase heterogeneity across the landscape, but decrease it within heavily used patches, resulting in the observed MSDI pattern of divergent trends at different scales. While significant, the low explanatory power of the relationship between the MSDI and elephant utilization suggests the MSDI may have limited use for regional monitoring of elephant impacts.

  10. Refined multiscale fuzzy entropy based on standard deviation for biomedical signal analysis.

    Science.gov (United States)

    Azami, Hamed; Fernández, Alberto; Escudero, Javier

    2017-11-01

    Multiscale entropy (MSE) has been a prevalent algorithm to quantify the complexity of biomedical time series. Recent developments in the field have tried to alleviate the problem of undefined MSE values for short signals. Moreover, there has been a recent interest in using other statistical moments than the mean, i.e., variance, in the coarse-graining step of the MSE. Building on these trends, here we introduce the so-called refined composite multiscale fuzzy entropy based on the standard deviation (RCMFEσ) and mean (RCMFEμ) to quantify the dynamical properties of spread and mean, respectively, over multiple time scales. We demonstrate the dependency of the RCMFEσ and RCMFEμ, in comparison with other multiscale approaches, on several straightforward signal processing concepts using a set of synthetic signals. The results evidenced that the RCMFEσ and RCMFEμ values are more stable and reliable than the classical multiscale entropy ones. We also inspect the ability of using the standard deviation as well as the mean in the coarse-graining process using magnetoencephalograms in Alzheimer's disease and publicly available electroencephalograms recorded from focal and non-focal areas in epilepsy. Our results indicated that when the RCMFEμ cannot distinguish different types of dynamics of a particular time series at some scale factors, the RCMFEσ may do so, and vice versa. The results showed that RCMFEσ-based features lead to higher classification accuracies in comparison with the RCMFEμ-based ones. We also made freely available all the Matlab codes used in this study at http://dx.doi.org/10.7488/ds/1477.

  11. Fidelity deviation in quantum teleportation

    OpenAIRE

    Bang, Jeongho; Ryu, Junghee; Kaszlikowski, Dagomir

    2018-01-01

    We analyze the performance of quantum teleportation in terms of average fidelity and fidelity deviation. The average fidelity is defined as the average value of the fidelities over all possible input states and the fidelity deviation is their standard deviation, which is referred to as a concept of fluctuation or universality. In the analysis, we find the condition to optimize both measures under a noisy quantum channel---we here consider the so-called Werner channel. To characterize our resu...

  12. On the Linear Relation between the Mean and the Standard Deviation of a Response Time Distribution

    Science.gov (United States)

    Wagenmakers, Eric-Jan; Brown, Scott

    2007-01-01

    Although it is generally accepted that the spread of a response time (RT) distribution increases with the mean, the precise nature of this relation remains relatively unexplored. The authors show that in several descriptive RT distributions, the standard deviation increases linearly with the mean. Results from a wide range of tasks from different…

  13. Quantum uncertainty relation based on the mean deviation

    OpenAIRE

    Sharma, Gautam; Mukhopadhyay, Chiranjib; Sazim, Sk; Pati, Arun Kumar

    2018-01-01

    Traditional forms of quantum uncertainty relations are invariably based on the standard deviation. This can be understood in the historical context of simultaneous development of quantum theory and mathematical statistics. Here, we present alternative forms of uncertainty relations, in both state dependent and state independent forms, based on the mean deviation. We illustrate the robustness of this formulation in situations where the standard deviation based uncertainty relation is inapplica...

  14. Limitations of the relative standard deviation of win percentages for measuring competitive balance in sports leagues

    OpenAIRE

    P. Dorian Owen

    2009-01-01

    The relative standard deviation of win percentages, the most widely used measure of within-season competitive balance, has an upper bound which is very sensitive to variation in the numbers of teams and games played. Taking into account this upper bound provides additional insight into comparisons of competitive balance across leagues or over time.
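    For context, this measure is commonly computed as the actual SD of win percentages divided by the idealized SD 0.5/√G of an evenly matched league; a sketch with a hypothetical league (the specific numbers are assumptions):

    ```python
    import math
    import statistics

    def rsd_win_pct(win_pcts, games_per_team):
        """Relative standard deviation of win percentages: the actual SD
        divided by the idealized SD 0.5/sqrt(G) of an evenly matched league."""
        actual = statistics.pstdev(win_pcts)
        idealized = 0.5 / math.sqrt(games_per_team)
        return actual / idealized

    # Hypothetical 4-team league, each team playing 12 games
    win_pcts = [0.750, 0.583, 0.417, 0.250]
    print(rsd_win_pct(win_pcts, games_per_team=12))  # > 1: less balanced than chance
    ```

    Because the idealized denominator depends on G, the measure's attainable maximum shifts with schedule length — the sensitivity the paper highlights.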

  15. SU-E-I-59: Investigation of the Usefulness of a Standard Deviation and Mammary Gland Density as Indexes for Mammogram Classification.

    Science.gov (United States)

    Takarabe, S; Yabuuchi, H; Morishita, J

    2012-06-01

    To investigate the usefulness of the standard deviation of pixel values in the whole mammary gland region and the percentage of the high-density mammary gland region relative to the whole mammary gland region as features for classification of mammograms into four categories based on the ACR BI-RADS breast composition. We used 36 digital mediolateral oblique view mammograms (18 patients) approved by our IRB. These images were classified into the four breast-composition categories by an experienced breast radiologist, and the results of this classification were regarded as the gold standard. First, the whole mammary region in a breast was divided into two regions, a high-density mammary gland region and a low/iso-density mammary gland region, using a threshold value obtained from the pixel values corresponding to the pectoral muscle region. Then the percentage of the high-density mammary gland region relative to the whole mammary gland region was calculated. In addition, as a new method, the standard deviation of pixel values in the whole mammary gland region was calculated as an index of the intermingling of mammary glands and fat. Finally, all mammograms were classified using the combination of the percentage of the high-density region and the standard deviation of each image. The agreement rate between our proposed method and the gold standard was 86% (31/36). This result signifies that our method has the potential to classify mammograms. The combination of the standard deviation of pixel values in the whole mammary gland region and the percentage of the high-density mammary gland region was usable as a feature set to classify mammograms based on the ACR BI-RADS breast composition. © 2012 American Association of Physicists in Medicine.

  16. Tsallis distribution as a standard maximum entropy solution with 'tail' constraint

    International Nuclear Information System (INIS)

    Bercher, J.-F.

    2008-01-01

    We show that Tsallis' distributions can be derived from the standard (Shannon) maximum entropy setting, by incorporating a constraint on the divergence between the distribution and another distribution imagined as its tail. In this setting, we find an underlying entropy which is the Renyi entropy. Furthermore, escort distributions and generalized means appear as a direct consequence of the construction. Finally, the 'maximum entropy tail distribution' is identified as a Generalized Pareto Distribution
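    For reference, the q-exponential form of the Tsallis distribution that such a maximum-entropy construction yields (standard notation, with β a Lagrange multiplier; not reproduced from the paper itself):

    ```latex
    p(x) \propto \bigl[1 - (1-q)\,\beta x\bigr]_{+}^{1/(1-q)} = \exp_q(-\beta x),
    \qquad \lim_{q \to 1} \exp_q(-\beta x) = e^{-\beta x},
    ```

    so the ordinary Shannon maximum-entropy exponential is recovered in the limit q → 1, consistent with the paper's claim that Tsallis' distributions arise from the standard setting with an extra tail constraint.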

  17. Multi-focus image fusion based on area-based standard deviation in dual tree contourlet transform domain

    Science.gov (United States)

    Dong, Min; Dong, Chenghui; Guo, Miao; Wang, Zhe; Mu, Xiaomin

    2018-04-01

    Multiresolution-based methods, such as wavelet and Contourlet, are usually used for image fusion. This work presents a new image fusion framework utilizing area-based standard deviation in the dual-tree Contourlet transform domain. First, the pre-registered source images are decomposed with the dual-tree Contourlet transform to obtain low-pass and high-pass coefficients. Then, the low-pass bands are fused with a weighted average based on area standard deviation rather than the simple "averaging" rule, while the high-pass bands are merged with the "max-absolute" fusion rule. Finally, the modified low-pass and high-pass coefficients are used to reconstruct the final fused image. The major advantage of the proposed fusion method over conventional fusion is the approximate shift invariance and multidirectional selectivity of the dual-tree Contourlet transform. The proposed method is compared with wavelet- and Contourlet-based methods and other state-of-the-art methods on commonly used multi-focus images. Experiments demonstrate that the proposed fusion framework is feasible and effective, performing better in both subjective and objective evaluation.
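    The low-pass fusion rule — a weighted average whose weights come from area-based standard deviation — can be sketched independently of the Contourlet machinery (window size and weighting are assumptions; the actual transform is omitted):

    ```python
    import numpy as np

    def area_std(band, i, j, radius=1):
        """Standard deviation of a (2*radius+1)-square area around (i, j)."""
        window = band[max(i - radius, 0):i + radius + 1,
                      max(j - radius, 0):j + radius + 1]
        return window.std()

    def fuse_lowpass(band_a, band_b, radius=1, eps=1e-12):
        """Weighted average of two low-pass bands, weighting each pixel by the
        local (area-based) standard deviation of its source band — sharper
        (in-focus) regions get larger weights."""
        fused = np.empty_like(band_a, dtype=float)
        rows, cols = band_a.shape
        for i in range(rows):
            for j in range(cols):
                wa = area_std(band_a, i, j, radius)
                wb = area_std(band_b, i, j, radius)
                fused[i, j] = (wa * band_a[i, j] + wb * band_b[i, j]) / (wa + wb + eps)
        return fused

    a = np.array([[10.0, 50.0], [90.0, 30.0]])   # hypothetical low-pass band A
    b = np.array([[40.0, 40.0], [40.0, 40.0]])   # hypothetical low-pass band B
    print(fuse_lowpass(a, b))
    ```

    The high-pass bands would instead take the coefficient with the larger absolute value ("max-absolute"), per the abstract.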

  18. Inverse correlation between the standard deviation of R-R intervals in supine position and the simplified menopausal index in women with climacteric symptoms.

    Science.gov (United States)

    Yanagihara, Nobuyuki; Seki, Meikan; Nakano, Masahiro; Hachisuga, Toru; Goto, Yukio

    2014-06-01

    Disturbance of autonomic nervous activity has been thought to play a role in the climacteric symptoms of postmenopausal women. This study was therefore designed to investigate the relationship between autonomic nervous activity and climacteric symptoms in postmenopausal Japanese women. The autonomic nervous activity of 40 Japanese women with climacteric symptoms and 40 Japanese women without climacteric symptoms was measured by power spectral analysis of heart rate variability using a standard hexagonal radar chart. The scores for climacteric symptoms were determined using the simplified menopausal index. Sympathetic excitability and irritability, as well as the standard deviation of mean R-R intervals in supine position, differed significantly between the two groups. There was a significant inverse correlation between the standard deviation of mean R-R intervals in supine position and the simplified menopausal index score. The lack of control for potential confounding variables was a limitation of this study. In climacteric women, the standard deviation of mean R-R intervals in supine position is negatively correlated with the simplified menopausal index score.

  19. Standard operation procedures for conducting the on-the-road driving test, and measurement of the standard deviation of lateral position (SDLP).

    Science.gov (United States)

    Verster, Joris C; Roth, Thomas

    2011-01-01

    This review discusses the methodology of the standardized on-the-road driving test and the standard operation procedures for conducting the test and analyzing the data. The on-the-road driving test has proven to be a sensitive and reliable method for examining driving ability after administration of central nervous system (CNS) drugs. The test is performed on a public highway in normal traffic. Subjects are instructed to drive with a steady lateral position and constant speed. Its primary parameter, the standard deviation of lateral position (SDLP), i.e., an index of 'weaving', is a stable measure of driving performance with high test-retest reliability. SDLP differences from placebo are dose-dependent and do not depend on the subject's baseline driving skills (placebo SDLP). It is important that standard operation procedures are applied to conduct the test and analyze the data in order to allow comparisons between studies from different sites.
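
    Computationally, SDLP is simply the standard deviation of the lateral-position samples recorded during the drive; a minimal sketch, with invented sample values:

```python
import statistics

def sdlp(lateral_positions_cm):
    """Standard deviation of lateral position (SDLP): the 'weaving'
    index, computed over lateral-position samples from the drive."""
    return statistics.stdev(lateral_positions_cm)

# Invented lateral-position samples (cm from the lane centre).
weaving = sdlp([0.0, 2.0, -2.0, 1.0, -1.0])
```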

  20. Odds per adjusted standard deviation: comparing strengths of associations for risk factors measured on different scales and across diseases and populations.

    Science.gov (United States)

    Hopper, John L

    2015-11-15

    How can the "strengths" of risk factors, in the sense of how well they discriminate cases from controls, be compared when they are measured on different scales such as continuous, binary, and integer? Given that risk estimates take into account other fitted and design-related factors, and that this is how risk gradients are interpreted, the presentation of risk gradients should do the same. Therefore, for each risk factor X0, I propose using appropriate regression techniques to derive from appropriate population data the best-fitting relationship between the mean of X0 and all the other covariates fitted in the model or adjusted for by design (X1, X2, … , Xn). The odds per adjusted standard deviation (OPERA) presents the risk association for X0 in terms of the change in risk per s = standard deviation of X0 adjusted for X1, X2, … , Xn, rather than the unadjusted standard deviation of X0 itself. If the increased risk is relative risk (RR)-fold over A adjusted standard deviations, then OPERA = exp[ln(RR)/A] = RR^(1/A). This unifying approach is illustrated by considering breast cancer and published risk estimates. OPERA estimates are by definition independent and can be used to compare the predictive strengths of risk factors across diseases and populations.
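
    The OPERA formula itself is a one-liner; the numbers below are invented for illustration:

```python
import math

def opera(rr, a):
    """Odds per adjusted standard deviation: if risk increases RR-fold
    over A adjusted standard deviations, OPERA = exp(ln(RR)/A) = RR**(1/A)."""
    return math.exp(math.log(rr) / a)

# A 4-fold relative risk spread over 2 adjusted SDs is 2-fold per SD.
per_sd = opera(4.0, 2.0)
```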

  1. Introducing the Mean Absolute Deviation "Effect" Size

    Science.gov (United States)

    Gorard, Stephen

    2015-01-01

    This paper revisits the use of effect sizes in the analysis of experimental and similar results, and reminds readers of the relative advantages of the mean absolute deviation as a measure of variation, as opposed to the more complex standard deviation. The mean absolute deviation is easier to use and understand, and more tolerant of extreme…
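
    A small worked comparison (data invented) shows how the two measures differ on the same sample:

```python
import statistics

def mean_absolute_deviation(xs):
    """Mean absolute deviation about the arithmetic mean."""
    m = statistics.mean(xs)
    return sum(abs(x - m) for x in xs) / len(xs)

data = [2, 4, 4, 4, 5, 5, 7, 9]       # invented sample, mean = 5
mad = mean_absolute_deviation(data)    # averages the absolute deviations
sd = statistics.pstdev(data)           # squaring weights the extremes more
```

    On this sample the MAD is 1.5 while the (population) SD is 2.0: squaring inflates the influence of the extreme values 2 and 9.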

  2. New reference charts for testicular volume in Dutch children and adolescents allow the calculation of standard deviation scores

    NARCIS (Netherlands)

    Joustra, S.D.; Plas, E.M. van der; Goede, J.; Oostdijk, W.; Delemarre-van de Waal, H.A.; Hack, W.W.M.; Buuren, S. van; Wit, J.M.

    2015-01-01

    Aim Accurate calculations of testicular volume standard deviation (SD) scores are not currently available. We constructed LMS-smoothed age-reference charts for testicular volume in healthy boys. Methods The LMS method was used to calculate reference data, based on testicular volumes from
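
    Although the paper's reference values are not reproduced here, the LMS method converts a measurement to an SD score with Cole's standard formula, z = ((y/M)^L − 1)/(L·S), or z = ln(y/M)/S when L = 0; a sketch with invented reference values:

```python
import math

def lms_sds(y, L, M, S):
    """SD score under the LMS method (Cole's formula): L is the Box-Cox
    power, M the median, S the coefficient of variation at a given age."""
    if L == 0:
        return math.log(y / M) / S
    return ((y / M) ** L - 1) / (L * S)

# Invented reference values for one age group (not the paper's charts):
# a testicular volume of 12 ml against M = 10 ml, L = 1, S = 0.2.
sds = lms_sds(y=12.0, L=1.0, M=10.0, S=0.2)
```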

  3. Development and operation of a quality assurance system for deviations from standard operating procedures in a clinical cell therapy laboratory.

    Science.gov (United States)

    McKenna, D; Kadidlo, D; Sumstad, D; McCullough, J

    2003-01-01

    Errors and accidents, or deviations from standard operating procedures, other policy, or regulations, must be documented and reviewed, with corrective actions taken to assure quality performance in a cellular therapy laboratory. Though expectations and guidance for deviation management exist, a description of the framework for the development of such a program is lacking in the literature. Here we describe our deviation management program, which uses a Microsoft Access database and Microsoft Excel to analyze deviations and notable events, facilitating quality assurance (QA) functions and ongoing process improvement. Data are stored in a Microsoft Access database with an assignment to one of six deviation type categories. Deviation events are evaluated for potential impact on patient and product, and impact scores for each are determined using a 0-4 grading scale. An immediate investigation occurs, and corrective actions are taken to prevent future similar events from taking place. Additionally, deviation data are analyzed collectively on a quarterly basis using Microsoft Excel to identify recurring events or developing trends. Between January 1, 2001 and December 31, 2001, over 2500 products were processed at our laboratory. During this time period, 335 deviations and notable events occurred, affecting 385 products and/or patients. Deviations within the 'technical error' category were most common (37%). Thirteen percent of deviations had a patient and/or product impact score ≥ 2, a score indicating, at a minimum, potentially affected patient outcome or a moderate effect upon product quality. Real-time analysis and quarterly review of deviations using our deviation management program allow for the identification and correction of deviations. Monitoring of deviation trends allows for process improvement and the overall successful functioning of the QA program in the cell therapy laboratory.
Our deviation management program could serve as a model for other laboratories in
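
    The grading-and-review workflow described above can be sketched with plain data structures (the categories and scores below are hypothetical, not taken from the laboratory's database):

```python
from collections import Counter

# Hypothetical deviation records mimicking the 0-4 impact grading.
deviations = [
    {"category": "technical error", "patient_impact": 0, "product_impact": 2},
    {"category": "technical error", "patient_impact": 1, "product_impact": 1},
    {"category": "documentation",   "patient_impact": 3, "product_impact": 0},
]

# Flag events scoring >= 2 on either axis for corrective action.
flagged = [d for d in deviations
           if max(d["patient_impact"], d["product_impact"]) >= 2]

# Quarterly trend review: tally events by category.
by_category = Counter(d["category"] for d in deviations)
```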

  4. U.S. Navy Marine Climatic Atlas of the World. Volume IX. World-Wide Means and Standard Deviations

    Science.gov (United States)

    1981-10-01

    [The abstract for this record is OCR-garbled text from the report documentation page. Recoverable fragments indicate that the atlas gives the best estimates of the population standard deviations, that the means are accurate to within a stated tolerance or 10%, whichever is greater, and that the mean ice limit approximates the minus-two-degree temperature isopleth; a section on wave heights is also referenced.]

  5. Approaching nanometre accuracy in measurement of the profile deviation of a large plane mirror

    International Nuclear Information System (INIS)

    Müller, Andreas; Hofmann, Norbert; Manske, Eberhard

    2012-01-01

    The interferometric nanoprofilometer (INP), developed at the Institute of Process Measurement and Sensor Technology at the Ilmenau University of Technology, is a precision device for measuring the profile deviations of plane mirrors with a profile length of up to 250 mm at the nanometre scale. As its expanded uncertainty of U(l) = 7.8 nm at a confidence level of p = 95% (k = 2) was mainly influenced by the uncertainty of the straightness standard (3.6 nm) and the uncertainty caused by the signal and demodulation errors of the interferometer signals (1.2 nm), these two sources of uncertainty have been the subject of recent analyses and modifications. To measure the profile deviation of the standard mirror we performed a classic three-flat test using the INP. The three-flat test consists of a combination of measurements between three different test flats. The shape deviations of the three flats can then be determined by applying a least-squares solution of the resulting equation system. The results of this three-flat test showed surprisingly good consistency, enabling us to correct this systematic error in profile deviation measurements and reducing the uncertainty component of the standard mirror to 0.4 nm. Another area of research is the signal and demodulation error arising during the interpretation of the interferometer signals. In the case of the interferometric nanoprofilometer, the special challenge is that the maximum path length differences are too small during the scan of the entire profile deviation over perfectly aligned 250 mm long mirrors for proper interpolation and correction since they do not yet cover even half of an interference fringe. By applying a simple method of weighting to the interferometer data the common ellipse fitting could be performed successfully and the demodulation error was greatly reduced. The remaining uncertainty component is less than 0.5 nm. In summary we were successful in greatly reducing two major systematic errors. The

  6. Standard Deviation of Spatially-Averaged Surface Cross Section Data from the TRMM Precipitation Radar

    Science.gov (United States)

    Meneghini, Robert; Jones, Jeffrey A.

    2010-01-01

    We investigate the spatial variability of the normalized radar cross section of the surface (NRCS or Sigma(sup 0)) derived from measurements of the TRMM Precipitation Radar (PR) for the period from 1998 to 2009. The purpose of the study is to understand the way in which the sample standard deviation of the Sigma(sup 0) data changes as a function of spatial resolution, incidence angle, and surface type (land/ocean). The results have implications regarding the accuracy by which the path integrated attenuation from precipitation can be inferred by the use of surface scattering properties.

  7. Caution regarding the choice of standard deviations to guide sample size calculations in clinical trials.

    Science.gov (United States)

    Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie

    2013-08-01

    The method used to determine choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a median effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the
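
    The underestimation effect is easy to reproduce: for small samples, the sample SD falls below the population SD more often than not. A hedged simulation sketch (the population SD of 44 echoes the paper's setup; everything else is invented):

```python
import random
import statistics

random.seed(7)
POP_SD = 44.0  # population SD, echoing the paper's simulations

def sample_sd(n):
    """SD of one simulated sample of size n drawn from N(0, POP_SD**2)."""
    return statistics.stdev([random.gauss(0.0, POP_SD) for _ in range(n)])

# With n = 5, sample SDs scatter widely and fall below the population SD
# more often than not, so a single pilot sample tends to underpower a trial.
sds = [sample_sd(5) for _ in range(1000)]
frac_below_pop = sum(s < POP_SD for s in sds) / len(sds)
```

    Roughly 60% of the simulated pilot samples understate the true SD, which is why the paper recommends an upper confidence limit of the SD rather than the point estimate.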

  8. New g-2 measurement deviates further from Standard Model

    CERN Multimedia

    2004-01-01

    "The latest result from an international collaboration of scientists investigating how the spin of a muon is affected as this type of subatomic particle moves through a magnetic field deviates further than previous measurements from theoretical predictions" (1 page).

  9. Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET

    Energy Technology Data Exchange (ETDEWEB)

    Gopich, Irina V. [Laboratory of Chemical Physics, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, Maryland 20892 (United States)

    2015-01-21

    Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.

  10. Study of Railway Track Irregularity Standard Deviation Time Series Based on Data Mining and Linear Model

    Directory of Open Access Journals (Sweden)

    Jia Chaolong

    2013-01-01

    Full Text Available Good track geometry state ensures the safe operation of railway passenger and freight services. Railway transportation plays an important role in Chinese economic and social development. This paper studies track irregularity standard deviation time series data and focuses on the characteristics and trend changes of track state by applying clustering analysis. A linear recursive model and a linear-ARMA model based on wavelet decomposition and reconstruction are proposed, and both offer support for the safe management of railway transportation.

  11. Large deviations of the maximum eigenvalue in Wishart random matrices

    International Nuclear Information System (INIS)

    Vivo, Pierpaolo; Majumdar, Satya N; Bohigas, Oriol

    2007-01-01

    We analytically compute the probability of large fluctuations to the left of the mean of the largest eigenvalue in the Wishart (Laguerre) ensemble of positive definite random matrices. We show that the probability that all the eigenvalues of an (N × N) Wishart matrix W = X^T X (where X is a rectangular M × N matrix with independent Gaussian entries) are smaller than the mean value ⟨λ⟩ = N/c decreases for large N as ∼exp[−(β/2)N² Φ₋(2√c + 1; c)], where β = 1, 2 corresponds respectively to real and complex Wishart matrices, c = N/M ≤ 1, and Φ₋(x; c) is a rate (sometimes also called large deviation) function that we compute explicitly. The result for the anti-Wishart case (M < N) simply follows by exchanging M and N. We also analytically determine the average spectral density of an ensemble of Wishart matrices whose eigenvalues are constrained to be smaller than a fixed barrier. Numerical simulations are in excellent agreement with the analytical predictions

  12. Large deviations of the maximum eigenvalue in Wishart random matrices

    Energy Technology Data Exchange (ETDEWEB)

    Vivo, Pierpaolo [School of Information Systems, Computing and Mathematics, Brunel University, Uxbridge, Middlesex, UB8 3PH (United Kingdom) ; Majumdar, Satya N [Laboratoire de Physique Theorique et Modeles Statistiques (UMR 8626 du CNRS), Universite Paris-Sud, Batiment 100, 91405 Orsay Cedex (France); Bohigas, Oriol [Laboratoire de Physique Theorique et Modeles Statistiques (UMR 8626 du CNRS), Universite Paris-Sud, Batiment 100, 91405 Orsay Cedex (France)

    2007-04-20

    We analytically compute the probability of large fluctuations to the left of the mean of the largest eigenvalue in the Wishart (Laguerre) ensemble of positive definite random matrices. We show that the probability that all the eigenvalues of an (N × N) Wishart matrix W = X^T X (where X is a rectangular M × N matrix with independent Gaussian entries) are smaller than the mean value ⟨λ⟩ = N/c decreases for large N as ∼exp[−(β/2)N² Φ₋(2√c + 1; c)], where β = 1, 2 corresponds respectively to real and complex Wishart matrices, c = N/M ≤ 1, and Φ₋(x; c) is a rate (sometimes also called large deviation) function that we compute explicitly. The result for the anti-Wishart case (M < N) simply follows by exchanging M and N. We also analytically determine the average spectral density of an ensemble of Wishart matrices whose eigenvalues are constrained to be smaller than a fixed barrier. Numerical simulations are in excellent agreement with the analytical predictions.

  13. Using the standard deviation of a region of interest in an image to estimate camera to emitter distance.

    Science.gov (United States)

    Cano-García, Angel E; Lazaro, José Luis; Infante, Arturo; Fernández, Pedro; Pompa-Chacón, Yamilet; Espinoza, Felipe

    2012-01-01

    In this study, a camera to infrared diode (IRED) distance estimation problem was analyzed. The main objective was to define an alternative way to measure depth, using only the information extracted from the pixel grey levels of the IRED image to estimate the distance between the camera and the IRED. In this paper, the standard deviation of the pixel grey levels in the region of interest containing the IRED image is proposed as an empirical parameter for a model that estimates camera to emitter distance. This model includes the camera exposure time, the IRED radiant intensity, and the distance between the camera and the IRED. An expression for the standard deviation model related to these magnitudes was derived and calibrated using images taken under different conditions. From this analysis, we determined the optimum parameters to ensure the best accuracy provided by this alternative. Once the model had been calibrated, a differential method to estimate the distance between the camera and the IRED was defined and applied, assuming the camera was aligned with the IRED. The results indicate that this method is a useful alternative for determining depth information.
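
    The empirical parameter itself is straightforward to compute; a minimal sketch with invented grey-level values (the full model, which also involves exposure time and radiant intensity, is not reproduced here):

```python
import statistics

def roi_grey_level_std(roi_pixels):
    """Standard deviation of the pixel grey levels in the region of
    interest containing the IRED image (the model's empirical parameter)."""
    return statistics.pstdev(roi_pixels)

# Invented 3x3 ROIs (grey levels 0-255), flattened to lists: a bright,
# structured ROI versus a dim, nearly uniform one.
s_bright = roi_grey_level_std([200, 220, 210, 215, 230, 205, 210, 220, 215])
s_dim = roi_grey_level_std([50, 52, 51, 50, 53, 51, 52, 50, 51])
```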

  14. Reducing the standard deviation in multiple-assay experiments where the variation matters but the absolute value does not.

    Science.gov (United States)

    Echenique-Robba, Pablo; Nelo-Bazán, María Alejandra; Carrodeguas, José A

    2013-01-01

    When the value of a quantity x for a number of systems (cells, molecules, people, chunks of metal, DNA vectors, and so on) is measured, and the aim is to replicate the whole set again in different trials or assays, scientists often obtain quite different measurements despite their efforts at a near-identical design. As a consequence, some systems' averages present standard deviations that are too large to render statistically significant results. This work presents a novel correction method of very low mathematical and numerical complexity that can reduce the standard deviation of such results and increase their statistical significance. Two conditions must be met: the inter-system variations of x matter while its absolute value does not, and a similar tendency in the values of x must be present in the different assays (in other words, the results corresponding to different assays must present a high linear correlation). We demonstrate the improvements this method offers with a cell biology experiment, but it can be applied to any problem that conforms to the described structure and requirements, in any quantitative scientific field that deals with data subject to uncertainty.
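
    The paper's exact correction is not reproduced here, but its core idea, removing the per-assay offset so that only the inter-system variation survives, can be sketched as mean-centering each assay before pooling:

```python
import statistics

def center_assays(assays):
    """Subtract each assay's own mean so only the inter-system variation
    (the pattern across systems) remains; absolute levels are discarded."""
    return [[x - statistics.mean(assay) for x in assay] for assay in assays]

# Two assays with the same system-to-system pattern but different offsets.
raw = [[1.0, 2.0, 3.0], [11.0, 12.0, 13.0]]
centered = center_assays(raw)

# Per-system spread across assays collapses once the offsets are removed.
sd_raw = statistics.stdev([raw[0][0], raw[1][0]])
sd_corrected = statistics.stdev([centered[0][0], centered[1][0]])
```

    This only helps under the stated conditions: the assays must be highly linearly correlated, and the absolute value of x must not matter.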

  15. Using the Standard Deviation of a Region of Interest in an Image to Estimate Camera to Emitter Distance

    Directory of Open Access Journals (Sweden)

    Felipe Espinoza

    2012-05-01

    Full Text Available In this study, a camera to infrared diode (IRED distance estimation problem was analyzed. The main objective was to define an alternative to measures depth only using the information extracted from pixel grey levels of the IRED image to estimate the distance between the camera and the IRED. In this paper, the standard deviation of the pixel grey level in the region of interest containing the IRED image is proposed as an empirical parameter to define a model for estimating camera to emitter distance. This model includes the camera exposure time, IRED radiant intensity and the distance between the camera and the IRED. An expression for the standard deviation model related to these magnitudes was also derived and calibrated using different images taken under different conditions. From this analysis, we determined the optimum parameters to ensure the best accuracy provided by this alternative. Once the model calibration had been carried out, a differential method to estimate the distance between the camera and the IRED was defined and applied, considering that the camera was aligned with the IRED. The results indicate that this method represents a useful alternative for determining the depth information.

  16. Optimal VaR Portfolio: A Comparison of the Markowitz and Mean Absolute Deviation Methods

    Directory of Open Access Journals (Sweden)

    R. Agus Sartono

    2009-05-01

    Full Text Available The portfolio selection method introduced by Harry Markowitz (1952) used variance, or standard deviation, as the measure of risk. Konno and Yamazaki (1991) introduced another method that uses the mean absolute deviation as the measure of risk instead of variance. Value-at-Risk (VaR) is a relatively new method for quantifying risk that is used by financial institutions. The aim of this research is to compare the mean-variance and mean absolute deviation approaches for two portfolios. We then assess the VaR of the two portfolios using the delta-normal method and historical simulation, using secondary data from the Jakarta Stock Exchange LQ45 index during 2003. We find a weak positive correlation between standard deviation and return in both portfolios. The delta-normal VaR based on the mean absolute deviation method is higher than the delta-normal VaR based on the mean-variance method. However, based on historical simulation, the difference between the two methods is statistically insignificant. Thus, the standard deviation is a sufficient measure of portfolio risk. Keywords: portfolio optimization, mean-variance, mean absolute deviation, value-at-risk, delta-normal method, historical simulation method
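
    A minimal delta-normal VaR sketch (the portfolio value, mean, and SD are invented; historical simulation would instead take a quantile of the empirical P&L distribution):

```python
def delta_normal_var(value, mu, sigma, z=1.645):
    """One-sided delta-normal VaR at ~95% confidence for a portfolio worth
    `value`, given per-period mean return mu and return SD sigma."""
    return value * (z * sigma - mu)

# Invented inputs: a 1,000,000 portfolio, zero mean return, 2% return SD.
var95 = delta_normal_var(value=1_000_000, mu=0.0, sigma=0.02)
```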

  17. Top Yukawa deviation in extra dimension

    International Nuclear Information System (INIS)

    Haba, Naoyuki; Oda, Kin-ya; Takahashi, Ryo

    2009-01-01

    We suggest a simple one-Higgs-doublet model living in the bulk of five-dimensional spacetime compactified on S¹/Z₂, in which the top Yukawa coupling can be smaller than the naive standard-model expectation, i.e. the top quark mass divided by the Higgs vacuum expectation value. If we find only a single Higgs particle at the LHC and also observe the top Yukawa deviation, our scenario becomes a realistic candidate for physics beyond the standard model. The Yukawa deviation comes from the fact that the wave function profile of the free physical Higgs field can differ from that of the vacuum expectation value, due to the presence of the brane-localized Higgs potentials. In the Brane-Localized Fermion scenario, we find a sizable top Yukawa deviation, which could be checked at the LHC experiment, with the dominant Higgs production channel being WW fusion. We also study the Bulk Fermion scenario with a brane-localized Higgs potential, which resembles the Universal Extra Dimension model with a stable dark matter candidate. We show that both scenarios are consistent with the current electroweak precision measurements.

  18. Excursions out-of-lane versus standard deviation of lateral position as outcome measure of the on-the-road driving test

    NARCIS (Netherlands)

    Verster, Joris C; Roth, Thomas

    BACKGROUND: The traditional outcome measure of the Dutch on-the-road driving test is the standard deviation of lateral position (SDLP), the weaving of the car. This paper explores whether excursions out-of-lane are a suitable additional outcome measure to index driving impairment. METHODS: A

  19. Decreasing the amplitude deviation of Gaussian filter in surface roughness measurements

    Science.gov (United States)

    Liu, Bo; Wang, Yu

    2008-12-01

    A new approach for decreasing the amplitude characteristic deviation of the Gaussian filter in surface roughness measurements is presented in this paper. According to the Central Limit Theorem, many different Gaussian approximation filters can be constructed. When a first-order Butterworth filter and a moving average filter are used to approximate the Gaussian filter, their amplitude deviations have opposite signs and their extreme values occur at nearby locations, so a linear combination of the two can greatly reduce the amplitude deviation. The maximum amplitude deviation is only about 0.11% when the filters are combined in parallel. The algorithm of this new method is simple and efficient.
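
    The parallel-combination idea can be sketched in plain Python; the first-order recursive smoother below stands in for the Butterworth stage, and the window and coefficient values are illustrative choices, not the paper's:

```python
def moving_average(x, w):
    """Simple moving average with window w, truncated at the edges."""
    n = len(x)
    out = []
    for i in range(n):
        lo, hi = max(0, i - w // 2), min(n, i + w // 2 + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

def first_order_lowpass(x, alpha):
    """First-order recursive smoother (a stand-in for the Butterworth
    stage), run forward and backward for zero phase."""
    def run(seq):
        y, s = [], seq[0]
        for v in seq:
            s = alpha * v + (1 - alpha) * s
            y.append(s)
        return y
    return run(run(x)[::-1])[::-1]

def parallel_combination(x, w=5, alpha=0.4):
    """Average the two approximations: their amplitude deviations from a
    true Gaussian filter have opposite signs, so they partly cancel."""
    a, b = moving_average(x, w), first_order_lowpass(x, alpha)
    return [(u + v) / 2 for u, v in zip(a, b)]

smoothed = parallel_combination([0.0] * 10 + [1.0] * 10)
```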

  20. Determination of the relations governing trends in the standard deviations of the distribution of pollution based on observations on the atmospheric turbulence spectrum and the possibility of laboratory simulation

    International Nuclear Information System (INIS)

    Crabol, B.

    1980-01-01

    Using Taylor's calculation, which takes account of the low-pass filter effect of the transfer time on the value of the standard deviation of particle dispersion, we introduce a high-pass filter that translates the effect of the observation time, by definition finite, onto the true atmospheric scale. It is then possible to identify the conditions under which the relations governing the variation of the standard deviations of the pollution distribution depend on the transfer distance alone, or on the transfer time alone. Then, under certain simplifying assumptions, practical quantitative relationships are deduced for the variation of the horizontal standard deviation of pollution dispersion as a function of wind speed and transfer time

  1. A CORRECTION TO THE STANDARD GALACTIC REDDENING MAP: PASSIVE GALAXIES AS STANDARD CRAYONS

    International Nuclear Information System (INIS)

    Peek, J. E. G.; Graves, Genevieve J.

    2010-01-01

    We present corrections to the Schlegel et al. (SFD98) reddening maps over the Sloan Digital Sky Survey (SDSS) northern Galactic cap area. To find these corrections, we employ what we call the 'standard crayon' method, in which we use passively evolving galaxies as color standards to measure deviations from the reddening map. We select these passively evolving galaxies spectroscopically, using limits on the Hα and [O II] equivalent widths to remove all star-forming galaxies from the SDSS main galaxy catalog. We find that by correcting for known reddening, redshift, the color-magnitude relation, and variation of color with environmental density, we can reduce the scatter in color to below 3% in the bulk of the 151,637 galaxies that we select. Using these galaxies, we construct maps of the deviation from the SFD98 reddening map at 4.5° resolution, with a 1σ error of ∼1.5 mmag in E(B - V). We find that the SFD98 maps are largely accurate, with most of the map having deviations below 3 mmag in E(B - V), though some regions deviate from SFD98 by as much as 50%. The maximum deviation found is 45 mmag in E(B - V), and the spatial structure of the deviation is strongly correlated with the observed dust temperature, such that SFD98 underpredicts reddening in regions of low dust temperature. Our maps of these deviations, as well as their errors, are made available to the scientific community on the Web as a supplemental correction to SFD98.

  2. Fidelity deviation in quantum teleportation

    Science.gov (United States)

    Bang, Jeongho; Ryu, Junghee; Kaszlikowski, Dagomir

    2018-04-01

    We analyze the performance of quantum teleportation in terms of average fidelity and fidelity deviation. The average fidelity is defined as the average value of the fidelities over all possible input states and the fidelity deviation is their standard deviation, which is referred to as a concept of fluctuation or universality. In the analysis, we find the condition to optimize both measures under a noisy quantum channel—we here consider the so-called Werner channel. To characterize our results, we introduce a 2D space defined by the aforementioned measures, in which the performance of the teleportation is represented as a point with the channel noise parameter. Through further analysis, we specify some regions drawn for different channel conditions, establishing the connection to the dissimilar contributions of the entanglement to the teleportation and the Bell inequality violation.

  3. Statistical analysis of solid waste composition data: Arithmetic mean, standard deviation and correlation coefficients

    DEFF Research Database (Denmark)

    Edjabou, Maklawe Essonanawe; Martín-Fernández, Josep Antoni; Scheutz, Charlotte

    2017-01-01

    -derived food waste amounted to 2.21 ± 3.12% with a confidence interval of (−4.03; 8.45), which highlights the problem of biased negative proportions. A Pearson's correlation test, applied to waste fraction generation (kg mass), indicated a positive correlation between avoidable vegetable food waste … and plastic packaging. However, correlation tests applied to waste fraction compositions (percentage values) showed a negative association in this regard, thus demonstrating that statistical analyses applied to compositional waste fraction data, without addressing the closed characteristics of these data, have the potential to generate spurious or misleading results. Therefore, compositional data should be transformed adequately prior to any statistical analysis, such as computing the mean, standard deviation and correlation coefficients.
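
    One standard way to "open" closed compositional data before computing means, SDs, or correlations is the centred log-ratio (clr) transform; a minimal sketch (the percentages are invented, and the paper does not prescribe this specific transform):

```python
import math

def clr(parts):
    """Centred log-ratio transform: log of each part relative to the
    geometric mean; parts must be strictly positive."""
    g = math.exp(sum(math.log(p) for p in parts) / len(parts))
    return [math.log(p / g) for p in parts]

# Invented waste-fraction percentages for one sample (sum to 100).
z = clr([60.0, 30.0, 10.0])
```

    The transformed values sum to zero by construction, which removes the unit-sum constraint that produces the spurious negative correlations described above.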

  4. A stochastic model for the derivation of economic values and their standard deviations for production and functional traits in dairy cattle

    NARCIS (Netherlands)

    Nielsen, H.M.; Groen, A.F.; Ostergaard, S.; Berg, P.

    2006-01-01

    The objective of this paper was to present a model of a dairy cattle production system for the derivation of economic values and their standard deviations for both production and functional traits under Danish production circumstances. The stochastic model used is dynamic, and simulates production

  5. Text localization using standard deviation analysis of structure elements and support vector machines

    Directory of Open Access Journals (Sweden)

    Zagoris Konstantinos

    2011-01-01

    A text localization technique is required to successfully exploit document images such as technical articles and letters. The proposed method detects and extracts text areas from document images. Initially, a connected components analysis technique detects blocks of foreground objects. Then, a descriptor consisting of a set of suitable document structure elements is extracted from the blocks. This is achieved by an algorithm called Standard Deviation Analysis of Structure Elements (SDASE), which maximizes the separability between the blocks. Another feature of the SDASE is that its length adapts to the requirements of the application. Finally, the descriptor of each block is used as input to a trained support vector machine that classifies the block as text or not. The proposed technique is also capable of adjusting to the text structure of the documents. Experimental results on benchmarking databases demonstrate the effectiveness of the proposed method.

  6. Lack of sensitivity of staffing for 8-hour sessions to standard deviation in daily actual hours of operating room time used for surgeons with long queues.

    Science.gov (United States)

    Pandit, Jaideep J; Dexter, Franklin

    2009-06-01

    At multiple facilities, including some in the United Kingdom's National Health Service, the following are features of many surgical-anesthetic teams: i) there is sufficient workload for each operating room (OR) list to almost always be fully scheduled; ii) the workdays are organized such that a single surgeon is assigned to each block of time (usually 8 h); iii) one team is assigned per block; and iv) hardly ever would a team "split" to do cases in more than one OR simultaneously. We used Monte-Carlo simulation with normal and Weibull distributions to estimate the times to complete lists of cases scheduled into such 8 h sessions. For each combination of mean and standard deviation, the inefficiency of use of OR time was determined for 10 h versus 8 h of staffing. When the mean actual hours of OR time used averages less than 8 h 25 min, staffing for 8 h has higher OR efficiency, regardless of the standard deviation and the relative cost of over-run to under-run. When the mean is 8 h 50 min or more, 10 h staffing has higher OR efficiency. For means in between, the break-even point depends on the distribution, the standard deviation and the relative cost of over-run to under-run: e.g., (a) under a Weibull distribution with a standard deviation of 60 min and a relative cost of over-run to under-run of 2.0, versus (b) 8 h 48 min under a normal distribution with a standard deviation of 0 min and a relative cost ratio of 1.50. Although the simplest decision rule would be to staff for 8 h if the mean workload is in this intermediate range, for an example with a standard deviation of 60 min and a relative cost ratio of 2.00 the inefficiency of use of OR time would be 34% larger if staffing were planned for 8 h instead of 10 h. For surgical teams with 8 h sessions, use the following decision rule for anesthesiology and OR nurse staffing: if the actual hours of OR time used average less than 8 h 25 min, plan 8 h staffing; if they average 8 h 50 min or more, plan 10 h staffing; for averages in between, perform the full analysis of McIntosh et al. (Anesth Analg 2006;103:1499-516).
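    The over-run/under-run trade-off can be sketched with a small Monte-Carlo simulation. The cost model (under-utilized hours plus a relative-cost multiple of over-utilized hours) and all parameter values below are illustrative assumptions, not the paper's exact model:

```python
import random

random.seed(1)

def mean_inefficiency(mean_h, sd_h, staffed_h, rel_cost, n=20000):
    """Average cost per session: under-utilized hours plus rel_cost times
    over-utilized hours, for normally distributed actual list durations."""
    total = 0.0
    for _ in range(n):
        used = max(0.0, random.gauss(mean_h, sd_h))
        total += max(0.0, staffed_h - used) + rel_cost * max(0.0, used - staffed_h)
    return total / n

# Short lists favour 8 h staffing; long lists favour 10 h staffing.
short8 = mean_inefficiency(7.5, 1.0, staffed_h=8, rel_cost=2.0)
short10 = mean_inefficiency(7.5, 1.0, staffed_h=10, rel_cost=2.0)
long8 = mean_inefficiency(9.5, 1.0, staffed_h=8, rel_cost=2.0)
long10 = mean_inefficiency(9.5, 1.0, staffed_h=10, rel_cost=2.0)
```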

  7. An absolute deviation approach to assessing correlation.

    OpenAIRE

    Gorard, S.

    2015-01-01

    This paper describes two possible alternatives to the more traditional Pearson’s R correlation coefficient, both based on using the mean absolute deviation, rather than the standard deviation, as a measure of dispersion. Pearson’s R is well-established and has many advantages. However, these newer variants also have several advantages, including greater simplicity and ease of computation, and perhaps greater tolerance of underlying assumptions (such as the need for linearity). The first alter...

  8. Scatter-Reducing Sounding Filtration Using a Genetic Algorithm and Mean Monthly Standard Deviation

    Science.gov (United States)

    Mandrake, Lukas

    2013-01-01

    Retrieval algorithms like that used by the Orbiting Carbon Observatory (OCO)-2 mission generate massive quantities of data of varying quality and reliability. A computationally efficient, simple method of labeling problematic datapoints or predicting soundings that will fail is required for basic operation, given that only 6% of the retrieved data may be operationally processed. This method automatically obtains a filter designed to reduce scatter based on a small number of input features. Most machine-learning filter construction algorithms attempt to predict error in the CO2 value. By using a surrogate goal of Mean Monthly STDEV, the goal is to reduce the retrieved CO2 scatter rather than solving the harder problem of reducing CO2 error. This lends itself to improved interpretability and performance. This software reduces the scatter of retrieved CO2 values globally based on a minimum number of input features. It can be used as a prefilter to reduce the number of soundings requested, or as a post-filter to label data quality. The use of the MMS (Mean Monthly Standard deviation) provides a much cleaner, clearer filter than the standard ABS(CO2-truth) metrics previously employed by competitor methods. The software's main strength lies in a clearer (i.e., fewer features required) filter that more efficiently reduces scatter in retrieved CO2 rather than focusing on the more complex (and easily removed) bias issues.
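    The Mean Monthly Standard Deviation objective can be sketched as follows; the soundings, the single filter feature and its threshold are all invented toy data, not the OCO-2 algorithm:

```python
import random
import statistics

random.seed(0)

# Synthetic soundings: (month, input_feature, retrieved_co2_ppm). The link
# between the feature and extra scatter is an assumed toy relationship.
soundings = []
for month in range(1, 13):
    for _ in range(200):
        feature = random.random()
        noise = random.gauss(0.0, 0.3 + 2.0 * (feature > 0.7))
        soundings.append((month, feature, 400.0 + noise))

def mms(data):
    """Mean Monthly Standard Deviation of the retrieved CO2 values."""
    by_month = {m: [] for m in range(1, 13)}
    for m, _, co2 in data:
        by_month[m].append(co2)
    return statistics.mean(statistics.stdev(v) for v in by_month.values())

# A one-feature threshold filter reduces scatter without needing CO2 truth.
filtered = [s for s in soundings if s[1] <= 0.7]
```

    Note that minimizing MMS needs no "true" CO2 values, which is the stated advantage over ABS(CO2-truth) metrics.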

  9. Delay and Standard Deviation Beamforming to Enhance Specular Reflections in Ultrasound Imaging.

    Science.gov (United States)

    Bandaru, Raja Sekhar; Sornes, Anders Rasmus; Hermans, Jeroen; Samset, Eigil; D'hooge, Jan

    2016-12-01

    Although interventional devices, such as needles, guide wires, and catheters, are best visualized by X-ray, real-time volumetric echography could offer an attractive alternative as it avoids ionizing radiation; it provides good soft tissue contrast, and it is mobile and relatively cheap. Unfortunately, as echography is traditionally used to image soft tissue and blood flow, the appearance of interventional devices in conventional ultrasound images remains relatively poor, which is a major obstacle toward ultrasound-guided interventions. The objective of this paper was therefore to enhance the appearance of interventional devices in ultrasound images. Thereto, a modified ultrasound beamforming process using conventional-focused transmit beams is proposed that exploits the properties of received signals containing specular reflections (as arising from these devices). This new beamforming approach referred to as delay and standard deviation beamforming (DASD) was quantitatively tested using simulated as well as experimental data using a linear array transducer. Furthermore, the influence of different imaging settings (i.e., transmit focus, imaging depth, and scan angle) on the obtained image contrast was evaluated. The study showed that the image contrast of specular regions improved by 5-30 dB using DASD beamforming compared with traditional delay and sum (DAS) beamforming. The highest gain in contrast was observed when the interventional device was tilted away from being orthogonal to the transmit beam, which is a major limitation in standard DAS imaging. As such, the proposed beamforming methodology can offer an improved visualization of interventional devices in the ultrasound image with potential implications for ultrasound-guided interventions.

  10. 40 CFR 60.2220 - What must I include in the deviation report?

    Science.gov (United States)

    2010-07-01

    Title 40, Protection of Environment (revised as of 2010-07-01). Standards of Performance for New Stationary Sources; Recordkeeping and Reporting. § 60.2220 What must I include in the deviation report? In each report required under...

  11. Identification of "ever-cropped" land (1984-2010) using Landsat annual maximum NDVI image composites: Southwestern Kansas case study.

    Science.gov (United States)

    Maxwell, Susan K; Sylvester, Kenneth M

    2012-06-01

    A time series of 230 intra- and inter-annual Landsat Thematic Mapper images was used to identify land that was ever cropped during the years 1984 through 2010 for a five-county region in southwestern Kansas. Annual maximum Normalized Difference Vegetation Index (NDVI) image composites (NDVI(ann-max)) were used to evaluate the inter-annual dynamics of cropped and non-cropped land. Three feature images were derived from the 27-year NDVI(ann-max) image time series and used in the classification: 1) the maximum NDVI value over the entire 27-year time span (NDVI(max)), 2) the standard deviation of the annual maximum NDVI values for all years (NDVI(sd)), and 3) the standard deviation of the annual maximum NDVI values for the years 1984-1986 (NDVI(sd84-86)), to improve discrimination of Conservation Reserve Program land. Results of the classification were compared to three reference data sets: county-level USDA Census records (1982-2007) and two digital land cover maps (the Kansas 2005 map and the USGS Trends Program maps, 1986-2000). The area of ever-cropped land for the five counties was on average 11.8% higher than the area estimated from Census records. Overall agreement between the ever-cropped land map and the 2005 Kansas map was 91.9%, and 97.2% with the Trends maps. Converting the intra-annual Landsat data set to a single annual maximum NDVI image composite considerably reduced the data set size and eliminated cloud and cloud-shadow effects, yet maintained the information important for discriminating cropped land. Our results suggest that Landsat annual maximum NDVI image composites will be useful for characterizing land use and land cover change for many applications.
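    The three feature images can be sketched per pixel; the two example pixels and their NDVI series are invented to show why inter-annual standard deviation separates cropped from stable cover:

```python
import statistics

# Toy stack: one annual-maximum NDVI value per pixel per year, 1984-2010.
years = list(range(1984, 2011))
stack = {
    "cropped":   [0.8 if y % 2 == 0 else 0.3 for y in years],  # alternating
    "grassland": [0.55 for _ in years],                        # stable cover
}

features = {}
for pixel, series in stack.items():
    features[pixel] = {
        "ndvi_max": max(series),                         # 27-year maximum
        "ndvi_sd": statistics.pstdev(series),            # inter-annual variability
        "ndvi_sd_84_86": statistics.pstdev(series[:3]),  # early-period SD
    }
```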

  12. Large deviations

    CERN Document Server

    Varadhan, S R S

    2016-01-01

    The theory of large deviations deals with rates at which probabilities of certain events decay as a natural parameter in the problem varies. This book, which is based on a graduate course on large deviations at the Courant Institute, focuses on three concrete sets of examples: (i) diffusions with small noise and the exit problem, (ii) large time behavior of Markov processes and their connection to the Feynman-Kac formula and the related large deviation behavior of the number of distinct sites visited by a random walk, and (iii) interacting particle systems, their scaling limits, and large deviations from their expected limits. For the most part the examples are worked out in detail, and in the process the subject of large deviations is developed. The book will give the reader a flavor of how large deviation theory can help in problems that are not posed directly in terms of large deviations. The reader is assumed to have some familiarity with probability, Markov processes, and interacting particle systems.

  13. Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains

    Science.gov (United States)

    Cofré, Rodrigo; Maldonado, Cesar

    2018-01-01

    We consider the maximum entropy Markov chain inference approach to characterize the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. We review large deviations techniques useful in this context to describe properties of accuracy and convergence in terms of sampling size. We use these results to study the statistical fluctuation of correlations, distinguishability and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is explicitly obtained for maximum entropy models of relevance in this field.
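    The irreversibility measure mentioned above can be sketched for a finite-state Markov chain using the standard entropy production rate, which vanishes exactly when detailed balance holds; the two example chains are invented:

```python
import math

def stationary(P, iters=5000):
    """Stationary distribution by power iteration (assumes an ergodic chain)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def entropy_production(P):
    """Average entropy production rate; zero iff detailed balance holds."""
    pi = stationary(P)
    ep = 0.0
    for i in range(len(P)):
        for j in range(len(P)):
            if P[i][j] > 0 and P[j][i] > 0:
                ep += pi[i] * P[i][j] * math.log((pi[i] * P[i][j]) / (pi[j] * P[j][i]))
    return ep

reversible = [[0.5, 0.5], [0.5, 0.5]]                         # detailed balance
driven = [[0.1, 0.8, 0.1], [0.1, 0.1, 0.8], [0.8, 0.1, 0.1]]  # cyclic driving
```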

  14. Dealing with missing standard deviation and mean values in meta-analysis of continuous outcomes: a systematic review.

    Science.gov (United States)

    Weir, Christopher J; Butcher, Isabella; Assi, Valentina; Lewis, Stephanie C; Murray, Gordon D; Langhorne, Peter; Brady, Marian C

    2018-03-07

    Rigorous, informative meta-analyses rely on availability of appropriate summary statistics or individual participant data. For continuous outcomes, especially those with naturally skewed distributions, summary information on the mean or variability often goes unreported. While full reporting of original trial data is the ideal, we sought to identify methods for handling unreported mean or variability summary statistics in meta-analysis. We undertook two systematic literature reviews to identify methodological approaches used to deal with missing mean or variability summary statistics. Five electronic databases were searched, in addition to the Cochrane Colloquium abstract books and the Cochrane Statistics Methods Group mailing list archive. We also conducted cited reference searching and emailed topic experts to identify recent methodological developments. Details recorded included the description of the method, the information required to implement the method, any underlying assumptions and whether the method could be readily applied in standard statistical software. We provided a summary description of the methods identified, illustrating selected methods in example meta-analysis scenarios. For missing standard deviations (SDs), following screening of 503 articles, fifteen methods were identified in addition to those reported in a previous review. These included Bayesian hierarchical modelling at the meta-analysis level; summary-statistic-level imputation based on observed SD values from other trials in the meta-analysis; a practical approximation based on the range; and algebraic estimation of the SD based on other summary statistics. Following screening of 1124 articles for methods estimating the mean, one approximate Bayesian computation approach and three papers based on alternative summary statistics were identified. Illustrative meta-analyses showed that when replacing a missing SD, the approximation using the range minimised loss of precision and generally
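    Two of the SD-recovery approaches named in the review, the range-based approximation and algebraic estimation from other summary statistics, can be sketched as follows (function names are illustrative):

```python
import math

def sd_from_range(low, high):
    """Rough approximation for a missing SD: range/4 (range/6 for large n)."""
    return (high - low) / 4.0

def sd_from_se(se, n):
    """Algebraic recovery from a reported standard error: SD = SE * sqrt(n)."""
    return se * math.sqrt(n)

def sd_from_ci(lower, upper, n, z=1.96):
    """Algebraic recovery from a 95% confidence interval of the mean."""
    return math.sqrt(n) * (upper - lower) / (2.0 * z)
```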

  15. Minimizing the Standard Deviation of Spatially Averaged Surface Cross-Sectional Data from the Dual-Frequency Precipitation Radar

    Science.gov (United States)

    Meneghini, Robert; Kim, Hyokyung

    2016-01-01

    For an airborne or spaceborne radar, the precipitation-induced path attenuation can be estimated from measurements of the normalized surface cross section, sigma 0, in the presence and absence of precipitation. In one implementation, the mean rain-free estimate and its variability are found from a lookup table (LUT) derived from previously measured data. For the dual-frequency precipitation radar aboard the Global Precipitation Measurement satellite, the nominal table consists of the statistics of the rain-free sigma 0 over a 0.5 deg x 0.5 deg latitude-longitude grid using a three-month set of input data. However, a problem with the LUT is an insufficient number of samples in many cells. An alternative table is constructed by a stepwise procedure that begins with the statistics over a 0.25 deg x 0.25 deg grid. If the number of samples at a cell is too few, the area is expanded, cell by cell, choosing at each step the cell that minimizes the variance of the data. The question arises, however, as to whether the selected region corresponds to the smallest variance. To address this question, a second type of variable-averaging grid is constructed using all possible spatial configurations and computing the variance of the data within each region. Comparisons of the standard deviations for the fixed and variable-averaged grids are given as a function of incidence angle and surface type using a three-month set of data. The advantage of variable spatial averaging is that the average standard deviation can be reduced relative to the fixed grid while satisfying the minimum sample requirement.
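    The stepwise cell-by-cell expansion can be sketched as a greedy region growth; the grid, sample values and 4-neighbour adjacency are invented simplifications of the LUT procedure:

```python
import statistics

def grow_region(cells, start, min_samples):
    """Greedy LUT construction: from one grid cell, repeatedly annex the
    4-neighbour cell that keeps the pooled sigma0 variance lowest, until
    the minimum sample requirement is met."""
    region, vals = {start}, list(cells[start])
    while len(vals) < min_samples:
        frontier = {(i + di, j + dj)
                    for (i, j) in region
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))}
        frontier = {c for c in frontier if c in cells and c not in region}
        if not frontier:
            break
        best = min(frontier, key=lambda c: statistics.pvariance(vals + cells[c]))
        region.add(best)
        vals += cells[best]
    return region, statistics.pstdev(vals)

# Toy 2x2 grid of sigma0 samples (dB); cell (0, 1) matches (0, 0) closely.
cells = {(0, 0): [10.0, 10.2], (0, 1): [10.1, 9.9],
         (1, 0): [14.0, 13.5], (1, 1): [13.8]}
region, sd = grow_region(cells, (0, 0), min_samples=4)
```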

  16. Quantification of intravoxel velocity standard deviation and turbulence intensity by generalizing phase-contrast MRI.

    Science.gov (United States)

    Dyverfeldt, Petter; Sigfridsson, Andreas; Kvitting, John-Peder Escobar; Ebbers, Tino

    2006-10-01

    Turbulent flow, characterized by velocity fluctuations, is a contributing factor to the pathogenesis of several cardiovascular diseases. A clinical noninvasive tool for assessing turbulence is lacking, however. It is well known that the occurrence of multiple spin velocities within a voxel during the influence of a magnetic gradient moment causes signal loss in phase-contrast magnetic resonance imaging (PC-MRI). In this paper a mathematical derivation of an expression for computing the standard deviation (SD) of the blood flow velocity distribution within a voxel is presented. The SD is obtained from the magnitude of PC-MRI signals acquired with different first gradient moments. By exploiting the relation between the SD and turbulence intensity (TI), this method allows for quantitative studies of turbulence. For validation, the TI in an in vitro flow phantom was quantified, and the results compared favorably with previously published laser Doppler anemometry (LDA) results. This method has the potential to become an important tool for the noninvasive assessment of turbulence in the arterial tree.
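    The magnitude relation underlying the method can be sketched for the simplest case of a Gaussian intravoxel velocity distribution, where the signal magnitude decays as exp(-sigma^2 kv^2 / 2) with the first gradient moment kv; the numbers below are a synthetic round trip, not data from the paper:

```python
import math

def intravoxel_sd(s0, s1, kv):
    """Velocity SD from two PC-MRI magnitudes acquired with first gradient
    moments 0 and kv, assuming a Gaussian intravoxel velocity distribution:
    |S(kv)| = |S(0)| * exp(-(sigma**2) * kv**2 / 2)."""
    return math.sqrt(2.0 * math.log(s0 / s1)) / kv

# Round trip: synthesize the signal loss for sigma = 0.3 m/s, then recover it.
sigma_true, kv = 0.3, 5.0
s0 = 1.0
s1 = s0 * math.exp(-(sigma_true ** 2) * kv ** 2 / 2.0)
sigma_est = intravoxel_sd(s0, s1, kv)
```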

  17. Evolutionary implications of genetic code deviations

    International Nuclear Information System (INIS)

    Chela Flores, J.

    1986-07-01

    By extending the standard genetic code into a temperature dependent regime, we propose a train of molecular events leading to alternative coding. The first few examples of these deviations have already been reported in some ciliated protozoans and Gram positive bacteria. A possible range of further alternative coding, still within the context of universality, is pointed out. (author)

  18. Beam deviation method as a diagnostic tool for the plasma focus

    International Nuclear Information System (INIS)

    Schmidt, H.; Rueckle, B.

    1978-01-01

    The application of an optical method for density measurements in cylindrical plasmas is described. The angular deviation of a probing light beam sent through the plasma is proportional to the maximum of the density in the plasma column. The deviation does not depend on the plasma dimensions; however, it is influenced to a certain degree by the density profile. The method is successfully applied to the investigation of a dense plasma focus with a time resolution of 2 ns and a spatial resolution (in the axial direction) of 2 mm. (orig.) [de]

  19. Excursions out-of-lane versus standard deviation of lateral position as outcome measure of the on-the-road driving test.

    Science.gov (United States)

    Verster, Joris C; Roth, Thomas

    2014-07-01

    The traditional outcome measure of the Dutch on-the-road driving test is the standard deviation of lateral position (SDLP), the weaving of the car. This paper explores whether excursions out-of-lane are a suitable additional outcome measure to index driving impairment. A literature search was conducted to search for driving tests that used both SDLP and excursions out-of-lane as outcome measures. The analyses were limited to studies examining hypnotic drugs because several of these drugs have been shown to produce next-morning sedation. Standard deviation of lateral position was more sensitive in demonstrating driving impairment. In fact, solely relying on excursions out-of-lane as outcome measure incorrectly classifies approximately half of impaired drives as unimpaired. The frequency of excursions out-of-lane is determined by the mean lateral position within the right traffic lane. Defining driving impairment as having a ΔSDLP > 2.4 cm, half of the impaired driving tests (51.2%, 43/84) failed to produce excursions out-of-lane. Alternatively, 20.9% of driving tests with ΔSDLP < 2.4 cm (27/129) had at least one excursion out-of-lane. Excursions out-of-lane are neither a suitable measure to demonstrate driving impairment nor is this measure sufficiently sensitive to differentiate adequately between differences in magnitude of driving impairment. Copyright © 2014 John Wiley & Sons, Ltd.
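    The contrast between the two outcome measures can be sketched on a synthetic lateral-position trace; the lane width and positions are invented, and the point is only that a weaving driver can show a high SDLP with zero excursions:

```python
import statistics

LANE_LIMIT_CM = 80.0  # hypothetical lateral free space before leaving the lane

def sdlp(positions_cm):
    """Standard deviation of lateral position: the 'weaving' of the car."""
    return statistics.pstdev(positions_cm)

def excursions(positions_cm, limit=LANE_LIMIT_CM):
    """Count of samples in which the car is outside the lane boundary."""
    return sum(1 for p in positions_cm if abs(p) > limit)

# A driver weaving well inside the lane: SDLP is clearly elevated,
# yet not a single excursion out of lane is recorded.
weaving = [(-1) ** i * 25.0 for i in range(200)]
```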

  20. Standard deviation of local tallies in global Monte Carlo calculation of nuclear reactor core

    International Nuclear Information System (INIS)

    Ueki, Taro

    2010-01-01

    Time series methodology has been studied to assess the feasibility of statistical error estimation in the continuous space and energy Monte Carlo calculation of the three-dimensional whole reactor core. The noise propagation was examined and the fluctuation of track length tallies for local fission rate and power has been formally shown to be represented by the autoregressive moving average process of orders p and p-1 [ARMA(p,p-1)], where p is an integer larger than or equal to two. Therefore, ARMA(p,p-1) fitting was applied to the real standard deviation estimation of the power of fuel assemblies at particular heights. Numerical results indicate that straightforward ARMA(3,2) fitting is promising, but a stability issue must be resolved toward the incorporation in the distributed version of production Monte Carlo codes. The same numerical results reveal that the average performance of ARMA(3,2) fitting is equivalent to that of the batch method with a batch size larger than 100 and smaller than 200 cycles for a 1,100 MWe pressurized water reactor. (author)
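    Why cycle-to-cycle correlation defeats a naive error estimate, and how batching counters it, can be sketched with an AR(1) surrogate for the tallies (this illustrates the batch method the abstract compares against, not the ARMA(3,2) fitting itself):

```python
import random
import statistics

random.seed(7)

# AR(1) series: positively correlated, like cycle-to-cycle Monte Carlo tallies.
x, series = 0.0, []
for _ in range(20000):
    x = 0.9 * x + random.gauss(0.0, 1.0)
    series.append(x)

def naive_se(data):
    """SE of the mean assuming independent samples (too small here)."""
    return statistics.stdev(data) / len(data) ** 0.5

def batch_se(data, batch=100):
    """Batch-means SE: average within batches, then treat the batch means
    as approximately independent samples."""
    means = [statistics.mean(data[i:i + batch]) for i in range(0, len(data), batch)]
    return statistics.stdev(means) / len(means) ** 0.5
```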

  1. The Effects of Data Gaps on the Calculated Monthly Mean Maximum and Minimum Temperatures in the Continental United States: A Spatial and Temporal Study.

    Science.gov (United States)

    Stooksbury, David E.; Idso, Craig D.; Hubbard, Kenneth G.

    1999-05-01

    Gaps in otherwise regularly scheduled observations are often referred to as missing data. This paper explores the spatial and temporal impacts that data gaps in the recorded daily maximum and minimum temperatures have on the calculated monthly mean maximum and minimum temperatures. For this analysis 138 climate stations from the United States Historical Climatology Network Daily Temperature and Precipitation Data set were selected. The selected stations had no missing maximum or minimum temperature values during the period 1951-80. The monthly mean maximum and minimum temperatures were calculated for each station for each month. For each month 1-10 consecutive days of data from each station were randomly removed. This was performed 30 times for each simulated gap period. The spatial and temporal impact of the 1-10-day data gaps were compared. The influence of data gaps is most pronounced in the continental regions during the winter and least pronounced in the southeast during the summer. In the north central plains, 10-day data gaps during January produce a standard deviation value greater than 2°C about the `true' mean. In the southeast, 10-day data gaps in July produce a standard deviation value less than 0.5°C about the mean. The results of this study will be of value in climate variability and climate trend research as well as climate assessment and impact studies.
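    The gap experiment can be sketched directly; the synthetic January series and its variability are invented stand-ins for the winter station data:

```python
import random
import statistics

random.seed(42)

# Synthetic January daily maxima (deg C) with large day-to-day variability,
# as in the continental winter case; all numbers are illustrative.
january = [random.gauss(-5.0, 8.0) for _ in range(31)]

def mean_with_gap(daily, gap_days):
    """Monthly mean after removing one random run of consecutive days."""
    start = random.randrange(0, len(daily) - gap_days + 1)
    return statistics.mean(daily[:start] + daily[start + gap_days:])

def gap_sd(gap_days, trials=200):
    """SD of the recomputed monthly mean over repeated random gaps."""
    return statistics.pstdev([mean_with_gap(january, gap_days)
                              for _ in range(trials)])
```

    Longer consecutive gaps spread the recomputed monthly means more widely about the true mean, which is the effect the study quantifies.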

  2. Maximum permissible dose

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed

  3. A deviation display method for visualising data in mobile gamma-ray spectrometry.

    OpenAIRE

    Kock, Peder; Finck, Robert R; Nilsson, Jonas M C; Östlund, Karl; Samuelsson, Christer

    2010-01-01

    A real time visualisation method, to be used in mobile gamma-spectrometric search operations using standard detector systems is presented. The new method, called deviation display, uses a modified waterfall display to present relative changes in spectral data over energy and time. Using unshielded (137)Cs and (241)Am point sources and different natural background environments, the behaviour of the deviation displays is demonstrated and analysed for two standard detector types (NaI(Tl) and HPGe).

  4. A deviation display method for visualising data in mobile gamma-ray spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Kock, Peder, E-mail: Peder.Kock@med.lu.s [Department of Medical Radiation Physics, Clinical Sciences, Lund University, University Hospital, SE-221 85 Lund (Sweden); Finck, Robert R. [Swedish Radiation Protection Authority, SE-171 16 Stockholm (Sweden); Nilsson, Jonas M.C.; Ostlund, Karl; Samuelsson, Christer [Department of Medical Radiation Physics, Clinical Sciences, Lund University, University Hospital, SE-221 85 Lund (Sweden)

    2010-09-15

    A real time visualisation method, to be used in mobile gamma-spectrometric search operations using standard detector systems is presented. The new method, called deviation display, uses a modified waterfall display to present relative changes in spectral data over energy and time. Using unshielded (137)Cs and (241)Am point sources and different natural background environments, the behaviour of the deviation displays is demonstrated and analysed for two standard detector types (NaI(Tl) and HPGe). The deviation display enhances positive significant changes while suppressing the natural background fluctuations. After an initialisation time of about 10 min this technique leads to a homogeneous display dominated by the background colour, where even small changes in spectral data are easy to discover. As this paper shows, the deviation display method works well for all tested gamma energies and natural background radiation levels and with both tested detector systems.

  5. A deviation display method for visualising data in mobile gamma-ray spectrometry

    International Nuclear Information System (INIS)

    Kock, Peder; Finck, Robert R.; Nilsson, Jonas M.C.; Ostlund, Karl; Samuelsson, Christer

    2010-01-01

    A real time visualisation method, to be used in mobile gamma-spectrometric search operations using standard detector systems is presented. The new method, called deviation display, uses a modified waterfall display to present relative changes in spectral data over energy and time. Using unshielded (137)Cs and (241)Am point sources and different natural background environments, the behaviour of the deviation displays is demonstrated and analysed for two standard detector types (NaI(Tl) and HPGe). The deviation display enhances positive significant changes while suppressing the natural background fluctuations. After an initialisation time of about 10 min this technique leads to a homogeneous display dominated by the background colour, where even small changes in spectral data are easy to discover. As this paper shows, the deviation display method works well for all tested gamma energies and natural background radiation levels and with both tested detector systems.

  6. VAR Portfolio Optimal: Perbandingan Antara Metode Markowitz Dan Mean Absolute Deviation

    OpenAIRE

    Sartono, R. Agus; Setiawan, Arie Andika

    2006-01-01

    The portfolio selection method introduced by Harry Markowitz (1952) used variance or standard deviation as the measure of risk. Konno and Yamazaki (1991) introduced another method that uses mean absolute deviation as the measure of risk instead of variance. Value-at-Risk (VaR) is a relatively new method of quantifying risk that has been used by financial institutions. The aim of this research is to compare the mean variance and mean absolute deviation approaches for two portfolios. Next, we attem...

  7. Natural background approach to setting radiation standards

    International Nuclear Information System (INIS)

    Adler, H.I.; Federow, H.; Weinberg, A.M.

    1979-01-01

    The suggestion has often been made that an additional radiation exposure imposed on humanity as a result of some important activity, such as electricity generation, would be acceptable if the exposure were small compared to the natural background. To make this concept quantitative and objective, we propose that "small compared with the natural background" be interpreted as the standard deviation (weighted by the exposed population) of the natural background. This use of the variation in natural background radiation is less arbitrary and requires fewer unfounded assumptions than some current approaches to standard-setting. The standard deviation is an easily calculated statistic that is small compared with the mean value of natural exposures of populations. It is an objectively determined quantity and its significance is generally understood. Its determination does not omit any of the pertinent data. When this method is applied to the population of the United States, it suggests that a dose of 20 mrem/year would be an acceptable standard. This is comparable to the 25 mrem/year suggested as the maximum allowable exposure to an individual from the complete uranium fuel cycle
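    The population-weighted standard deviation at the heart of the proposal can be sketched as follows; the regional doses and populations are hypothetical, not the US data used in the paper:

```python
import math

def weighted_mean_sd(doses_mrem, populations):
    """Population-weighted mean and standard deviation of background dose."""
    total = sum(populations)
    mean = sum(d * p for d, p in zip(doses_mrem, populations)) / total
    var = sum(p * (d - mean) ** 2 for d, p in zip(doses_mrem, populations)) / total
    return mean, math.sqrt(var)

# Hypothetical regional backgrounds (mrem/year) and populations.
doses = [80.0, 100.0, 140.0]
pops = [50e6, 120e6, 30e6]
mean_dose, sd_dose = weighted_mean_sd(doses, pops)
```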

  8. ARFI cut-off values and significance of standard deviation for liver fibrosis staging in patients with chronic liver disease.

    Science.gov (United States)

    Goertz, Ruediger S; Sturm, Joerg; Pfeifer, Lukas; Wildner, Dane; Wachter, David L; Neurath, Markus F; Strobel, Deike

    2013-01-01

    Acoustic radiation force impulse (ARFI) elastometry quantifies hepatic stiffness, and thus the degree of fibrosis, non-invasively. Our aim was to analyse the diagnostic accuracy of ARFI cut-off values, and the significance of a defined limit of standard deviation (SD) as a potential quality parameter, for liver fibrosis staging in patients with chronic liver diseases (CLD). 153 patients with CLD (various aetiologies) undergoing liver biopsy, and an additional 25 patients with known liver cirrhosis, were investigated. ARFI measurements were performed in the right hepatic lobe and correlated with the histopathological Ludwig fibrosis score (inclusion criterion: at least 6 portal tracts). The diagnostic accuracy of cut-off values was analysed with respect to an SD limit of 30% of the mean ARFI value. Mean ARFI velocity was 1.95 ± 0.87 m/s (range 0.79-4.40) in 178 patients (80 female, 98 male; mean age 52 years). The cut-offs were 1.25 m/s for F ≥ 2, 1.72 m/s for F ≥ 3 and 1.75 m/s for F = 4, with corresponding AUROCs of 80.7%, 86.2% and 88.7%, respectively. Exclusion of 31 patients (17.4%) with an SD higher than 30% of the mean ARFI value improved the diagnostic accuracy: the AUROCs for F ≥ 2, F ≥ 3 and F = 4 were 86.1%, 91.2% and 91.5%, respectively. The diagnostic accuracy of ARFI can thus be improved by applying a maximum SD of 30% of the mean ARFI value as a quality parameter, although this leads to the exclusion of a relevant number of patients. ARFI results with a high SD should be interpreted with caution.

  9. Standard values of maximum tongue pressure taken using newly developed disposable tongue pressure measurement device.

    Science.gov (United States)

    Utanohara, Yuri; Hayashi, Ryo; Yoshikawa, Mineka; Yoshida, Mitsuyoshi; Tsuga, Kazuhiro; Akagawa, Yasumasa

    2008-09-01

    It is clinically important to evaluate tongue function in terms of rehabilitation of swallowing and eating ability. We have developed a disposable tongue pressure measurement device designed for clinical use. In this study we used this device to determine standard values of maximum tongue pressure in adult Japanese. Eight hundred fifty-three subjects (408 male, 445 female; 20-79 years) were selected for this study. All participants had no history of dysphagia and maintained occlusal contact in the premolar and molar regions with their own teeth. A balloon-type disposable oral probe was used to measure tongue pressure by asking subjects to compress it onto the palate for 7 s with maximum voluntary effort. Values were recorded three times for each subject, and the mean values were defined as maximum tongue pressure. Although maximum tongue pressure was higher for males than for females in the 20-49-year age groups, there was no significant difference between males and females in the 50-79-year age groups. The maximum tongue pressure of the seventies age group was significantly lower than that of the twenties to fifties age groups. It may be concluded that maximum tongue pressures were reduced with primary aging. Males may become weaker with age at a faster rate than females; however, further decreases in strength were in parallel for male and female subjects.

  10. Clustering Indian Ocean Tropical Cyclone Tracks by the Standard Deviational Ellipse

    Directory of Open Access Journals (Sweden)

    Md. Shahinoor Rahman

    2018-05-01

    Full Text Available The standard deviational ellipse is useful to analyze the shape and the length of a tropical cyclone (TC track. Cyclone intensity at each six-hour position is used as the weight at that location. Only named cyclones in the Indian Ocean since 1981 are considered for this study. The K-means clustering algorithm is used to cluster Indian Ocean cyclones based on the five parameters: x-y coordinates of the mean center, variances along zonal and meridional directions, and covariance between zonal and meridional locations of the cyclone track. Four clusters are identified across the Indian Ocean; among them, only one cluster is in the North Indian Ocean (NIO and the rest of them are in the South Indian Ocean (SIO. Other characteristics associated with each cluster, such as wind speed, lifespan, track length, track orientation, seasonality, landfall, category during landfall, total accumulated cyclone energy (ACE, and cyclone trend, are analyzed and discussed. Cyclone frequency and energy of Cluster 4 (in the NIO have been following a linear increasing trend. Cluster 4 also has a higher number of landfall cyclones compared to other clusters. Cluster 2, located in the middle of the SIO, is characterized by the long track, high intensity, long lifespan, and high accumulated energy. Sea surface temperature (SST and outgoing longwave radiation (OLR associated with genesis of TCs are also examined in each cluster. Cyclone genesis is co-located with the negative OLR anomaly and the positive SST anomaly. Localized SST anomalies are associated with clusters in the SIO; however, TC geneses of Cluster 4 are associated with SSTA all over the Indian Ocean (IO.
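
    The five clustering features (weighted mean center, zonal and meridional variances, and their covariance) can be computed from a track as follows; a sketch with toy coordinates and intensity weights, not the study's data pipeline:

```python
def sde_features(xs, ys, ws):
    """Weighted mean center, zonal/meridional variances and covariance of a
    cyclone track: the five features used for K-means clustering in the
    abstract. Weighting each six-hourly position by intensity follows the
    description; variable names are ours."""
    W = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / W
    my = sum(w * y for w, y in zip(ws, ys)) / W
    var_x = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs)) / W
    var_y = sum(w * (y - my) ** 2 for w, y in zip(ws, ys)) / W
    cov_xy = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys)) / W
    return mx, my, var_x, var_y, cov_xy

# Toy southwest-tracking storm: three positions, heavier weight in the middle.
mx, my, vx, vy, cxy = sde_features([60.0, 62.0, 64.0], [-10.0, -12.0, -14.0], [1, 2, 1])
assert abs(mx - 62.0) < 1e-9 and abs(my + 12.0) < 1e-9
```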

  11. 14 CFR 121.360 - Ground proximity warning-glide slope deviation alerting system.

    Science.gov (United States)

    2010-01-01

    ... deviation alerting system. 121.360 Section 121.360 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION... Equipment Requirements § 121.360 Ground proximity warning-glide slope deviation alerting system. (a) No... system that meets the performance and environmental standards of TSO-C92 (available from the FAA, 800...

  12. 40 CFR 60.2780 - What must I include in the deviation report?

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 6 2010-07-01 2010-07-01 false What must I include in the deviation... PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emissions Guidelines and... the deviation report? In each report required under § 60.2775, for any pollutant or parameter that...

  13. 40 CFR 60.2958 - What must I include in the deviation report?

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 6 2010-07-01 2010-07-01 false What must I include in the deviation... PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Operator Training and Qualification Recordkeeping and Reporting § 60.2958 What must I include in the deviation report? In each report...

  14. A Modified Differential Coherent Bit Synchronization Algorithm for BeiDou Weak Signals with Large Frequency Deviation.

    Science.gov (United States)

    Han, Zhifeng; Liu, Jianye; Li, Rongbing; Zeng, Qinghua; Wang, Yi

    2017-07-04

    BeiDou system navigation messages are modulated with a secondary NH (Neumann-Hoffman) code of 1 kbps, where frequent bit transitions limit the coherent integration time to 1 millisecond. Therefore, a bit synchronization algorithm is necessary to obtain bit edges and NH code phases. In order to realize bit synchronization for BeiDou weak signals with large frequency deviation, a bit synchronization algorithm based on differential coherent and maximum likelihood is proposed. Firstly, a differential coherent approach is used to remove the effect of frequency deviation, and the differential delay time is set to be a multiple of bit cycle to remove the influence of NH code. Secondly, the maximum likelihood function detection is used to improve the detection probability of weak signals. Finally, Monte Carlo simulations are conducted to analyze the detection performance of the proposed algorithm compared with a traditional algorithm under the CN0s of 20~40 dB-Hz and different frequency deviations. The results show that the proposed algorithm outperforms the traditional method with a frequency deviation of 50 Hz. This algorithm can remove the effect of BeiDou NH code effectively and weaken the influence of frequency deviation. To confirm the feasibility of the proposed algorithm, real data tests are conducted. The proposed algorithm is suitable for BeiDou weak signal bit synchronization with large frequency deviation.
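
    The differential coherent step can be illustrated in isolation: multiplying each sample by the conjugate of the sample one delay earlier turns a frequency offset into a constant phase, so the products accumulate coherently where a direct sum decays. A sketch under simplified assumptions (pure complex exponential, no noise or NH code), not the full bit-synchronization algorithm:

```python
import cmath
import math

def differential_coherent_metric(samples, delay):
    """Accumulate r[k] * conj(r[k - delay]); with a pure frequency offset the
    per-product phase is constant, so the magnitude builds up coherently."""
    acc = sum(samples[k] * samples[k - delay].conjugate()
              for k in range(delay, len(samples)))
    return abs(acc)

# 1 ms samples with a 50 Hz offset: the direct coherent sum cancels over
# whole cycles of the offset, while the differential metric does not.
f_off, T = 50.0, 1e-3                       # offset (Hz), sample period (s)
r = [cmath.exp(2j * math.pi * f_off * k * T) for k in range(40)]
direct = abs(sum(r))
diff = differential_coherent_metric(r, delay=20)
assert diff > direct
```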

  15. 40 CFR 60.3053 - What must I include in the deviation report?

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 6 2010-07-01 2010-07-01 false What must I include in the deviation... PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines and Compliance... Model Rule-Recordkeeping and Reporting § 60.3053 What must I include in the deviation report? In each...

  16. YOUTH VANDALISM IN THE ENVIRONMENT OF MEGALOPOLIS: BORDERS OF STANDARD AND DEVIATION

    Directory of Open Access Journals (Sweden)

    D. V. Rudenkin

    2018-01-01

    people more or less regularly commit vandal actions without perceiving them as a deviation from the predefined standard pattern of behaviour; likewise, young people do not notice the vandal behaviour of those around them. The data obtained point to the considerable flexibility and inconsistency of ideas about vandalism among the young population of megalopolises: at the level of stereotypes, in the abstract, vandalism is regarded as deviant and categorically condemned; in daily life, vandalism is treated as an unrecognized norm in specific situations. A tendency toward the gradual erosion of the taboo nature and perceived deviance of vandalism in the consciousness of youth is noted. Practical significance. The materials of the research could be applied to optimize upbringing work in educational institutions and to increase the effectiveness of vandalism prevention among young people.

  17. Determining Maximum Photovoltaic Penetration in a Distribution Grid considering Grid Operation Limits

    DEFF Research Database (Denmark)

    Kordheili, Reza Ahmadi; Bak-Jensen, Birgitte; Pillai, Jayakrishnan Radhakrishna

    2014-01-01

    High penetration of photovoltaic panels in distribution grid can bring the grid to its operation limits. The main focus of the paper is to determine maximum photovoltaic penetration level in the grid. Three main criteria were investigated for determining maximum penetration level of PV panels...... for this grid: even distribution of PV panels, aggregation of panels at the beginning of each feeder, and aggregation of panels at the end of each feeder. Load modeling is done using Velander formula. Since PV generation is highest in the summer due to irradiation, a summer day was chosen to determine maximum......; maximum voltage deviation of customers, cables current limits, and transformer nominal value. Voltage deviation of different buses was investigated for different penetration levels. The proposed model was simulated on a Danish distribution grid. Three different PV location scenarios were investigated...

  18. Improving IQ measurement in intellectual disabilities using true deviation from population norms.

    Science.gov (United States)

    Sansone, Stephanie M; Schneider, Andrea; Bickel, Erika; Berry-Kravis, Elizabeth; Prescott, Christina; Hessl, David

    2014-01-01

    Intellectual disability (ID) is characterized by global cognitive deficits, yet the very IQ tests used to assess ID have limited range and precision in this population, especially for more impaired individuals. We describe the development and validation of a method of raw z-score transformation (based on general population norms) that ameliorates floor effects and improves the precision of IQ measurement in ID using the Stanford Binet 5 (SB5) in fragile X syndrome (FXS; n = 106), the leading inherited cause of ID, and in individuals with idiopathic autism spectrum disorder (ASD; n = 205). We compared the distributional characteristics and Q-Q plots from the standardized scores with the deviation z-scores. Additionally, we examined the relationship between both scoring methods and multiple criterion measures. We found evidence that substantial and meaningful variation in cognitive ability on standardized IQ tests among individuals with ID is lost when converting raw scores to standardized scaled, index and IQ scores. Use of the deviation z-score method rectifies this problem, and accounts for significant additional variance in criterion validation measures, above and beyond the usual IQ scores. Additionally, individual and group-level cognitive strengths and weaknesses are recovered using deviation scores. Traditional methods for generating IQ scores in lower functioning individuals with ID are inaccurate and inadequate, leading to erroneously flat profiles. However, assessment of cognitive abilities is substantially improved by measuring true deviation in performance from standardization sample norms. This work has important implications for standardized test development, clinical assessment, and research for which IQ is an important measure of interest in individuals with neurodevelopmental disorders and other forms of cognitive impairment.
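
    The deviation z-score idea reduces to scoring raw performance against general-population norms rather than clipped scaled scores; a minimal sketch with hypothetical norm values (the SB5 norm tables are not reproduced here):

```python
def deviation_z(raw, norm_mean, norm_sd):
    """z-score of a raw subtest score against general-population norms,
    avoiding the floor imposed by standardized scaled scores."""
    return (raw - norm_mean) / norm_sd

# Two subjects who would both hit the lowest scaled score can still be
# distinguished by their deviation z-scores (norm values are hypothetical).
z_a = deviation_z(raw=8, norm_mean=30.0, norm_sd=6.0)
z_b = deviation_z(raw=14, norm_mean=30.0, norm_sd=6.0)
assert z_a < z_b  # meaningful variation retained below the scaled-score floor
```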

  19. Utility of shear wave elastography to detect papillary thyroid carcinoma in thyroid nodules: efficacy of the standard deviation elasticity.

    Science.gov (United States)

    Kim, Hye Jeong; Kwak, Mi Kyung; Choi, In Ho; Jin, So-Young; Park, Hyeong Kyu; Byun, Dong Won; Suh, Kyoil; Yoo, Myung Hi

    2018-02-23

    The aim of this study was to address the role of the elasticity index as a possible predictive marker for detecting papillary thyroid carcinoma (PTC) and quantitatively assess shear wave elastography (SWE) as a tool for differentiating PTC from benign thyroid nodules. One hundred and nineteen patients with thyroid nodules undergoing SWE before ultrasound-guided fine needle aspiration and core needle biopsy were analyzed. The mean (EMean), minimum (EMin), maximum (EMax), and standard deviation (ESD) of SWE elasticity indices were measured. Among 105 nodules, 14 were PTC and 91 were benign. The EMean, EMin, and EMax values were significantly higher in PTCs than benign nodules (EMean 37.4 in PTC vs. 23.7 in benign nodules, p = 0.005; EMin 27.9 vs. 17.8, p = 0.034; EMax 46.7 vs. 31.5, p < 0.001). The EMean, EMin, and EMax were significantly associated with PTC with diagnostic odds ratios varying from 6.74 to 9.91, high specificities (86.4%, 86.4%, and 88.1%, respectively), and positive likelihood ratios (4.21, 3.69, and 4.82, respectively). The ESD values were significantly higher in PTC than in benign nodules (6.3 vs. 2.6, p < 0.001). ESD had the highest specificity (96.6%) when applied with a cut-off value of 6.5 kPa. It had a positive likelihood ratio of 14.75 and a diagnostic odds ratio of 28.50. The shear elasticity index of ESD, with higher likelihood ratios for PTC, will probably identify nodules that have a high potential for malignancy. It may help to identify and select malignant nodules, while reducing unnecessary fine needle aspiration and core needle biopsies of benign nodules.
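
    The reported cut-off statistics (specificity, positive likelihood ratio, diagnostic odds ratio) all follow from a 2x2 classification table; a sketch with illustrative counts, not the study's raw data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, positive likelihood ratio (LR+) and
    diagnostic odds ratio (DOR) from a 2x2 table, the measures reported
    for the ESD cut-off of 6.5 kPa."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)
    dor = (tp * tn) / (fp * fn)
    return sens, spec, lr_pos, dor

# Hypothetical counts for a highly specific rule-in test.
sens, spec, lr_pos, dor = diagnostic_metrics(tp=7, fp=3, fn=7, tn=88)
assert 0 < sens <= 1 and 0 < spec <= 1
assert lr_pos > 1  # a useful rule-in test has LR+ well above 1
```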

  20. Diurnal Dynamics of Standard Deviations of Three Wind Velocity Components in the Atmospheric Boundary Layer

    Science.gov (United States)

    Shamanaeva, L. G.; Krasnenko, N. P.; Kapegesheva, O. F.

    2018-04-01

    Diurnal dynamics of the standard deviation (SD) of three wind velocity components measured with a minisodar in the atmospheric boundary layer is analyzed. Statistical analysis of measurement data demonstrates that the SDs for the x- and y-components σx and σy lie in the range from 0.2 to 4 m/s, and σz = 0.1-1.2 m/s. The increase of σx and σy with altitude is described sufficiently well by a power law with an exponent varying from 0.22 to 1.3 depending on the time of day, while σz increases linearly. Approximation constants are determined and the errors of their application are estimated. The maximal diurnal spread of SD values is found to be 56% for σx and σy and 94% for σz. The established relationships and the obtained approximation constants allow the diurnal dynamics of the SDs of the three wind velocity components in the atmospheric boundary layer to be determined and can be recommended for use in models of the atmospheric boundary layer.
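
    A power-law profile of the form σ(z) = a·z^b, as used here for σx and σy, can be fitted by least squares in log-log space; a generic fitting sketch on synthetic data (the sodar measurements themselves are not available):

```python
import math

def fit_power_law(z, sigma):
    """Least-squares fit of sigma = a * z**b via linear regression in
    log-log coordinates, the functional form used for the altitude
    profiles of the horizontal-wind SDs."""
    lx = [math.log(v) for v in z]
    ly = [math.log(v) for v in sigma]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
    a = math.exp(my - b * mx)
    return a, b

# A noiseless synthetic profile with exponent 0.5 is recovered exactly.
z = [50, 100, 150, 200, 250]            # altitudes, m
sigma = [0.3 * h ** 0.5 for h in z]     # SD values, m/s
a, b = fit_power_law(z, sigma)
assert abs(b - 0.5) < 1e-9 and abs(a - 0.3) < 1e-9
```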

  1. Large deviations for solutions to stochastic recurrence equations under Kesten's condition

    DEFF Research Database (Denmark)

    Buraczewski, Dariusz; Damek, Ewa; Mikosch, Thomas Valentin

    2013-01-01

    In this paper we prove large deviations results for partial sums constructed from the solution to a stochastic recurrence equation. We assume Kesten’s condition [17] under which the solution of the stochastic recurrence equation has a marginal distribution with power law tails, while the noise...... sequence of the equations can have light tails. The results of the paper are analogs of those obtained by A.V. and S.V. Nagaev [21, 22] in the case of partial sums of iid random variables. In the latter case, the large deviation probabilities of the partial sums are essentially determined by the largest...... step size of the partial sum. For the solution to a stochastic recurrence equation, the magnitude of the large deviation probabilities is again given by the tail of the maximum summand, but the exact asymptotic tail behavior is also influenced by clusters of extreme values, due to dependencies...

  2. Variation of Probable Maximum Precipitation in Brazos River Basin, TX

    Science.gov (United States)

    Bhatia, N.; Singh, V. P.

    2017-12-01

    The Brazos River basin, the second-largest river basin by area in Texas, generates the highest amount of flow volume of any river in a given year in Texas. With its headwaters located at the confluence of Double Mountain and Salt forks in Stonewall County, the third-longest flowline of the Brazos River traverses within narrow valleys in the area of rolling topography of west Texas, and flows through rugged terrains in mainly featureless plains of central Texas, before its confluence with Gulf of Mexico. Along its major flow network, the river basin covers six different climate regions characterized on the basis of similar attributes of vegetation, temperature, humidity, rainfall, and seasonal weather changes, by National Oceanic and Atmospheric Administration (NOAA). Our previous research on Texas climatology illustrated intensified precipitation regimes, which tend to result in extreme flood events. Such events have caused huge losses of lives and infrastructure in the Brazos River basin. Therefore, a region-specific investigation is required for analyzing precipitation regimes along the geographically-diverse river network. Owing to the topographical and hydroclimatological variations along the flow network, 24-hour Probable Maximum Precipitation (PMP) was estimated for different hydrologic units along the river network, using the revised Hershfield's method devised by Lan et al. (2017). The method incorporates the use of a standardized variable describing the maximum deviation from the average of a sample scaled by the standard deviation of the sample. The hydrometeorological literature identifies this method as more reasonable and consistent with the frequency equation. With respect to the calculation of stable data size required for statistically reliable results, this study also quantified the respective uncertainty associated with PMP values in different hydrologic units. The corresponding range of return periods of PMPs in different hydrologic units was
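
    A Hershfield-type estimate takes PMP = mean + K·SD of the annual-maximum series, with K derived from the standardized maximum deviation described above. The sketch below simplifies the revised method of Lan et al. (2017), and the series is synthetic:

```python
from statistics import mean, stdev

def hershfield_pmp(annual_max, k_m=None):
    """Hershfield-type PMP estimate: PMP = mean + K * SD of the
    annual-maximum series. When k_m is None, K is taken as the
    standardized maximum deviation (x_max - mean_rest) / sd_rest,
    computed with the largest value withdrawn. This is a simplified
    sketch in the spirit of the revised method, not its full detail."""
    x_max = max(annual_max)
    rest = sorted(annual_max)[:-1]      # series with the maximum withdrawn
    if k_m is None:
        k_m = (x_max - mean(rest)) / stdev(rest)
    return mean(annual_max) + k_m * stdev(annual_max)

# Synthetic 24-hour annual-maximum precipitation series (mm).
series = [110.0, 95.0, 130.0, 88.0, 210.0, 105.0, 99.0, 120.0]
pmp = hershfield_pmp(series)
assert pmp > max(series)  # PMP should exceed the largest observed maximum
```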

  3. An explicit local uniform large deviation bound for Brownian bridges

    NARCIS (Netherlands)

    Wittich, O.

    2005-01-01

    By comparing curve length in a manifold and a standard sphere, we prove a local uniform bound for the exponent in the Large Deviation formula that describes the concentration of Brownian bridges to geodesics.

  4. Effect of nasal deviation on quality of life.

    Science.gov (United States)

    de Lima Ramos, Sueli; Hochman, Bernardo; Gomes, Heitor Carvalho; Abla, Luiz Eduardo Felipe; Veiga, Daniela Francescato; Juliano, Yara; Dini, Gal Moreira; Ferreira, Lydia Masako

    2011-07-01

    Nasal deviation is a common complaint in otorhinolaryngology and plastic surgery. This condition not only causes impairment of nasal function but also affects quality of life, leading to psychological distress. The subjective assessment of quality of life, as an important aspect of outcomes research, has received increasing attention in recent decades. Quality of life is measured using standardized questionnaires that have been tested for reliability, validity, and sensitivity. The aim of this study was to evaluate health-related quality of life, self-esteem, and depression in patients with nasal deviation. Sixty patients were selected for the study. Patients with nasal deviation (n = 32) were assigned to the study group, and patients without nasal deviation (n = 28) were assigned to the control group. The diagnosis of nasal deviation was made by digital photogrammetry. Quality of life was assessed using the Medical Outcomes Study 36-Item Short Form Health Survey questionnaire; the Rosenberg Self-Esteem/Federal University of São Paulo, Escola Paulista de Medicina Scale; and the 20-item Self-Report Questionnaire. There were significant differences between groups in the physical functioning and general health subscales of the Medical Outcomes Study 36-Item Short Form Health Survey (p < 0.05). Depression was detected in 11 patients (34.4 percent) in the study group and in two patients in the control group, with a significant difference between groups (p < 0.05). Nasal deviation is an aspect of rhinoplasty of which the surgeon should be aware so that proper psychological diagnosis can be made and suitable treatment can be planned, because psychologically the patients with nasal deviation have significantly worse quality of life and are more prone to depression. Risk, II. (Figure is included in the full-text article.)

  5. Post flight analysis of NASA standard star trackers recovered from the solar maximum mission

    Science.gov (United States)

    Newman, P.

    1985-01-01

    The flight hardware returned after the Solar Maximum Mission Repair Mission was analyzed to determine the effects of 4 years in space. The NASA Standard Star Tracker would be a good candidate for such analysis because it is moderately complex and had a very elaborate calibration during the acceptance procedure. However, the recovery process extensively damaged the cathode of the image dissector detector making proper operation of the tracker and a comparison with preflight characteristics impossible. Otherwise, the tracker functioned nominally during testing.

  6. Autoregressive moving average fitting for real standard deviation in Monte Carlo power distribution calculation

    International Nuclear Information System (INIS)

    Ueki, Taro

    2010-01-01

    The noise propagation of tallies in the Monte Carlo power method can be represented by the autoregressive moving average process of orders p and p-1 [ARMA(p,p-1)], where p is an integer larger than or equal to two. The formula of the autocorrelation of ARMA(p,q), p≥q+1, indicates that ARMA(3,2) fitting is equivalent to lumping the eigenmodes of fluctuation propagation in three modes such as the slow, intermediate and fast attenuation modes. Therefore, ARMA(3,2) fitting was applied to the real standard deviation estimation of fuel assemblies at particular heights. The numerical results show that straightforward ARMA(3,2) fitting is promising but a stability issue must be resolved toward the incorporation in the distributed version of production Monte Carlo codes. The same numerical results reveal that the average performance of ARMA(3,2) fitting is equivalent to that of the batch method in MCNP with a batch size larger than one hundred and smaller than two hundred cycles for a 1100 MWe pressurized water reactor. The bias correction of low lag autocovariances in MVP/GMVP is demonstrated to have the potential of improving the average performance of ARMA(3,2) fitting. (author)
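
    The underlying problem, cycle-to-cycle correlation making the naive standard deviation an underestimate of the real one, can be illustrated with the batch method mentioned in the abstract; a generic sketch on an AR(1) surrogate series, not the ARMA(3,2) fit itself:

```python
import random
from statistics import mean, stdev

def batch_means_se(samples, batch_size):
    """Standard error of the mean via the batch method: average within
    batches, then treat the batch means as (nearly) independent. For
    positively correlated tallies this recovers the 'real' uncertainty
    that the naive estimate misses."""
    batches = [mean(samples[i:i + batch_size])
               for i in range(0, len(samples) - batch_size + 1, batch_size)]
    return stdev(batches) / len(batches) ** 0.5

# AR(1) surrogate tallies with strong positive autocorrelation.
random.seed(0)
x, xs = 0.0, []
for _ in range(20000):
    x = 0.9 * x + random.gauss(0, 1)
    xs.append(x)

naive_se = stdev(xs) / len(xs) ** 0.5
batched_se = batch_means_se(xs, batch_size=200)
assert batched_se > naive_se  # the naive SE underestimates for correlated data
```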

  7. Statistical analysis of solid waste composition data: Arithmetic mean, standard deviation and correlation coefficients.

    Science.gov (United States)

    Edjabou, Maklawe Essonanawe; Martín-Fernández, Josep Antoni; Scheutz, Charlotte; Astrup, Thomas Fruergaard

    2017-11-01

    Data for fractional solid waste composition provide relative magnitudes of individual waste fractions, the percentages of which always sum to 100, thereby connecting them intrinsically. Due to this sum constraint, waste composition data represent closed data, and their interpretation and analysis require statistical methods, other than classical statistics that are suitable only for non-constrained data such as absolute values. However, the closed characteristics of waste composition data are often ignored when analysed. The results of this study showed, for example, that unavoidable animal-derived food waste amounted to 2.21±3.12% with a confidence interval of (-4.03; 8.45), which highlights the problem of the biased negative proportions. A Pearson's correlation test, applied to waste fraction generation (kg mass), indicated a positive correlation between avoidable vegetable food waste and plastic packaging. However, correlation tests applied to waste fraction compositions (percentage values) showed a negative association in this regard, thus demonstrating that statistical analyses applied to compositional waste fraction data, without addressing the closed characteristics of these data, have the potential to generate spurious or misleading results. Therefore, compositional data should be transformed adequately prior to any statistical analysis, such as computing mean, standard deviation and correlation coefficients. Copyright © 2017 Elsevier Ltd. All rights reserved.
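
    One standard remedy for the sum constraint is a log-ratio transform prior to computing means, SDs or correlations. The sketch below shows the centered log-ratio (CLR) transform as an illustration; the abstract does not specify which transform the study used:

```python
import math

def clr(composition):
    """Centered log-ratio transform: log of each part over the geometric
    mean of all parts. Maps closed (sum-constrained) composition data to
    unconstrained real coordinates suitable for classical statistics."""
    g = math.exp(sum(math.log(p) for p in composition) / len(composition))
    return [math.log(p / g) for p in composition]

# Waste-fraction percentages summing to 100 become unconstrained
# coordinates that sum to zero.
y = clr([2.2, 35.4, 12.4, 50.0])
assert abs(sum(y)) < 1e-9
```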

  8. Note onset deviations as musical piece signatures.

    Science.gov (United States)

    Serrà, Joan; Özaslan, Tan Hakan; Arcos, Josep Lluis

    2013-01-01

    A competent interpretation of a musical composition presents several non-explicit departures from the written score. Timing variations are perhaps the most important ones: they are fundamental for expressive performance and a key ingredient for conferring a human-like quality to machine-based music renditions. However, the nature of such variations is still an open research question, with diverse theories that indicate a multi-dimensional phenomenon. In the present study, we consider event-shift timing variations and show that sequences of note onset deviations are robust and reliable predictors of the musical piece being played, irrespective of the performer. In fact, our results suggest that only a few consecutive onset deviations are already enough to identify a musical composition with statistically significant accuracy. We consider a mid-size collection of commercial recordings of classical guitar pieces and follow a quantitative approach based on the combination of standard statistical tools and machine learning techniques with the semi-automatic estimation of onset deviations. Besides the reported results, we believe that the considered materials and the methodology followed widen the testing ground for studying musical timing and could open new perspectives in related research fields.

  9. Note onset deviations as musical piece signatures.

    Directory of Open Access Journals (Sweden)

    Joan Serrà

    Full Text Available A competent interpretation of a musical composition presents several non-explicit departures from the written score. Timing variations are perhaps the most important ones: they are fundamental for expressive performance and a key ingredient for conferring a human-like quality to machine-based music renditions. However, the nature of such variations is still an open research question, with diverse theories that indicate a multi-dimensional phenomenon. In the present study, we consider event-shift timing variations and show that sequences of note onset deviations are robust and reliable predictors of the musical piece being played, irrespective of the performer. In fact, our results suggest that only a few consecutive onset deviations are already enough to identify a musical composition with statistically significant accuracy. We consider a mid-size collection of commercial recordings of classical guitar pieces and follow a quantitative approach based on the combination of standard statistical tools and machine learning techniques with the semi-automatic estimation of onset deviations. Besides the reported results, we believe that the considered materials and the methodology followed widen the testing ground for studying musical timing and could open new perspectives in related research fields.

  10. Deviation from the superparamagnetic behaviour of fine-particle systems

    CERN Document Server

    Malaescu, I

    2000-01-01

    Studies concerning superparamagnetic behaviour of fine magnetic particle systems were performed using static and radiofrequency measurements, in the range 1-60 MHz. The samples were: a ferrofluid with magnetite particles dispersed in kerosene (sample A), magnetite powder (sample B) and the same magnetite powder dispersed in a polymer (sample C). Radiofrequency measurements indicated a maximum in the imaginary part of the complex magnetic susceptibility, for each of the samples, at frequencies with the magnitude order of tens of MHz, the origin of which was assigned to Neel-type relaxation processes. The static measurements showed a Langevin-type dependence of magnetisation M and of susceptibility chi, on the magnetic field for sample A. For samples B and C deviations from this type of dependence were found. These deviations were analysed qualitatively and explained in terms of the interparticle interactions, dispersion medium influence and surface effects.

  11. Biological bases of the maximum permissible exposure levels of the UK laser standard BS 4803: 1983

    International Nuclear Information System (INIS)

    McKinlay, A.F.; Harlen, F.

    1983-10-01

    The use of lasers has increased greatly over the past 15 years or so, to the extent that they are now used routinely in many occupational and public situations. There has been an increasing awareness of the potential hazards presented by lasers and substantial efforts have been made to formulate safety standards. In the UK the relevant Safety Standard is the British Standards Institution Standard BS 4803. This Standard was originally published in 1972 and a revision has recently been published (BS 4803: 1983). The revised standard has been developed using the American National Standards Institute Standard, ANSI Z136.1 (1973 onwards), as a model. In other countries, national standards have been similarly formulated, resulting in a large measure of international agreement through participation in the work of the International Electrotechnical Commission (IEC). The bases of laser safety standards are biophysical data on threshold injury effects, particularly on the retina, and the development of theoretical models of damage mechanisms. This report deals in some detail with the mechanisms of injury from over exposure to optical radiations, in particular with the dependency of the type and degree of damage on wavelength, image size and pulse duration. The maximum permissible exposure levels recommended in BS 4803: 1983 are compared with published data for damage thresholds and the adequacy of the standard is discussed. (author)

  12. Analysis of some fuel characteristics deviations and their influence over WWER-440 fuel cycle design

    International Nuclear Information System (INIS)

    Stoyanova, I.; Kamenov, K.

    2001-01-01

    The aim of this study is to estimate the influence of deviations in WWER-440 fuel assembly (FA) characteristics on fuel core design. A large number of different fresh fuel assemblies with an enrichment of 3.5 wt % are examined with respect to enrichment, initial uranium metal mass and assembly shroud thickness. The infinite multiplication factor (Kinf) in the fuel assembly has been calculated with the HELIOS spectral code for the basic assembly and for FAs with a deviation in a single parameter. The effects of a single-parameter deviation (enrichment) and of two-parameter deviations (enrichment and wall thickness) on the neutron-physics characteristics of the core are estimated for different fuel assemblies. A relatively weak dependence of Kinf on burnup is observed as a result of deviations in the fuel enrichment and in the assembly wall thickness. An assessment of the effects of single- and two-parameter FA deviations on design fuel cycle duration and relative power peaking factor is also considered in the paper. The final conclusion is that the maximum relative shortening of the fuel cycle is observed in the case of two-parameter FA deviations

  13. Volumetric segmentation of ADC maps and utility of standard deviation as measure of tumor heterogeneity in soft tissue tumors.

    Science.gov (United States)

    Singer, Adam D; Pattany, Pradip M; Fayad, Laura M; Tresley, Jonathan; Subhawong, Ty K

    2016-01-01

    Determine interobserver concordance of semiautomated three-dimensional volumetric and two-dimensional manual measurements of apparent diffusion coefficient (ADC) values in soft tissue masses (STMs) and explore standard deviation (SD) as a measure of tumor ADC heterogeneity. Concordance correlation coefficients for mean ADC increased with more extensive sampling. Agreement on the SD of tumor ADC values was better for large regions of interest and multislice methods. Correlation between mean and SD ADC was low, suggesting that these parameters are relatively independent. Mean ADC of STMs can be determined by volumetric quantification with high interobserver agreement. STM heterogeneity merits further investigation as a potential imaging biomarker that complements other functional magnetic resonance imaging parameters. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Assessing the stock market volatility for different sectors in Malaysia by using standard deviation and EWMA methods

    Science.gov (United States)

    Saad, Shakila; Ahmad, Noryati; Jaffar, Maheran Mohd

    2017-11-01

    Nowadays, the study of the volatility concept, especially in the stock market, has gained much attention from people engaged in the financial and economic sectors. Applications of the volatility concept in financial economics can be seen in the valuation of option pricing, the estimation of financial derivatives, hedging of investment risk, etc. There are various ways to measure volatility. For this study, two methods are used: the simple standard deviation and the Exponentially Weighted Moving Average (EWMA). The focus of this study is to measure the volatility of three different sectors of business in Malaysia, called primary, secondary and tertiary, using both methods. The daily and annual volatilities of the different business sectors, based on stock prices for the period 1 January 2014 to December 2014, have been calculated in this study. Results show that different patterns of closing stock prices and returns give different volatility values when calculated using the simple method and the EWMA method.
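
    The two estimators can be sketched side by side. λ = 0.94 is the conventional RiskMetrics daily decay factor, an illustrative choice since the abstract does not give the study's own parameters, and the return series below is synthetic:

```python
import math

def simple_volatility(returns):
    """Equal-weight sample standard deviation of a return series."""
    m = sum(returns) / len(returns)
    return math.sqrt(sum((r - m) ** 2 for r in returns) / (len(returns) - 1))

def ewma_volatility(returns, lam=0.94):
    """EWMA volatility: var_t = lam * var_{t-1} + (1 - lam) * r_t**2,
    so recent returns carry more weight than older ones."""
    var = returns[0] ** 2
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r ** 2
    return math.sqrt(var)

# Synthetic daily returns; annualizing assumes ~252 trading days.
daily = [0.001, -0.004, 0.002, 0.015, -0.012, 0.003]
annualized = simple_volatility(daily) * math.sqrt(252)
assert simple_volatility(daily) > 0 and ewma_volatility(daily) > 0
```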

  15. A study on the deviation aspects of the poem “The Eightieth Stage”

    Directory of Open Access Journals (Sweden)

    Soghra Salmaninejad Mehrabadi

    2016-02-01

    's innovation. New expressions are also used in other parts of abnormality in “The Eightieth Stage”. Stylistic deviation: Akhavan sometimes uses local and slang words, and words with different songs and music, which also produce deviation; this usage is one kind of abnormality. Words such as “han, hey, by the truth, pity, hoome, kope, meydanak and ...” are of this type of abnormality. Ancient deviation: one way to break out of the habits of poetry is attention to ancient words and actions. Archaism is one of the factors producing deviation, and archaism deviation helps to make the old sp. According to Leech, archaism is the survival of the old language in the present. Syntactic factors and the type of music and words are effective in escaping from the standard language. Words such as “sowrat (sharpness), hamgenan (counterparts), parine (last year), pour (son), pahlaw (champion)” show Akhavan’s attention to archaism. Ancient pronunciation is another part of his work. Furthermore, his use of mythology and allusion has created deviation of this type. Cases such as anagram adjectival compounds, the use of two prepositions with one word, and the use of adjective and noun in the plural form are signs of archaism in grammar and syntax. He is interested in the grammatical elements of the Khorasani style, and most elements of this style are used in “The Eightieth Stage”. Semantic deviation: semantic deviation is caused by imagery. The poet frequently uses literary figures; in this way he produces new meaning and thereby highlights his poem. Simile, metaphor, personification and irony are the most important examples of this deviation. The maximum deviation from the norm in this poem is periodic deviation (ancient, or archaism); the second is semantic deviation, in which metaphor is the most meaningful, and the effect of metaphor in this poem is quite strong. In general, the poet’s attention to these different deviations is one of his techniques and the key

  16. Deviation from intention to treat analysis in randomised trials and treatment effect estimates: meta-epidemiological study.

    Science.gov (United States)

    Abraha, Iosief; Cherubini, Antonio; Cozzolino, Francesco; De Florio, Rita; Luchetta, Maria Laura; Rimland, Joseph M; Folletti, Ilenia; Marchesi, Mauro; Germani, Antonella; Orso, Massimiliano; Eusebi, Paolo; Montedori, Alessandro

    2015-05-27

    To examine whether deviation from the standard intention to treat analysis influences treatment effect estimates of randomised trials. Meta-epidemiological study. Medline, via PubMed, searched between 2006 and 2010; 43 systematic reviews of interventions and 310 randomised trials were included. From each year searched, a random selection of 5% of intervention reviews with a meta-analysis that included at least one trial that deviated from the standard intention to treat approach. Basic characteristics of the systematic reviews and randomised trials were extracted. Information on the reporting of intention to treat analysis, outcome data, risk of bias items, post-randomisation exclusions, and funding was extracted from each trial. Trials were classified as: ITT (reporting the standard intention to treat approach), mITT (reporting a deviation from the standard approach), and no ITT (reporting no approach). Within each meta-analysis, treatment effects were compared between mITT and ITT trials, and between mITT and no ITT trials, and the ratio of odds ratios was calculated. Trials that deviated from the intention to treat analysis showed larger intervention effects than trials that reported the standard approach. Where an intention to treat analysis is impossible to perform, authors should clearly report who is included in the analysis and attempt to perform multiple imputation. © Abraha et al 2015.
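
The ratio-of-odds-ratios comparison can be sketched in a few lines; the 2×2 trial counts below are hypothetical and purely illustrative, not taken from the study:

```python
def odds_ratio(events_t, total_t, events_c, total_c):
    """Odds ratio of an event between treatment and control arms."""
    a, b = events_t, total_t - events_t     # treatment: events, non-events
    c, d = events_c, total_c - events_c     # control: events, non-events
    return (a * d) / (b * c)

# Hypothetical trial counts: one mITT trial, one strict-ITT trial
or_mitt = odds_ratio(30, 100, 45, 100)   # trial deviating from strict ITT
or_itt  = odds_ratio(35, 100, 42, 100)   # trial using the standard ITT analysis
ror = or_mitt / or_itt                   # ratio of odds ratios within one meta-analysis
```

Interpreting the direction of the ratio depends on the outcome coding; the point here is only the mechanics of comparing effect estimates between trial classes within a meta-analysis.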

  17. Difference in prognostic significance of maximum standardized uptake value on [18F]-fluoro-2-deoxyglucose positron emission tomography between adenocarcinoma and squamous cell carcinoma of the lung

    International Nuclear Information System (INIS)

    Tsutani, Yasuhiro; Miyata, Yoshihiro; Misumi, Keizo; Ikeda, Takuhiro; Mimura, Takeshi; Hihara, Jun; Okada, Morihito

    2011-01-01

    This study evaluates the prognostic significance of [18F]-fluoro-2-deoxyglucose positron emission tomography/computed tomography findings according to histological subtype in patients with completely resected non-small cell lung cancer. We examined 176 consecutive patients who had undergone preoperative [18F]-fluoro-2-deoxyglucose positron emission tomography/computed tomography imaging and curative surgical resection for adenocarcinoma (n=132) or squamous cell carcinoma (n=44). Maximum standardized uptake values for the primary lesions in all patients were calculated as the [18F]-fluoro-2-deoxyglucose uptake, and the surgical results were analyzed. The median maximum standardized uptake values for the primary tumors were 2.60 in patients with adenocarcinoma and 6.95 in patients with squamous cell carcinoma. Among patients with squamous cell carcinoma, disease-free survival did not differ across the 6.95 cutoff (P=0.83), whereas 2-year disease-free survival rates were 93.9% for maximum standardized uptake value ≤3.7 and 52.4% for maximum standardized uptake value >3.7 (P<0.0001) among those with adenocarcinoma, and notably, 100% and 57.2%, respectively, in patients with Stage I adenocarcinoma (P<0.0001). On the basis of the multivariate Cox analyses of patients with adenocarcinoma, maximum standardized uptake value (P=0.008) was a significant independent factor for disease-free survival, as was nodal metastasis (P=0.001). Maximum standardized uptake value of the primary tumor was a powerful prognostic determinant for patients with adenocarcinoma, but not for those with squamous cell carcinoma of the lung. (author)

  18. Poorer right ventricular systolic function and exercise capacity in women after repair of tetralogy of fallot: a sex comparison of standard deviation scores based on sex-specific reference values in healthy control subjects.

    Science.gov (United States)

    Sarikouch, Samir; Boethig, Dietmar; Peters, Brigitte; Kropf, Siegfried; Dubowy, Karl-Otto; Lange, Peter; Kuehne, Titus; Haverich, Axel; Beerbaum, Philipp

    2013-11-01

    In repaired congenital heart disease, there is increasing evidence of sex differences in cardiac remodeling, but comparable data are lacking for specific congenital heart defects such as repaired tetralogy of Fallot. In a prospective multicenter study, a cohort of 272 contemporary patients (158 men; mean age, 14.3±3.3 years [range, 8-20 years]) with repaired tetralogy of Fallot underwent cardiac magnetic resonance for ventricular function and metabolic exercise testing. All data were transformed to standard deviation scores according to the Lambda-Mu-Sigma method by relating individual values to their respective 50th percentile (standard deviation score, 0) in sex-specific healthy control subjects. No sex differences were observed in age at repair, type of repair conducted, or overall hemodynamic results. Relative to sex-specific controls, women with repaired tetralogy of Fallot had larger right ventricular end-systolic volumes (standard deviation scores: women, 4.35; men, 3.25; P=0.001), lower right ventricular ejection fraction (women, -2.83; men, -2.12; P=0.011), lower right ventricular muscle mass (women, 1.58; men, 2.45; P=0.001), and poorer peak oxygen uptake (women, -1.65; men, -1.14). These standard deviation scores in repaired tetralogy of Fallot suggest that women perform more poorly than men in terms of right ventricular systolic function as tested by cardiac magnetic resonance and exercise capacity. This effect cannot be explained by selection bias. Further outcome data are required from longitudinal cohort studies.

  19. ROC [Receiver Operating Characteristics] study of maximum likelihood estimator human brain image reconstructions in PET [Positron Emission Tomography] clinical practice

    International Nuclear Information System (INIS)

    Llacer, J.; Veklerov, E.; Nolan, D.; Grafton, S.T.; Mazziotta, J.C.; Hawkins, R.A.; Hoh, C.K.; Hoffman, E.J.

    1990-10-01

    This paper will report on the progress to date in carrying out Receiver Operating Characteristics (ROC) studies comparing Maximum Likelihood Estimator (MLE) and Filtered Backprojection (FBP) reconstructions of normal and abnormal human brain PET data in a clinical setting. A previous statistical study of reconstructions of the Hoffman brain phantom with real data indicated that the pixel-to-pixel standard deviation in feasible MLE images is approximately proportional to the square root of the number of counts in a region, as opposed to a standard deviation which is high and largely independent of the number of counts in FBP. A preliminary ROC study carried out with 10 non-medical observers performing a relatively simple detectability task indicates that, for the majority of observers, lower standard deviation translates itself into a statistically significant detectability advantage in MLE reconstructions. The initial results of ongoing tests with four experienced neurologists/nuclear medicine physicians are presented. Normal cases of 18F-fluorodeoxyglucose (FDG) cerebral metabolism studies and abnormal cases in which a variety of lesions have been introduced into normal data sets have been evaluated. We report on the results of reading the reconstructions of 90 data sets, each corresponding to a single brain slice. It has become apparent that the design of the study based on reading single brain slices is too insensitive and we propose a variation based on reading three consecutive slices at a time, rating only the center slice. 9 refs., 2 figs., 1 tab

  20. Ensemble standard deviation of wind speed and direction of the FDDA input to WRF

    Data.gov (United States)

    U.S. Environmental Protection Agency — NetCDF file of the SREF standard deviation of wind speed and direction that was used to inject variability in the FDDA input. variable U_NDG_OLD contains standard...

  1. Properties of pattern standard deviation in open-angle glaucoma patients with hemi-optic neuropathy and bi-optic neuropathy.

    Science.gov (United States)

    Heo, Dong Won; Kim, Kyoung Nam; Lee, Min Woo; Lee, Sung Bok; Kim, Chang-Sik

    2017-01-01

    To evaluate the properties of pattern standard deviation (PSD) according to localization of the glaucomatous optic neuropathy. We enrolled 242 eyes of 242 patients with primary open-angle glaucoma, with a best-corrected visual acuity ≥ 20/25, and no media opacity. Patients were examined via dilated fundus photography, spectral-domain optical coherence tomography, and Humphrey visual field examination, and divided into those with hemi-optic neuropathy (superior or inferior) and bi-optic neuropathy (both superior and inferior). We assessed the relationship between mean deviation (MD) and PSD. Using broken stick regression analysis, the tipping point was identified, i.e., the point at which MD became significantly associated with a paradoxical reversal of PSD. In 91 patients with hemi-optic neuropathy, PSD showed a strong correlation with MD (r = -0.973, β = -0.965, p < 0.001). The difference between MD and PSD ("-MD-PSD") was constant (mean, -0.32 dB; 95% confidence interval, -2.48~1.84 dB) regardless of visual field defect severity. However, in 151 patients with bi-optic neuropathy, a negative correlation was evident between "-MD-PSD" and MD (r2 = 0.907, p < 0.001). Overall, the MD tipping point was -14.0 dB, which was close to approximately 50% damage of the entire visual field (p < 0.001). Although a false decrease of PSD usually begins at approximately 50% visual field damage, in patients with hemi-optic neuropathy, the PSD shows no paradoxical decrease and shows a linear correlation with MD.
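
Broken-stick regression locates a tipping point by fitting a two-segment piecewise-linear model over candidate breakpoints and keeping the one with the smallest residual error. A sketch on synthetic data with a bend placed at MD = -14 dB; all names and numbers are illustrative, not the study's:

```python
import numpy as np

def broken_stick_fit(x, y, candidates):
    """Fit y = b0 + b1*x + b2*max(0, x - c) over candidate breakpoints c,
    returning the breakpoint (tipping point) with the smallest residual error."""
    best = None
    for c in candidates:
        X = np.column_stack([np.ones_like(x), x, np.maximum(0.0, x - c)])
        beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((y - X @ beta) ** 2)
        if best is None or sse < best[0]:
            best = (sse, c, beta)
    return best[1], best[2]

# Hypothetical MD values (dB) and a "-MD-PSD" response that bends at MD = -14 dB
x = np.linspace(-30, 0, 61)
y = np.where(x < -14, 1.5 * (x + 14), 0.0)     # flat above the tipping point, linear below
tip, coefs = broken_stick_fit(x, y, candidates=np.linspace(-25, -5, 41))
```

Scanning candidate breakpoints and comparing residual error is the simplest way to fit such a model; dedicated segmented-regression packages estimate the breakpoint jointly with its confidence interval.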

  2. The maximum entropy method of moments and Bayesian probability theory

    Science.gov (United States)

    Bretthorst, G. Larry

    2013-08-01

    The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissue in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often this distribution can be characterized by a Gaussian, but just as often it is much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments is reviewed, along with some of its problems and the conditions under which it fails. In later sections, the functional form of the maximum entropy method of moments probability distribution is incorporated into Bayesian probability theory. It is shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments: one obtains posterior probabilities for the Lagrange multipliers and, finally, can put error bars on the resulting estimated density function.
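
A grid-based sketch of the maximum entropy method of moments: given target moments, the maxent density has the form p(x) ∝ exp(-Σ_k λ_k x^k), and the Lagrange multipliers λ_k can be found by gradient descent on the convex dual. The grid, learning rate, and iteration count below are illustrative choices, not taken from the paper:

```python
import numpy as np

def maxent_density(moments, x, iters=5000, lr=0.05):
    """Maximum-entropy density p(x) ~ exp(-sum_k lam[k] * x**(k+1)) whose first
    len(moments) moments match the targets, fitted by gradient descent on the
    convex dual.  Grid-based sketch, not production code."""
    K = len(moments)
    lam = np.zeros(K)
    powers = np.array([x ** (k + 1) for k in range(K)])   # rows: x, x^2, ...
    dx = x[1] - x[0]
    for _ in range(iters):
        p = np.exp(-powers.T @ lam)
        p /= p.sum() * dx                                  # normalize on the grid
        model_moments = powers @ p * dx                    # E_p[x^k] on the grid
        lam -= lr * (np.asarray(moments) - model_moments)  # dual gradient step
    return p

x = np.linspace(-8, 8, 801)
p = maxent_density([0.0, 1.0], x)   # constrain mean = 0 and E[x^2] = 1
```

With only the first two moments constrained, the fitted density converges to the standard normal, the textbook maxent result; with higher moments the same machinery produces non-Gaussian shapes, which is where the numerical fragility discussed in the abstract appears.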

  3. Large deviations and idempotent probability

    CERN Document Server

    Puhalskii, Anatolii

    2001-01-01

    In the view of many probabilists, author Anatolii Puhalskii's research results stand among the most significant achievements in the modern theory of large deviations. In fact, his work marked a turning point in the depth of our understanding of the connections between the large deviation principle (LDP) and well-known methods for establishing weak convergence results. Large Deviations and Idempotent Probability expounds upon the recent methodology of building large deviation theory along the lines of weak convergence theory. The author develops an idempotent (or maxitive) probability theory, introduces idempotent analogues of martingales (maxingales), Wiener and Poisson processes, and Ito differential equations, and studies their properties. The large deviation principle for stochastic processes is formulated as a certain type of convergence of stochastic processes to idempotent processes, which the author calls large deviation convergence. The approach to establishing large deviation convergence uses novel com...

  4. Final height in survivors of childhood cancer compared with Height Standard Deviation Scores at diagnosis.

    Science.gov (United States)

    Knijnenburg, S L; Raemaekers, S; van den Berg, H; van Dijk, I W E M; Lieverst, J A; van der Pal, H J; Jaspers, M W M; Caron, H N; Kremer, L C; van Santen, H M

    2013-04-01

    Our study aimed to evaluate final height in a cohort of Dutch childhood cancer survivors (CCS) and assess possible determinants of final height, including height at diagnosis. We calculated standard deviation scores (SDS) for height at initial cancer diagnosis and height in adulthood in a cohort of 573 CCS. Multivariable regression analyses were carried out to estimate the influence of different determinants on height SDS at follow-up. Overall, survivors had a normal height SDS at cancer diagnosis. However, at follow-up in adulthood, 8.9% had a height ≤-2 SDS. Height SDS at diagnosis was an important determinant for adult height SDS. Children treated with (higher doses of) radiotherapy showed significantly reduced final height SDS. Survivors treated with total body irradiation (TBI) and craniospinal radiation had the greatest loss in height (-1.56 and -1.37 SDS, respectively). Younger age at diagnosis contributed negatively to final height. Height at diagnosis was an important determinant for height SDS at follow-up. Survivors treated with TBI, cranial and craniospinal irradiation should be monitored periodically for adequate linear growth, to enable treatment on time if necessary. For correct interpretation of treatment-related late effects studies in CCS, pre-treatment data should always be included.
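
A standard deviation score is simply a z-score of a measurement against sex- and age-specific reference data. A minimal sketch; the reference mean and SD values below are made up for illustration, where real work would use national growth references:

```python
def height_sds(height_cm, ref_mean_cm, ref_sd_cm):
    """Standard deviation score (z-score) of a height against a
    sex- and age-specific reference mean and SD."""
    return (height_cm - ref_mean_cm) / ref_sd_cm

# Hypothetical reference values for illustration only
sds_at_diagnosis = height_sds(104.0, 110.0, 4.5)   # about -1.33
sds_adult        = height_sds(164.0, 176.0, 6.0)   # -2.0, i.e. at the short-stature cutoff
```

Expressing heights as SDS removes the age and sex dependence, which is what lets the study compare height at diagnosis in childhood directly against final adult height.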

  5. Telemetry Standards, RCC Standard 106-17. Chapter 3. Frequency Division Multiplexing Telemetry Standards

    Science.gov (United States)

    2017-07-01

    Excerpt from Telemetry Standards, RCC Standard 106-17, Chapter 3 (Frequency Division Multiplexing Telemetry Standards), July 2017; the fragment includes Table 3-4, Constant-Bandwidth FM Subcarrier Channels (channels A-H, deviation criteria), and the chapter front matter.

  6. 48 CFR 2001.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Individual deviations. 2001... Individual deviations. In individual cases, deviations from either the FAR or the NRCAR will be authorized... deviations clearly in the best interest of the Government. Individual deviations must be authorized in...

  7. 48 CFR 801.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Individual deviations. 801... Individual deviations. (a) Authority to authorize individual deviations from the FAR and VAAR is delegated to... nature of the deviation. (d) The DSPE may authorize individual deviations from the FAR and VAAR when an...

  8. Detecting deviating behaviors without models

    NARCIS (Netherlands)

    Lu, X.; Fahland, D.; van den Biggelaar, F.J.H.M.; van der Aalst, W.M.P.; Reichert, M.; Reijers, H.A.

    2016-01-01

    Deviation detection is a set of techniques that identify deviations from normative processes in real process executions. These diagnostics are used to derive recommendations for improving business processes. Existing detection techniques identify deviations either only on the process instance level

  9. Implementation of an Algorithm for Prosthetic Joint Infection: Deviations and Problems.

    Science.gov (United States)

    Mühlhofer, Heinrich M L; Kanz, Karl-Georg; Pohlig, Florian; Lenze, Ulrich; Lenze, Florian; Toepfer, Andreas; von Eisenhart-Rothe, Ruediger; Schauwecker, Johannes

    The outcome of revision surgery in arthroplasty is based on a precise diagnosis. In addition, the treatment varies based on whether the prosthetic failure is caused by aseptic or septic loosening. Algorithms can help to identify periprosthetic joint infections (PJI) and standardize diagnostic steps, however, algorithms tend to oversimplify the treatment of complex cases. We conducted a process analysis during the implementation of a PJI algorithm to determine problems and deviations associated with the implementation of this algorithm. Fifty patients who were treated after implementing a standardized algorithm were monitored retrospectively. Their treatment plans and diagnostic cascades were analyzed for deviations from the implemented algorithm. Each diagnostic procedure was recorded, compared with the algorithm, and evaluated statistically. We detected 52 deviations while treating 50 patients. In 25 cases, no discrepancy was observed. Synovial fluid aspiration was not performed in 31.8% of patients (95% confidence interval [CI], 18.1%-45.6%), while white blood cell counts (WBCs) and neutrophil differentiation were assessed in 54.5% of patients (95% CI, 39.8%-69.3%). We also observed that the prolonged incubation of cultures was not requested in 13.6% of patients (95% CI, 3.5%-23.8%). In seven of 13 cases (63.6%; 95% CI, 35.2%-92.1%), arthroscopic biopsy was performed; 6 arthroscopies were performed in discordance with the algorithm (12%; 95% CI, 3%-21%). Self-critical analysis of diagnostic processes and monitoring of deviations using algorithms are important and could increase the quality of treatment by revealing recurring faults.

  10. Prognostic implications of mutation-specific QTc standard deviation in congenital long QT syndrome.

    Science.gov (United States)

    Mathias, Andrew; Moss, Arthur J; Lopes, Coeli M; Barsheshet, Alon; McNitt, Scott; Zareba, Wojciech; Robinson, Jennifer L; Locati, Emanuela H; Ackerman, Michael J; Benhorin, Jesaia; Kaufman, Elizabeth S; Platonov, Pyotr G; Qi, Ming; Shimizu, Wataru; Towbin, Jeffrey A; Michael Vincent, G; Wilde, Arthur A M; Zhang, Li; Goldenberg, Ilan

    2013-05-01

    Individual corrected QT interval (QTc) may vary widely among carriers of the same long QT syndrome (LQTS) mutation. Currently, neither the mechanism nor the implications of this variable penetrance are well understood. We hypothesized that the assessment of QTc variance in patients with congenital LQTS who carry the same mutation provides incremental prognostic information over the patient-specific QTc. The study population comprised 1206 patients with LQTS with 95 different mutations, each carried by ≥5 individuals. Multivariate Cox proportional hazards regression analysis was used to assess the effect of mutation-specific standard deviation of QTc (QTcSD) on the risk of cardiac events (comprising syncope, aborted cardiac arrest, and sudden cardiac death) from birth through age 40 years in the total population and by genotype. Assessment of mutation-specific QTcSD showed large differences among carriers of the same mutations (median QTcSD 45 ms). Multivariate analysis showed that each 20 ms increment in QTcSD was associated with a significant 33% (P = .002) increase in the risk of cardiac events after adjustment for the patient-specific QTc duration and the family effect on QTc. The risk associated with QTcSD was pronounced among patients with long QT syndrome type 1 (hazard ratio 1.55 per 20 ms increment; P<.001), whereas among patients with long QT syndrome type 2, the risk associated with QTcSD was not statistically significant (hazard ratio 0.99; P = .95; P value for QTcSD-by-genotype interaction = .002). Our findings suggest that mutations with a wider variation in QTc duration are associated with increased risk of cardiac events. These findings appear to be genotype specific, with a pronounced effect among patients with the long QT syndrome type 1 genotype. Copyright © 2013. Published by Elsevier Inc.
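
Computing the mutation-specific QTcSD amounts to grouping carriers by mutation and taking the sample standard deviation of their QTc values. A sketch with invented carrier data; the mutation names and QTc values are illustrative only:

```python
import math
from collections import defaultdict

def mutation_qtc_sd(records):
    """Group (mutation, QTc) pairs by mutation and return the sample SD of
    QTc for each mutation carried by at least five individuals."""
    groups = defaultdict(list)
    for mutation, qtc in records:
        groups[mutation].append(qtc)
    out = {}
    for mutation, values in groups.items():
        if len(values) < 5:
            continue                      # mirror the study's >= 5 carriers rule
        n = len(values)
        mean = sum(values) / n
        out[mutation] = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return out

# Hypothetical carriers of two mutations (QTc in ms)
records = [("mutA", q) for q in (470, 500, 455, 520, 480)] + \
          [("mutB", q) for q in (460, 465, 470, 462, 468)]
qtc_sd = mutation_qtc_sd(records)
```

In this toy data, "mutA" carriers scatter widely around their mean while "mutB" carriers cluster tightly, so mutA gets the larger QTcSD, the quantity the study links to cardiac-event risk.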

  11. 48 CFR 1501.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Individual deviations. 1501.403 Section 1501.403 Federal Acquisition Regulations System ENVIRONMENTAL PROTECTION AGENCY GENERAL GENERAL Deviations 1501.403 Individual deviations. Requests for individual deviations from the FAR and the...

  12. 48 CFR 2401.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Individual deviations. 2401... DEVELOPMENT GENERAL FEDERAL ACQUISITION REGULATION SYSTEM Deviations 2401.403 Individual deviations. In individual cases, proposed deviations from the FAR or HUDAR shall be submitted to the Senior Procurement...

  13. Effects of central nervous system drugs on driving: speed variability versus standard deviation of lateral position as outcome measure of the on-the-road driving test.

    Science.gov (United States)

    Verster, Joris C; Roth, Thomas

    2014-01-01

    The on-the-road driving test in normal traffic is used to examine the impact of drugs on driving performance. This paper compares the sensitivity of standard deviation of lateral position (SDLP) and SD speed in detecting driving impairment. A literature search was conducted to identify studies applying the on-the-road driving test, examining the effects of anxiolytics, antidepressants, antihistamines, and hypnotics. The proportion of comparisons (treatment versus placebo) where a significant impairment was detected with SDLP and SD speed was compared. About 40% of 53 relevant papers did not report data on SD speed and/or SDLP. After placebo administration, the correlation between SDLP and SD speed was significant but did not explain much variance (r = 0.253, p = 0.0001). A significant correlation was found between ΔSDLP and ΔSD speed (treatment-placebo), explaining 48% of variance. When using SDLP as outcome measure, 67 significant treatment-placebo comparisons were found. Only 17 (25.4%) were significant when SD speed was used as outcome measure. Alternatively, for five treatment-placebo comparisons, a significant difference was found for SD speed but not for SDLP. Standard deviation of lateral position is a more sensitive outcome measure to detect driving impairment than speed variability.

  14. INDICATIVE MODEL OF DEVIATIONS IN PROJECT

    Directory of Open Access Journals (Sweden)

    Олена Борисівна ДАНЧЕНКО

    2016-02-01

    Full Text Available The article describes the process of constructing an indicator model of project deviations, based on a conceptual model of project deviations integrated management (PDIM). During a project, different causes (such as risks, changes, problems, crises, conflicts, and stress) lead to deviations in the integrated project indicators: time, cost, quality, and content. To define more precisely where deviations occur in a project and how dangerous they are for the project as a whole, an indicative model of project deviations is needed; it allows identifying the most dangerous deviations, which require PDIM. The well-known IPMA Delta model was taken as the basis for evaluating project success. IPMA Delta assesses the project management competence of an organization in three modules: the I-module ("Individuals", a self-assessment of personnel), the P-module ("Projects", a self-assessment of projects and/or programs), and the O-module ("Organization", interviews with selected people during the company audit). In building the indicative model of deviations, the first step is the assessment of project management in the organization by IPMA Delta. Then a cognitive map and a matrix of system interconnections of the project are built, simulations are conducted, and a scale of deviations is constructed for the selected project, determining the size and place of deviations. To identify the detailed causes of deviations in project management, an extended system of indicators based on the Project Excellence model is proposed. The proposed indicative model of deviations allows estimating the size of deviations, identifies more accurately the place of negative deviations in the project, and provides the project manager with information for operational decision making in managing deviations during project implementation.

  15. 48 CFR 1301.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Individual deviations... DEPARTMENT OF COMMERCE ACQUISITION REGULATIONS SYSTEM Deviations From the FAR 1301.403 Individual deviations. The designee authorized to approve individual deviations from the FAR is set forth in CAM 1301.70. ...

  16. 48 CFR 301.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Individual deviations. 301... ACQUISITION REGULATION SYSTEM Deviations From the FAR 301.403 Individual deviations. Contracting activities shall prepare requests for individual deviations to either the FAR or HHSAR in accordance with 301.470. ...

  17. 48 CFR 1201.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Individual deviations... FEDERAL ACQUISITION REGULATIONS SYSTEM 70-Deviations From the FAR and TAR 1201.403 Individual deviations... Executive Service (SES) official or that of a Flag Officer, may authorize individual deviations (unless (FAR...

  18. 48 CFR 501.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Individual deviations. 501... Individual deviations. (a) An individual deviation affects only one contract action. (1) The Head of the Contracting Activity (HCA) must approve an individual deviation to the FAR. The authority to grant an...

  19. 48 CFR 401.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Individual deviations. 401... AGRICULTURE ACQUISITION REGULATION SYSTEM Deviations From the FAR and AGAR 401.403 Individual deviations. In individual cases, deviations from either the FAR or the AGAR will be authorized only when essential to effect...

  20. 48 CFR 2801.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Individual deviations. 2801... OF JUSTICE ACQUISITION REGULATIONS SYSTEM Deviations From the FAR and JAR 2801.403 Individual deviations. Individual deviations from the FAR or the JAR shall be approved by the head of the contracting...

  1. Allan deviation analysis of financial return series

    Science.gov (United States)

    Hernández-Pérez, R.

    2012-05-01

    We perform a scaling analysis of the return series of different financial assets by applying the Allan deviation (ADEV), which is used in time and frequency metrology to characterize quantitatively the stability of frequency standards, since it has been demonstrated to be a robust quantity for analyzing fluctuations of non-stationary time series over different observation intervals. The data used are daily opening price series for assets from different markets over a time span of around ten years. We found that the ADEV results for the return series at short scales resemble those expected for an uncorrelated series, consistent with the efficient market hypothesis. On the other hand, the ADEV results for the absolute return series at short scales (the first one or two decades) decrease following an approximate scaling relation up to a point that differs for almost every asset, after which the ADEV deviates from scaling, which suggests that the presence of clustering, long-range dependence and non-stationarity signatures in the series drives the results for large observation intervals.
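
The non-overlapping Allan deviation the authors apply can be sketched in a few lines; the series below is synthetic, not market data:

```python
import math

def allan_deviation(y, m=1):
    """Non-overlapping Allan deviation at averaging factor m:
    ADEV(m) = sqrt( 0.5 * mean( (ybar[i+1] - ybar[i])**2 ) ),
    where ybar are means of consecutive blocks of m samples."""
    usable = len(y) - len(y) % m                     # drop the ragged tail
    blocks = [sum(y[i:i + m]) / m for i in range(0, usable, m)]
    diffs = [b1 - b0 for b0, b1 in zip(blocks, blocks[1:])]
    return math.sqrt(0.5 * sum(d * d for d in diffs) / len(diffs))

series = [0.0, 1.0] * 8                  # synthetic, maximally "unstable" at lag 1
adev_1 = allan_deviation(series, m=1)    # sqrt(0.5), about 0.707
adev_2 = allan_deviation(series, m=2)    # pair averages are constant, so 0.0
```

Plotting ADEV against the averaging factor m reveals the scaling behavior the paper exploits: uncorrelated noise falls off roughly as 1/sqrt(m), while clustering and non-stationarity make the curve depart from that slope at long observation intervals.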

  2. [Study on physical deviation factors on laser induced breakdown spectroscopy measurement].

    Science.gov (United States)

    Wan, Xiong; Wang, Peng; Wang, Qi; Zhang, Qing; Zhang, Zhi-Min; Zhang, Hua-Ming

    2013-10-01

    In order to eliminate the deviation between the measured LIBS spectral line and the standard LIBS spectral line, and to improve the accuracy of element measurement, a study of physical deviation factors in laser-induced breakdown spectroscopy was carried out. Under the same experimental conditions, the relationship between the ablated hole effect and spectral wavelength was tested, and the Stark broadening of the Mg plasma laser-induced breakdown spectrum with sampling time delays from 1.00 to 3.00 μs was also studied; thus the physical deviation influences such as the ablated hole effect and Stark broadening could be characterized while collecting the spectrum. The results and methods of this analysis can also be applied to other laser-induced breakdown spectroscopy experimental systems, which is of great significance for improving the accuracy of LIBS element measurement and for research on the optimum sampling time delay of LIBS.

  3. Search for Standard Model Higgs boson in the two-photon final state in ATLAS

    Directory of Open Access Journals (Sweden)

    Davignon Olivier

    2012-06-01

    Full Text Available We report on the search for the Standard Model Higgs boson decaying into two photons, based on proton-proton collision data with a center-of-mass energy of 7 TeV recorded by the ATLAS experiment at the LHC. The dataset has an integrated luminosity of about 1.08 fb−1. The expected cross-section exclusion at 95% confidence level varies between 2.0 and 5.8 times the Standard Model cross section over the diphoton mass range 110–150 GeV. The maximum deviations from the background-only expectation are consistent with statistical fluctuations.

  4. 48 CFR 3401.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Individual deviations. 3401.403 Section 3401.403 Federal Acquisition Regulations System DEPARTMENT OF EDUCATION ACQUISITION REGULATION GENERAL ED ACQUISITION REGULATION SYSTEM Deviations 3401.403 Individual deviations. An individual...

  5. 48 CFR 1.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Individual deviations. 1.403 Section 1.403 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION GENERAL FEDERAL ACQUISITION REGULATIONS SYSTEM Deviations from the FAR 1.403 Individual deviations. Individual...

  6. 48 CFR 2501.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Individual deviations. 2501.403 Section 2501.403 Federal Acquisition Regulations System NATIONAL SCIENCE FOUNDATION GENERAL FEDERAL ACQUISITION REGULATIONS SYSTEM Deviations From the FAR 2501.403 Individual deviations. Individual...

  7. Wind power limit calculation basedon frequency deviation using Matlab

    International Nuclear Information System (INIS)

    Santos Fuentefria, Ariel; Salgado Duarte, Yorlandis; MejutoFarray, Davis

    2017-01-01

    The utilization of wind energy for the production of electricity is a technology that has been promoted in recent years as an alternative in the face of environmental deterioration and the scarcity of fossil fuels. When wind power generation is integrated into electrical power systems, frequency-stability problems may arise, owing mainly to the stochastic character of the wind and the dispatchers' inability to control wind power output. In this work, the frequency deviation is analyzed as wind power generation rises in an isolated electrical power system. The analysis is carried out computationally with an algorithm implemented in Matlab, which allows several simulations to be run in order to obtain the frequency behavior under different load and wind power conditions. In addition, the wind power limit was determined for minimum, medium and maximum load. The results show that the largest wind power values are obtained under the maximum-load condition; however, the minimum-load condition limits the introduction of wind power into the system. (author)

  8. 48 CFR 601.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Individual deviations. 601.403 Section 601.403 Federal Acquisition Regulations System DEPARTMENT OF STATE GENERAL DEPARTMENT OF STATE ACQUISITION REGULATIONS SYSTEM Deviations from the FAR 601.403 Individual deviations. The...

  9. Statistical properties of the deviations of f 0 F 2 from monthly medians

    Directory of Open Access Journals (Sweden)

    Y. Tulunay

    2002-06-01

    Full Text Available The deviations of hourly f 0 F 2 from monthly medians for 20 stations in Europe during the period 1958-1998 are studied. Spectral analysis is used to show that, both for the original data (for each hour) and for the deviations from monthly medians, the deterministic components are the harmonics of 11 years (solar cycle), 1 year and its harmonics, 27 days, and 12 h 50.49 m (2nd harmonic of the lunar rotation period, L 2) periodicities. Using histograms of one-year samples, it is shown that the deviations from monthly medians are nearly zero-mean (mean < 0.5) and approximately Gaussian (relative difference in the range 10% to 20%), and that their standard deviations are larger for daylight hours (in the range 5-7). It is shown that the amplitude distribution of the positive and negative deviations is nearly symmetrical at night hours, but asymmetrical for day hours. The positive and negative deviations are then studied separately, and it is observed that the positive deviations are nearly independent of R12 except at high latitudes, whereas the negative deviations are modulated by R12. The 90% confidence interval of the negative deviations for each station and each hour is computed as a linear model in terms of R12. After correction for local time, it is shown that for all hours the confidence intervals increase with latitude but decrease above 60° N. Long-term trend analysis showed that there is an increase in the amplitude of positive deviations from monthly means irrespective of solar conditions. Using spectral analysis it is also shown that the seasonal dependency of negative deviations is more accentuated than that of positive deviations, especially at low latitudes. At certain stations, it is also observed that the 4th harmonic of 1 year, corresponding to a periodicity of 3 months, which is missing in the f 0 F 2 data, appears in the spectra of the negative variations.

  10. 48 CFR 201.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Individual deviations. 201.403 Section 201.403 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM... Individual deviations. (1) Individual deviations, except those described in 201.402(1) and paragraph (2) of...

  11. Stress-testing the Standard Model at the LHC

    CERN Document Server

    2016-01-01

    With the high-energy run of the LHC now underway, and clear manifestations of beyond-Standard-Model physics not yet seen in data from the previous run, the search for new physics at the LHC may be a quest for small deviations with big consequences. If clear signals are present, precise predictions and measurements will again be crucial for extracting the maximum information from the data, as in the case of the Higgs boson. Precision will therefore remain a key theme for particle physics research in the coming years. The conference will provide a forum for experimentalists and theorists to identify the challenges and refine the tools for high-precision tests of the Standard Model and searches for signals of new physics at Run II of the LHC. Topics to be discussed include: pinning down Standard Model corrections to key LHC processes; combining fixed-order QCD calculations with all-order resummations and parton showers; new developments in jet physics concerning jet substructure, associated jets and boosted je...

  12. 48 CFR 3001.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Individual deviations... from the FAR and HSAR 3001.403 Individual deviations. Unless precluded by law, executive order, or other regulation, the HCA is authorized to approve individual deviation (except with respect to (FAR) 48...

  13. 48 CFR 1901.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Individual deviations. 1901.403 Section 1901.403 Federal Acquisition Regulations System BROADCASTING BOARD OF GOVERNORS GENERAL... Individual deviations. Deviations from the IAAR or the FAR in individual cases shall be authorized by the...

  14. Deviation Management: Key Management Subsystem Driver of Knowledge-Based Continuous Improvement in the Henry Ford Production System.

    Science.gov (United States)

    Zarbo, Richard J; Copeland, Jacqueline R; Varney, Ruan C

    2017-10-01

    To develop a business subsystem fulfilling International Organization for Standardization 15189 nonconformance management regulatory standard, facilitating employee engagement in problem identification and resolution to effect quality improvement and risk mitigation. From 2012 to 2016, the integrated laboratories of the Henry Ford Health System used a quality technical team to develop and improve a management subsystem designed to identify, track, trend, and summarize nonconformances based on frequency, risk, and root cause for elimination at the level of the work. Programmatic improvements and training resulted in markedly increased documentation culminating in 71,641 deviations in 2016 classified by a taxonomy of 281 defect types into preanalytic (74.8%), analytic (23.6%), and postanalytic (1.6%) testing phases. The top 10 deviations accounted for 55,843 (78%) of the total. Deviation management is a key subsystem of managers' standard work whereby knowledge of nonconformities assists in directing corrective actions and continuous improvements that promote consistent execution and higher levels of performance. © American Society for Clinical Pathology, 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  15. 40 CFR 60.2215 - What else must I report if I have a deviation from the operating limits or the emission limitations?

    Science.gov (United States)

    2010-07-01

    ... performance test was conducted that deviated from any emission limitation. (b) The deviation report must be... deviation from the operating limits or the emission limitations? 60.2215 Section 60.2215 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR...

  16. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm......, founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from...... the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be close to irreversible...

  17. 49 CFR 192.943 - When can an operator deviate from these reassessment intervals?

    Science.gov (United States)

    2010-10-01

    ... (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Gas Transmission Pipeline Integrity Management § 192.943 When can an operator deviate from these reassessment...

  18. The natural background approach to setting radiation standards

    International Nuclear Information System (INIS)

    Adler, H.I.; Federow, H.; Weinberg, A.M.

    1979-01-01

    The suggestion has often been made that an additional radiation exposure imposed on humanity as a result of some important activity such as electricity generation would be acceptable if the exposure was 'small' compared to the natural background. In order to make this concept quantitative and objective, we propose that 'small compared with the natural background' be interpreted as the standard deviation (weighted with the exposed population) of the natural background. We believe that this use of the variation in natural background radiation is less arbitrary and requires fewer unfounded assumptions than some current approaches to standard-setting. The standard deviation is an easily calculated statistic that is small compared with the mean value for natural exposures of populations. It is an objectively determined quantity and its significance is generally understood. Its determination does not omit any of the pertinent data. When this method is applied to the population of the USA, it implies that a dose of 20 mrem/year would be an acceptable standard. This is closely comparable to the 25 mrem/year suggested by the Environmental Protection Agency as the maximum allowable exposure to an individual in the general population as a result of the operation of the complete uranium fuel cycle. Other agents for which a natural background exists can be treated in the same way as radiation. In addition, a second method for determining permissible exposure levels for agents other than radiation is presented. This method makes use of the natural background radiation data as a primary standard. Some observations on benzo(a)pyrene, using this latter method, are presented. (author)
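The population-weighted standard deviation proposed above as a yardstick can be computed as below; the regional doses and population figures are invented for illustration, not the actual U.S. data used by the authors:

```python
import math

def weighted_mean_sd(values, weights):
    """Population-weighted mean and standard deviation."""
    total = sum(weights)
    mean = sum(v * w for v, w in zip(values, weights)) / total
    var = sum(w * (v - mean) ** 2 for v, w in zip(values, weights)) / total
    return mean, math.sqrt(var)

# hypothetical regional background doses (mrem/year) and populations (millions)
doses = [90, 110, 150, 75]
pops = [120, 90, 40, 60]
mean, sd = weighted_mean_sd(doses, pops)
print(f"weighted mean = {mean:.1f} mrem/yr, weighted SD = {sd:.1f} mrem/yr")
```

Under the proposal in the abstract, the weighted SD (not the mean) of the natural background would serve as the "small compared with background" increment for standard-setting.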

  19. Computer generation of random deviates

    International Nuclear Information System (INIS)

    Cormack, John

    1991-01-01

    The need for random deviates arises in many scientific applications. In medical physics, Monte Carlo simulations have been used in radiology, radiation therapy and nuclear medicine. Specific instances include the modelling of x-ray scattering processes and the addition of random noise to images or curves in order to assess the effects of various processing procedures. Reliable sources of random deviates with statistical properties indistinguishable from true random deviates are a fundamental necessity for such tasks. This paper provides a review of computer algorithms which can be used to generate uniform random deviates and other distributions of interest to medical physicists, along with a few caveats relating to various problems and pitfalls which can occur. Source code listings for the generators discussed (in FORTRAN, Turbo-PASCAL and Data General ASSEMBLER) are available on request from the authors. 27 refs., 3 tabs., 5 figs
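The review surveys algorithms for generating random deviates; as one classic example (not necessarily among the paper's FORTRAN, Turbo-PASCAL or assembler listings), the Box-Muller transform turns two uniform deviates into a standard normal deviate:

```python
import math
import random

def box_muller(rng=random):
    """One standard-normal deviate from two uniform deviates
    via the Box-Muller transform."""
    u1 = rng.random() or 1e-12  # guard against log(0)
    u2 = rng.random()
    return math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)

random.seed(42)
sample = [box_muller() for _ in range(50_000)]
mean = sum(sample) / len(sample)
var = sum((x - mean) ** 2 for x in sample) / len(sample)
print(round(mean, 2), round(var, 2))  # close to 0 and 1, as expected
```

Such transforms rely on a good underlying uniform generator, which is exactly the kind of pitfall (poor-quality uniform sources) the review warns about.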

  20. Large deviations and portfolio optimization

    Science.gov (United States)

    Sornette, Didier

    Risk control and optimal diversification constitute a major focus in the finance and insurance industries as well as, more or less consciously, in our everyday life. We present a discussion of the characterization of risks and of the optimization of portfolios that starts from a simple illustrative model and ends by a general functional integral formulation. A major item is that risk, usually thought of as one-dimensional in the conventional mean-variance approach, has to be addressed by the full distribution of losses. Furthermore, the time-horizon of the investment is shown to play a major role. We show the importance of accounting for large fluctuations and use the theory of Cramér for large deviations in this context. We first treat a simple model with a single risky asset that exemplifies the distinction between the average return and the typical return and the role of large deviations in multiplicative processes, and the different optimal strategies for the investors depending on their size. We then analyze the case of assets whose price variations are distributed according to exponential laws, a situation that is found to describe daily price variations reasonably well. Several portfolio optimization strategies are presented that aim at controlling large risks. We end by extending the standard mean-variance portfolio optimization theory, first within the quasi-Gaussian approximation and then using a general formulation for non-Gaussian correlated assets in terms of the formalism of functional integrals developed in the field theory of critical phenomena.

  1. Large deviation estimates for a Non-Markovian Lévy generator of big order

    International Nuclear Information System (INIS)

    Léandre, Rémi

    2015-01-01

    We give large deviation estimates for a non-Markovian convolution semi-group with a non-local generator of Lévy type of big order and with the standard normalisation of semi-classical analysis. No stochastic process is associated to this semi-group. (paper)

  2. Surgical Success Rates for Horizontal Concomitant Deviations According to the Type and Degree of Deviation

    Directory of Open Access Journals (Sweden)

    İhsan Çaça

    2004-01-01

    Full Text Available We evaluated the correlation between success rates and the type and degree of deviation in horizontal concomitant deviations. 104 horizontal concomitant strabismus cases operated on in our clinic between January 1994 and December 2000 were included in the study. 56 cases underwent a recession-resection procedure in the same eye; 19 cases, two-muscle recession and one-muscle resection; 20 cases, two-muscle recession; and 9 cases, one-muscle recession only. A deviation within ±10 prism diopters at the postoperative sixth-month examination was accepted as surgical success. The surgical success rate was 90% and 89.3% in cases with deviation angles of 15-30 and 31-50 prism diopters, respectively. The success rate was 78.9% if the angle was more than 50 prism diopters. When the surgical success rate was examined by strabismus type, success was 88.33% in alternating esotropia, 84.6% in alternating exotropia, 88% in monocular esotropia and 83.3% in monocular exotropia. No statistically significant difference was found between strabismus type and surgical success rate. The rate of gaining binocular vision after treatment was 51.8%. In strabismus surgery, the preoperative deviation angle was found to be an effective factor in the success rate.

  3. Correlation of pattern reversal visual evoked potential parameters with the pattern standard deviation in primary open angle glaucoma.

    Science.gov (United States)

    Kothari, Ruchi; Bokariya, Pradeep; Singh, Ramji; Singh, Smita; Narang, Purvasha

    2014-01-01

    To evaluate whether glaucomatous visual field defect, particularly the pattern standard deviation (PSD) of the Humphrey visual field, could be associated with visual evoked potential (VEP) parameters of patients having primary open angle glaucoma (POAG). Visual field by Humphrey perimetry and simultaneous recordings of pattern reversal visual evoked potential (PRVEP) were assessed in 100 patients with POAG. The stimulus configuration for VEP recordings consisted of the transient pattern reversal method in which a black and white checker board pattern was generated (full field) and displayed on a VEP monitor (colour, 14″) by an electronic pattern regenerator inbuilt in an evoked potential recorder (RMS EMG EP MARK II). The results of our study indicate that there is a highly significant (P<0.001) negative correlation of P100 amplitude and a statistically significant (P<0.05) positive correlation of N70 latency, P100 latency and N155 latency with the PSD of the Humphrey visual field in subjects with POAG in various age groups, as evaluated by Student's t-test. Prolongation of VEP latencies was mirrored by a corresponding increase in PSD values. Conversely, as PSD increased, the magnitude of the VEP excursions was found to be diminished.

  4. Estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean.

    Science.gov (United States)

    Schillaci, Michael A; Schillaci, Mario E

    2009-02-01

    The use of small sample sizes in human and primate evolutionary research is commonplace. Estimating how well small samples represent the underlying population, however, is not commonplace. Because the accuracy of determinations of taxonomy, phylogeny, and evolutionary process is dependent upon how well the study sample represents the population of interest, characterizing the uncertainty, or potential error, associated with analyses of small sample sizes is essential. We present a method for estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean using small samples, which allows researchers to determine post hoc the probability that their sample is a meaningful approximation of the population parameter. We tested the method using a large craniometric data set commonly used by researchers in the field. Given our results, we suggest that sample estimates of the population mean can be reasonable and meaningful even when based on small, and perhaps even very small, sample sizes.
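Under textbook normal-theory assumptions (known population σ, which may differ from the paper's exact small-sample method), the probability that the sample mean lies within a fraction f of the standard deviation of the true mean is 2Φ(f√n) − 1; a minimal sketch:

```python
import math

def prob_within(f, n):
    """P(|sample mean - mu| < f * sigma) for a normal population with
    known sigma and sample size n: 2 * Phi(f * sqrt(n)) - 1."""
    z = f * math.sqrt(n)
    return math.erf(z / math.sqrt(2.0))  # identity: 2*Phi(z) - 1 = erf(z/sqrt(2))

# probability of being within half a standard deviation of the true mean
for n in (5, 10, 30):
    print(n, round(prob_within(0.5, n), 3))
```

The probability rises quickly with n, which is why even fairly small samples can give a "meaningful approximation" in the sense the abstract describes.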

  5. LB02.03: EVALUATION OF DAY-BY-DAY BLOOD PRESSURE VARIABILITY IN CLINIC (DO WE STILL NEED STANDARD DEVIATION?).

    Science.gov (United States)

    Ryuzaki, M; Nakamoto, H; Hosoya, K; Komatsu, M; Hibino, Y

    2015-06-01

    Blood pressure (BP) variability correlates with cardiovascular disease, as does the BP level itself. There is no known easy way to evaluate BP variability in the clinic. Our aim was to evaluate the usefulness of the maximum-minimum difference (MMD) of BP in a month, compared to the standard deviation (SD), as an index of BP variability. Study 1: Twelve patients (age 65.9 ± 12.1 y/o) were enrolled. Home systolic (S) BP measurements were required in the morning. Twelve consecutive months of data with at least 3 measurements a month were required for inclusion (mean 29.0 ± 4.5 times/month in the morning). We checked the correlation between MMD and SD. Study 2: Six hemodialyzed patients monitored with the i-TECHO system (J of Hypertens 2007: 25: 2353-2358) for longer than one year were analyzed. As in Study 1, we analyzed the correlation between SD and MMD of SBP (17.4 ± 11.9 measurements per month). Study 3: Data from our previous study (FUJIYAM study, Clin Exp Hypertens 2014: 36: 508-16) were extracted. 1524 patient-month morning BP data sets were processed as in Study 1. Selecting data measured more than 24 times a month, 517 patient-month BP data sets were analyzed. We compared, for 5, 10, 15 and 20 measurements, the ratio of SD and of MMD to the value obtained from 25 measurements. Study 1: SBP MMD correlated very well with SD. For the extracted data (measurements > 24 times), the correlation was 0.927 (P < 0.0001), with the equation SBPSD = 1.520 + 0.201 × MMD. The ratios of SD to the 25-measurement value were 0.956 at 5 measurements, 0.956 at 10, 0.979 at 15, and 0.991 at 20. The ratios of MMD to the 25-measurement value were 0.558 at 5, 0.761 at 10, 0.874 at 15, and 0.944 at 20. We can easily estimate SD by measuring MMD as an index of day-by-day BP variability within a month. The equation formulas were very similar even though the patient groups differed, but attention must be paid to how many times patients measure in a month.
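The proposed shortcut can be sketched using the linear fit reported in the abstract (SBPSD = 1.520 + 0.201 × MMD); the month of readings below is hypothetical:

```python
def estimate_sd_from_mmd(readings):
    """Approximate the SD of a month of home systolic BP readings from the
    maximum-minimum difference (MMD), via the linear fit reported in the
    abstract: SD = 1.520 + 0.201 * MMD."""
    mmd = max(readings) - min(readings)
    return 1.520 + 0.201 * mmd

# hypothetical month of morning systolic readings (mmHg)
month = [128, 135, 122, 140, 131, 126, 138, 129, 133, 125]
print(round(estimate_sd_from_mmd(month), 2))
```

The appeal of MMD is that a patient or clinician only needs the highest and lowest readings of the month, not the full series, though (per the abstract) the approximation degrades when few measurements are taken.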

  6. Entanglement transitions induced by large deviations

    Science.gov (United States)

    Bhosale, Udaysinh T.

    2017-12-01

    The probability of large deviations of the smallest Schmidt eigenvalue for random pure states of bipartite systems, denoted as A and B , is computed analytically using a Coulomb gas method. It is shown that this probability, for large N , goes as exp[-β N2Φ (ζ ) ] , where the parameter β is the Dyson index of the ensemble, ζ is the large deviation parameter, while the rate function Φ (ζ ) is calculated exactly. Corresponding equilibrium Coulomb charge density is derived for its large deviations. Effects of the large deviations of the extreme (largest and smallest) Schmidt eigenvalues on the bipartite entanglement are studied using the von Neumann entropy. Effect of these deviations is also studied on the entanglement between subsystems 1 and 2, obtained by further partitioning the subsystem A , using the properties of the density matrix's partial transpose ρ12Γ. The density of states of ρ12Γ is found to be close to the Wigner's semicircle law with these large deviations. The entanglement properties are captured very well by a simple random matrix model for the partial transpose. The model predicts the entanglement transition across a critical large deviation parameter ζ . Log negativity is used to quantify the entanglement between subsystems 1 and 2. Analytical formulas for it are derived using the simple model. Numerical simulations are in excellent agreement with the analytical results.

  7. On the maximum Q in feedback controlled subignited plasmas

    International Nuclear Information System (INIS)

    Anderson, D.; Hamnen, H.; Lisak, M.

    1990-01-01

    High-Q operation in a feedback-controlled subignited fusion plasma requires the operating temperature to be close to the ignition temperature. In the present work we discuss technological and physical effects which may restrict this temperature difference. The investigation is based on a simplified, but still accurate, 0-D analytical analysis of the maximum Q of a subignited system. Particular emphasis is given to sawtooth oscillations, which complicate the interpretation of diagnostic neutron emission data into plasma temperatures and may imply an inherent lower bound on the temperature deviation from the ignition point. The estimated maximum Q is found to be marginal (Q = 10-20) from the point of view of a fusion reactor. (authors)

  8. Power-Smoothing Scheme of a DFIG Using the Adaptive Gain Depending on the Rotor Speed and Frequency Deviation

    DEFF Research Database (Denmark)

    Lee, Hyewon; Hwang, Min; Muljadi, Eduard

    2017-01-01

    In an electric power grid that has a high penetration level of wind, the power fluctuation of a large-scale wind power plant (WPP) caused by varying wind speeds deteriorates the system frequency regulation. This paper proposes a power-smoothing scheme of a doubly-fed induction generator (DFIG) that significantly mitigates the system frequency fluctuation while preventing over-deceleration of the rotor speed. The proposed scheme employs an additional control loop relying on the system frequency deviation that operates in combination with the maximum power point tracking control loop. To improve the power... The results demonstrate that the proposed scheme significantly lessens the output power fluctuation of a WPP under various scenarios by modifying the gain with the rotor speed and frequency deviation, and thereby it can regulate the frequency deviation within a narrow range.

  9. Incidental colonic focal FDG uptake on PET/CT: can the maximum standardized uptake value (SUVmax) guide us in the timing of colonoscopy?

    NARCIS (Netherlands)

    van Hoeij, F. B.; Keijsers, R. G. M.; Loffeld, B. C. A. J.; Dun, G.; Stadhouders, P. H. G. M.; Weusten, B. L. A. M.

    2015-01-01

    In patients undergoing F-18-FDG PET/CT, incidental colonic focal lesions can be indicative of inflammatory, premalignant or malignant lesions. The maximum standardized uptake value (SUVmax) of these lesions, representing the FDG uptake intensity, might be helpful in differentiating malignant from

  10. Comparison of setup deviations for two thermoplastic immobilization masks in glottis cancer

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Jae Hong [Dept. of Biomedical Engineering, College of Medicine, The Catholic University, Seoul (Korea, Republic of)

    2017-03-15

    The purpose of this study was to compare the patient setup deviations of two different types of thermoplastic immobilization mask for glottis cancer in intensity-modulated radiation therapy (IMRT). A total of 16 glottis cancer cases were divided into two groups based on the applied mask type: a standard group and an alternative group. The mean error (M), three-dimensional setup displacement error (3D-error), systematic error (Σ) and random error (σ) were calculated for each group, and the setup margin (mm) was also analyzed. The 3D-errors were 5.2 ± 1.3 mm and 5.9 ± 0.7 mm for the standard and alternative groups, respectively; the alternative group was 13.6% higher than the standard group. The systematic errors in the roll angle and the x, y, z directions were 0.8°, 1.7 mm, 1.0 mm, and 1.5 mm in the standard group and 0.8°, 1.1 mm, 1.8 mm, and 2.0 mm in the alternative group. The random errors in the x, y, z directions were 10.9%, 1.7%, and 23.1% lower in the alternative group than in the standard group, whereas the absolute rotational angle (i.e., roll) was 12.4% higher in the alternative group than in the standard group. For the calculated setup margin, the alternative group was 31.8% lower than the standard group in the x direction; in contrast, the y and z directions were 52.6% and 21.6% higher than in the standard group. Although a modified thermoplastic immobilization mask can affect patient setup deviation in terms of these numerical results, immobilization masks warrant further research from a clinical point of view.
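The abstract reports systematic (Σ) and random (σ) errors and derived setup margins but does not state its margin recipe; a widely used one (the van Herk recipe, M = 2.5Σ + 0.7σ) can be sketched as follows, with hypothetical per-axis random errors since the abstract gives those only as percentage differences:

```python
def van_herk_margin(sigma_sys, sigma_rand):
    """CTV-to-PTV setup margin (mm) from the common van Herk recipe
    M = 2.5 * Sigma + 0.7 * sigma. Illustrative only: the paper's own
    margin formula is not stated in the abstract."""
    return 2.5 * sigma_sys + 0.7 * sigma_rand

# systematic errors (x, y, z) for the standard group per the abstract;
# the random-error values here are hypothetical
for axis, s_sys, s_rand in [("x", 1.7, 1.2), ("y", 1.0, 1.1), ("z", 1.5, 1.3)]:
    print(axis, round(van_herk_margin(s_sys, s_rand), 1), "mm")
```

Because the systematic component is weighted 2.5× versus 0.7× for the random component, a mask that trades lower random error for higher systematic error (as the alternative group does in y and z) can still end up with larger margins.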

  11. 22 CFR 226.4 - Deviations.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Deviations. 226.4 Section 226.4 Foreign Relations AGENCY FOR INTERNATIONAL DEVELOPMENT ADMINISTRATION OF ASSISTANCE AWARDS TO U.S. NON-GOVERNMENTAL ORGANIZATIONS General § 226.4 Deviations. The Office of Management and Budget (OMB) may grant exceptions for...

  12. Moving standard deviation and moving sum of outliers as quality tools for monitoring analytical precision.

    Science.gov (United States)

    Liu, Jiakai; Tan, Chin Hon; Badrick, Tony; Loh, Tze Ping

    2018-02-01

    An increase in analytical imprecision (expressed as CVa) can introduce additional variability (i.e. noise) into patient results, which poses a challenge to the optimal management of patients. Relatively little work has been done to address the need for continuous monitoring of analytical imprecision. Through numerical simulations, we describe the use of the moving standard deviation (movSD) and a recently described moving sum of outlier (movSO) patient results as means for detecting increased analytical imprecision, and compare their performance against internal quality control (QC) and average of normals (AoN) approaches. The power to detect an increase in CVa is suboptimal under routine internal QC procedures. The AoN technique almost always had the highest average number of patient results affected before error detection (ANPed), indicating that it generally had the worst capability for detecting an increased CVa. On the other hand, the movSD and movSO approaches were able to detect an increased CVa at significantly lower ANPed, particularly for measurands that displayed a relatively small ratio of biological variation to CVa. Conclusion: the movSD and movSO approaches are effective in detecting an increase in CVa for high-risk measurands with small biological variation. Their performance is relatively poor when the biological variation is large; however, the clinical risk of an increase in analytical imprecision is attenuated for these measurands, as the increased imprecision will only add marginally to the total variation and is less likely to impact clinical care. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
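A moving standard deviation over a stream of patient results can be sketched as below; the window size, analyte values and doubled-imprecision scenario are invented for illustration, not the paper's simulation settings:

```python
import random
import statistics
from collections import deque

def moving_sd(results, window):
    """Moving standard deviation over a sliding window of patient results.
    A persistent rise above the expected analytical SD flags increased CVa."""
    buf = deque(maxlen=window)
    out = []
    for r in results:
        buf.append(r)
        if len(buf) == window:
            out.append(statistics.stdev(buf))
    return out

# hypothetical analyte stream: a stable run, then doubled imprecision
random.seed(1)
stable = [random.gauss(100, 2) for _ in range(200)]
noisy = [random.gauss(100, 4) for _ in range(200)]
sds = moving_sd(stable + noisy, window=50)
print(round(sds[0], 2), round(sds[-1], 2))
```

In practice the window would be compared against a control limit derived from the measurand's expected CVa, and (per the abstract) the approach works best when biological variation is small relative to analytical variation.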

  13. Modified Moment, Maximum Likelihood and Percentile Estimators for the Parameters of the Power Function Distribution

    Directory of Open Access Journals (Sweden)

    Azam Zaka

    2014-10-01

    Full Text Available This paper is concerned with the modifications of maximum likelihood, moments and percentile estimators of the two parameter Power function distribution. Sampling behavior of the estimators is indicated by Monte Carlo simulation. For some combinations of parameter values, some of the modified estimators appear better than the traditional maximum likelihood, moments and percentile estimators with respect to bias, mean square error and total deviation.
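For context, the standard (unmodified) maximum likelihood estimators for the power function distribution f(x) = γx^(γ−1)/β^γ on (0, β), the baseline that the paper's modified estimators are compared against, are β̂ = max(xᵢ) and γ̂ = n / Σ ln(β̂/xᵢ); a quick simulation check:

```python
import math
import random

def powerfunc_mle(sample):
    """Standard (unmodified) MLEs for the power function distribution
    f(x) = g * x**(g - 1) / b**g on (0, b):
    b_hat = max(x), g_hat = n / sum(log(b_hat / x_i))."""
    b_hat = max(sample)
    g_hat = len(sample) / sum(math.log(b_hat / x) for x in sample)
    return g_hat, b_hat

# simulate: if U ~ Uniform(0, 1), then b * U**(1/g) follows the power function law
random.seed(7)
g_true, b_true = 2.5, 4.0
data = [b_true * random.random() ** (1.0 / g_true) for _ in range(5_000)]
print(powerfunc_mle(data))  # estimates should land near (2.5, 4.0)
```

Monte Carlo experiments like this one, repeated over many samples, are how bias, mean square error and total deviation of competing estimators are typically compared.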

  14. Deviations from thermal equilibrium in plasmas

    International Nuclear Information System (INIS)

    Burm, K.T.A.L.

    2004-01-01

    A plasma system in local thermal equilibrium can usually be described with only two parameters. To describe deviations from equilibrium, two extra parameters are needed. However, it will be shown that deviations from temperature equilibrium and deviations from Saha equilibrium depend on one another. As a result, non-equilibrium plasmas can be described with three parameters. This reduction in parameter space greatly eases the effort of describing the plasma.

  15. Deviating measurements in radiation protection. Legal assessment of deviations in radiation protection measurements

    International Nuclear Information System (INIS)

    Hoegl, A.

    1996-01-01

    This study investigates how, from a legal point of view, deviations in radiation protection measurements should be treated in comparisons between measured results and limits stipulated by nuclear legislation or goods transport regulations. A case-by-case distinction is proposed which is based on the legal consequences of the respective measurement. Commentaries on nuclear law contain no references to the legal assessment of deviating measurements in radiation protection. The examples quoted in legal commentaries on civil and criminal proceedings of how errors in speed-control measurements and blood alcohol determinations are to be taken into account, as well as a commentary on ozone legislation, are examined for analogies with radiation protection measurements. Leading cases in the nuclear field are evaluated in the light of the requirements applying in case of deviations in measurements. The final section summarizes the most important findings and conclusions. (orig.) [de

  16. Changes in deviation of absorbed dose to water among users by chamber calibration shift.

    Science.gov (United States)

    Katayose, Tetsurou; Saitoh, Hidetoshi; Igari, Mitsunobu; Chang, Weishan; Hashimoto, Shimpei; Morioka, Mie

    2017-07-01

    The JSMP01 dosimetry protocol had adopted the provisional 60Co calibration coefficient [Formula: see text], namely, the product of the exposure calibration coefficient N C and the conversion coefficient k D,X . After the absorbed dose to water D w standard was subsequently established, the JSMP12 protocol adopted the [Formula: see text] calibration. In this study, the influence of this calibration shift on the measurement of D w among users was analyzed. An intercomparison of D w using an ionization chamber was performed annually by visiting related hospitals. Intercomparison results before and after the calibration shift were analyzed, the deviation of D w among users was re-evaluated, and the cause of the deviation was estimated. As a result, the stability of the LINAC, the calibration of the thermometer and barometer, and the correction method for ion recombination were confirmed. No statistically significant change in the standard deviation of D w was observed, but a significant difference in D w among users was observed between the N C and [Formula: see text] calibrations. Uncertainty due to chamber-to-chamber variation was reduced by the calibration shift, consequently reducing the uncertainty among users regarding D w . The results also indicated that uncertainty might be further reduced by accurate and detailed instructions on the setup of the ionization chamber.

  17. New reference charts for testicular volume in Dutch children and adolescents allow the calculation of standard deviation scores.

    Science.gov (United States)

    Joustra, Sjoerd D; van der Plas, Evelyn M; Goede, Joery; Oostdijk, Wilma; Delemarre-van de Waal, Henriette A; Hack, Wilfried W M; van Buuren, Stef; Wit, Jan M

    2015-06-01

    Accurate calculations of testicular volume standard deviation (SD) scores are not currently available. We constructed LMS-smoothed age-reference charts for testicular volume in healthy boys. The LMS method was used to calculate reference data, based on testicular volumes from ultrasonography and Prader orchidometer of 769 healthy Dutch boys aged 6 months to 19 years. We also explored the association between testicular growth and pubic hair development, and data were compared to orchidometric testicular volumes from the 1997 Dutch nationwide growth study. The LMS-smoothed reference charts showed that no revision of the definition of normal onset of male puberty - from nine to 14 years of age - was warranted. In healthy boys, the pubic hair stage SD scores corresponded with testicular volume SD scores (r = 0.394). However, testes were relatively small for pubic hair stage in Klinefelter's syndrome and relatively large in immunoglobulin superfamily member 1 deficiency syndrome. The age-corrected SD scores for testicular volume will aid in the diagnosis and follow-up of abnormalities in the timing and progression of male puberty and in research evaluations. The SD scores can be compared with pubic hair SD scores to identify discrepancies between cell functions that result in relative microorchidism or macroorchidism. ©2015 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
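    Reference charts of this kind are typically built with Cole's LMS method, which converts a measurement x into an SD score via z = ((x/M)^L − 1)/(L·S) for L ≠ 0, where L, M, and S are the age-specific skewness, median, and coefficient of variation. A minimal sketch (the L, M, S values below are hypothetical illustrations, not the published Dutch reference data):

    ```python
    import math

    def lms_sd_score(x, L, M, S):
        """Cole's LMS method: convert a measurement to an age-specific SD score."""
        if L == 0:
            return math.log(x / M) / S
        return ((x / M) ** L - 1.0) / (L * S)

    # Hypothetical LMS values for a single age (not the published references):
    L, M, S = 0.5, 10.0, 0.35

    print(lms_sd_score(10.0, L, M, S))  # volume equal to the median -> SDS 0.0
    print(lms_sd_score(15.0, L, M, S))  # larger than the median -> positive SDS
    ```

    Comparing such a testicular-volume SDS with a pubic-hair SDS is what allows the discrepancies (relative micro- or macroorchidism) described above to be quantified.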

  18. Correlation of pattern reversal visual evoked potential parameters with the pattern standard deviation in primary open angle glaucoma

    Directory of Open Access Journals (Sweden)

    Ruchi Kothari

    2014-04-01

    Full Text Available AIM: To evaluate whether glaucomatous visual field defect, particularly the pattern standard deviation (PSD) of the Humphrey visual field, could be associated with visual evoked potential (VEP) parameters of patients having primary open angle glaucoma (POAG). METHODS: Visual field by Humphrey perimetry and simultaneous recordings of pattern reversal visual evoked potential (PRVEP) were assessed in 100 patients with POAG. The stimulus configuration for VEP recordings consisted of the transient pattern reversal method, in which a black and white checkerboard pattern was generated (full field) and displayed on a 14" colour VEP monitor by an electronic pattern regenerator inbuilt in an evoked potential recorder (RMS EMG EP MARK II). RESULTS: The results of our study indicate that there is a highly significant (P<0.001) negative correlation of P100 amplitude and a statistically significant (P<0.05) positive correlation of N70 latency, P100 latency and N155 latency with the PSD of the Humphrey visual field in the subjects with POAG in various age groups, as evaluated by Student's t-test. CONCLUSION: Prolongation of VEP latencies was mirrored by a corresponding increase in PSD values. Conversely, as PSD increased, the magnitude of the VEP excursions was found to be diminished.

  19. [DIN-compatible vision assessment of increased reproducibility using staircase measurement and maximum likelihood analysis].

    Science.gov (United States)

    Weigmann, U; Petersen, J

    1996-08-01

    Visual acuity determination according to DIN 58,220 does not make full use of the information received about the patient, in contrast to the staircase method. Thus, testing the same number of optotypes, the staircase method should yield more reproducible acuity results. On the other hand, the staircase method gives systematically higher acuity values because it converges on the 48% point of the psychometric function (for Landolt rings in eight positions) and not on the 65% probability, as DIN 58,220 with criterion 3/5 does. This bias can be avoided by means of a modified evaluation. Using the staircase data we performed a maximum likelihood estimate of the psychometric function as a whole and computed the acuity value for 65% probability of correct answers. We determined monocular visual acuity in 102 persons with widely differing visual performance. Each subject underwent four tests in random order, two according to DIN 58,220 and two using the modified staircase method (Landolt rings in eight positions scaled by a factor 1.26; PC monitor with 1024 x 768 pixels; distance 4.5 m). Each test was performed with 25 optotypes. The two procedures provide the same mean visual acuity values (difference less than 0.02 acuity steps). The test-retest results match in 30.4% of DIN repetitions but in 50% of the staircases. The standard deviation of the test-retest difference is 1.41 (DIN) and 1.06 (modified staircase) acuity steps. Thus the standard deviation of the single test is 1.0 (DIN) and 0.75 (modified staircase) acuity steps. The new method provides visual acuity values identical to DIN 58,220 but is superior with respect to reproducibility.
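    The modified evaluation described above, a maximum likelihood fit of the whole psychometric function followed by reading off the acuity at 65% correct answers, can be sketched roughly as follows. This is a crude grid-search ML fit on simulated trials; the logistic form, the fixed 1/8 guessing rate for eight-position Landolt rings, and all parameter values are illustrative assumptions, not the authors' implementation:

    ```python
    import math
    import random

    GUESS = 1.0 / 8.0  # chance success rate for Landolt rings in eight positions

    def p_correct(level, thresh, slope):
        """Logistic psychometric function with a fixed 1/8 guessing rate."""
        return GUESS + (1.0 - GUESS) / (1.0 + math.exp(-(level - thresh) / slope))

    # Simulated trial data around a true threshold (levels in log-acuity steps).
    rng = random.Random(3)
    true_thresh, true_slope = 0.0, 0.5
    levels = [rng.uniform(-2.0, 2.0) for _ in range(300)]
    correct = [rng.random() < p_correct(x, true_thresh, true_slope) for x in levels]

    def neg_log_lik(thresh, slope):
        total = 0.0
        for x, c in zip(levels, correct):
            p = min(max(p_correct(x, thresh, slope), 1e-9), 1.0 - 1e-9)
            total -= math.log(p if c else 1.0 - p)
        return total

    # Crude grid-search maximum likelihood fit of the whole psychometric function.
    t_hat, s_hat = min(((t / 20.0, s / 20.0)
                        for t in range(-30, 31) for s in range(2, 31)),
                       key=lambda ts: neg_log_lik(*ts))

    # Read the acuity off the fitted curve at 65% correct answers.
    inner = (0.65 - GUESS) / (1.0 - GUESS)
    acuity_65 = t_hat + s_hat * math.log(inner / (1.0 - inner))
    print("estimated 65% point:", round(acuity_65, 2))
    ```

    Evaluating the fitted curve at 65% rather than at the staircase convergence point is what removes the systematic bias discussed in the abstract.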

  20. 9 CFR 318.308 - Deviations in processing.

    Science.gov (United States)

    2010-01-01

    ...) Deviations in processing (or process deviations) must be handled according to: (1)(i) A HACCP plan for canned...) of this section. (c) [Reserved] (d) Procedures for handling process deviations where the HACCP plan... accordance with the following procedures: (a) Emergency stops. (1) When retort jams or breakdowns occur...

  1. Improved differentiation between hepatic hemangioma and metastases on diffusion-weighted MRI by measurement of standard deviation of apparent diffusion coefficient.

    Science.gov (United States)

    Hardie, Andrew D; Egbert, Robert E; Rissing, Michael S

    2015-01-01

    Diffusion-weighted magnetic resonance imaging (DW-MR) can be useful in the differentiation of hemangiomata from liver metastasis, but improved methods other than by mean apparent diffusion coefficient (mADC) are needed. A retrospective review identified 109 metastatic liver lesions and 86 hemangiomata in 128 patients who had undergone DW-MR. For each lesion, mADC and the standard deviation of the mean ADC (sdADC) were recorded and compared by receiver operating characteristic analysis. Mean mADC was higher in benign hemangiomata (1.52±0.12 mm²/s) than in liver metastases (1.33±0.18 mm²/s), but there was significant overlap in values. The mean sdADC was lower in hemangiomata (101±17 mm²/s) than metastases (245±25 mm²/s) and demonstrated no overlap in values, which was significantly different (P<.0001). Hemangiomata may be better able to be differentiated from liver metastases on the basis of sdADC than by mADC, although further studies are needed. Copyright © 2015 Elsevier Inc. All rights reserved.
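    The two lesion features compared in the study reduce to the mean and standard deviation of the per-pixel ADC values inside a region of interest; a minimal sketch with made-up pixel values (not patient data, and arbitrary units):

    ```python
    import statistics

    def roi_adc_features(pixel_adcs):
        """Mean ADC (mADC) and the SD of ADC values (sdADC) within one lesion ROI."""
        return statistics.mean(pixel_adcs), statistics.pstdev(pixel_adcs)

    # Illustrative per-pixel ADC values for two lesions (invented, arbitrary units):
    hemangioma = [1.45, 1.52, 1.50, 1.58, 1.49, 1.55]
    metastasis = [1.05, 1.62, 1.21, 1.48, 0.98, 1.70]  # similar mean, wider spread

    m1, sd1 = roi_adc_features(hemangioma)
    m2, sd2 = roi_adc_features(metastasis)
    print(f"hemangioma: mADC={m1:.3f}, sdADC={sd1:.3f}")
    print(f"metastasis: mADC={m2:.3f}, sdADC={sd2:.3f}")
    ```

    The point of the study is that the second feature (the within-lesion spread) separates the two lesion types even when the means overlap.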

  2. Maximum Entropy, Word-Frequency, Chinese Characters, and Multiple Meanings

    Science.gov (United States)

    Yan, Xiaoyong; Minnhagen, Petter

    2015-01-01

    The word-frequency distribution of a text written by an author is well accounted for by a maximum entropy distribution, the RGF (random group formation)-prediction. The RGF-distribution is completely determined by the a priori values of the total number of words in the text (M), the number of distinct words (N) and the number of repetitions of the most common word (kmax). It is here shown that this maximum entropy prediction also describes a text written in Chinese characters. In particular it is shown that although the same Chinese text written in words and Chinese characters have quite differently shaped distributions, they are nevertheless both well predicted by their respective three a priori characteristic values. It is pointed out that this is analogous to the change in the shape of the distribution when translating a given text to another language. Another consequence of the RGF-prediction is that taking a part of a long text will change the input parameters (M, N, kmax) and consequently also the shape of the frequency distribution. This is explicitly confirmed for texts written in Chinese characters. Since the RGF-prediction has no system-specific information beyond the three a priori values (M, N, kmax), any specific language characteristic has to be sought in systematic deviations from the RGF-prediction and the measured frequencies. One such systematic deviation is identified and, through a statistical information theoretical argument and an extended RGF-model, it is proposed that this deviation is caused by multiple meanings of Chinese characters. The effect is stronger for Chinese characters than for Chinese words. The relation between Zipf’s law, the Simon-model for texts and the present results are discussed. PMID:25955175

  3. Deviation of the Variances of Classical Estimators and Negative Integer Moment Estimator from Minimum Variance Bound with Reference to Maxwell Distribution

    Directory of Open Access Journals (Sweden)

    G. R. Pasha

    2006-07-01

    Full Text Available In this paper, we show how much the variances of the classical estimators, namely the maximum likelihood estimator and the moment estimator, deviate from the minimum variance bound when estimating the parameters of the Maxwell distribution. We also sketch this difference for the negative integer moment estimator. We note the poor performance of the negative integer moment estimator in this respect, while the maximum likelihood estimator attains the minimum variance bound and becomes an attractive choice.
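    For the Maxwell distribution with scale a, this comparison can be reproduced numerically: the ML estimator is â = sqrt(Σxᵢ²/(3n)), and the minimum variance (Cramér-Rao) bound for a is a²/(6n). A small Monte Carlo sketch (the sample size, replicate count, and parameter value are arbitrary assumptions):

    ```python
    import math
    import random

    def sample_maxwell(n, a, rng):
        # |v| of a 3D isotropic Gaussian with per-axis sigma = a is Maxwell-distributed
        return [a * math.sqrt(sum(rng.gauss(0, 1) ** 2 for _ in range(3)))
                for _ in range(n)]

    def mle_scale(xs):
        # Maximizing the Maxwell log-likelihood gives a_hat = sqrt(sum(x^2) / (3n))
        return math.sqrt(sum(x * x for x in xs) / (3 * len(xs)))

    rng = random.Random(11)
    a, n, reps = 1.0, 40, 4000
    estimates = [mle_scale(sample_maxwell(n, a, rng)) for _ in range(reps)]

    mean_hat = sum(estimates) / reps
    var_hat = sum((e - mean_hat) ** 2 for e in estimates) / reps
    mvb = a * a / (6 * n)  # Cramer-Rao minimum variance bound for the scale a
    print(f"empirical variance of MLE: {var_hat:.5f}, MVB: {mvb:.5f}")
    ```

    The empirical variance of the MLE lands very close to the bound, consistent with the paper's observation that the maximum likelihood estimator attains it.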

  4. A Hubble Space Telescope Survey for Novae in M87. II. Snuffing out the Maximum Magnitude–Rate of Decline Relation for Novae as a Non-standard Candle, and a Prediction of the Existence of Ultrafast Novae

    Energy Technology Data Exchange (ETDEWEB)

    Shara, Michael M.; Doyle, Trisha; Zurek, David [Department of Astrophysics, American Museum of Natural History, Central Park West and 79th Street, New York, NY 10024-5192 (United States); Lauer, Tod R. [National Optical Astronomy Observatory, P.O. Box 26732, Tucson, AZ 85726 (United States); Baltz, Edward A. [KIPAC, SLAC, 2575 Sand Hill Road, M/S 29, Menlo Park, CA 94025 (United States); Kovetz, Attay [School of Physics and Astronomy, Faculty of Exact Sciences, Tel Aviv University, Tel Aviv (Israel); Madrid, Juan P. [CSIRO, Astronomy and Space Science, P.O. Box 76, Epping, NSW 1710 (Australia); Mikołajewska, Joanna [N. Copernicus Astronomical Center, Polish Academy of Sciences, Bartycka 18, PL 00-716 Warsaw (Poland); Neill, J. D. [California Institute of Technology, 1200 East California Boulevard, MC 278-17, Pasadena CA 91125 (United States); Prialnik, Dina [Department of Geosciences, Tel Aviv University, Ramat Aviv, Tel Aviv 69978 (Israel); Welch, D. L. [Department of Physics and Astronomy, McMaster University, Hamilton, L8S 4M1, Ontario (Canada); Yaron, Ofer [Department of Particle Physics and Astrophysics, Weizmann Institute of Science, 76100 Rehovot (Israel)

    2017-04-20

    The extensive grid of numerical simulations of nova eruptions from the work of Yaron et al. first predicted that some classical novae might significantly deviate from the Maximum Magnitude–Rate of Decline (MMRD) relation, which purports to characterize novae as standard candles. Kasliwal et al. have announced the observational detection of a new class of faint, fast classical novae in the Andromeda galaxy. These objects deviate strongly from the MMRD relationship, as predicted by Yaron et al. Recently, Shara et al. reported the first detections of faint, fast novae in M87. These previously overlooked objects are as common in the giant elliptical galaxy M87 as they are in the giant spiral M31; they comprise about 40% of all classical nova eruptions and greatly increase the observational scatter in the MMRD relation. We use the extensive grid of the nova simulations of Yaron et al. to identify the underlying causes of the existence of faint, fast novae. These are systems that have accreted, and can thus eject, only very low-mass envelopes, of the order of 10⁻⁷–10⁻⁸ M⊙, on massive white dwarfs. Such binaries include, but are not limited to, the recurrent novae. These same models predict the existence of ultrafast novae that display decline times, t₂, as short as five hours. We outline a strategy for their future detection.

  5. HU deviation in lung and bone tissues: Characterization and a corrective strategy.

    Science.gov (United States)

    Ai, Hua A; Meier, Joseph G; Wendt, Richard E

    2018-05-01

    In the era of precision medicine, quantitative applications of x-ray Computed Tomography (CT) are on the rise. These require accurate measurement of the CT number, also known as the Hounsfield Unit. In this study, we evaluated the effect of patient attenuation-induced beam hardening of the x-ray spectrum on the accuracy of the HU values and a strategy to correct for the resulting deviations in the measured HU values. A CIRS electron density phantom was scanned on a Siemens Biograph mCT Flow CT scanner and a GE Discovery 710 CT scanner using standard techniques that are employed in the clinic to assess the HU deviation caused by beam hardening in different tissue types. In addition, an anthropomorphic ATOM adult male upper torso phantom was scanned on the GE Discovery 710 scanner. Various amounts of Superflab bolus material were wrapped around the phantoms to simulate different patient sizes. The mean HU values that were measured in the phantoms were evaluated as a function of the water-equivalent area (A w ), a parameter that is described in the report of AAPM Task Group 220. A strategy by which to correct the HU values was developed and tested. The variation in the HU values in the anthropomorphic ATOM phantom under different simulated body sizes, both before and after correction, were compared, with a focus on the lung and bone tissues. Significant HU deviations that depended on the simulated patient size were observed. A positive correlation between HU and A w was observed for tissue types that have an HU of less than zero, while a negative correlation was observed for tissue types with HU values that are greater than zero. The magnitude of the difference increases as the underlying attenuation property deviates further away from that of water. In the electron density phantom study, the maximum observed HU differences between the measured and reference values in the cortical bone and lung materials were 426 and 94 HU, respectively. 
In the anthropomorphic phantom
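    One simple corrective strategy consistent with the observed linear dependence of measured HU on the water-equivalent area A_w (a hedged sketch, not necessarily the authors' exact method; the calibration numbers below are invented) is to fit HU against A_w for a given tissue insert and map each measurement back to a reference patient size:

    ```python
    def fit_line(xs, ys):
        """Ordinary least squares slope and intercept."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        return slope, my - slope * mx

    # Hypothetical calibration: measured HU of a bone insert vs A_w (cm^2).
    # Negative HU-vs-A_w correlation for HU > 0 tissues, as reported above.
    aw = [300.0, 500.0, 700.0, 900.0]
    hu_bone = [980.0, 930.0, 880.0, 830.0]

    slope, intercept = fit_line(aw, hu_bone)

    def correct_hu(hu_measured, aw_patient, aw_ref=500.0):
        """Map a measurement at the patient's A_w back to a reference A_w."""
        return hu_measured - slope * (aw_patient - aw_ref)

    print(correct_hu(830.0, 900.0))  # recovers the value at the reference size
    ```

    A separate fit per tissue type would be needed, since the abstract reports opposite correlation signs for HU below and above zero.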

  6. The recursive combination filter approach of pre-processing for the estimation of standard deviation of RR series.

    Science.gov (United States)

    Mishra, Alok; Swati, D

    2015-09-01

    Variation in the interval between the R-R peaks of the electrocardiogram represents the modulation of the cardiac oscillations by the autonomic nervous system. This variation is contaminated by anomalous signals called ectopic beats, artefacts or noise, which mask the true behaviour of heart rate variability. In this paper, we have proposed a combination filter of a recursive impulse rejection filter and a recursive 20% filter, with recursive application and a preference for replacement over removal of abnormal beats, to improve the pre-processing of the inter-beat intervals. We have tested this novel recursive combinational method with median replacement to estimate the standard deviation of normal-to-normal (SDNN) beat intervals of congestive heart failure (CHF) and normal sinus rhythm subjects. This work discusses in detail the improvement in pre-processing over single use of the impulse rejection filter and removal of abnormal beats, for the estimation of SDNN and the Poincaré plot descriptors (SD1, SD2, and SD1/SD2). We found an SDNN value of 22 ms and a value of 36 ms for the SD2 descriptor of the Poincaré plot to be clinical indicators for discriminating normal cases from CHF cases. The pre-processing is also useful in the calculation of the Lyapunov exponent, a nonlinear index: Lyapunov exponents calculated after the proposed pre-processing are modified in such a way that they begin to follow the expected less complex behaviour of diseased states.
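    Once the RR series has been pre-processed, the SDNN and Poincaré descriptors follow from the widely used identities SD1² = ½·Var(successive RR differences) and SD2² = 2·SDNN² − SD1². A minimal sketch on an illustrative (non-clinical) RR series:

    ```python
    import statistics

    def sdnn(rr_ms):
        """Standard deviation of normal-to-normal intervals (ms)."""
        return statistics.pstdev(rr_ms)

    def poincare_descriptors(rr_ms):
        """SD1/SD2 of the Poincare plot from successive-difference statistics."""
        diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
        sd1 = (0.5 * statistics.pvariance(diffs)) ** 0.5
        sd2_sq = 2.0 * statistics.pvariance(rr_ms) - sd1 ** 2
        sd2 = max(sd2_sq, 0.0) ** 0.5
        return sd1, sd2

    # Illustrative RR intervals in milliseconds (invented, not clinical data):
    rr = [812, 790, 805, 822, 840, 818, 801, 795, 810, 828]
    sd1, sd2 = poincare_descriptors(rr)
    print(f"SDNN={sdnn(rr):.1f} ms, SD1={sd1:.1f} ms, "
          f"SD2={sd2:.1f} ms, SD1/SD2={sd1 / sd2:.2f}")
    ```

    On real data these statistics are only meaningful after the ectopic-beat replacement step described above, since a single spurious interval can inflate all three descriptors.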

  7. Analysis of standard substance human hair

    International Nuclear Information System (INIS)

    Zou Shuyun; Zhang Yongbao

    2005-01-01

    Human hair samples used as standard substances were analyzed by neutron activation analysis (NAA) on a miniature neutron source reactor. 19 elements, i.e. Al, As, Ba, Br, Ca, Cl, Cr, Co, Cu, Fe, Hg, I, Mg, Mn, Na, S, Se, V and Zn, were measured. The average content, standard deviation, relative standard deviation and the detection limit under the present research conditions are given for each element. The results showed that the measured values of the samples were in agreement with the recommended values, which indicates that NAA can be used to analyze the standard substance human hair with relatively high accuracy. (authors)
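    The per-element summary statistics reported here (average content, standard deviation, relative standard deviation) follow directly from replicate measurements; a minimal sketch with invented replicate values (not the paper's data):

    ```python
    import statistics

    def summarize(replicates):
        """Mean, sample SD, and relative standard deviation (%) of replicates."""
        mean = statistics.mean(replicates)
        sd = statistics.stdev(replicates)
        return mean, sd, 100.0 * sd / mean

    # Illustrative replicate NAA results for one element (ug/g), invented:
    zn = [172.0, 168.5, 175.2, 170.1, 169.8]
    mean, sd, rsd = summarize(zn)
    print(f"mean={mean:.1f}, SD={sd:.2f}, RSD={rsd:.2f}%")
    ```

    Comparing each element's mean (with its SD) against the certified recommended value is the agreement check described in the abstract.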

  8. Maximum Likelihood, Consistency and Data Envelopment Analysis: A Statistical Foundation

    OpenAIRE

    Rajiv D. Banker

    1993-01-01

    This paper provides a formal statistical basis for the efficiency evaluation techniques of data envelopment analysis (DEA). DEA estimators of the best practice monotone increasing and concave production function are shown to be also maximum likelihood estimators if the deviation of actual output from the efficient output is regarded as a stochastic variable with a monotone decreasing probability density function. While the best practice frontier estimator is biased below the theoretical front...

  9. Large deviations for noninteracting infinite-particle systems

    International Nuclear Information System (INIS)

    Donsker, M.D.; Varadhan, S.R.S.

    1987-01-01

    A large deviation property is established for noninteracting infinite particle systems. Previous large deviation results obtained by the authors involved a single I-function because the cases treated always involved a unique invariant measure for the process. In the context of this paper there is an infinite family of invariant measures and a corresponding infinite family of I-functions governing the large deviations

  10. 48 CFR 1401.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Individual deviations. 1401.403 Section 1401.403 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL DEPARTMENT OF THE INTERIOR ACQUISITION REGULATION SYSTEM Deviations from the FAR and DIAR 1401.403 Individual...

  11. TERMINOLOGY MANAGEMENT FRAMEWORK DEVIATIONS IN PROJECTS

    Directory of Open Access Journals (Sweden)

    Олена Борисівна ДАНЧЕНКО

    2015-05-01

    Full Text Available The article reviews new approaches to managing project deviations (risks, changes, problems). By offering integrated control of these project parameters, and by analogy with medical terminological systems, a new system for managing terminological deviations in projects is built. Medical terms that make up the terminological basis are analyzed with an improved method of definition triads. Using the method of analogy, new definitions for managing deviations in projects are proposed. Using triad integrity, a new triad system in project management is built, which will subsequently be used, again by analogy, to develop a new methodology for managing deviations in projects.

  12. MUSiC - An Automated Scan for Deviations between Data and Monte Carlo Simulation

    CERN Document Server

    Meyer, Arnd

    2009-01-01

    A model independent analysis approach is presented, systematically scanning the data for deviations from the standard model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of event generators. The approach is sensitive to a variety of models of new physics, including those not yet thought of.

  13. MUSiC - An Automated Scan for Deviations between Data and Monte Carlo Simulation

    International Nuclear Information System (INIS)

    Meyer, Arnd

    2010-01-01

    A model independent analysis approach is presented, systematically scanning the data for deviations from the standard model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of event generators. The approach is sensitive to a variety of models of new physics, including those not yet thought of.

  14. Detection of severe storm signatures in loblolly pine using seven-year periodic standardized averages and standard deviations

    Science.gov (United States)

    Stevenson Douglas; Thomas Hennessey; Thomas Lynch; Giulia Caterina; Rodolfo Mota; Robert Heineman; Randal Holeman; Dennis Wilson; Keith Anderson

    2016-01-01

    A loblolly pine plantation near Eagletown, Oklahoma was used to test standardized tree ring widths in detecting snow and ice storms. Widths of the two rings immediately following suspected storms were standardized against the widths of seven rings following the storm (Stan1 and Stan2). Values of Stan1 less than -0.900 predict a severe (usually ice) storm when Stan2 is less...

  15. 41 CFR 115-1.110 - Deviations.

    Science.gov (United States)

    2010-07-01

    ... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false Deviations. 115-1.110 Section 115-1.110 Public Contracts and Property Management Federal Property Management Regulations System (Continued) ENVIRONMENTAL PROTECTION AGENCY 1-INTRODUCTION 1.1-Regulation System § 115-1.110 Deviations...

  16. 40 CFR 60.3052 - What else must I report if I have a deviation from the operating limits or the emission limitations?

    Science.gov (United States)

    2010-07-01

    ... control device was bypassed, or if a performance test was conducted that showed a deviation from any... deviation from the operating limits or the emission limitations? 60.3052 Section 60.3052 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR...

  17. 40 CFR 60.2957 - What else must I report if I have a deviation from the operating limits or the emission limitations?

    Science.gov (United States)

    2010-07-01

    ..., or if a performance test was conducted that showed a deviation from any emission limitation. (b) The... deviation from the operating limits or the emission limitations? 60.2957 Section 60.2957 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR...

  18. Adaptive behaviors of experts in following standard protocol in trauma management: implications for developing flexible guidelines.

    Science.gov (United States)

    Vankipuram, Mithra; Ghaemmaghami, Vafa; Patel, Vimla L

    2012-01-01

    Critical care environments are complex and dynamic. To adapt to such environments, clinicians may be required to make alterations to their workflows, resulting in deviations from standard procedures. In this work, deviations from standards in trauma critical care are studied. Thirty trauma cases were observed in a Level 1 trauma center. Activities tracked were compared to the Advanced Trauma Life Support standard to determine (i) if deviations had occurred, (ii) the type of deviations and (iii) whether deviations were initiated by individuals or collaboratively by the team. Results show that expert clinicians deviated to innovate, while deviations of novices resulted mostly in error. Experts' well-developed knowledge allows for flexibility and adaptiveness in dealing with standards, resulting in innovative deviations while minimizing errors made. Providing an informatics solution in such a setting would mean that standard protocols would have to be flexible enough to "learn" from new knowledge, yet provide strong support for trainees.

  19. 41 CFR 105-1.110 - Deviation.

    Science.gov (United States)

    2010-07-01

    ... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false Deviation. 105-1.110 Section 105-1.110 Public Contracts and Property Management Federal Property Management Regulations System (Continued) GENERAL SERVICES ADMINISTRATION 1-INTRODUCTION 1.1-Regulations System § 105-1.110 Deviation. (a...

  20. 41 CFR 101-1.110 - Deviation.

    Science.gov (United States)

    2010-07-01

    ... 41 Public Contracts and Property Management 2 2010-07-01 2010-07-01 true Deviation. 101-1.110 Section 101-1.110 Public Contracts and Property Management Federal Property Management Regulations System FEDERAL PROPERTY MANAGEMENT REGULATIONS GENERAL 1-INTRODUCTION 1.1-Regulation System § 101-1.110 Deviation...

  1. Some clarifications about the Bohmian geodesic deviation equation and Raychaudhuri's equation

    OpenAIRE

    Rahmani, Faramarz; Golshani, Mehdi

    2017-01-01

    One of the important and famous topics in the general theory of relativity and gravitation is the problem of geodesic deviation and its related singularity theorems. An interesting subject is the investigation of these concepts when quantum effects are considered. Since the definition of a trajectory is not possible in the framework of standard quantum mechanics (SQM), we investigate the problem of the geodesic equation and its related topics in the framework of Bohmian quantum mechanics, in which the ...

  2. Application of Allan Deviation to Assessing Uncertainties of Continuous-measurement Instruments, and Optimizing Calibration Schemes

    Science.gov (United States)

    Jacobson, Gloria; Rella, Chris; Farinas, Alejandro

    2014-05-01

    Technological advancement of instrumentation in atmospheric and other geoscience disciplines over the past decade has led to a shift from discrete sample analysis to continuous, in-situ monitoring. Standard error analysis used for discrete measurements is not sufficient to assess and compare the error contribution of noise and drift from continuous-measurement instruments, and a different statistical analysis approach should be applied. The Allan standard deviation analysis technique developed for atomic clock stability assessment by David W. Allan [1] can be effectively and gainfully applied to continuous measurement instruments. As an example, P. Werle et al. have applied these techniques to look at signal averaging for atmospheric monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS) [2]. This presentation will build on and translate prior foundational publications to provide contextual definitions and guidelines for the practical application of this analysis technique to continuous scientific measurements. The specific example of a Picarro G2401 Cavity Ringdown Spectroscopy (CRDS) analyzer used for continuous atmospheric monitoring of CO2, CH4 and CO will be used to define the basic features of the Allan deviation, assess factors affecting the analysis, and explore the time-series to Allan deviation plot translation for different types of instrument noise (white noise, linear drift, and interpolated data). In addition, the useful application of the Allan deviation to optimizing and predicting the performance of different calibration schemes will be presented. Even though this presentation uses the specific example of the Picarro G2401 CRDS Analyzer for atmospheric monitoring, the objective is to present the information such that it can be successfully applied to other instrument sets and disciplines. [1] D.W. Allan, "Statistics of Atomic Frequency Standards," Proc. IEEE, vol. 54, pp. 221-230, Feb 1966 [2] P. Werle, R. Mücke, F. Slemr, "The Limits
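    The (non-overlapping) Allan deviation itself is straightforward to compute: average the series in windows of m samples and take the RMS of successive window-mean differences divided by √2. A minimal sketch showing the expected 1/√m fall-off for white noise (window sizes and series length are illustrative choices):

    ```python
    import math
    import random

    def allan_deviation(samples, m):
        """Non-overlapping Allan deviation for an averaging window of m samples."""
        n_bins = len(samples) // m
        means = [sum(samples[i * m:(i + 1) * m]) / m for i in range(n_bins)]
        diffs = [b - a for a, b in zip(means, means[1:])]
        return math.sqrt(0.5 * sum(d * d for d in diffs) / len(diffs))

    # White noise: the Allan deviation should fall roughly as 1/sqrt(m),
    # whereas linear drift would make it rise again at large m.
    rng = random.Random(5)
    white = [rng.gauss(0.0, 1.0) for _ in range(20000)]
    for m in (1, 10, 100):
        print(m, round(allan_deviation(white, m), 3))
    ```

    Plotting this quantity against the averaging time is the Allan deviation plot discussed above: the minimum of the curve indicates the optimal averaging (or recalibration) interval for the instrument.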

  3. A study on the deviation aspects of the poem “The Eightieth Stage”

    Directory of Open Access Journals (Sweden)

    Soghra Salmaninejad Mehrabadi

    2016-01-01

    of synergistic base has helped the poet's innovation. New expressions are also used in other parts of abnormality in "The Eightieth Stage". Stylistic deviation: Sometimes Akhavan uses local and slang words, and words with different songs and music, which produces deviation as well. This application is one kind of abnormality. Words such as "han, hey, by the truth, pity, hoome, kope, meydanak and ..." are of this type of abnormality. Ancient deviation: One way to break out of the habit of poetry is attention to ancient words and actions. Archaism is one of the factors affecting the deviation. Archaism deviation helps to make the old sp. According to Leech, the ancient is the survival of the old language in the now. Syntactic factors, and the type of music and words, are effective in the escape from the standard language. "Sowrat (sharpness), hamgenan (counterparts), parine (last year), pour (son), pahlaw (champion)" are words that show Akhavan's attention to archaism. The ancient pronunciation is another part of his work. Furthermore, the use of mythology and allusion has created deviation of this type. Cases such as anagram adjectival compounds, the use of two prepositions for a word, and the use of the adjective and noun in the plural form are signs of archaism in grammar and syntax. He is interested in grammatical elements of the Khorasani style; most elements of this style are used in "The Eightieth Stage". Semantic deviation: Semantic deviation is caused by imagery. The poet frequently uses literary figures. In this way, he produces new meaning and thereby highlights his poem. Simile, metaphor, personification and irony are the most important examples of this deviation. Apparently the maximum deviation from the norm in this poem is periodic deviation (ancient or archaism). The second rank belongs to the semantic deviation, in which metaphor is the most meaningful. The effect of metaphor in this poem works quite well. In

  4. Test of the nonexponential deviations from decay curve of 52V using continuous kinetic function method

    International Nuclear Information System (INIS)

    Tran Dai Nghiep; Vu Hoang Lam; Vo Tuong Hanh; Do Nguyet Minh; Nguyen Ngoc Son

    1995-01-01

    The present work is aimed at formulating an experimental approach to search for the proposed nonexponential deviations from the decay curve, and at describing an attempt to test them in the case of 52V. Some theoretical descriptions of decay processes are formulated in clarified forms. A continuous kinetic function (CKF) method is described for the analysis of experimental data, and the CKF for the purely exponential case is considered as a standard for comparison between theoretical and experimental data. The degree of agreement is defined by the factor of goodness. Typical deviations of oscillation behaviour of 52V decay were observed over a wide range of time. The proposed deviation, related to interaction between the decay products and the environment, is investigated. A complex type of decay is discussed. (authors). 10 refs., 4 figs., 2 tabs

  5. 40 CFR 60.2225 - What else must I report if I have a deviation from the requirement to have a qualified operator...

    Science.gov (United States)

    2010-07-01

    ... deviation from the requirement to have a qualified operator accessible? 60.2225 Section 60.2225 Protection... PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Commercial and Industrial Solid Waste... report if I have a deviation from the requirement to have a qualified operator accessible? (a) If all...

  6. Transport Coefficients from Large Deviation Functions

    Directory of Open Access Journals (Sweden)

    Chloe Ya Gao

    2017-10-01

    Full Text Available We describe a method for computing transport coefficients from the direct evaluation of large deviation functions. This method is general, relying on only equilibrium fluctuations, and is statistically efficient, employing trajectory based importance sampling. Equilibrium fluctuations of molecular currents are characterized by their large deviation functions, which are scaled cumulant generating functions analogous to the free energies. A diffusion Monte Carlo algorithm is used to evaluate the large deviation functions, from which arbitrary transport coefficients are derivable. We find significant statistical improvement over traditional Green–Kubo based calculations. The systematic and statistical errors of this method are analyzed in the context of specific transport coefficient calculations, including the shear viscosity, interfacial friction coefficient, and thermal conductivity.
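
    As a concrete point of comparison, the "traditional Green–Kubo" route that the authors benchmark against integrates an equilibrium current autocorrelation function. The sketch below applies it to a synthetic Ornstein–Uhlenbeck velocity process, for which the diffusion coefficient is known exactly; the process and its parameters are illustrative assumptions, not the paper's molecular systems.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative Ornstein-Uhlenbeck velocity process (an assumption, not the
# paper's molecular dynamics): the exact result is D = kT/(m*gamma) = 0.5 here.
gamma, kT_over_m, dt, n = 2.0, 1.0, 0.01, 400_000
a = np.exp(-gamma * dt)                        # exact OU update coefficient
b = np.sqrt(kT_over_m * (1.0 - a * a))         # matching noise amplitude
v = np.empty(n)
v[0] = 0.0
noise = rng.standard_normal(n)
for i in range(1, n):
    v[i] = a * v[i - 1] + b * noise[i]

# Green-Kubo: D = integral of the velocity autocorrelation function (VACF),
# truncated at t = 5/gamma, by which point the VACF has essentially decayed.
kmax = int(5.0 / gamma / dt)
m = n - kmax
vacf = np.array([np.dot(v[:m], v[k:k + m]) / m for k in range(kmax)])
D = dt * (0.5 * vacf[0] + vacf[1:].sum())      # trapezoidal integral
print(f"Green-Kubo estimate D = {D:.3f} (exact value 0.5)")
```

    The paper's point is that evaluating the large deviation (scaled cumulant generating) function of the time-integrated current by diffusion Monte Carlo can estimate the same coefficient with smaller statistical error than this direct correlation-function integral.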

  7. Transport Coefficients from Large Deviation Functions

    Science.gov (United States)

    Gao, Chloe; Limmer, David

    2017-10-01

    We describe a method for computing transport coefficients from the direct evaluation of large deviation functions. This method is general, relying only on equilibrium fluctuations, and is statistically efficient, employing trajectory-based importance sampling. Equilibrium fluctuations of molecular currents are characterized by their large deviation functions, which are scaled cumulant generating functions analogous to free energies. A diffusion Monte Carlo algorithm is used to evaluate the large deviation functions, from which arbitrary transport coefficients are derivable. We find significant statistical improvement over traditional Green-Kubo based calculations. The systematic and statistical errors of this method are analyzed in the context of specific transport coefficient calculations, including the shear viscosity, interfacial friction coefficient, and thermal conductivity.

  8. A simple maximum power point tracker for thermoelectric generators

    International Nuclear Information System (INIS)

    Paraskevas, Alexandros; Koutroulis, Eftichios

    2016-01-01

    Highlights: • A Maximum Power Point Tracking (MPPT) method for thermoelectric generators is proposed. • A power converter is controlled to operate on a pre-programmed locus. • The proposed MPPT technique has the advantage of operational and design simplicity. • The experimental average deviation from the MPP power of the TEG source is 1.87%. - Abstract: ThermoElectric Generators (TEGs) are capable of harvesting ambient thermal energy for power-supplying sensors, actuators, biomedical devices etc. in the range of microwatts up to several hundreds of watts. In this paper, a Maximum Power Point Tracking (MPPT) method for TEG elements is proposed, which is based on controlling a power converter such that it operates on a pre-programmed locus of operating points close to the MPPs of the power–voltage curves of the TEG power source. Compared to previously proposed MPPT methods for TEGs, the technique presented in this paper has the advantage of operational and design simplicity. Thus, it can be implemented using off-the-shelf microelectronic components with low power consumption, without requiring specialized integrated circuits or signal processing units of high development cost. Experimental results are presented, which demonstrate that for MPP power levels of the TEG source in the range of 1–17 mW, the average deviation of the power produced by the proposed system from the MPP power of the TEG source is 1.87%.
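
    The "pre-programmed locus" idea can be illustrated with the simplest such rule, the fractional open-circuit-voltage approximation: for an ideal Thevenin model of a TEG, the MPP sits at exactly half the open-circuit voltage. The model parameters below are hypothetical, and this is not the authors' exact locus; it is only a sketch of why small locus errors cost little power.

```python
def teg_power(v_op, v_oc, r_int):
    """Power delivered by a Thevenin TEG model (v_oc, r_int) at voltage v_op."""
    return v_op * (v_oc - v_op) / r_int

v_oc, r_int = 4.0, 10.0                  # hypothetical TEG parameters
v_mpp = v_oc / 2.0                       # MPP of the ideal Thevenin model
p_mpp = teg_power(v_mpp, v_oc, r_int)    # equals v_oc**2 / (4 * r_int)

# Operating 5% off the programmed locus loses only a fraction of a percent of
# power, consistent in spirit with the 1.87% average deviation reported above.
p_off = teg_power(1.05 * v_mpp, v_oc, r_int)
loss_pct = 100.0 * (p_mpp - p_off) / p_mpp
```

    Because the power-voltage curve is flat near its maximum, a cheap pre-programmed operating locus can stay close to the true MPP without any online search.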

  9. Methods for determining the effect of flatness deviations, eccentricity and pyramidal errors on angle measurements

    CSIR Research Space (South Africa)

    Kruger, OA

    2000-01-01

    Full Text Available on face-to-face angle measurements. The results show that flatness and eccentricity deviations have less effect on angle measurements than do pyramidal errors. 1. Introduction Polygons and angle blocks are the most important transfer standards in the field of angle metrology. Polygons are used by national metrology institutes (NMIs) as transfer standards to industry, where they are used in conjunction with autocollimators to calibrate index tables, rotary tables and other forms of angle-measuring equipment...

  10. 41 CFR 109-1.110-50 - Deviation procedures.

    Science.gov (United States)

    2010-07-01

    ... best interest of the Government; (3) If applicable, the name of the contractor and identification of... background information which will contribute to a full understanding of the desired deviation. (b)(1... authorized to grant deviations to the DOE-PMR. (d) Requests for deviations from the FPMR will be coordinated...

  11. Total focusing method (TFM) robustness to material deviations

    Science.gov (United States)

    Painchaud-April, Guillaume; Badeau, Nicolas; Lepage, Benoit

    2018-04-01

    The total focusing method (TFM) is becoming an accepted nondestructive evaluation method for industrial inspection. What was a topic of discussion in the applied research community just a few years ago is now being deployed in critical industrial applications, such as inspecting welds in pipelines. However, the method's sensitivity to unexpected parametric changes (material and geometric) has not been rigorously assessed. In this article, we investigate the robustness of TFM in relation to unavoidable deviations from modeled nominal inspection component characteristics, such as sound velocities and uncertainties about the parts' internal and external diameters. We also review TFM's impact on the standard inspection modes often encountered in industrial inspections, and we present a theoretical model supported by empirical observations to illustrate the discussion.

  12. The large deviation approach to statistical mechanics

    International Nuclear Information System (INIS)

    Touchette, Hugo

    2009-01-01

    The theory of large deviations is concerned with the exponential decay of probabilities of large fluctuations in random systems. These probabilities are important in many fields of study, including statistics, finance, and engineering, as they often yield valuable information about the large fluctuations of a random system around its most probable state or trajectory. In the context of equilibrium statistical mechanics, the theory of large deviations provides exponential-order estimates of probabilities that refine and generalize Einstein's theory of fluctuations. This review explores this and other connections between large deviation theory and statistical mechanics, in an effort to show that the mathematical language of statistical mechanics is the language of large deviation theory. The first part of the review presents the basics of large deviation theory, and works out many of its classical applications related to sums of random variables and Markov processes. The second part goes through many problems and results of statistical mechanics, and shows how these can be formulated and derived within the context of large deviation theory. The problems and results treated cover a wide range of physical systems, including equilibrium many-particle systems, noise-perturbed dynamics, nonequilibrium systems, as well as multifractals, disordered systems, and chaotic systems. This review also covers many fundamental aspects of statistical mechanics, such as the derivation of variational principles characterizing equilibrium and nonequilibrium states, the breaking of the Legendre transform for nonconcave entropies, and the characterization of nonequilibrium fluctuations through fluctuation relations.
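
    In the standard notation of the theory (supplied here for orientation; the symbols are the conventional ones, not quoted from the review), the "exponential-order estimates" take the form of a large deviation principle whose rate function is the Legendre-Fenchel transform of the scaled cumulant generating function:

```latex
P\!\left(\frac{S_n}{n} \approx x\right) \asymp e^{-n I(x)}, \qquad
I(x) = \sup_{k}\,\bigl\{ kx - \lambda(k) \bigr\}, \qquad
\lambda(k) = \lim_{n\to\infty} \frac{1}{n} \ln E\!\left[ e^{k S_n} \right].
```

    The "breaking of the Legendre transform for nonconcave entropies" mentioned above refers to the fact that this supremum always produces a convex $I(x)$, so nonconvex rate functions cannot be recovered from $\lambda(k)$ alone.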

  13. The large deviation approach to statistical mechanics

    Science.gov (United States)

    Touchette, Hugo

    2009-07-01

    The theory of large deviations is concerned with the exponential decay of probabilities of large fluctuations in random systems. These probabilities are important in many fields of study, including statistics, finance, and engineering, as they often yield valuable information about the large fluctuations of a random system around its most probable state or trajectory. In the context of equilibrium statistical mechanics, the theory of large deviations provides exponential-order estimates of probabilities that refine and generalize Einstein’s theory of fluctuations. This review explores this and other connections between large deviation theory and statistical mechanics, in an effort to show that the mathematical language of statistical mechanics is the language of large deviation theory. The first part of the review presents the basics of large deviation theory, and works out many of its classical applications related to sums of random variables and Markov processes. The second part goes through many problems and results of statistical mechanics, and shows how these can be formulated and derived within the context of large deviation theory. The problems and results treated cover a wide range of physical systems, including equilibrium many-particle systems, noise-perturbed dynamics, nonequilibrium systems, as well as multifractals, disordered systems, and chaotic systems. This review also covers many fundamental aspects of statistical mechanics, such as the derivation of variational principles characterizing equilibrium and nonequilibrium states, the breaking of the Legendre transform for nonconcave entropies, and the characterization of nonequilibrium fluctuations through fluctuation relations.

  14. Transport Coefficients from Large Deviation Functions

    OpenAIRE

    Gao, Chloe Ya; Limmer, David T.

    2017-01-01

    We describe a method for computing transport coefficients from the direct evaluation of large deviation functions. This method is general, relying on only equilibrium fluctuations, and is statistically efficient, employing trajectory based importance sampling. Equilibrium fluctuations of molecular currents are characterized by their large deviation functions, which are scaled cumulant generating functions analogous to the free energies. A diffusion Monte Carlo algorithm is used to evaluate th...

  15. Severe obesity is a limitation for the use of body mass index standard deviation scores in children and adolescents.

    Science.gov (United States)

    Júlíusson, Pétur B; Roelants, Mathieu; Benestad, Beate; Lekhal, Samira; Danielsen, Yngvild; Hjelmesaeth, Jøran; Hertel, Jens K

    2018-02-01

    We analysed the distribution of the body mass index standard deviation scores (BMI-SDS) in children and adolescents seeking treatment for severe obesity, according to the International Obesity Task Force (IOTF), World Health Organization (WHO) and the national Norwegian Bergen Growth Study (BGS) BMI reference charts and the percentage above the International Obesity Task Force 25 cut-off (IOTF-25). This was a cross-sectional study of 396 children aged four to 17 years, who attended a tertiary care obesity centre in Norway from 2009 to 2015. Their BMI was converted to SDS using the three growth references and expressed as the percentage above IOTF-25. The percentage of body fat was assessed by bioelectrical impedance analysis. Regardless of which BMI reference chart was used, the BMI-SDS was significantly different between the age groups, with a wider range of higher values up to 10 years of age and a more narrow range of lower values thereafter. The distributions of the percentage above IOTF-25 and percentage of body fat were more consistent across age groups. Our findings suggest that it may be more appropriate to use the percentage above a particular BMI cut-off, such as the percentage above IOTF-25, than the IOTF, WHO and BGS BMI-SDS in paediatric patients with severe obesity. ©2017 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
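
    BMI-SDS values such as those discussed above are conventionally computed from reference LMS parameters (Cole's method). The sketch below uses made-up L, M, S values, not those of the IOTF, WHO or BGS references; it also shows the saturation effect that limits BMI-SDS in severe obesity.

```python
import math

def bmi_sds(bmi, L, M, S):
    """Cole's LMS conversion of a measurement to a standard deviation score."""
    if L == 0:
        return math.log(bmi / M) / S
    return ((bmi / M) ** L - 1.0) / (L * S)

# hypothetical reference values for a single age-sex group (illustrative only)
L_ref, M_ref, S_ref = -1.6, 16.0, 0.10

z_median = bmi_sds(16.0, L_ref, M_ref, S_ref)    # exactly at the median -> 0
z_high = bmi_sds(24.0, L_ref, M_ref, S_ref)
z_extreme = bmi_sds(60.0, L_ref, M_ref, S_ref)
# With L < 0 the score is bounded above by -1/(L*S) (6.25 for these values),
# so very different extreme BMIs map onto nearly the same SDS -- the
# compression that motivates using the percentage above a cut-off instead.
```

    This saturation is precisely why the authors find the percentage above IOTF-25 more informative than BMI-SDS in severely obese children.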

  16. a Web-Based Framework for Visualizing Industrial Spatiotemporal Distribution Using Standard Deviational Ellipse and Shifting Routes of Gravity Centers

    Science.gov (United States)

    Song, Y.; Gui, Z.; Wu, H.; Wei, Y.

    2017-09-01

    Analysing the spatiotemporal distribution patterns and dynamics of different industries can help us learn the macro-level developing trends of those industries, and in turn provides references for industrial spatial planning. However, the analysis process is a challenging task which requires an easy-to-understand information presentation mechanism and a powerful computational technology to support visual analytics of big data on the fly. For this reason, this research proposes a web-based framework to enable such a visual analytics requirement. The framework uses the standard deviational ellipse (SDE) and shifting routes of gravity centers to show the spatial distribution and yearly developing trends of different enterprise types according to their industry categories. The calculation of gravity centers and ellipses is parallelized using Apache Spark to accelerate the processing. In the experiments, we use the enterprise registration dataset in Mainland China from 1960 to 2015, which contains fine-grained location information (i.e., the coordinates of each individual enterprise), to demonstrate the feasibility of this framework. The experiment results show that the developed visual analytics method is helpful for understanding the multi-level patterns and developing trends of different industries in China. Moreover, the proposed framework can be used to analyse any natural or social spatiotemporal point process with large data volume, such as crime and disease.
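
    A minimal single-machine version of the standard deviational ellipse computation (the usual mean-center/rotation formulas; a sketch, not the paper's Spark-parallelized implementation):

```python
import math

def sd_ellipse(points):
    """Standard deviational ellipse of 2-D points: returns the mean center,
    the rotation angle of one principal axis, and the standard deviations
    along the two rotated axes."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    dx = [p[0] - mx for p in points]
    dy = [p[1] - my for p in points]
    sxx = sum(d * d for d in dx)
    syy = sum(d * d for d in dy)
    sxy = sum(a * b for a, b in zip(dx, dy))
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)   # axis rotation angle
    c, s = math.cos(theta), math.sin(theta)
    # deviations along the rotated axes
    sig1 = math.sqrt(sum((a * c + b * s) ** 2 for a, b in zip(dx, dy)) / n)
    sig2 = math.sqrt(sum((b * c - a * s) ** 2 for a, b in zip(dx, dy)) / n)
    return (mx, my), theta, sig1, sig2

# demo: points spread along the x-axis give an unrotated, degenerate ellipse
center, theta, sig1, sig2 = sd_ellipse(
    [(-2.0, 0.0), (-1.0, 0.0), (0.0, 0.0), (1.0, 0.0), (2.0, 0.0)])
```

    Computing gravity centers (the mean-center step) and the three sums of squares is embarrassingly parallel, which is what makes the Spark acceleration in the paper straightforward.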

  17. A WEB-BASED FRAMEWORK FOR VISUALIZING INDUSTRIAL SPATIOTEMPORAL DISTRIBUTION USING STANDARD DEVIATIONAL ELLIPSE AND SHIFTING ROUTES OF GRAVITY CENTERS

    Directory of Open Access Journals (Sweden)

    Y. Song

    2017-09-01

    Full Text Available Analysing the spatiotemporal distribution patterns and dynamics of different industries can help us learn the macro-level developing trends of those industries, and in turn provides references for industrial spatial planning. However, the analysis process is a challenging task which requires an easy-to-understand information presentation mechanism and a powerful computational technology to support visual analytics of big data on the fly. For this reason, this research proposes a web-based framework to enable such a visual analytics requirement. The framework uses the standard deviational ellipse (SDE) and shifting routes of gravity centers to show the spatial distribution and yearly developing trends of different enterprise types according to their industry categories. The calculation of gravity centers and ellipses is parallelized using Apache Spark to accelerate the processing. In the experiments, we use the enterprise registration dataset in Mainland China from 1960 to 2015, which contains fine-grained location information (i.e., the coordinates of each individual enterprise), to demonstrate the feasibility of this framework. The experiment results show that the developed visual analytics method is helpful for understanding the multi-level patterns and developing trends of different industries in China. Moreover, the proposed framework can be used to analyse any natural or social spatiotemporal point process with large data volume, such as crime and disease.

  18. Mortality and morbidity risks vary with birth weight standard deviation score in growth restricted extremely preterm infants.

    Science.gov (United States)

    Yamakawa, Takuji; Itabashi, Kazuo; Kusuda, Satoshi

    2016-01-01

    To assess whether mortality and morbidity risks vary with the birth weight standard deviation score (BWSDS) in growth-restricted extremely preterm infants. This was a multicenter retrospective cohort study using the database of the Neonatal Research Network of Japan, including 9149 infants born between 2003 and 2010 at <28 weeks of gestation. According to BWSDS, the infants were classified as: <-2.0, -2.0 to -1.5, -1.5 to -1.0, -1.0 to -0.5, and ≥-0.5. Infants with BWSDS ≥-0.5 were defined as the non-growth-restricted group. After adjusting for covariates, the risks of mortality and some morbidities differed among the BWSDS groups. Compared with the non-growth-restricted group, the adjusted odds ratios (aOR) for mortality (aOR, 1.69; 95% confidence interval (CI), 1.35-2.12) and chronic lung disease (CLD) (aOR, 1.28; 95% CI, 1.07-1.54) were higher among the infants with BWSDS -1.5 to <-1.0. The aORs for severe retinopathy of prematurity (ROP) (aOR, 1.36; 95% CI, 1.09-1.71) and sepsis (aOR, 1.72; 95% CI, 1.32-2.24) were higher among the infants with BWSDS -2.0 to <-1.5. The aOR for necrotizing enterocolitis (NEC) (aOR, 2.41; 95% CI, 1.64-3.55) was increased at BWSDS <-2.0. Growth restriction in extremely preterm infants confers additional risks of mortality and of morbidities such as CLD, ROP, sepsis and NEC, and these risks may vary with BWSDS. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  19. Towards a large deviation theory for strongly correlated systems

    International Nuclear Information System (INIS)

    Ruiz, Guiomar; Tsallis, Constantino

    2012-01-01

    A large-deviation connection of statistical mechanics is provided by N independent binary variables, the (N→∞) limit yielding Gaussian distributions. The probability of n≠N/2 out of N throws is governed by e^{−Nr}, with r related to the entropy. Large deviations for a strongly correlated model characterized by indices (Q,γ) are studied, the (N→∞) limit yielding Q-Gaussians (Q→1 recovers a Gaussian). Its large deviations are governed by e_q^{−Nr_q} (∝ 1/N^{1/(q−1)}, q>1), with q=(Q−1)/(γ[3−Q])+1. This illustration opens the door towards a large-deviation foundation of nonextensive statistical mechanics. -- Highlights: ► We introduce the formalism of relative entropy for a single random binary variable and its q-generalization. ► We study a model of N strongly correlated binary random variables and their large-deviation probabilities. ► The large-deviation probability of the strongly correlated model exhibits a q-exponential decay whose argument is proportional to N, as extensivity requires. ► Our results point to a q-generalized large deviation theory and suggest a large-deviation foundation of nonextensive statistical mechanics.
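
    For reference, the q-exponential used in the decay law above is the Tsallis generalization of the exponential (standard definition, supplied here rather than quoted from the abstract):

```latex
e_q^{x} \;:=\; \bigl[\,1 + (1-q)\,x\,\bigr]_{+}^{\frac{1}{1-q}},
\qquad \lim_{q\to 1} e_q^{x} = e^{x},
```

    so that for $q > 1$ the probability $e_q^{-N r_q}$ decays as the power law $N^{-1/(q-1)}$ rather than exponentially, which is the behaviour quoted above.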

  20. 49 CFR 192.913 - When may an operator deviate its program from certain requirements of this subpart?

    Science.gov (United States)

    2010-10-01

    ... Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Gas Transmission Pipeline Integrity Management § 192.913 When may an operator deviate its program...

  1. Comparison of least-squares vs. maximum likelihood estimation for standard spectrum technique of β−γ coincidence spectrum analysis

    International Nuclear Information System (INIS)

    Lowrey, Justin D.; Biegalski, Steven R.F.

    2012-01-01

    The spectrum deconvolution analysis tool (SDAT) software code was written and tested at The University of Texas at Austin utilizing the standard spectrum technique to determine activity levels of Xe-131m, Xe-133m, Xe-133, and Xe-135 in β–γ coincidence spectra. SDAT was originally written to utilize the method of least-squares to calculate the activity of each radionuclide component in the spectrum. Recently, maximum likelihood estimation was also incorporated into the SDAT tool. This is a robust statistical technique to determine the parameters that maximize the Poisson distribution likelihood function of the sample data. In this case it is used to parameterize the activity level of each of the radioxenon components in the spectra. A new test dataset was constructed utilizing Xe-131m placed on a Xe-133 background to compare the robustness of the least-squares and maximum likelihood estimation methods for low counting statistics data. The Xe-131m spectra were collected independently from the Xe-133 spectra and added to generate the spectra in the test dataset. The true independent counts of Xe-131m and Xe-133 are known, as they were calculated before the spectra were added together. Spectra with both high and low counting statistics are analyzed. Studies are also performed by analyzing only the 30 keV X-ray region of the β–γ coincidence spectra. Results show that maximum likelihood estimation slightly outperforms least-squares for low counting statistics data.
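
    The contrast between the two estimators can be sketched with a toy two-component "standard spectrum" problem. The spectra and activities below are invented, and the Poisson MLE is found here with the standard multiplicative ML-EM update rather than whatever optimizer SDAT uses:

```python
# Hypothetical unit ("standard") spectra for two components, 5 channels each
s1 = [50.0, 30.0, 10.0, 5.0, 5.0]
s2 = [5.0, 10.0, 20.0, 40.0, 25.0]
true_a = (2.0, 3.0)                       # invented activity levels

# Use the expected counts as the observation so the MLE is recoverable exactly
y = [true_a[0] * u + true_a[1] * v for u, v in zip(s1, s2)]

# Multiplicative ML-EM iteration for the Poisson likelihood: each activity is
# rescaled by the back-projected ratio of observed to predicted counts.
a = [1.0, 1.0]
for _ in range(500):
    yhat = [a[0] * u + a[1] * v for u, v in zip(s1, s2)]
    ratio = [yi / yh for yi, yh in zip(y, yhat)]
    a[0] *= sum(u * r for u, r in zip(s1, ratio)) / sum(s1)
    a[1] *= sum(v * r for v, r in zip(s2, ratio)) / sum(s2)
```

    With noisy low-count data, the least-squares solution (which weights all channels equally) degrades faster than a Poisson-likelihood estimate of this kind, which is the effect the study reports.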

  2. USL/DBMS NASA/PC R and D project C programming standards

    Science.gov (United States)

    Dominick, Wayne D. (Editor); Moreau, Dennis R.

    1984-01-01

    A set of programming standards intended to promote reliability, readability, and portability of C programs written for PC research and development projects is established. These standards must be adhered to except where reasons for deviation are clearly identified and approved by the PC team. Any approved deviation from these standards must also be clearly documented in the pertinent source code.

  3. The large deviations theorem and ergodicity

    International Nuclear Information System (INIS)

    Gu Rongbao

    2007-01-01

    In this paper, some relationships between stochastic and topological properties of dynamical systems are studied. For a continuous map f from a compact metric space X into itself, we show that if f satisfies the large deviations theorem then it is topologically ergodic. Moreover, we introduce the topologically strong ergodicity, and prove that if f is a topologically strongly ergodic map satisfying the large deviations theorem then it is sensitively dependent on initial conditions

  4. Large deviations

    CERN Document Server

    Deuschel, Jean-Dominique; Stroock, Daniel W.

    2001-01-01

    This is the second printing of the book first published in 1988. The first four chapters of the volume are based on lectures given by Stroock at MIT in 1987. They form an introduction to the basic ideas of the theory of large deviations and make a suitable package on which to base a semester-length course for advanced graduate students with a strong background in analysis and some probability theory. A large selection of exercises presents important material and many applications. The last two chapters present various non-uniform results (Chapter 5) and outline the analytic approach that allow

  5. PoDMan: Policy Deviation Management

    Directory of Open Access Journals (Sweden)

    Aishwarya Bakshi

    2017-07-01

    Full Text Available Whenever an unexpected or exceptional situation occurs, complying with the existing policies may not be possible. The main objective of this work is to assist individuals and organizations in deciding whether to deviate from policies and perform a non-complying action. The paper proposes utilizing software agents as supportive tools to provide the best non-complying action when deviating from policies. The article also introduces a process by which the decision on the choice of non-complying action can be made. The work is motivated by a real scenario observed in a hospital in Norway and is demonstrated in the same setting.

  6. Surface of Maximums of AR(2) Process Spectral Densities and its Application in Time Series Statistics

    Directory of Open Access Journals (Sweden)

    Alexander V. Ivanov

    2017-09-01

    Conclusions. The obtained formula for the surface of maximums of noise spectral densities makes it possible to determine for which values of the AR(2) process characteristic polynomial coefficients one can seek a greater rate of convergence to zero of the probabilities of large deviations of the considered estimates.

  7. Evaluating deviations in prostatectomy patients treated with IMRT.

    Science.gov (United States)

    Sá, Ana Cravo; Peres, Ana; Pereira, Mónica; Coelho, Carina Marques; Monsanto, Fátima; Macedo, Ana; Lamas, Adrian

    2016-01-01

    To evaluate the deviations in prostatectomy patients treated with IMRT in order to calculate appropriate margins for creating the PTV. Defining inappropriate margins can lead to underdosing of target volumes and overdosing of healthy tissues, increasing morbidity. 223 CBCT images, used for alignment with the planning CT scan based on bony anatomy, were analyzed in 12 patients treated with IMRT following prostatectomy. Shifts of the CBCT images were recorded in three directions to calculate the margin required to create the PTV. The mean and standard deviation (SD) values in millimetres were -0.05 ± 1.35 in the LR direction, -0.03 ± 0.65 in the SI direction and -0.02 ± 2.05 in the AP direction. The systematic errors measured in the LR, SI and AP directions were 1.35 mm, 0.65 mm and 2.05 mm, with random errors of 2.07 mm, 1.45 mm and 3.16 mm, resulting in PTV margins of 4.82 mm, 2.64 mm and 7.33 mm, respectively. With IGRT we suggest margins of 5 mm, 3 mm and 8 mm in the LR, SI and AP directions, respectively, for PTV1 and PTV2. Therefore, this study supports an anisotropic margin expansion to the PTV, with the largest expansion in the AP direction and the smallest in the SI direction.
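
    The quoted margins are consistent with the widely used van Herk recipe, PTV margin = 2.5Σ + 0.7σ (systematic and random components respectively). The abstract does not name its formula, so this reconstruction is an assumption; it does, however, reproduce the reported numbers:

```python
def ptv_margin(sigma_systematic, sigma_random):
    """van Herk population margin recipe: 2.5*Sigma + 0.7*sigma (in mm)."""
    return 2.5 * sigma_systematic + 0.7 * sigma_random

# (systematic, random) errors per direction in mm, as reported above
errors = {"LR": (1.35, 2.07), "SI": (0.65, 1.45), "AP": (2.05, 3.16)}
margins = {d: ptv_margin(s, r) for d, (s, r) in errors.items()}
# LR -> 4.82 mm, SI -> 2.64 mm, AP -> 7.34 mm (the abstract rounds to 7.33)
```

    Rounding these up to whole millimetres gives exactly the suggested clinical margins of 5 mm (LR), 3 mm (SI) and 8 mm (AP).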

  8. Prevalence of postural deviations and associated factors in children and adolescents: a cross-sectional study

    Directory of Open Access Journals (Sweden)

    Mariana Vieira Batistão

    Full Text Available Abstract Introduction: Postural deviations are frequent in childhood and may cause pain and functional impairment. Previously, only a few studies have examined the association between body posture and intrinsic and extrinsic factors. Objective: To assess the prevalence of postural changes in school children, and to determine, using multiple logistic regression analysis, whether factors such as age, gender, BMI, handedness and physical activity might explain these deviations. Methods: The posture of 288 students was assessed by observation. Subjects were aged between 6 and 15 years, 59.4% (n = 171 of which were female. The mean age was 10.6 (± 2.4 years. Mean body weight was 38.6 (± 12.7 kg and mean height was 1.5 (± 0.1 m. A digital scale, a tapeline, a plumb line and standardized forms were used to collect data. The data were analyzed descriptively using the chi-square test and logistic regression analysis (significance level of 5%. Results: We found the following deviations to be prevalent among schoolchildren: forward head posture, 53.5%, shoulder elevation, 74.3%, asymmetry of the iliac crests, 51.7%, valgus knees, 43.1%, thoracic hyperkyphosis, 30.2%, lumbar hyperlordosis, 37.2% and winged shoulder blades, 66.3%. The associated factors were age, gender, BMI and physical activity. Discussion: There was a high prevalence of postural deviations and the intrinsic and extrinsic factors partially explain the postural deviations. Conclusion: These findings contribute to the understanding of how and why these deviations develop, and to the implementation of preventive and rehabilitation programs, given that some of the associated factors are modifiable.

  9. Two examples of non strictly convex large deviations

    OpenAIRE

    De Marco, Stefano; Jacquier, Antoine; Roome, Patrick

    2016-01-01

    We present two examples of a large deviations principle where the rate function is not strictly convex. This is motivated by a model used in mathematical finance (the Heston model), and adds a new item to the zoology of non strictly convex large deviations. For one of these examples, we show that the rate function of the Cramer-type of large deviations coincides with that of the Freidlin-Wentzell when contraction principles are applied.

  10. Residual standard deviation: Validation of a new measure of dual-task cost in below-knee prosthesis users.

    Science.gov (United States)

    Howard, Charla L; Wallace, Chris; Abbas, James; Stokic, Dobrivoje S

    2017-01-01

    We developed and evaluated properties of a new measure of variability in stride length and cadence, termed residual standard deviation (RSD). To calculate RSD, stride length and cadence are regressed against velocity to derive the best fit line from which the variability (SD) of the distance between the actual and predicted data points is calculated. We examined construct, concurrent, and discriminative validity of RSD using dual-task paradigm in 14 below-knee prosthesis users and 13 age- and education-matched controls. Subjects walked first over an electronic walkway while performing separately a serial subtraction and backwards spelling task, and then at self-selected slow, normal, and fast speeds used to derive the best fit line for stride length and cadence against velocity. Construct validity was demonstrated by significantly greater increase in RSD during dual-task gait in prosthesis users than controls (group-by-condition interaction, stride length p=0.0006, cadence p=0.009). Concurrent validity was established against coefficient of variation (CV) by moderate-to-high correlations (r=0.50-0.87) between dual-task cost RSD and dual-task cost CV for both stride length and cadence in prosthesis users and controls. Discriminative validity was documented by the ability of dual-task cost calculated from RSD to effectively differentiate prosthesis users from controls (area under the receiver operating characteristic curve, stride length 0.863, p=0.001, cadence 0.808, p=0.007), which was better than the ability of dual-task cost CV (0.692, 0.648, respectively, not significant). These results validate RSD as a new measure of variability in below-knee prosthesis users. Future studies should include larger cohorts and other populations to ascertain its generalizability. Copyright © 2016 Elsevier B.V. All rights reserved.
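
    The RSD computation described above reduces to an ordinary least-squares fit of the gait parameter against velocity, followed by the sample standard deviation of the residuals. A minimal sketch with made-up gait data (not the study's measurements):

```python
def rsd(velocity, values):
    """Residual standard deviation: regress values (e.g. stride length) on
    velocity by ordinary least squares, then take the sample SD of the
    residuals about the best-fit line."""
    n = len(velocity)
    mx = sum(velocity) / n
    my = sum(values) / n
    sxx = sum((x - mx) ** 2 for x in velocity)
    sxy = sum((x - mx) * (y - my) for x, y in zip(velocity, values))
    slope = sxy / sxx
    intercept = my - slope * mx
    resid = [y - (intercept + slope * x) for x, y in zip(velocity, values)]
    mean_r = sum(resid) / n                 # ~0 by construction for OLS
    return (sum((r - mean_r) ** 2 for r in resid) / (n - 1)) ** 0.5

# perfectly linear stride-length data -> RSD is (numerically) zero
rsd_linear = rsd([1.0, 1.1, 1.2, 1.3], [0.55, 0.60, 0.65, 0.70])
# the same data with one perturbed point -> RSD grows
rsd_noisy = rsd([1.0, 1.1, 1.2, 1.3], [0.55, 0.60, 0.70, 0.70])
```

    Regressing out velocity is what distinguishes RSD from the coefficient of variation: it measures scatter around each subject's own stride-length-versus-velocity line rather than around a single mean.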

  11. Maximum Principle for General Controlled Systems Driven by Fractional Brownian Motions

    International Nuclear Information System (INIS)

    Han Yuecai; Hu Yaozhong; Song Jian

    2013-01-01

    We obtain a maximum principle for stochastic control problem of general controlled stochastic differential systems driven by fractional Brownian motions (of Hurst parameter H>1/2). This maximum principle specifies a system of equations that the optimal control must satisfy (necessary condition for the optimal control). This system of equations consists of a backward stochastic differential equation driven by both fractional Brownian motions and the corresponding underlying standard Brownian motions. In addition to this backward equation, the maximum principle also involves the Malliavin derivatives. Our approach is to use conditioning and Malliavin calculus. To arrive at our maximum principle we need to develop some new results of stochastic analysis of the controlled systems driven by fractional Brownian motions via fractional calculus. Our approach of conditioning and Malliavin calculus is also applied to classical system driven by standard Brownian motions while the controller has only partial information. As a straightforward consequence, the classical maximum principle is also deduced in this more natural and simpler way.

  12. Algorithms of maximum likelihood data clustering with applications

    Science.gov (United States)

    Giada, Lorenzo; Marsili, Matteo

    2002-12-01

    We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on the maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson correlation coefficients of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures whereas the outcome of standard algorithms has a much wider variability.
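In the Giada-Marsili formulation, the likelihood of a cluster structure can be written in terms of each cluster's size n_s and its internal correlation c_s (the sum of the Pearson coefficients over all pairs in the cluster, diagonal included). The sketch below is a hedged reconstruction from the literature, not code from the paper itself, and the toy data are illustrative:

```python
import numpy as np

def gm_log_likelihood(C, labels):
    """Log-likelihood of a cluster assignment under the Giada-Marsili
    ansatz: each cluster s with size n_s > 1 and internal correlation
    c_s = sum of C over the cluster block contributes
    0.5 * [ln(n/c) + (n-1) * ln((n^2 - n) / (n^2 - c))]."""
    total = 0.0
    for s in np.unique(labels):
        idx = np.flatnonzero(labels == s)
        n = idx.size
        if n < 2:
            continue  # singleton clusters contribute zero
        c = C[np.ix_(idx, idx)].sum()
        total += 0.5 * (np.log(n / c) + (n - 1) * np.log((n * n - n) / (n * n - c)))
    return total

# Toy test: two groups of series driven by independent common factors.
rng = np.random.default_rng(0)
T = 500
f1, f2 = rng.normal(size=T), rng.normal(size=T)
group1 = f1 + 0.5 * rng.normal(size=(5, T))
group2 = f2 + 0.5 * rng.normal(size=(5, T))
C = np.corrcoef(np.vstack([group1, group2]))

true_labels = np.array([0] * 5 + [1] * 5)
mixed_labels = np.array([0, 1] * 5)  # deliberately scrambled grouping
L_true = gm_log_likelihood(C, true_labels)
L_mixed = gm_log_likelihood(C, mixed_labels)
```

The true grouping scores a higher likelihood than a scrambled one, which is the property the paper's maximization algorithms exploit.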

  13. Test of nonexponential deviations from decay curve of 52V using continuous kinetic function method

    International Nuclear Information System (INIS)

    Tran Dai Nghiep; Vu Hoang Lam; Vo Tuong Hanh; Do Nguyet Minh; Nguyen Ngoc Son

    1993-01-01

    The present work is aimed at formulating an experimental approach to search for the proposed nonexponential deviations from the decay curve and at an attempt to test them in the case of 52V. Some theoretical descriptions of decay processes are formulated in clarified form. The continuous kinetic function (CKF) method is used for the analysis of experimental data, and the CKF for the purely exponential case is taken as a standard for comparison between theoretical and experimental data. The degree of agreement is defined by a factor of goodness. Typical oscillatory deviations in the 52V decay were observed over a wide range of time. The proposed deviation, related to interaction between decay products and the environment, is investigated. A complex type of decay is discussed. (author). 10 refs, 2 tabs, 5 figs

  14. 21 CFR 330.11 - NDA deviations from applicable monograph.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 5 2010-04-01 2010-04-01 false NDA deviations from applicable monograph. 330.11... EFFECTIVE AND NOT MISBRANDED Administrative Procedures § 330.11 NDA deviations from applicable monograph. A new drug application requesting approval of an OTC drug deviating in any respect from a monograph that...

  15. Sensitivity Analysis of Deviation Source for Fast Assembly Precision Optimization

    Directory of Open Access Journals (Sweden)

    Jianjun Tang

    2014-01-01

    Full Text Available Assembly precision optimization of complex products offers great benefit in improving product quality. Because a variety of deviation sources couple with one another, the target of assembly precision optimization is difficult to determine accurately. In order to optimize assembly precision accurately and rapidly, sensitivity analysis of deviation sources is proposed. First, deviation source sensitivity is defined as the ratio of assembly dimension variation to deviation source dimension variation. Second, according to the assembly constraint relations, assembly sequences, and locating schemes, deviation transmission paths are established by locating the joints between adjacent parts and establishing each part's datum reference frame. Third, assembly multidimensional vector loops are created using the deviation transmission paths, and the corresponding scalar equations for each dimension are established. Then, assembly deviation source sensitivity is calculated using a first-order Taylor expansion and a matrix transformation method. Finally, taking assembly precision optimization of a wing flap rocker as an example, the effectiveness and efficiency of the deviation source sensitivity analysis method are verified.

  16. Performance of penalized maximum likelihood in estimation of genetic covariances matrices

    Directory of Open Access Journals (Sweden)

    Meyer Karin

    2011-11-01

    Full Text Available Abstract Background Estimation of genetic covariance matrices for multivariate problems comprising more than a few traits is inherently problematic, since sampling variation increases dramatically with the number of traits. This paper investigates the efficacy of regularized estimation of covariance components in a maximum likelihood framework, imposing a penalty on the likelihood designed to reduce sampling variation. In particular, penalties that "borrow strength" from the phenotypic covariance matrix are considered. Methods An extensive simulation study was carried out to investigate the reduction in average 'loss', i.e. the deviation of estimated matrices from the population values, and the accompanying bias for a range of parameter values and sample sizes. A number of penalties are examined, penalizing either the canonical eigenvalues or the genetic covariance or correlation matrices. In addition, several strategies to determine the amount of penalization to be applied, i.e. to estimate the appropriate tuning factor, are explored. Results It is shown that substantial reductions in loss for estimates of genetic covariance can be achieved for small to moderate sample sizes. While no penalty performed best overall, penalizing the variance among the estimated canonical eigenvalues on the logarithmic scale or shrinking the genetic towards the phenotypic correlation matrix appeared most advantageous. Estimating the tuning factor using cross-validation resulted in a loss reduction 10 to 15% less than that obtained if population values were known. Applying a mild penalty, chosen so that the deviation in likelihood from the maximum was non-significant, performed as well as, if not better than, cross-validation and can be recommended as a pragmatic strategy. Conclusions Penalized maximum likelihood estimation provides the means to 'make the most' of limited and precious data and facilitates more stable estimation for multi-dimensional analyses.

  17. Maximum physical capacity testing in cancer patients undergoing chemotherapy

    DEFF Research Database (Denmark)

    Knutsen, L.; Quist, M; Midtgaard, J

    2006-01-01

    BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective maximum physical capacity measurement (maximum oxygen uptake (VO(2max)) and one-repetition maximum (1RM)) to determin...... early in the treatment process. However, the patients were self-referred and thus highly motivated and as such are not necessarily representative of the whole population of cancer patients treated with chemotherapy....... in performing maximum physical capacity tests as these motivated them through self-perceived competitiveness and set a standard that served to encourage peak performance. CONCLUSION: The positive attitudes in this sample towards maximum physical capacity open the possibility of introducing physical testing...

  18. Reference sources for the calibration of surface contamination monitors - Beta-emitters (maximum beta energy greater than 0,15 MeV) and alpha-emitters (International Standard Publication ISO 8769:1988)

    International Nuclear Information System (INIS)

    Stefanik, J.

    2001-01-01

    This International Standard specifies the characteristics of reference sources of radioactive surface contamination, traceable to national measurement standards, for the calibration of surface contamination monitors. This International Standard relates to alpha-emitters and to beta-emitters of maximum beta energy greater than 0,15 MeV. It does not describe the procedures involved in the use of these reference sources for the calibration of surface contamination monitors. Such procedures are specified in IEC Publication 325 and other documents. This International Standard specifies reference radiations for the calibration of surface contamination monitors which take the form of adequately characterized large area sources specified, without exception, in terms of activity and surface emission rate, the evaluation of these quantities being traceable to national standards

  19. Assessing factors that influence deviations between measured and calculated reference evapotranspiration

    Science.gov (United States)

    Rodny, Marek; Nolz, Reinhard

    2017-04-01

    Evapotranspiration (ET) is a fundamental component of the hydrological cycle, but challenging to quantify. Lysimeter facilities, for example, can be installed and operated to determine ET, but they are costly and represent only point measurements. Therefore, lysimeter data are traditionally used to develop, calibrate, and validate models that allow calculating reference evapotranspiration (ET0) based on meteorological data, which can be measured more easily. The standardized form of the well-known FAO Penman-Monteith equation (ASCE-EWRI) is recommended as a standard procedure for estimating ET0 and subsequently plant water requirements. Applied and validated under different climatic conditions, the Penman-Monteith equation is generally known to deliver proper results. On the other hand, several studies documented deviations between measured and calculated ET0 depending on environmental conditions. Potential reasons are, for example, differing or varying surface characteristics of the lysimeter and the location where the weather instruments are placed. Advection of sensible heat (transport of dry and hot air from surrounding areas) might be another reason for deviating ET values. However, elaborating causal processes is complex and requires comprehensive data of high quality and specific analysis techniques. In order to assess influencing factors, we correlated differences between measured and calculated ET0 with pre-selected meteorological parameters and related system parameters. Basic data were hourly ET0 values from a weighing lysimeter (ET0_lys) with a surface area of 2.85 m2 (reference crop: frequently irrigated grass), weather data (air and soil temperature, relative humidity, air pressure, wind velocity, and solar radiation), and soil water content in different depths. ET0_ref was calculated in hourly time steps according to the standardized procedure after ASCE-EWRI (2005). Deviations between both datasets were calculated as ET0_lys-ET0_ref and
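The correlation step described above, relating the hourly deviation ET0_lys - ET0_ref to candidate meteorological drivers, can be sketched with pandas. The column names and synthetic data below are illustrative only; in the toy data the deviation is constructed to depend on wind speed, mimicking an advective effect:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 24 * 30  # one month of hourly values
wind = rng.uniform(0.0, 6.0, n)     # wind velocity, m/s
rh = rng.uniform(30.0, 90.0, n)     # relative humidity, %
et0_ref = rng.uniform(0.0, 0.5, n)  # calculated ET0, mm/h (ASCE-EWRI style)
# Synthetic lysimeter ET0: reference value plus a wind-driven excess.
et0_lys = et0_ref + 0.02 * wind + rng.normal(0.0, 0.01, n)

df = pd.DataFrame({"et0_lys": et0_lys, "et0_ref": et0_ref,
                   "wind": wind, "rh": rh})
df["deviation"] = df["et0_lys"] - df["et0_ref"]

# Rank candidate drivers by their correlation with the deviation.
corr = df[["deviation", "wind", "rh"]].corr()["deviation"].drop("deviation")
```

In this construction the wind term dominates the correlation ranking, which is the kind of signal the study's analysis is designed to surface.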

  20. Perception of midline deviations in smile esthetics by laypersons.

    Science.gov (United States)

    Ferreira, Jamille Barros; Silva, Licínio Esmeraldo da; Caetano, Márcia Tereza de Oliveira; Motta, Andrea Fonseca Jardim da; Cury-Saramago, Adriana de Alcantara; Mucha, José Nelson

    2016-01-01

    To evaluate the esthetic perception of upper dental midline deviation by laypersons and whether adjacent structures influence their judgment. An album with 12 randomly distributed frontal-view photographs of the smile of a woman with the midline digitally deviated was evaluated by 95 laypersons. The frontal-view smiling photograph was modified to create deviations of 1 mm to 5 mm in the upper midline to the left side. The photographs were cropped in two different manners and divided into two groups of six photographs each: group LCN included the lips, chin, and two-thirds of the nose, and group L included the lips only. The laypersons rated each smile using a visual analog scale (VAS). The Wilcoxon test, Student's t-test and Mann-Whitney test were applied, adopting a 5% level of significance. Laypersons were able to perceive midline deviations starting at 1 mm. Statistically significant results (p < 0.05) were found for all multiple comparisons of the values in photographs of group LCN and for almost all comparisons in photographs of group L. Comparisons between the photographs of groups LCN and L showed statistically significant values (p < 0.05) when the deviation was 1 mm. Laypersons were able to perceive upper dental midline deviations of 1 mm and above when the structures adjacent to the smile were included, and of 2 mm and above when only the lips were included. The visualization of structures adjacent to the smile demonstrated an influence on the perception of midline deviation.

  1. TEC variability over Havana

    International Nuclear Information System (INIS)

    Lazo, B.; Alazo, K.; Rodriguez, M.; Calzadilla, A.

    2003-01-01

    The variability of total electron content (TEC) measured over Havana using ATS-6, SMS-1 and GOES-3 geosynchronous satellite signals has been investigated for low, middle and high solar activity periods from 1974 to 1982. The obtained results show that the standard deviation is smooth during nighttime hours and maximum at noon or post-noon hours. A strong solar activity dependence of the standard deviation, with maximum values during high solar activity (HSA), has been found. (author)

  2. Association between septal deviation and sinonasal papilloma.

    Science.gov (United States)

    Nomura, Kazuhiro; Ogawa, Takenori; Sugawara, Mitsuru; Honkura, Yohei; Oshima, Hidetoshi; Arakawa, Kazuya; Oshima, Takeshi; Katori, Yukio

    2013-12-01

    Sinonasal papilloma is a common benign epithelial tumor of the sinonasal tract and accounts for 0.5% to 4% of all nasal tumors. The etiology of sinonasal papilloma remains unclear, although human papilloma virus has been proposed as a major risk factor. Other etiological factors, such as anatomical variations of the nasal cavity, may be related to the pathogenesis of sinonasal papilloma, because deviated nasal septum is seen in patients with chronic rhinosinusitis. We, therefore, investigated the involvement of deviated nasal septum in the development of sinonasal papilloma. Preoperative computed tomography or magnetic resonance imaging findings of 83 patients with sinonasal papilloma were evaluated retrospectively. The side of papilloma and the direction of septal deviation showed a significant correlation. Septum deviated to the intact side in 51 of 83 patients (61.4%) and to the affected side in 18 of 83 patients (21.7%). Straight or S-shaped septum was observed in 14 of 83 patients (16.9%). Even after excluding 27 patients who underwent revision surgery and 15 patients in whom the papilloma touched the concave portion of the nasal septum, the concave side of septal deviation was associated with the development of sinonasal papilloma (p = 0.040). The high incidence of sinonasal papilloma in the concave side may reflect the consequences of the traumatic effects caused by wall shear stress of the high-velocity airflow and the increased chance of inhaling viruses and pollutants. The present study supports the causative role of human papilloma virus and toxic chemicals in the occurrence of sinonasal papilloma.

  3. Critical Assessment of the Surface Tension determined by the Maximum Pressure Bubble Method

    OpenAIRE

    Benedetto, Franco Emmanuel; Zolotucho, Hector; Prado, Miguel Oscar

    2015-01-01

    The main factors that influence the value of the surface tension of a liquid measured with the Maximum Pressure Bubble Method are critically evaluated. We present experimental results showing the effect of capillary diameter, capillary depth, bubble spheroidicity and liquid density at room temperature. We show that the decrease of bubble spheroidicity due to the increase of capillary immersion depth is not sufficient to explain the deviations found in the measured surface tension values. Thus, we pro...

  4. Effect of multizone refractive multifocal contact lenses on standard automated perimetry.

    Science.gov (United States)

    Madrid-Costa, David; Ruiz-Alcocer, Javier; García-Lázaro, Santiago; Albarrán-Diego, César; Ferrer-Blasco, Teresa

    2012-09-01

    The aim of this study was to evaluate whether the creation of 2 foci (distance and near) provided by multizone refractive multifocal contact lenses (CLs) for presbyopia correction affects the measurements on Humphrey 24-2 Swedish interactive threshold algorithm (SITA) standard automated perimetry (SAP). In this crossover study, 30 subjects were fitted in random order with either a multifocal CL or a monofocal CL. After 1 month, a Humphrey 24-2 SITA standard strategy was performed. The visual field global indices (the mean deviation [MD] and pattern standard deviation [PSD]), reliability indices, test duration, and number of depressed points deviating at P < 0.5% on pattern deviation probability plots were determined and compared between multifocal and monofocal CLs. Thirty eyes of 30 subjects were included in this study. There were no statistically significant differences in reliability indices or test duration. There was a statistically significant reduction in the MD with the multifocal CL compared with the monofocal CL (P=0.001). Differences were found neither in the PSD nor in the number of depressed points deviating at P < 0.5% in the pattern deviation probability maps studied. The results of this study suggest that the multizone refractive lens produces a generalized depression in threshold sensitivity as measured by Humphrey 24-2 SITA SAP.

  5. Spatio-temporal observations of the tertiary ozone maximum

    Directory of Open Access Journals (Sweden)

    V. F. Sofieva

    2009-07-01

    Full Text Available We present spatio-temporal distributions of the tertiary ozone maximum (TOM), based on GOMOS (Global Ozone Monitoring by Occultation of Stars) ozone measurements in 2002–2006. The tertiary ozone maximum is typically observed in the high-latitude winter mesosphere at an altitude of ~72 km. Although the explanation for this phenomenon has been found recently – low concentrations of odd hydrogen cause the subsequent decrease in odd-oxygen losses – models had significant deviations from existing observations until recently. Good coverage of polar night regions by GOMOS data has allowed us, for the first time, to obtain spatial and temporal observational distributions of the night-time ozone mixing ratio in the mesosphere.

    The distributions obtained from GOMOS data have specific features, which vary from year to year. In particular, due to the long lifetime of ozone in polar night conditions, the downward transport of polar air by the meridional circulation is clearly observed in the tertiary ozone maximum time series. Although the maximum tertiary ozone mixing ratio is achieved close to the polar night terminator (as predicted by the theory), TOM can be observed also at very high latitudes, not only at the beginning and end, but also in the middle of winter. We have compared the observational spatio-temporal distributions of the tertiary ozone maximum with those obtained using WACCM (Whole Atmosphere Community Climate Model) and found that the specific features are reproduced satisfactorily by the model.

    Since ozone in the mesosphere is very sensitive to HOx concentrations, energetic particle precipitation can significantly modify the shape of the ozone profiles. In particular, GOMOS observations have shown that the tertiary ozone maximum was temporarily destroyed during the January 2005 and December 2006 solar proton events as a result of the HOx enhancement from the increased ionization.

  6. PTree: pattern-based, stochastic search for maximum parsimony phylogenies

    OpenAIRE

    Gregor, Ivan; Steinbr?ck, Lars; McHardy, Alice C.

    2013-01-01

    Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we ...

  7. A Hubble Space Telescope survey for novae in M87 - III. Are novae good standard candles 15 d after maximum brightness?

    Science.gov (United States)

    Shara, Michael M.; Doyle, Trisha F.; Pagnotta, Ashley; Garland, James T.; Lauer, Tod R.; Zurek, David; Baltz, Edward A.; Goerl, Ariel; Kovetz, Attay; Machac, Tamara; Madrid, Juan P.; Mikołajewska, Joanna; Neill, J. D.; Prialnik, Dina; Welch, D. L.; Yaron, Ofer

    2018-02-01

    Ten weeks of daily imaging of the giant elliptical galaxy M87 with the Hubble Space Telescope (HST) has yielded 41 nova light curves of unprecedented quality for extragalactic cataclysmic variables. We have recently used these light curves to demonstrate that the observational scatter in the so-called maximum-magnitude rate of decline (MMRD) relation for classical novae is so large as to render the nova-MMRD useless as a standard candle. Here, we demonstrate that a modified Buscombe-de Vaucouleurs hypothesis, namely that novae with decline times t2 > 10 d converge to nearly the same absolute magnitude about two weeks after maximum light in a giant elliptical galaxy, is supported by our M87 nova data. For 13 novae with daily sampled light curves, well determined times of maximum light in both the F606W and F814W filters, and decline times t2 > 10 d we find that M87 novae display M606W,15 = -6.37 ± 0.46 and M814W,15 = -6.11 ± 0.43. If very fast novae with decline times t2 < 10 d are excluded, the distances to novae in elliptical galaxies with stellar binary populations similar to those of M87 should be determinable with 1σ accuracies of ± 20 per cent with the above calibrations.

  8. A morphological study of the masseter muscle using magnetic resonance imaging in patients with jaw deformities. Cases demonstrating mandibular deviation

    International Nuclear Information System (INIS)

    Higashi, Katsuhiko; Goto, Tazuko K.; Kanda, Shigenobu; Shiratsuchi, Yuji; Nakashima, Akihiko; Horinouchi, Yasufumi

    2006-01-01

    Numerous studies on the cross-sectional area of the masticatory muscles, which is correlated with facial shape, have been reported for normal subjects in previous articles. However, to date, there have been no such studies of the masseter muscles at jaw-closing and jaw-opening in patients with jaw deformities involving mandibular deviation. The MRI data sets of the masseter muscles at jaw-closing and jaw-opening in 14 female patients with mandibular deviation, who demonstrated a more than 3-mm deviation of the median line of the lower first incisors relative to the upper ones, were utilized. The cross-sectional areas from the origin to the insertion at jaw-closing and jaw-opening, reconstructed perpendicular to the three-dimensional long axis of each masseter muscle, each maximum cross-sectional area (MCSA), and the ratio of the change in MCSA after jaw-opening were analyzed. As a result, a significant difference was observed between the MCSA at jaw-closing and jaw-opening on the same side. However, no difference in MCSA was seen between the deviated and non-deviated sides of the mandible. The line chart patterns of the masseter muscles from the origin to the insertion could be classified into four types. Our results suggest that it is important to analyze the cross-sectional areas of the masseter muscles in each subject while considering the three-dimensional axis of each muscle. (author)

  9. Novel TPPO Based Maximum Power Point Method for Photovoltaic System

    Directory of Open Access Journals (Sweden)

    ABBASI, M. A.

    2017-08-01

    Full Text Available Photovoltaic (PV) systems have great potential and are now installed more widely than other renewable energy sources. However, a PV system cannot perform optimally because of its strong dependence on climate conditions, and as a result it does not always operate at its maximum power point (MPP). Many MPP tracking methods have been proposed for this purpose. One of these, the Perturb and Observe (P&O) method, is the best known owing to its simplicity, low cost and fast tracking, but it deviates from the MPP under continuously changing weather conditions, especially rapidly changing irradiance. A new Maximum Power Point Tracking (MPPT) method, Tetra Point Perturb and Observe (TPPO), has been proposed to improve PV system performance under changing irradiance conditions, and the effects of varying irradiance on the characteristic curves of a PV array module are delineated. The proposed MPPT method has shown better results in increasing the efficiency of a PV system.
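The record above contrasts TPPO with the classic P&O hill-climbing baseline. A minimal P&O sketch on a toy, unimodal P-V curve (the curve model, starting voltage and step size are illustrative; this is the baseline method, not the proposed TPPO):

```python
def pv_power(v, v_mpp=17.0, p_max=60.0):
    """Toy unimodal P-V curve standing in for a real module
    under fixed irradiance; peak power p_max at voltage v_mpp."""
    return max(0.0, p_max - 0.5 * (v - v_mpp) ** 2)

def perturb_and_observe(v0=12.0, step=0.2, iters=100):
    """Classic P&O: keep perturbing the operating voltage in the
    direction that last increased power; reverse when power drops."""
    v = v0
    p_prev = pv_power(v)
    direction = 1
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:
            direction = -direction  # overshot the peak: reverse perturbation
        p_prev = p
    return v

v_final = perturb_and_observe()  # settles near v_mpp, oscillating by +/- step
```

The steady-state oscillation around the MPP, and the tracker's confusion when irradiance changes between perturbations, are exactly the weaknesses the record says TPPO targets.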

  10. Linear versus non-linear measures of temporal variability in finger tapping and their relation to performance on open- versus closed-loop motor tasks: comparing standard deviations to Lyapunov exponents.

    Science.gov (United States)

    Christman, Stephen D; Weaver, Ryan

    2008-05-01

    The nature of temporal variability during speeded finger tapping was examined using linear (standard deviation) and non-linear (Lyapunov exponent) measures. Experiment 1 found that right hand tapping was characterised by lower amounts of both linear and non-linear measures of variability than left hand tapping, and that linear and non-linear measures of variability were often negatively correlated with one another. Experiment 2 found that increased non-linear variability was associated with relatively enhanced performance on a closed-loop motor task (mirror tracing) and relatively impaired performance on an open-loop motor task (pointing in a dark room), especially for left hand performance. The potential uses and significance of measures of non-linear variability are discussed.
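The contrast drawn above between linear (SD) and non-linear measures can be illustrated with a toy example: a chaotic logistic-map series and a shuffled copy have identical SDs, yet only the original has exploitable temporal structure. This illustrates the principle only; it is not the Lyapunov-exponent procedure used in the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Chaotic logistic-map series: fully deterministic, yet broadband like noise.
x = np.empty(1000)
x[0] = 0.4
for t in range(999):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

shuffled = rng.permutation(x)  # same values, temporal structure destroyed

# Linear measure: identical by construction (same sample values).
sd_x, sd_shuf = x.std(ddof=1), shuffled.std(ddof=1)

def one_step_r2(series):
    """R^2 of a quadratic one-step predictor x[t+1] ~ poly(x[t], 2):
    a crude stand-in for a non-linear structure measure."""
    a, b = series[:-1], series[1:]
    coef = np.polyfit(a, b, 2)
    resid = b - np.polyval(coef, a)
    return 1.0 - resid.var() / b.var()

r2_x = one_step_r2(x)            # near 1: deterministic dynamics
r2_shuf = one_step_r2(shuffled)  # near 0: structure destroyed
```

Standard deviation cannot distinguish the two series, whereas any measure sensitive to temporal ordering (such as the Lyapunov exponent used in the study) can.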

  11. Precipitation Interpolation by Multivariate Bayesian Maximum Entropy Based on Meteorological Data in Yun- Gui-Guang region, Mainland China

    Science.gov (United States)

    Wang, Chaolin; Zhong, Shaobo; Zhang, Fushen; Huang, Quanyi

    2016-11-01

    Precipitation interpolation has been a hot area of research for many years and is closely related to meteorological factors. In this paper, precipitation from 91 meteorological stations located in and around Yunnan, Guizhou and Guangxi Zhuang provinces (or autonomous regions), Mainland China, was considered for spatial interpolation. A multivariate Bayesian maximum entropy (BME) method with auxiliary variables, including mean relative humidity, water vapour pressure, mean temperature, mean wind speed and terrain elevation, was used to obtain a more accurate regional distribution of annual precipitation. The means, standard deviations, skewness and kurtosis of the meteorological factors were calculated. Variograms and cross-variograms were fitted between precipitation and the auxiliary variables. The results showed that the multivariate BME method was precise when incorporating hard and soft data and probability density functions. Annual mean precipitation was positively correlated with mean relative humidity, mean water vapour pressure, mean temperature and mean wind speed, and negatively correlated with terrain elevation. The results are expected to provide a substantial reference for research on drought and waterlogging in the region.

  12. Application of Mean of Absolute Deviation Method for the Selection of Best Nonlinear Component Based on Video Encryption

    Science.gov (United States)

    Anees, Amir; Khan, Waqar Ahmad; Gondal, Muhammad Asif; Hussain, Iqtadar

    2013-07-01

    The aim of this work is to make use of the mean of absolute deviation (MAD) method for the evaluation process of substitution boxes used in the advanced encryption standard. In this paper, we use the MAD technique to analyze some popular and prevailing substitution boxes used in encryption processes. In particular, MAD is applied to advanced encryption standard (AES), affine power affine (APA), Gray, Lui J., Residue Prime, S8 AES, SKIPJACK, and Xyi substitution boxes.
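The record does not spell out how MAD is mapped onto the S-box evaluation criteria, but the underlying statistic itself is simple: the mean of the absolute deviations from the arithmetic mean. A minimal sketch (the sample values are arbitrary):

```python
import numpy as np

def mean_absolute_deviation(values):
    """Mean of the absolute deviations from the arithmetic mean."""
    v = np.asarray(values, dtype=float)
    return float(np.abs(v - v.mean()).mean())

# Example: spread of some hypothetical per-criterion scores.
scores = [1, 2, 3, 4]
mad = mean_absolute_deviation(scores)  # mean 2.5 -> deviations 1.5, 0.5, 0.5, 1.5
```

Unlike the standard deviation, MAD does not square the deviations, so it weights outliers less heavily; that robustness is a common reason for choosing it as an evaluation statistic.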

  13. Prosthodontic management of mandibular deviation using palatal ramp appliance

    Directory of Open Access Journals (Sweden)

    Prince Kumar

    2012-08-01

    Full Text Available Segmental resection of the mandible generally results in deviation of the mandible to the defective side. This loss of continuity of the mandible destroys the balance of the lower face and leads to decreased mandibular function through deviation of the residual segment toward the surgical site. Prosthetic methods advocated to reduce or eliminate mandibular deviation include intermaxillary fixation, a removable mandibular guide flange, a palatal ramp, implant-supported prostheses and palatal guidance restorations, which may be useful in reducing mandibular deviation and improving masticatory performance and efficiency. These methods and restorations should be combined with a well-organized mandibular exercise regimen. This clinical report describes the rehabilitation following segmental mandibulectomy using a palatal ramp prosthesis.

  14. Scaphoid and lunate movement in different ranges of carpal radioulnar deviation.

    Science.gov (United States)

    Tang, Jin Bo; Xu, Jing; Xie, Ren Guo

    2011-01-01

    We aimed to investigate scaphoid and lunate movement in radial deviation and in slight and moderate ulnar deviation ranges in vivo. We obtained computed tomography scans of the right wrists of 6 volunteers from 20° radial deviation to 40° ulnar deviation in 20° increments. The 3-dimensional bony structures of the wrist, including the distal radius and ulna, were reconstructed with customized software. The changes in position of the scaphoid and lunate along the flexion-extension motion (FEM), radioulnar deviation (RUD), and supination-pronation axes in 3 parts of the carpal RUD (radial deviation, slight ulnar deviation, and moderate ulnar deviation) were calculated and analyzed. During carpal RUD, scaphoid and lunate motion along the 3 axes (FEM, RUD, and supination-pronation) was the greatest in the middle third of the measured RUD (from neutral position to 20° ulnar deviation) and the smallest in radial deviation. Scaphoid motion along the FEM, RUD, and supination-pronation axes in the middle third was about half that in the entire motion range. In the middle motion range, lunate movement along the FEM and RUD axes was also the greatest. During carpal RUD, the greatest scaphoid and lunate movement occurs in the middle of the arc (slight ulnar deviation), which the wrist frequently adopts to accomplish major hand actions. At radial deviation, scaphoid and lunate motion is the smallest. Copyright © 2011 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  15. 38 CFR 36.4304 - Deviations; changes of identity.

    Science.gov (United States)

    2010-07-01

    ... identity. 36.4304 Section 36.4304 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS... Deviations; changes of identity. A deviation of more than 5 percent between the estimates upon which a... change in the identity of the property upon which the original appraisal was based, will invalidate the...

  16. Moderate deviations principles for the kernel estimator of ...

    African Journals Online (AJOL)

    Abstract. The aim of this paper is to provide pointwise and uniform moderate deviations principles for the kernel estimator of a nonrandom regression function. Moreover, we give an application of these moderate deviations principles to the construction of confidence regions for the regression function.

  17. 48 CFR 1352.219-71 - Notification to delay performance (Deviation).

    Science.gov (United States)

    2010-10-01

    ... performance (Deviation). 1352.219-71 Section 1352.219-71 Federal Acquisition Regulations System DEPARTMENT OF....219-71 Notification to delay performance (Deviation). As prescribed in 48 CFR 1319.811-3(b), insert the following clause: Notification To Delay Performance (Deviation) (APR 2010) The contractor shall...

  18. Maximum gain of Yagi-Uda arrays

    DEFF Research Database (Denmark)

    Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.

    1971-01-01

    Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi–Uda arrays with equal and unequal spacing have also been optimised, with experimental verification.

  19. Explorations in Statistics: Standard Deviations and Standard Errors

    Science.gov (United States)

    Curran-Everett, Douglas

    2008-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This series in "Advances in Physiology Education" provides an opportunity to do just that: we will investigate basic concepts in statistics using the free software package R. Because this series uses R solely as a vehicle…
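    The distinction the series explores can be sketched outside R as well. A minimal Python illustration (simulated data, not from the article) of the sample standard deviation versus the standard error of the mean:

```python
import math
import random

random.seed(1)

# Simulate n observations from a normal population (mean 100, SD 15).
n = 100
sample = [random.gauss(100, 15) for _ in range(n)]

mean = sum(sample) / n
# Sample standard deviation (n - 1 denominator): spread of the observations.
sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
# Standard error of the mean: uncertainty of the sample mean itself.
se = sd / math.sqrt(n)

print(f"mean = {mean:.1f}, SD = {sd:.1f}, SE = {se:.2f}")
```

    The SD describes the spread of individual observations, while SE = SD/√n shrinks as the sample grows, reflecting the increasing precision of the estimated mean.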

  20. Search for Standard Model H→τ⁺τ⁻ decays in the lepton-hadron final state in proton-proton collisions with the ATLAS detector at the LHC

    International Nuclear Information System (INIS)

    Ruthmann, Nils

    2014-01-01

    This thesis presents a search for Standard Model (SM) Higgs boson decays to a pair of τ leptons in the lepton-hadron final state with the ATLAS detector at the Large Hadron Collider (LHC). The analysis is based on proton-proton collision data recorded during Run 1 of the LHC, corresponding to integrated luminosities of 4.5 fb⁻¹ and 20.3 fb⁻¹ at centre-of-mass energies of 7 TeV and 8 TeV, respectively. Background events from various SM processes contribute to the selected event sample at a high rate. Their contribution is efficiently separated from the expected Higgs boson signal by using boosted decision trees (BDT) in two analysis categories, which are enriched in events emerging from vector boson fusion and gluon fusion processes. The expected number of events from background processes is modelled using data-driven estimation techniques. The signal contribution is measured using a maximum likelihood fit of the BDT output distributions. An excess of events over the expected level of background events is found and corresponds to an observed (expected) significance of 2.3 (2.4) standard deviations at a Higgs boson mass hypothesis of 125 GeV. The signal strength normalised to the Standard Model expectation is measured to be 0.98 ± 0.5. A combined analysis of all τ⁺τ⁻ final states rejects the background-only hypothesis at a level of 4.5 standard deviations at m_H = 125 GeV, while a significance of 3.5 standard deviations is expected. This provides evidence for the direct coupling of the recently discovered Higgs boson to tau leptons. The measured normalised signal strength of 1.4 +0.43/−0.37 is consistent with the predicted Yukawa coupling strength in the Standard Model.

  1. Heterodyne Angle Deviation Interferometry in Vibration and Bubble Measurements

    OpenAIRE

    Ming-Hung Chiu; Jia-Ze Shen; Jian-Ming Huang

    2016-01-01

    We proposed heterodyne angle deviation interferometry (HADI) for angle deviation measurements. The phase shift of an angular sensor (which can be a metal film or a surface plasmon resonance (SPR) prism) is proportional to the deviation angle of the test beam. The method has been demonstrated in bubble and speaker’s vibration measurements in this paper. In the speaker’s vibration measurement, the voltage from the phase channel of a lock-in amplifier includes the vibration level and frequency. ...

  2. Stone heterogeneity index as the standard deviation of Hounsfield units: A novel predictor for shock-wave lithotripsy outcomes in ureter calculi.

    Science.gov (United States)

    Lee, Joo Yong; Kim, Jae Heon; Kang, Dong Hyuk; Chung, Doo Yong; Lee, Dae Hun; Do Jung, Hae; Kwon, Jong Kyou; Cho, Kang Su

    2016-04-01

    We investigated whether the stone heterogeneity index (SHI), defined as the standard deviation of Hounsfield units (HU) on non-contrast computed tomography (NCCT) and a proxy for variations in stone composition, can be a novel predictor of shock-wave lithotripsy (SWL) outcomes in patients with ureteral stones. Medical records were obtained from the consecutive database of 1,519 patients who underwent the first session of SWL for urinary stones between 2005 and 2013. Ultimately, 604 patients with radiopaque ureteral stones were eligible for this study. Stone-related variables including stone size, mean stone density (MSD), skin-to-stone distance, and SHI were obtained on NCCT. Patients were classified into low and high SHI groups using the mean SHI and compared. The one-session success rate in the high SHI group was better than in the low SHI group (74.3% vs. 63.9%, P = 0.008). Multivariate logistic regression analyses revealed that smaller stone size (OR 0.889, 95% CI: 0.841-0.937, P < 0.001), lower MSD (OR 0.995, 95% CI: 0.994-0.996, P < 0.001), and higher SHI (OR 1.011, 95% CI: 1.008-1.014, P < 0.001) were independent predictors of one-session success. The radiologic heterogeneity of urinary stones, or SHI, was an independent predictor of SWL success in patients with ureteral calculi and a useful clinical parameter for stone fragility.
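    As an illustration of the definition only (not the study's software), the SHI is simply the standard deviation of the Hounsfield-unit samples within the stone region; the HU arrays below are hypothetical:

```python
import statistics

def stone_heterogeneity_index(hu_values):
    """SHI: the standard deviation of the Hounsfield units (HU)
    sampled within the stone region on non-contrast CT."""
    return statistics.stdev(hu_values)

# Hypothetical HU samples for two stones with similar mean density.
homogeneous = [800, 810, 805, 795, 790, 800]
heterogeneous = [400, 1200, 600, 1000, 500, 1100]

shi_low = stone_heterogeneity_index(homogeneous)
shi_high = stone_heterogeneity_index(heterogeneous)
print(f"SHI low = {shi_low:.1f}, SHI high = {shi_high:.1f}")
```

    In the study, patients were split at the cohort mean SHI; the higher-SHI (more heterogeneous) group had the better one-session success rate.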

  3. Some clarifications about the Bohmian geodesic deviation equation and Raychaudhuri’s equation

    Science.gov (United States)

    Rahmani, Faramarz; Golshani, Mehdi

    2018-01-01

    One of the important and famous topics in the general theory of relativity and gravitation is the problem of geodesic deviation and its related singularity theorems. An interesting subject is the investigation of these concepts when quantum effects are considered. Since the definition of a trajectory is not possible in the framework of standard quantum mechanics (SQM), we investigate the problem of the geodesic equation and its related topics in the framework of Bohmian quantum mechanics, in which the definition of a trajectory is possible. We do this in a fixed background and do not consider the backreaction effects of matter on the space-time metric.

  4. Weak scale from the maximum entropy principle

    Science.gov (United States)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2015-03-01

    The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S³ universe at its final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ∼ T_BBN² / (M_pl y_e⁵), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.

  5. Improving image-quality of interference fringes of out-of-plane vibration using temporal speckle pattern interferometry and standard deviation for piezoelectric plates.

    Science.gov (United States)

    Chien-Ching Ma; Ching-Yuan Chang

    2013-07-01

    Interferometry provides a high degree of accuracy in the measurement of sub-micrometer deformations; however, the noise associated with experimental measurement undermines the integrity of interference fringes. This study proposes the use of standard deviation in the temporal domain to improve the image quality of patterns obtained from temporal speckle pattern interferometry. The proposed method combines the advantages of both mean and subtractive methods to remove background noise and ambient disturbance simultaneously, resulting in high-resolution images of excellent quality. The out-of-plane vibration of a thin piezoelectric plate is the main focus of this study, providing information useful to the development of energy harvesters. First, ten resonant states were measured using the proposed method, and both mode shape and resonant frequency were investigated. We then rebuilt the phase distribution of the first resonant mode based on the clear interference patterns obtained using the proposed method. This revealed instantaneous deformations in the dynamic characteristics of the resonant state. The proposed method also provides a frequency-sweeping function, facilitating its practical application in the precise measurement of resonant frequency. In addition, the mode shapes and resonant frequencies obtained using the proposed method were recorded and compared with results obtained using the finite element method and laser Doppler vibrometry, which demonstrated close agreement.
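    The core operation, computing each pixel's standard deviation over time across a stack of speckle frames, can be sketched as follows (pure Python, toy frame sizes; the paper's acquisition and filtering pipeline is not reproduced):

```python
import math

def temporal_std_image(frames):
    """Per-pixel standard deviation over a stack of speckle frames.
    frames: list of 2-D intensity arrays (lists of rows) of equal shape."""
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [f[r][c] for f in frames]
            mean = sum(vals) / n
            out[r][c] = math.sqrt(sum((v - mean) ** 2 for v in vals) / n)
    return out

# Toy example: a static background pixel next to a vibrating pixel
# (large temporal intensity swing), over 4 frames of a 1x2 image.
frames = [
    [[10, 100]],
    [[11, 160]],
    [[10, 100]],
    [[11, 160]],
]
std_img = temporal_std_image(frames)
print(std_img)  # prints [[0.5, 30.0]]: the vibrating pixel stands out
```

    Pixels whose intensity fluctuates with the vibration yield a large temporal standard deviation, while static background and stationary noise average out, which is what sharpens the fringe pattern.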

  6. [The crooked nose: correction of dorsal and caudal septal deviations].

    Science.gov (United States)

    Foda, H M T

    2010-09-01

    The deviated nose represents a complex cosmetic and functional problem. Septal surgery plays a central role in the successful management of the externally deviated nose. This study included 800 patients seeking rhinoplasty to correct external nasal deviations; 71% of these suffered from variable degrees of nasal obstruction. Septal surgery was necessary in 736 (92%) patients, not only to improve breathing, but also to achieve a straight, symmetric external nose. A graduated surgical approach was adopted to allow correction of the dorsal and caudal deviations of the nasal septum without weakening its structural support to the nasal dorsum or nasal tip. The approach depended on full mobilization of deviated cartilage, followed by straightening of the cartilage and its fixation in the corrected position by using bony splinting grafts through an external rhinoplasty approach.

  7. WE-A-17A-05: Differences in Applicator Configuration and Dwell Loading Between Standard and Image-Guided Tandem and Ring (T and R) HDR Brachytherapy

    Energy Technology Data Exchange (ETDEWEB)

    Damato, A; Cormack, R; Bhagwat, M; Buzurovic, I; Lee, L; Viswanathan, A [Brigham and Women's Hospital, Boston, MA (United States)]

    2014-06-15

    Purpose: To investigate differences in: (i) relative location of the tandem and the ring compared to a rigid standard applicator model; and (ii) relative loading and changes in loading pattern between standard and image-guided planning. Methods: All T and R insertions performed in 2013 in our institution under CT- or MR-guidance were analyzed. Standard plans were generated using library applicator models with a fixed relationship between ring and tandem, standardized uniform dwell loading and normalization to point A. The graphic plans and the associated standard-plan dwell configurations were compared: the rings were rigidly registered, and the residual tandem shift, rotation and maximum distance between plan tandem dwell and corresponding model tandem dwell were calculated. The normalization ratio (NR = the ratio of graphic versus standard-plan total reference air kerma [TRAK]), the general loading difference (GLD = the difference between graphic and standard ratios of the tandem versus the ring TRAK), and the percent standard deviation (SD% = SD/mean) of the tandem and the ring TRAK for the graphic plan (all standard-plans SD% = 0) were calculated. Results: 71 T and R were analyzed. Residual tandem shift, rotation and maximum corresponding dwell distance were 1.2±0.8mm (0.4±0.4mm lateral, 0.9±0.8mm craniocaudal, 0.4±0.3mm anterior-posterior), 2.3±1.9deg and 3.4±2.3mm. NR was 0.86±0.11 indicating a lower overall loading of the graphic compared to the standard plans. GLD was -0.12±0.16 indicating a modest increased ring loading relative to the tandem in the graphic plans. SD% was 2.1±1.6% for tandem and 2.8±1.9% for ring, indicating small deviations from uniform loading. Conclusion: Variability in the relative locations of the tandem and the ring necessitates the independent registration of each component model for accurate digitization. Our clinical experience suggests that graphically planned T and R results on average in a lower total dose to the

  8. WE-A-17A-05: Differences in Applicator Configuration and Dwell Loading Between Standard and Image-Guided Tandem and Ring (T and R) HDR Brachytherapy

    International Nuclear Information System (INIS)

    Damato, A; Cormack, R; Bhagwat, M; Buzurovic, I; Lee, L; Viswanathan, A

    2014-01-01

    Purpose: To investigate differences in: (i) relative location of the tandem and the ring compared to a rigid standard applicator model; and (ii) relative loading and changes in loading pattern between standard and image-guided planning. Methods: All T and R insertions performed in 2013 in our institution under CT- or MR-guidance were analyzed. Standard plans were generated using library applicator models with a fixed relationship between ring and tandem, standardized uniform dwell loading and normalization to point A. The graphic plans and the associated standard-plan dwell configurations were compared: the rings were rigidly registered, and the residual tandem shift, rotation and maximum distance between plan tandem dwell and corresponding model tandem dwell were calculated. The normalization ratio (NR = the ratio of graphic versus standard-plan total reference air kerma [TRAK]), the general loading difference (GLD = the difference between graphic and standard ratios of the tandem versus the ring TRAK), and the percent standard deviation (SD% = SD/mean) of the tandem and the ring TRAK for the graphic plan (all standard-plans SD% = 0) were calculated. Results: 71 T and R were analyzed. Residual tandem shift, rotation and maximum corresponding dwell distance were 1.2±0.8mm (0.4±0.4mm lateral, 0.9±0.8mm craniocaudal, 0.4±0.3mm anterior-posterior), 2.3±1.9deg and 3.4±2.3mm. NR was 0.86±0.11 indicating a lower overall loading of the graphic compared to the standard plans. GLD was -0.12±0.16 indicating a modest increased ring loading relative to the tandem in the graphic plans. SD% was 2.1±1.6% for tandem and 2.8±1.9% for ring, indicating small deviations from uniform loading. Conclusion: Variability in the relative locations of the tandem and the ring necessitates the independent registration of each component model for accurate digitization. Our clinical experience suggests that graphically planned T and R results on average in a lower total dose to the

  9. Electroweak interaction: Standard and beyond

    International Nuclear Information System (INIS)

    Harari, H.

    1987-02-01

    Several important topics within the standard model raise questions which are likely to be answered only by further theoretical understanding which goes beyond the standard model. In these lectures we present a discussion of some of these problems, including the quark masses and angles, the Higgs sector, neutrino masses, W and Z properties and possible deviations from a pointlike structure. 44 refs

  10. Variable Step Size Maximum Correntropy Criteria Based Adaptive Filtering Algorithm

    Directory of Open Access Journals (Sweden)

    S. Radhika

    2016-04-01

    Maximum correntropy criterion (MCC) based adaptive filters are found to be robust against impulsive interference. This paper proposes a novel MCC-based adaptive filter with a variable step size in order to obtain improved performance in terms of both convergence rate and steady-state error, with robustness against impulsive interference. The optimal variable step size is obtained by minimizing the Mean Square Deviation (MSD) error from one iteration to the next. Simulation results in the context of a highly impulsive system identification scenario show that the proposed algorithm has faster convergence and lower steady-state error than conventional MCC-based adaptive filters.
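    A minimal sketch of the MCC weight update that underlies such filters, assuming a Gaussian correntropy kernel and a fixed step size (the paper's variable-step rule, derived by minimizing the MSD per iteration, is not reproduced here):

```python
import math
import random

random.seed(0)

def mcc_lms_identify(x, d, taps, mu=0.05, sigma=1.0):
    """Sketch of an MCC-based adaptive filter for system identification.
    The Gaussian kernel exp(-e^2 / (2*sigma^2)) shrinks updates driven
    by impulsive errors, which is the source of the robustness."""
    w = [0.0] * taps
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]               # regressor, most recent first
        e = d[n] - sum(wi * ui for wi, ui in zip(w, u))
        g = math.exp(-e * e / (2.0 * sigma * sigma))  # correntropy kernel weight
        for i in range(taps):
            w[i] += mu * g * e * u[i]
    return w

# Unknown 3-tap system observed through occasional large impulses.
h = [0.5, -0.3, 0.2]
x = [random.gauss(0, 1) for _ in range(4000)]
d = []
for n in range(len(x)):
    y = sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
    noise = 50.0 if random.random() < 0.01 else random.gauss(0, 0.01)
    d.append(y + noise)

w = mcc_lms_identify(x, d, taps=3)
print([round(wi, 3) for wi in w])
```

    The kernel factor is what distinguishes this from plain LMS: an impulsive error makes exp(-e²/2σ²) nearly zero, so the outlier barely perturbs the weights.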

  11. Centroid and full-width at half maximum uncertainties of histogrammed data with an underlying Gaussian distribution -- The moments method

    International Nuclear Information System (INIS)

    Valentine, J.D.; Rana, A.E.

    1996-01-01

    The effect of approximating a continuous Gaussian distribution with histogrammed data is studied. The expressions for theoretical uncertainties in the centroid and full-width at half maximum (FWHM), as determined by calculation of moments, are derived using the error propagation method for a histogrammed Gaussian distribution. The results are compared with the corresponding pseudo-experimental uncertainties for computer-generated histogrammed Gaussian peaks to demonstrate the effect of binning the data. It is shown that increasing the number of bins in the histogram improves the continuous distribution approximation. For example, a FWHM of ≈9 and ≈12 bins is needed to reduce the pseudo-experimental standard deviation of the FWHM to within ≈5% and ≈1%, respectively, of the theoretical value for a peak containing 10,000 counts. In addition, the uncertainties in the centroid and FWHM as a function of peak area are studied. Finally, Sheppard's correction is applied to partially correct for the binning effect.
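    The moments method itself is compact. A Python sketch on a synthetic binned Gaussian (illustrative, not the paper's code), including Sheppard's correction for a bin width h:

```python
import math

def gaussian_moments(bin_centers, counts, bin_width=None):
    """Centroid and FWHM of a histogrammed Gaussian peak via moments.
    If bin_width h is given, Sheppard's correction subtracts h**2 / 12
    from the variance to partially undo the broadening due to binning."""
    total = sum(counts)
    centroid = sum(c * n for c, n in zip(bin_centers, counts)) / total
    var = sum(n * (c - centroid) ** 2 for c, n in zip(bin_centers, counts)) / total
    if bin_width is not None:
        var -= bin_width ** 2 / 12.0          # Sheppard's correction
    return centroid, 2.0 * math.sqrt(2.0 * math.log(2.0) * var)

# Bin a unit Gaussian (mu = 0, sigma = 1) into bins of width 0.5 by
# integrating the density over each bin: Phi(right) - Phi(left).
h = 0.5
phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
centers = [i * h for i in range(-16, 17)]
counts = [phi(c + h / 2) - phi(c - h / 2) for c in centers]

mu, fwhm = gaussian_moments(centers, counts, bin_width=h)
print(f"centroid = {mu:.4f}, FWHM = {fwhm:.4f}")  # true FWHM = 2.3548 for sigma = 1
```

    Without the correction, the binned variance comes out larger by about h²/12, which for this bin width would inflate the recovered FWHM by roughly 1%.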

  12. Density, viscosity, isothermal (vapour + liquid) equilibrium, excess molar volume, viscosity deviation, and their correlations for chloroform + methyl isobutyl ketone binary system

    International Nuclear Information System (INIS)

    Clara, Rene A.; Gomez Marigliano, Ana C.; Solimo, Horacio N.

    2007-01-01

    Density and viscosity measurements for pure chloroform and methyl isobutyl ketone at T = (283.15, 293.15, 303.15, and 313.15) K, as well as for the binary system {x₁ chloroform + (1 − x₁) methyl isobutyl ketone} at the same temperatures, were made over the whole concentration range. The experimental results were fitted to empirical equations, which permit the calculation of these properties over the whole concentration and temperature ranges studied. Data of the binary mixture were further used to calculate the excess molar volume and viscosity deviation. The (vapour + liquid) equilibrium (VLE) at T = 303.15 K for this binary system was also measured in order to calculate the activity coefficients and the excess molar Gibbs energy. This binary system shows no azeotrope and negative deviations from ideal behaviour. The excess or deviation properties were fitted to the Redlich-Kister polynomial relation to obtain their coefficients and standard deviations.

  13. Post-Newtonian approximation of the maximum four-dimensional Yang-Mills gauge theory

    International Nuclear Information System (INIS)

    Smalley, L.L.

    1982-01-01

    We have calculated the post-Newtonian approximation of the maximum four-dimensional Yang-Mills theory proposed by Hsu. The theory contains torsion; however, torsion is not active at the level of the post-Newtonian approximation of the metric. Depending on the nature of the approximation, we obtain the general-relativistic values for the classical Robertson parameters (γ = β = 1), but deviations for the Nordtvedt effect and violations of post-Newtonian conservation laws. We conclude that in its present form the theory is not a viable theory of gravitation

  14. Maximum standardized uptake value of fluorodeoxyglucose positron emission tomography/computed tomography is a prognostic factor in ovarian clear cell adenocarcinoma.

    Science.gov (United States)

    Konishi, Haruhisa; Takehara, Kazuhiro; Kojima, Atsumi; Okame, Shinichi; Yamamoto, Yasuko; Shiroyama, Yuko; Yokoyama, Takashi; Nogawa, Takayoshi; Sugawara, Yoshifumi

    2014-09-01

    Fluorodeoxyglucose positron emission tomography/computed tomography (FDG-PET/CT) is useful for diagnosing malignant tumors. Intracellular FDG uptake is measured as the standardized uptake value (SUV), which differs depending on tumor characteristics. This study investigated differences in the maximum SUV (SUVmax) according to histologic type in ovarian epithelial cancer and the relationship of SUVmax with prognosis. The study included 80 patients with ovarian epithelial cancer confirmed by histopathologic findings at surgery who had undergone PET/CT before treatment. SUVmax of primary lesions on PET/CT was compared across histologic types, and the prognosis associated with different SUVmax values was evaluated. Clinical tumor stage was I in 35 patients, II in 8, III in 25, and IV in 12. Histologic type was serous adenocarcinoma (AC) in 33 patients, clear cell AC in 27, endometrioid AC in 15, and mucinous AC in 5. Median SUVmax was lower in mucinous AC (2.76) and clear cell AC (4.9) than in serous AC (11.4) or endometrioid AC (11.4). Overall, median SUVmax was lower in clinical stage I (5.37) than in clinical stage ≥II (10.3). However, in both clear cell AC and endometrioid AC, where histologic evaluation was possible, no difference was seen between stage I and stage ≥II. Moreover, in clear cell AC, the 5-year survival rate was significantly higher in the low-SUVmax group (100%) than in the high-SUVmax group (43.0%, P = 0.009). SUVmax on preoperative FDG-PET/CT in ovarian epithelial cancer differs according to histologic type. In clear cell AC, SUVmax may represent a prognostic factor.

  15. MPBoot: fast phylogenetic maximum parsimony tree inference and bootstrap approximation.

    Science.gov (United States)

    Hoang, Diep Thi; Vinh, Le Sy; Flouri, Tomáš; Stamatakis, Alexandros; von Haeseler, Arndt; Minh, Bui Quang

    2018-02-02

    The nonparametric bootstrap is widely used to measure the branch support of phylogenetic trees. However, bootstrapping is computationally expensive and remains a bottleneck in phylogenetic analyses. Recently, an ultrafast bootstrap approximation (UFBoot) approach was proposed for maximum likelihood analyses. However, such an approach is still missing for maximum parsimony. To close this gap we present MPBoot, an adaptation and extension of UFBoot to compute branch supports under the maximum parsimony principle. MPBoot works for both uniform and non-uniform cost matrices. Our analyses on biological DNA and protein data showed that under uniform cost matrices, MPBoot runs on average 4.7 (DNA) to 7 (protein) times faster (range: 1.2-20.7) than the standard parsimony bootstrap implemented in PAUP*, but 1.6 (DNA) to 4.1 (protein) times slower than the standard bootstrap with a fast search routine in TNT (fast-TNT). However, for non-uniform cost matrices MPBoot is 5 (DNA) to 13 (protein) times faster (range: 0.3-63.9) than fast-TNT. We note that MPBoot achieves better scores more frequently than PAUP* and fast-TNT. However, this effect is less pronounced if an intensive but slower search in TNT is invoked. Moreover, experiments on large-scale simulated data show that while both PAUP* and TNT bootstrap estimates are too conservative, MPBoot bootstrap estimates appear more unbiased. MPBoot provides an efficient alternative to the standard maximum parsimony bootstrap procedure. It shows favorable performance in terms of run time, the capability of finding a maximum parsimony tree, and high bootstrap accuracy on simulated as well as empirical data sets. MPBoot is easy to use, open source, and available at http://www.cibiv.at/software/mpboot .

  16. Predicted and verified deviations from Zipf's law in ecology of competing products.

    Science.gov (United States)

    Hisano, Ryohei; Sornette, Didier; Mizuno, Takayuki

    2011-08-01

    Zipf's power-law distribution is a generic empirical statistical regularity found in many complex systems. However, rather than universality with a single power-law exponent (equal to 1 for Zipf's law), there are many reported deviations that remain unexplained. A recently developed theory finds that the interplay between (i) one of the most universal ingredients, namely stochastic proportional growth, and (ii) birth and death processes, leads to a generic power-law distribution with an exponent that depends on the characteristics of each ingredient. Here, we report the first complete empirical test of the theory and its application, based on the empirical analysis of the dynamics of market shares in the product market. We estimate directly the average growth rate of market shares and its standard deviation, the birth rates and the "death" (hazard) rate of products. We find that temporal variations and product differences of the observed power-law exponents can be fully captured by the theory with no adjustable parameters. Our results can be generalized to many systems for which the statistical properties revealed by power-law exponents are directly linked to the underlying generating mechanism.
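    Estimating a power-law tail exponent from empirical data is the standard first step in such tests. A generic Hill-estimator sketch on synthetic Pareto data (illustrative only; it is not the authors' procedure, which additionally estimates growth, birth, and hazard rates):

```python
import math
import random

random.seed(7)

def hill_exponent(data, k):
    """Hill estimator of the tail exponent alpha in P(X > x) ~ x**-alpha,
    computed from the k largest observations."""
    xs = sorted(data, reverse=True)
    return k / sum(math.log(xs[i] / xs[k]) for i in range(k))

# Synthetic Pareto sample with true exponent alpha = 1 (Zipf's law).
sample = [random.paretovariate(1.0) for _ in range(20000)]
alpha_hat = hill_exponent(sample, k=500)
print(round(alpha_hat, 2))
```

    Deviations of the fitted exponent from 1 are exactly the quantity the theory above predicts from the interplay of proportional growth with birth and death rates.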

  17. Complexity analysis based on generalized deviation for financial markets

    Science.gov (United States)

    Li, Chao; Shang, Pengjian

    2018-03-01

    In this paper, a new modified method, complexity analysis based on generalized deviation, is proposed as a measure of the correlation between past price and future volatility in financial time series. In comparison with the earlier retarded volatility model, the new approach is both simple and computationally efficient. The method based on the generalized deviation function offers a thorough way of quantifying the rules of the financial market. The robustness of this method is verified by numerical experiments with both artificial and financial time series. Results show that the generalized deviation complexity analysis method not only identifies the volatility of financial time series, but also provides a comprehensive way of distinguishing the characteristics of stock indices from those of individual stocks. Exponential functions can be used to successfully fit the volatility curves and quantify the changes of complexity for stock market data. We then study the influence of the negative domain of the deviation coefficient and the differences between volatile and calm periods. After the data analysis of the experimental model, we found that the generalized deviation model has definite advantages in exploring the relationship between historical returns and future volatility.

  18. Deviation equation in spaces with affine connection. Pts. 3 and 4

    International Nuclear Information System (INIS)

    Iliev, B.Z.

    1987-01-01

    The concept of a parallel transport is used to define a class of displacement vectors in spaces with affine connection. The nonlocal deviation equation in such spaces is introduced using a definition of the deviation vector based on the displacement vector. It turns out to be a special case of the generalized deviation equation, but one having an appropriate physical interpretation. The equation of geodesic deviation is presented as an example.

  19. 9 CFR 319.10 - Requirements for substitute standardized meat food products named by use of an expressed nutrient...

    Science.gov (United States)

    2010-01-01

    ... INSPECTION AND CERTIFICATION DEFINITIONS AND STANDARDS OF IDENTITY OR COMPOSITION General § 319.10... identity, but that do not comply with the established standard because of a compositional deviation that... for roller grilling”). Deviations from the ingredient provisions of the standard must be the minimum...

  20. 21 CFR 130.10 - Requirements for foods named by use of a nutrient content claim and a standardized term.

    Science.gov (United States)

    2010-04-01

    ... standardized term. (a) Description. The foods prescribed by this general definition and standard of identity... of identity but that do not comply with the standard of identity because of a deviation that is.... Deviations from noningredient provisions of the standard of identity (e.g., moisture content, food solids...

  1. Influence of asymmetrical drawing radius deviation in micro deep drawing

    Science.gov (United States)

    Heinrich, L.; Kobayashi, H.; Shimizu, T.; Yang, M.; Vollertsen, F.

    2017-09-01

    Nowadays, an increasing demand for small metal parts can be observed in the electronic and automotive industries. Deep drawing is a well-suited technology for the production of such parts due to its excellent qualities for mass production. However, downscaling the forming process leads to new challenges in tooling and process design, such as high relative deviations of tool geometry or blank displacement compared to the macro scale. FEM simulation has been a widely used tool to investigate the influence of symmetrical process deviations, for instance a global variance of the drawing radius. This study shows a different approach that allows the impact of asymmetrical process deviations on micro deep drawing to be determined. In this particular case, the impact of an asymmetrical drawing-radius deviation and of blank displacement on cup geometry deviation was investigated for different drawing ratios by experiments and FEM simulation. It was found that both variations result in an increasing cup height deviation. Nevertheless, with increasing drawing ratio a constant drawing-radius deviation has an increasing impact, while blank displacement results in a decreasing offset of the cup's geometry. This is explained by different mechanisms that result in an uneven cup geometry. While blank displacement leads to material surplus on one side of the cup, an asymmetrical radius deviation generates uneven stretching of the cup's wall. This is intensified at higher drawing ratios. It can be concluded that the effect of uneven radius geometry is of major importance for the production of accurately shaped micro cups and cannot be compensated by intentional blank displacement.

  2. A comparison of modified and standard Papanicolaou staining ...

    African Journals Online (AJOL)

    2011-07-07

    Jul 7, 2011 ... modified Pap method and standard Papanicolaou method respectively. The staining characteristics in .... alcohol was replaced by 0.5 % acetic acid and also, .... was 37.1, standard deviation of 8.0 and a median of 36.5 years.

  3. Maximum likelihood estimation of the attenuated ultrasound pulse

    DEFF Research Database (Denmark)

    Rasmussen, Klaus Bolding

    1994-01-01

    The attenuated ultrasound pulse is divided into two parts: a stationary basic pulse and a nonstationary attenuation pulse. A standard ARMA model is used for the basic pulse, and a nonstandard ARMA model is derived for the attenuation pulse. The maximum likelihood estimator of the attenuated...

  4. Correlation between maximum dry density and cohesion of ...

    African Journals Online (AJOL)

    HOD

    investigation on sandy soils to determine the correlation between relative density and compaction test parameters. Using twenty soil samples, they were able to develop correlations between relative density, coefficient of uniformity, and maximum dry density. Khafaji [5], using the standard Proctor compaction method, carried out an ...

  5. Limiting values of large deviation probabilities of quadratic statistics

    NARCIS (Netherlands)

    Jeurnink, Gerardus A.M.; Kallenberg, W.C.M.

    1990-01-01

    Application of exact Bahadur efficiencies in testing theory or exact inaccuracy rates in estimation theory needs evaluation of large deviation probabilities. Because of the complexity of the expressions, frequently a local limit of the nonlocal measure is considered. Local limits of large deviation

  6. Refraction in Terms of the Deviation of the Light.

    Science.gov (United States)

    Goldberg, Fred M.

    1985-01-01

    Discusses refraction in terms of the deviation of light. Points out that in physics courses where very little mathematics is used, it might be more suitable to describe refraction entirely in terms of the deviation, rather than by introducing Snell's law. (DH)

  7. ECG-Based Detection of Early Myocardial Ischemia in a Computational Model: Impact of Additional Electrodes, Optimal Placement, and a New Feature for ST Deviation.

    Science.gov (United States)

    Loewe, Axel; Schulze, Walther H W; Jiang, Yuan; Wilhelms, Mathias; Luik, Armin; Dössel, Olaf; Seemann, Gunnar

    2015-01-01

    In case of chest pain, immediate diagnosis of myocardial ischemia is required to respond with an appropriate treatment. The diagnostic capability of the electrocardiogram (ECG), however, is strongly limited for ischemic events that do not lead to ST elevation. This computational study investigates the potential of different electrode setups in detecting early ischemia at 10 minutes after onset: standard 3-channel and 12-lead ECG as well as body surface potential maps (BSPMs). Further, it was assessed if an additional ECG electrode with optimized position or the right-sided Wilson leads can improve sensitivity of the standard 12-lead ECG. To this end, a simulation study was performed for 765 different locations and sizes of ischemia in the left ventricle. Improvements by adding a single, subject specifically optimized electrode were similar to those of the BSPM: 2-11% increased detection rate depending on the desired specificity. Adding right-sided Wilson leads had negligible effect. Absence of ST deviation could not be related to specific locations of the ischemic region or its transmurality. As alternative to the ST time integral as a feature of ST deviation, the K point deviation was introduced: the baseline deviation at the minimum of the ST-segment envelope signal, which increased 12-lead detection rate by 7% for a reasonable threshold.

  8. Symphysis-fundal height curve in the diagnosis of fetal growth deviations

    Directory of Open Access Journals (Sweden)

    Djacyr Magna Cabral Freire

    2010-12-01

    Full Text Available OBJECTIVE: To validate a new symphysis-fundal curve for screening fetal growth deviations and to compare its performance with the standard curve adopted by the Brazilian Ministry of Health. METHODS: Observational study including a total of 753 low-risk pregnant women with gestational age above 27 weeks between March to October 2006 in the city of João Pessoa, Northeastern Brazil. Symphisys-fundal was measured using a standard technique recommended by the Brazilian Ministry of Health. Estimated fetal weight assessed through ultrasound using the Brazilian fetal weight chart for gestational age was the gold standard. A subsample of 122 women with neonatal weight measurements was taken up to seven days after estimated fetal weight measurements and symphisys-fundal classification was compared with Lubchenco growth reference curve as gold standard. Sensitivity, specificity, positive and negative predictive values were calculated. The McNemar χ2 test was used for comparing sensitivity of both symphisys-fundal curves studied. RESULTS: The sensitivity of the new curve for detecting small for gestational age fetuses was 51.6% while that of the Brazilian Ministry of Health reference curve was significantly lower (12.5%. In the subsample using neonatal weight as gold standard, the sensitivity of the new reference curve was 85.7% while that of the Brazilian Ministry of Health was 42.9% for detecting small for gestational age. CONCLUSIONS: The diagnostic performance of the new curve for detecting small for gestational age fetuses was significantly higher than that of the Brazilian Ministry of Health reference curve.

  9. Direct maximum parsimony phylogeny reconstruction from genotype data.

    Science.gov (United States)

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2007-12-05

    Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data more commonly is available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Hence phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower-bound on the number of mutations that the genetic region has undergone.

  10. Direct maximum parsimony phylogeny reconstruction from genotype data

    Directory of Open Access Journals (Sweden)

    Ravi R

    2007-12-01

    Full Text Available Abstract Background Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data more commonly is available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Hence phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. Results In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Conclusion Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower-bound on the number of mutations that the genetic region has undergone.

  11. Mean-deviation analysis in the theory of choice.

    Science.gov (United States)

    Grechuk, Bogdan; Molyboha, Anton; Zabarankin, Michael

    2012-08-01

    Mean-deviation analysis, along with the existing theories of coherent risk measures and dual utility, is examined in the context of the theory of choice under uncertainty, which studies rational preference relations for random outcomes based on different sets of axioms such as transitivity, monotonicity, continuity, etc. An axiomatic foundation of the theory of coherent risk measures is obtained as a relaxation of the axioms of the dual utility theory, and a further relaxation of the axioms are shown to lead to the mean-deviation analysis. Paradoxes arising from the sets of axioms corresponding to these theories and their possible resolutions are discussed, and application of the mean-deviation analysis to optimal risk sharing and portfolio selection in the context of rational choice is considered. © 2012 Society for Risk Analysis.

  12. Illusory shadow person causing paradoxical gaze deviations during temporal lobe seizures

    NARCIS (Netherlands)

    Zijlmans, M.; van Eijsden, P.; Ferrier, C. H.; Kho, K. H.; van Rijen, P. C.; Leijten, F. S. S.

    Generally, activation of the frontal eye field during seizures can cause versive (forced) gaze deviation, while non-versive head deviation is hypothesised to result from ictal neglect after inactivation of the ipsilateral temporoparietal area. Almost all non-versive head deviations occurring during

  13. Deviations from Newton's law in supersymmetric large extra dimensions

    International Nuclear Information System (INIS)

    Callin, P.; Burgess, C.P.

    2006-01-01

    Deviations from Newton's inverse-squared law at the micron length scale are smoking-gun signals for models containing supersymmetric large extra dimensions (SLEDs), which have been proposed as approaches for resolving the cosmological constant problem. Just like their non-supersymmetric counterparts, SLED models predict gravity to deviate from the inverse-square law because of the advent of new dimensions at sub-millimeter scales. However SLED models differ from their non-supersymmetric counterparts in three important ways: (i) the size of the extra dimensions is fixed by the observed value of the dark energy density, making it impossible to shorten the range over which new deviations from Newton's law must be seen; (ii) supersymmetry predicts there to be more fields in the extra dimensions than just gravity, implying different types of couplings to matter and the possibility of repulsive as well as attractive interactions; and (iii) the same mechanism which is purported to keep the cosmological constant naturally small also keeps the extra-dimensional moduli effectively massless, leading to deviations from general relativity in the far infrared of the scalar-tensor form. We here explore the deviations from Newton's law which are predicted over micron distances, and show the ways in which they differ and resemble those in the non-supersymmetric case

  14. Performance of Phonatory Deviation Diagrams in Synthesized Voice Analysis.

    Science.gov (United States)

    Lopes, Leonardo Wanderley; da Silva, Karoline Evangelista; da Silva Evangelista, Deyverson; Almeida, Anna Alice; Silva, Priscila Oliveira Costa; Lucero, Jorge; Behlau, Mara

    2018-05-02

    To analyze the performance of a phonatory deviation diagram (PDD) in discriminating the presence and severity of voice deviation and the predominant voice quality of synthesized voices. A speech-language pathologist performed the auditory-perceptual analysis of the synthesized voice (n = 871). The PDD distribution of voice signals was analyzed according to area, quadrant, shape, and density. Differences in signal distribution regarding the PDD area and quadrant were detected when differentiating the signals with and without voice deviation and with different predominant voice quality. Differences in signal distribution were found in all PDD parameters as a function of the severity of voice disorder. The PDD area and quadrant can differentiate normal voices from deviant synthesized voices. There are differences in signal distribution in PDD area and quadrant as a function of the severity of voice disorder and the predominant voice quality. However, the PDD area and quadrant do not differentiate the signals as a function of severity of voice disorder and differentiated only the breathy and rough voices from the normal and strained voices. PDD density is able to differentiate only signals with moderate and severe deviation. PDD shape shows differences between signals with different severities of voice deviation. © 2018 S. Karger AG, Basel.

  15. Dataset on the mean, standard deviation, broad-sense heritability and stability of wheat quality bred in three different ways and grown under organic and low-input conventional systems.

    Science.gov (United States)

    Rakszegi, Marianna; Löschenberger, Franziska; Hiltbrunner, Jürg; Vida, Gyula; Mikó, Péter

    2016-06-01

    An assessment was previously made of the effects of organic and low-input field management systems on the physical, grain compositional and processing quality of wheat and on the performance of varieties developed using different breeding methods ("Comparison of quality parameters of wheat varieties with different breeding origin under organic and low-input conventional conditions" [1]). Here, accompanying data are provided on the performance and stability analysis of the genotypes using the coefficient of variation and the 'ranking' and 'which-won-where' plots of GGE biplot analysis for the most important quality traits. Broad-sense heritability was also evaluated and is given for the most important physical and quality properties of the seed in organic and low-input management systems, while mean values and standard deviation of the studied properties are presented separately for organic and low-input fields.

  16. Comprehensive performance analyses and optimization of the irreversible thermodynamic cycle engines (TCE) under maximum power (MP) and maximum power density (MPD) conditions

    International Nuclear Information System (INIS)

    Gonca, Guven; Sahin, Bahri; Ust, Yasin; Parlak, Adnan

    2015-01-01

    This paper presents comprehensive performance analyses and comparisons for air-standard irreversible thermodynamic cycle engines (TCE) based on the power output, power density, thermal efficiency, maximum dimensionless power output (MP), maximum dimensionless power density (MPD) and maximum thermal efficiency (MEF) criteria. Internal irreversibility of the cycles occurred during the irreversible-adiabatic processes is considered by using isentropic efficiencies of compression and expansion processes. The performances of the cycles are obtained by using engine design parameters such as isentropic temperature ratio of the compression process, pressure ratio, stroke ratio, cut-off ratio, Miller cycle ratio, exhaust temperature ratio, cycle temperature ratio and cycle pressure ratio. The effects of engine design parameters on the maximum and optimal performances are investigated. - Highlights: • Performance analyses are conducted for irreversible thermodynamic cycle engines. • Comprehensive computations are performed. • Maximum and optimum performances of the engines are shown. • The effects of design parameters on performance and power density are examined. • The results obtained may be guidelines to the engine designers

  17. Maximum Likelihood Compton Polarimetry with the Compton Spectrometer and Imager

    Energy Technology Data Exchange (ETDEWEB)

    Lowell, A. W.; Boggs, S. E; Chiu, C. L.; Kierans, C. A.; Sleator, C.; Tomsick, J. A.; Zoglauer, A. C. [Space Sciences Laboratory, University of California, Berkeley (United States); Chang, H.-K.; Tseng, C.-H.; Yang, C.-Y. [Institute of Astronomy, National Tsing Hua University, Taiwan (China); Jean, P.; Ballmoos, P. von [IRAP Toulouse (France); Lin, C.-H. [Institute of Physics, Academia Sinica, Taiwan (China); Amman, M. [Lawrence Berkeley National Laboratory (United States)

    2017-10-20

    Astrophysical polarization measurements in the soft gamma-ray band are becoming more feasible as detectors with high position and energy resolution are deployed. Previous work has shown that the minimum detectable polarization (MDP) of an ideal Compton polarimeter can be improved by ∼21% when an unbinned, maximum likelihood method (MLM) is used instead of the standard approach of fitting a sinusoid to a histogram of azimuthal scattering angles. Here we outline a procedure for implementing this maximum likelihood approach for real, nonideal polarimeters. As an example, we use the recent observation of GRB 160530A with the Compton Spectrometer and Imager. We find that the MDP for this observation is reduced by 20% when the MLM is used instead of the standard method.

  18. Generation of deviation parameters for amino acid singlets, doublets ...

    Indian Academy of Sciences (India)

    We present a new method, secondary structure prediction by deviation parameter (SSPDP) for predicting the secondary structure of proteins from amino acid sequence. Deviation parameters (DP) for amino acid singlets, doublets and triplets were computed with respect to secondary structural elements of proteins based on ...

  19. Maximum Range of a Projectile Thrown from Constant-Speed Circular Motion

    Science.gov (United States)

    Poljak, Nikola

    2016-01-01

    The problem of determining the angle ? at which a point mass launched from ground level with a given speed v[subscript 0] will reach a maximum distance is a standard exercise in mechanics. There are many possible ways of solving this problem, leading to the well-known answer of ? = p/4, producing a maximum range of D[subscript max] = v[superscript…

  20. Importance sampling large deviations in nonequilibrium steady states. I

    Science.gov (United States)

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T.

    2018-03-01

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.

  1. Importance sampling large deviations in nonequilibrium steady states. I.

    Science.gov (United States)

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T

    2018-03-28

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.

  2. Ethylene Production Maximum Achievable Control Technology (MACT) Compliance Manual

    Science.gov (United States)

    This July 2006 document is intended to help owners and operators of ethylene processes understand and comply with EPA's maximum achievable control technology standards promulgated on July 12, 2002, as amended on April 13, 2005 and April 20, 2006.

  3. The interactive role of job stress and organizational perceived support on psychological capital and job deviation behavior of hospital's nurses and staffs

    Directory of Open Access Journals (Sweden)

    Abolfazl Ghasemzadeh

    2017-06-01

    Full Text Available The phenomenon of job stress is an inevitable part of professional life and in the activities and efficiency is reflected in the organization. This study aimed to identify and predict the relationship between psychological capital and job deviation behavior through job stress regarding the moderating role of perceived organizational support. This study is correlation by using descriptive methods for applied goals. Standard questionnaire was used to collect data. 180 participants was estimated and stratified random sampling. The results showed the significance of the relationship between the variables except the relationship between deviant behaviors with psychological capital. Also, the interactive role of job stress and perceived organizational support on psychological capital and job deviation behavior was confirmed. This means that for the hospital's nurses and staffs with job stress, increasing perceived organizational support associated with enhancing psychological capital and decreasing job deviation behavior. These results emphasize necessity of recognizing interactive role of job stress and perceived organizational support in psychological capital and job deviation behavior

  4. Erratic tacrolimus exposure, assessed using the standard deviation of trough blood levels, predicts chronic lung allograft dysfunction and survival.

    Science.gov (United States)

    Gallagher, Harry M; Sarwar, Ghulam; Tse, Tracy; Sladden, Timothy M; Hii, Esmond; Yerkovich, Stephanie T; Hopkins, Peter M; Chambers, Daniel C

    2015-11-01

    Erratic tacrolimus blood levels are associated with liver and kidney graft failure. We hypothesized that erratic tacrolimus exposure would similarly compromise lung transplant outcomes. This study assessed the effect of tacrolimus mean and standard deviation (SD) levels on the risk of chronic lung allograft dysfunction (CLAD) and death after lung transplantation. We retrospectively reviewed 110 lung transplant recipients who received tacrolimus-based immunosuppression. Cox proportional hazard modeling was used to investigate the effect of tacrolimus mean and SD levels on survival and CLAD. At census, 48 patients (44%) had developed CLAD and 37 (34%) had died. Tacrolimus SD was highest for the first 6 post-transplant months (median, 4.01; interquartile range [IQR], 3.04-4.98 months) before stabilizing at 2.84 μg/liter (IQR, 2.16-4.13 μg/liter) between 6 and 12 months. The SD then remained the same (median, 2.85; IQR, 2.00-3.77 μg/liter) between 12 and 24 months. A high mean tacrolimus level 6 to 12 months post-transplant independently reduced the risk of CLAD (hazard ratio [HR], 0.74; 95% confidence interval [CI], 0.63-0.86; p < 0.001) but not death (HR, 0.96; 95% CI, 0.83-1.12; p = 0.65). In contrast, a high tacrolimus SD between 6 and 12 months independently increased the risk of CLAD (HR, 1.46; 95% CI, 1.23-1.73; p < 0.001) and death (HR, 1.27; 95% CI, 1.08-1.51; p = 0.005). Erratic tacrolimus levels are a risk factor for poor lung transplant outcomes. Identifying and modifying factors that contribute to this variability may significantly improve outcomes. Copyright © 2015 International Society for Heart and Lung Transplantation. Published by Elsevier Inc. All rights reserved.

  5. Standard deviation of carotid young's modulus and presence or absence of plaque improves prediction of coronary heart disease risk.

    Science.gov (United States)

    Niu, Lili; Zhang, Yanling; Qian, Ming; Xiao, Yang; Meng, Long; Zheng, Rongqin; Zheng, Hairong

    2017-11-01

    The stiffness of large arteries and the presence or absence of plaque are associated with coronary heart disease (CHD). Because arterial walls are biologically heterogeneous, the standard deviation of Young's modulus (YM-std) of the large arteries may better predict coronary atherosclerosis. However, the role of YM-std in the occurrence of coronary events has not been addressed so far. Therefore, this study investigated whether the carotid YM-std and the presence or absence of plaque improved CHD risk prediction. One hundred and three patients with CHD (age 66 ± 11 years) and 107 patients at high risk of atherosclerosis (age 61 ± 7 years) were recruited. Carotid YM was measured by the vessel texture matching method, and YM-std was calculated. Carotid intima-media thickness was measured by the MyLab 90 ultrasound Platform employed dedicated software RF-tracking technology. In logistic regression analysis, YM-std (OR = 1·010; 95% CI = 1·003-1·016), carotid plaque (OR = 16·759; 95% CI = 3·719-75·533) and YM-std plus plaque (OR = 0·989; 95% CI = 0·981-0·997) were independent predictors of CHD. The traditional risk factors (TRF) plus YM-std plus plaque model showed a significant improvement in area under the receiver-operating characteristic curve (AUC), which increased from 0·717 (TRF only) to 0·777 (95% CI for the difference in adjusted AUC: 0·010-0·110). Carotid YM-std is a powerful independent predictor of CHD. Adding plaque and YM-std to TRF improves CHD risk prediction. © 2016 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.

  6. Reassessment of carotid intima-media thickness by standard deviation score in children and adolescents after Kawasaki disease.

    Science.gov (United States)

    Noto, Nobutaka; Kato, Masataka; Abe, Yuriko; Kamiyama, Hiroshi; Karasawa, Kensuke; Ayusawa, Mamoru; Takahashi, Shori

    2015-01-01

    Previous studies that used carotid ultrasound have been largely conflicting in regards to whether or not patients after Kawasaki disease (KD) have a greater carotid intima-media thickness (CIMT) than controls. To test the hypothesis that there are significant differences between the values of CIMT expressed as absolute values and standard deviation scores (SDS) in children and adolescents after KD and controls, we reviewed 12 published articles regarding CIMT on KD patients and controls. The mean ± SD of absolute CIMT (mm) in the KD patients and controls obtained from each article was transformed to SDS (CIMT-SDS) using age-specific reference values established by Jourdan et al. (J: n = 247) and our own data (N: n = 175), and the results among these 12 articles were compared between the two groups and the references for comparison of racial disparities. There were no significant differences in mean absolute CIMT and mean CIMT-SDS for J between KD patients and controls (0.46 ± 0.06 mm vs. 0.44 ± 0.04 mm, p = 0.133, and 1.80 ± 0.84 vs. 1.25 ± 0.12, p = 0.159, respectively). However, there were significant differences in mean CIMT-SDS for N between KD patients and controls (0.60 ± 0.71 vs. 0.01 ± 0.65, p = 0.042). When we assessed the nine articles on Asian subjects, the difference of CIMT-SDS between the two groups was invariably significant only for N (p = 0.015). Compared with the reference values, CIMT-SDS of controls was within the normal range at a rate of 41.6 % for J and 91.6 % for N. These results indicate that age- and race-specific reference values for CIMT are mandatory for performing accurate assessment of the vascular status in healthy children and adolescents, particularly in those after KD considered at increased long-term cardiovascular risk.

  7. Minimum Wage and Maximum Hours Standards Under the Fair Labor Standards Act. Economic Effects Studies.

    Science.gov (United States)

    Wage and Labor Standards Administration (DOL), Washington, DC.

    This report describes the 1966 amendments to the Fair Labor Standards Act and summarizes the findings of three 1969 studies of the economic effects of these amendments. The studies found that economic growth continued through the third phase of the amendments, beginning February 1, 1969, despite increased wage and hours restrictions for recently…

  8. The deviation matrix of a continuous-time Markov chain

    NARCIS (Netherlands)

    Coolen-Schrijner, P.; van Doorn, E.A.

    2001-01-01

    The deviation matrix of an ergodic, continuous-time Markov chain with transition probability matrix $P(.)$ and ergodic matrix $\\Pi$ is the matrix $D \\equiv \\int_0^{\\infty} (P(t)-\\Pi)dt$. We give conditions for $D$ to exist and discuss properties and a representation of $D$. The deviation matrix of a

  9. The deviation matrix of a continuous-time Markov chain

    NARCIS (Netherlands)

    Coolen-Schrijner, Pauline; van Doorn, Erik A.

    2002-01-01

    he deviation matrix of an ergodic, continuous-time Markov chain with transition probability matrix $P(.)$ and ergodic matrix $\\Pi$ is the matrix $D \\equiv \\int_0^{\\infty} (P(t)-\\Pi)dt$. We give conditions for $D$ to exist and discuss properties and a representation of $D$. The deviation matrix of a

  10. Constraints on deviations from ΛCDM within Horndeski gravity

    Energy Technology Data Exchange (ETDEWEB)

    Bellini, Emilio; Cuesta, Antonio J. [ICCUB, University of Barcelona (IEEC-UB), Martí i Franquès 1, E08028 Barcelona (Spain); Jimenez, Raul; Verde, Licia, E-mail: emilio.bellini@icc.ub.edu, E-mail: ajcuesta@icc.ub.edu, E-mail: rauljimenez@g.harvard.edu, E-mail: liciaverde@icc.ub.edu [Institució Catalana de Recerca i Estudis Avançats (ICREA), 08010 Barcelona (Spain)

    2016-02-01

    Recent anomalies found in cosmological datasets such as the low multipoles of the Cosmic Microwave Background or the low redshift amplitude and growth of clustering measured by e.g., abundance of galaxy clusters and redshift space distortions in galaxy surveys, have motivated explorations of models beyond standard ΛCDM. Of particular interest are models where general relativity (GR) is modified on large cosmological scales. Here we consider deviations from ΛCDM+GR within the context of Horndeski gravity, which is the most general theory of gravity with second derivatives in the equations of motion. We adopt a parametrization in which the four additional Horndeski functions of time α{sub i}(t) are proportional to the cosmological density of dark energy Ω{sub DE}(t). Constraints on this extended parameter space using a suite of state-of-the art cosmological observations are presented for the first time. Although the theory is able to accommodate the low multipoles of the Cosmic Microwave Background and the low amplitude of fluctuations from redshift space distortions, we find no significant tension with ΛCDM+GR when performing a global fit to recent cosmological data and thus there is no evidence against ΛCDM+GR from an analysis of the value of the Bayesian evidence ratio of the modified gravity models with respect to ΛCDM, despite introducing extra parameters. The posterior distribution of these extra parameters that we derive return strong constraints on any possible deviations from ΛCDM+GR in the context of Horndeski gravity. We illustrate how our results can be applied to a more general frameworks of modified gravity models.

  11. 45 CFR 63.19 - Budget revisions and minor deviations.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Budget revisions and minor deviations. 63.19 Section 63.19 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION GRANT PROGRAMS... Budget revisions and minor deviations. Pursuant to § 74.102(d) of this title, paragraphs (b)(3) and (b)(4...

  12. The role of septal surgery in management of the deviated nose.

    Science.gov (United States)

    Foda, Hossam M T

    2005-02-01

    The deviated nose represents a complex cosmetic and functional problem. Septal surgery plays a central role in the successful management of the externally deviated nose. This study included 260 patients seeking rhinoplasty to correct external nasal deviations; 75 percent of them had various degrees of nasal obstruction. Septal surgery was necessary in 232 patients (89 percent), not only to improve breathing but also to achieve a straight, symmetrical external nose. A graduated surgical approach was adopted to allow correction of the dorsal and caudal deviations of the nasal septum without weakening its structural support to the dorsum or nasal tip. The approach depended on full mobilization of the deviated cartilage, followed by straightening of the cartilage and its fixation in the corrected position using bony splinting grafts through an external rhinoplasty approach.

  13. An Analysis of the Linguistic Deviation in Chapter X of Oliver Twist

    Institute of Scientific and Technical Information of China (English)

    刘聪

    2013-01-01

    Charles Dickens is one of the greatest critical realist writers of the Victorian Age. In language, he is often compared with William Shakespeare for his adeptness with the vernacular and large vocabulary. Charles Dickens achieved a recognizable place among English writers through the use of the stylistic features in his fictional language. Oliver Twist is the best representative of Charles Dickens' style, which makes it the most appropriate choice for the present stylistic study on Charles Dickens. No one who has ever read the dehumanizing workhouse scenes of Oliver Twist and the dark, criminal underworld life can forget them. This thesis attempts to investigate Oliver Twist through the approach of modern stylistics, particularly the theory of linguistic deviation. This thesis consists of an introduction, the main body and a conclusion. The introduction offers a brief summary of the comments on Charles Dickens and Chapter X of Oliver Twist, introduces the newly rising linguistic deviation theories, and presents the theories on which this thesis settles. The main body explores the deviation effects produced from four aspects: lexical deviation, grammatical deviation, graphological deviation, and semantic deviation. It endeavors to show Dickens' manipulation of language and the effects achieved through this manipulation. The conclusion mainly sums up the previous analysis, and reveals the theme of the novel, the positive effect of linguistic deviation and the significance of deviation application.

  14. Radial gradient and radial deviation radiomic features from pre-surgical CT scans are associated with survival among lung adenocarcinoma patients.

    Science.gov (United States)

    Tunali, Ilke; Stringfield, Olya; Guvenis, Albert; Wang, Hua; Liu, Ying; Balagurunathan, Yoganand; Lambin, Philippe; Gillies, Robert J; Schabath, Matthew B

    2017-11-10

    The goal of this study was to extract features from radial deviation and radial gradient maps which were derived from thoracic CT scans of patients diagnosed with lung adenocarcinoma and assess whether these features are associated with overall survival. We used two independent cohorts from different institutions for training (n = 61) and test (n = 47) and focused our analyses on features that were non-redundant and highly reproducible. To reduce the number of features and covariates into a single parsimonious model, a backward elimination approach was applied. Out of 48 features that were extracted, 31 were eliminated because they were not reproducible or were redundant. We considered 17 features for statistical analysis and identified a final model containing the two most highly informative features that were associated with lung cancer survival. One of the two features, radial deviation outside-border separation standard deviation, was replicated in a test cohort exhibiting a statistically significant association with lung cancer survival (multivariable hazard ratio = 0.40; 95% confidence interval 0.17-0.97). Additionally, we explored the biological underpinnings of these features and found radial gradient and radial deviation image features were significantly associated with semantic radiological features.

  15. Quality assurance: using the exposure index and the deviation index to monitor radiation exposure for portable chest radiographs in neonates

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, Mervyn D. [Indiana University School of Medicine, Department of Radiology, Riley Children' s Hospital, Indianapolis, IN (United States); Riley Hospital for Children, Department of Radiology, Indianapolis, IN (United States); Cooper, Matt L.; Piersall, Kelly [Indiana University School of Medicine, Department of Radiology, Riley Children' s Hospital, Indianapolis, IN (United States); Apgar, Bruce K. [Agfa HealthCare Corporation, Greenville, SC (United States)

    2011-05-15

    Many methods are used to track patient exposure during acquisition of plain film radiographs. A uniform international standard would aid this process. To evaluate and describe a new, simple quality-assurance method for monitoring patient exposure. This method uses the ''exposure index'' and the ''deviation index,'' recently developed by the International Electrotechnical Commission (IEC) and American Association of Physicists in Medicine (AAPM). The deviation index measures variation from an ideal target exposure index value. Our objective was to determine whether the exposure index and the deviation index can be used to monitor and control exposure drift over time. Our Agfa workstation automatically keeps a record of the exposure index for every patient. The exposure index and deviation index were calculated on 1,884 consecutive neonatal chest images. Exposure of a neonatal chest phantom was performed as a control. Acquisition of the exposure index and calculation of the deviation index was easily achieved. The weekly mean exposure index of the phantom and the patients was stable and showed <10% change during the study, indicating no exposure drift during the study period. The exposure index is an excellent tool to monitor the consistency of patient exposures. It does not indicate the exposure value used, but is an index to track compliance with a pre-determined target exposure. (orig.)
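The exposure-tracking bookkeeping described above can be sketched in a few lines. This assumes the IEC 62494-1 definition DI = 10 · log10(EI / EI_T); the EI readings, target value, and ±1 DI tolerance below are hypothetical, not the study's data:

```python
import math

def deviation_index(exposure_index: float, target_exposure_index: float) -> float:
    """Deviation index per IEC 62494-1: DI = 10 * log10(EI / EI_T).

    DI = 0 means the exposure hit the target; +3 is roughly double
    the target exposure, -3 roughly half.
    """
    return 10.0 * math.log10(exposure_index / target_exposure_index)

# A weekly QA check might flag any image whose DI drifts outside +/- 1:
readings = [400, 410, 380, 505, 395]   # hypothetical EI values
target = 400
flags = [ei for ei in readings if abs(deviation_index(ei, target)) > 1.0]
print(flags)  # only the 505 reading exceeds the +1 DI tolerance
```

Because DI is logarithmic in the EI ratio, drift over time shows up as a trend in DI regardless of the absolute target chosen for each examination type.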

  16. Quality assurance: using the exposure index and the deviation index to monitor radiation exposure for portable chest radiographs in neonates

    International Nuclear Information System (INIS)

    Cohen, Mervyn D.; Cooper, Matt L.; Piersall, Kelly; Apgar, Bruce K.

    2011-01-01

    Many methods are used to track patient exposure during acquisition of plain film radiographs. A uniform international standard would aid this process. To evaluate and describe a new, simple quality-assurance method for monitoring patient exposure. This method uses the ''exposure index'' and the ''deviation index,'' recently developed by the International Electrotechnical Commission (IEC) and American Association of Physicists in Medicine (AAPM). The deviation index measures variation from an ideal target exposure index value. Our objective was to determine whether the exposure index and the deviation index can be used to monitor and control exposure drift over time. Our Agfa workstation automatically keeps a record of the exposure index for every patient. The exposure index and deviation index were calculated on 1,884 consecutive neonatal chest images. Exposure of a neonatal chest phantom was performed as a control. Acquisition of the exposure index and calculation of the deviation index was easily achieved. The weekly mean exposure index of the phantom and the patients was stable and showed <10% change during the study, indicating no exposure drift during the study period. The exposure index is an excellent tool to monitor the consistency of patient exposures. It does not indicate the exposure value used, but is an index to track compliance with a pre-determined target exposure. (orig.)

  17. The retest distribution of the visual field summary index mean deviation is close to normal.

    Science.gov (United States)

    Anderson, Andrew J; Cheng, Allan C Y; Lau, Samantha; Le-Pham, Anne; Liu, Victor; Rahman, Farahnaz

    2016-09-01

    When modelling optimum strategies for how best to determine visual field progression in glaucoma, it is commonly assumed that the summary index mean deviation (MD) is normally distributed on repeated testing. Here we tested whether this assumption is correct. We obtained 42 reliable 24-2 Humphrey Field Analyzer SITA standard visual fields from one eye of each of five healthy young observers, with the first two fields excluded from analysis. Previous work has shown that although MD variability is higher in glaucoma, the shape of the MD distribution is similar to that found in normal visual fields. A Shapiro-Wilk test was used to detect any deviation from normality. Kurtosis values for the distributions were also calculated. Data from each observer passed the Shapiro-Wilk normality test. Bootstrapped 95% confidence intervals for kurtosis encompassed the value for a normal distribution in four of five observers. When examined with quantile-quantile plots, distributions were close to normal and showed no consistent deviations across observers. The retest distribution of MD is not significantly different from normal in healthy observers, and so is likely also normally distributed - or nearly so - in those with glaucoma. Our results increase our confidence in the results of influential modelling studies where a normal distribution for MD was assumed. © 2016 The Authors Ophthalmic & Physiological Optics © 2016 The College of Optometrists.
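The normality check described above is straightforward to reproduce. A minimal sketch assuming SciPy is available; the simulated retest series (its mean, spread, and length) is invented for illustration and is not the study's data:

```python
import numpy as np
from scipy.stats import shapiro, kurtosis

rng = np.random.default_rng(0)
# Hypothetical retest series: 40 MD values (dB) for one healthy observer
md_values = rng.normal(loc=-0.5, scale=0.6, size=40)

stat, p_value = shapiro(md_values)   # H0: the data are normally distributed
excess_kurt = kurtosis(md_values)    # 0 for a perfectly normal distribution

print(f"Shapiro-Wilk W={stat:.3f}, p={p_value:.3f}, kurtosis={excess_kurt:.2f}")
# A large p-value means normality is not rejected for this observer
```

The same pattern (one test per observer, plus a kurtosis estimate) mirrors the per-observer analysis reported in the abstract.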

  18. Minimizing Hexapod Robot Foot Deviations Using Multilayer Perceptron

    Directory of Open Access Journals (Sweden)

    Vytautas Valaitis

    2015-12-01

    Full Text Available Rough-terrain traversability is one of the most valuable characteristics of walking robots. Even despite their slower speeds and more complex control algorithms, walking robots have far wider usability than wheeled or tracked robots. However, efficient movement over irregular surfaces can only be achieved by eliminating all possible difficulties, which in many cases are caused by a high number of degrees of freedom, feet slippage, frictions and inertias between different robot parts or even badly developed inverse kinematics (IK. In this paper we address the hexapod robot-foot deviation problem. We compare the foot-positioning accuracy of unconfigured inverse kinematics and Multilayer Perceptron-based (MLP methods via theory, computer modelling and experiments on a physical robot. Using MLP-based methods, we were able to significantly decrease deviations while reaching desired positions with the hexapod's foot. Furthermore, this method is able to compensate for deviations of the robot arising from any possible reason.

  19. Small-Volume Injections: Evaluation of Volume Administration Deviation From Intended Injection Volumes.

    Science.gov (United States)

    Muffly, Matthew K; Chen, Michael I; Claure, Rebecca E; Drover, David R; Efron, Bradley; Fitch, William L; Hammer, Gregory B

    2017-10-01

    regression model. Analysis of variance was used to determine whether the absolute log proportional error differed by the intended injection volume. Interindividual and intraindividual deviation from the intended injection volume was also characterized. As the intended injection volumes decreased, the absolute log proportional injection volume error increased (analysis of variance, P standard deviations of the log proportional errors for injection volumes between physicians and pediatric PACU nurses; however, the difference in absolute bias was significantly higher for nurses with a 2-sided significance of P = .03. Clinically significant dose variation occurs when injecting volumes ≤0.5 mL. Administering small volumes of medications may result in unintended medication administration errors.
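The error measure analysed above can be written down directly. A sketch assuming the plain definition |log(administered / intended)|; the syringe volumes below are hypothetical:

```python
import math

def abs_log_proportional_error(administered_ml: float, intended_ml: float) -> float:
    """|log(administered / intended)|: 0 means a perfect dose, and the
    measure is symmetric for over- and under-dosing (2x and 0.5x score
    the same)."""
    return abs(math.log(administered_ml / intended_ml))

# Hypothetical draws: the same +0.02 mL syringe error matters far more
# at an intended volume of 0.05 mL than at 1.0 mL.
small = abs_log_proportional_error(0.07, 0.05)
large = abs_log_proportional_error(1.02, 1.00)
print(round(small, 3), round(large, 3))  # the small-volume error dominates
```

This is why the abstract's finding (error grows as intended volume shrinks) follows naturally from fixed absolute deviations in the delivered volume.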

  20. The Impact of Advanced Technologies on Treatment Deviations in Radiation Treatment Delivery

    International Nuclear Information System (INIS)

    Marks, Lawrence B.; Light, Kim L.; Hubbs, Jessica L.; Georgas, Debra L.; Jones, Ellen L.; Wright, Melanie C.; Willett, Christopher G.; Yin Fangfang

    2007-01-01

    Purpose: To assess the impact of new technologies on deviation rates in radiation therapy (RT). Methods and Materials: Treatment delivery deviations in RT were prospectively monitored during a time of technology upgrade. In January 2003, our department had three accelerators, none with 'modern' technologies (e.g., without multileaf collimators [MLC]). In 2003 to 2004, we upgraded to five new accelerators, four with MLC, and associated advanced capabilities. The deviation rates among patients treated on 'high-technology' versus 'low-technology' machines (defined as those with vs. without MLC) were compared over time using the two-tailed Fisher's exact test. Results: In 2003, there was no significant difference between the deviation rate in the 'high-technology' versus 'low-technology' groups (0.16% vs. 0.11%, p = 0.45). In 2005 to 2006, the deviation rate for the 'high-technology' groups was lower than the 'low-technology' (0.083% vs. 0.21%, p = 0.009). This difference was caused by a decline in deviations on the 'high-technology' machines over time (p = 0.053), as well as an unexpected trend toward an increase in deviations over time on the 'low-technology' machines (p = 0.15). Conclusions: Advances in RT delivery systems appear to reduce the rate of treatment deviations. Deviation rates on 'high-technology' machines with MLC decline over time, suggesting a learning curve after the introduction of new technologies. Associated with the adoption of 'high-technology' was an unexpected increase in the deviation rate with 'low-technology' approaches, which may reflect an over-reliance on tools inherent to 'high-technology' machines. With the introduction of new technologies, continued diligence is needed to ensure that staff remain proficient with 'low-technology' approaches

  1. Large-Deviation Results for Discriminant Statistics of Gaussian Locally Stationary Processes

    Directory of Open Access Journals (Sweden)

    Junichi Hirukawa

    2012-01-01

    Full Text Available This paper discusses the large-deviation principle of discriminant statistics for Gaussian locally stationary processes. First, large-deviation theorems for quadratic forms and the log-likelihood ratio for a Gaussian locally stationary process with a mean function are proved. Their asymptotics are described by the large deviation rate functions. Second, we consider situations where the processes are misspecified as stationary. In these misspecified cases, we formally construct the log-likelihood ratio discriminant statistics and derive large deviation theorems for them. Since the resulting rate functions are complicated, they are evaluated and illustrated by numerical examples. We find that misspecifying the process as stationary seriously affects discrimination.

  2. Influence of Dynamic Neuromuscular Stabilization Approach on Maximum Kayak Paddling Force

    Directory of Open Access Journals (Sweden)

    Davidek Pavel

    2018-03-01

    Full Text Available The purpose of this study was to examine the effect of Dynamic Neuromuscular Stabilization (DNS exercise on maximum paddling force (PF and self-reported pain perception in the shoulder girdle area in flatwater kayakers. Twenty male flatwater kayakers from a local club (age = 21.9 ± 2.4 years, body height = 185.1 ± 7.9 cm, body mass = 83.9 ± 9.1 kg were randomly assigned to the intervention or control groups. During the 6-week study, subjects from both groups performed standard off-season training. Additionally, the intervention group engaged in a DNS-based core stabilization exercise program (quadruped exercise, side sitting exercise, sitting exercise and squat exercise after each standard training session. Using a kayak ergometer, the maximum PF stroke was measured four times during the six weeks. All subjects completed the Disabilities of the Arm, Shoulder and Hand (DASH questionnaire before and after the 6-week interval to evaluate subjective pain perception in the shoulder girdle area. Initially, no significant differences in maximum PF and the DASH questionnaire were identified between the two groups. Repeated measures analysis of variance indicated that the experimental group improved significantly compared to the control group on maximum PF (p = .004; Cohen’s d = .85, but not on the DASH questionnaire score (p = .731 during the study. Integration of DNS with traditional flatwater kayak training may significantly increase maximum PF, but may not affect pain perception to the same extent.

  3. Deviation from Covered Interest Rate Parity in Korea

    Directory of Open Access Journals (Sweden)

    Seungho Lee

    2003-06-01

    Full Text Available This paper tested the factors which cause deviation from covered interest rate parity (CIRP) in Korea, using regression and VAR models. The empirical evidence indicates that the difference between the swap rate and the interest rate differential exists and is greatly affected by variables which represent the currency liquidity situation of foreign exchange banks. In other words, deviation from CIRP can easily occur due to the lack of foreign exchange liquidity of banks in a thin market, despite few capital constraints, small transaction costs, and trivial default risk in Korea.
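The CIRP deviation discussed above has a simple arithmetic form. A sketch under textbook assumptions (simple annualized rates, one-period horizon); all quotes below are hypothetical, not the paper's data:

```python
def cirp_deviation(spot, forward, i_domestic, i_foreign):
    """Deviation from covered interest rate parity (simple rates, decimals).

    Under CIRP the covered return on the foreign currency equals the
    domestic rate: (forward/spot) * (1 + i_foreign) - 1 == i_domestic.
    A nonzero result is the basis the paper attributes largely to the
    FX liquidity situation of banks.
    """
    covered_foreign_return = (forward / spot) * (1.0 + i_foreign) - 1.0
    return covered_foreign_return - i_domestic

# Hypothetical KRW/USD quotes: spot 1200, 1-year forward 1230,
# domestic (KRW) rate 4%, foreign (USD) rate 1.5%
dev = cirp_deviation(1200.0, 1230.0, 0.04, 0.015)
print(f"{dev:+.6f}")  # positive: covered foreign return exceeds the domestic rate
```

When the forward is quoted exactly at its parity value, spot × (1 + i_domestic) / (1 + i_foreign), the deviation is zero by construction.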

  4. Scan-To Output Validation: Towards a Standardized Geometric Quality Assessment of Building Information Models Based on Point Clouds

    Science.gov (United States)

    Bonduel, M.; Bassier, M.; Vergauwen, M.; Pauwels, P.; Klein, R.

    2017-11-01

    The use of Building Information Modeling (BIM) for existing buildings based on point clouds is increasing. Standardized geometric quality assessment of the BIMs is needed to make them more reliable and thus reusable for future users. First, available literature on the subject is studied. Next, an initial proposal for a standardized geometric quality assessment is presented. Finally, this method is tested and evaluated with a case study. The number of specifications on BIM relating to existing buildings is limited. The Levels of Accuracy (LOA) specification of the USIBD provides definitions and suggestions regarding geometric model accuracy, but lacks a standardized assessment method. A deviation analysis is found to be dependent on (1) the used mathematical model, (2) the density of the point clouds and (3) the order of comparison. Results of the analysis can be graphical and numerical. An analysis on macro (building) and micro (BIM object) scale is necessary. On macro scale, the complete model is compared to the original point cloud and vice versa to get an overview of the general model quality. The graphical results show occluded zones and non-modeled objects respectively. Colored point clouds are derived from this analysis and integrated in the BIM. On micro scale, the relevant surface parts are extracted per BIM object and compared to the complete point cloud. Occluded zones are extracted based on a maximum deviation. What remains is classified according to the LOA specification. The numerical results are integrated in the BIM with the use of object parameters.

  5. Outcome after polytrauma in a certified trauma network: comparing standard vs. maximum care facilities concept of the study and study protocol (POLYQUALY).

    Science.gov (United States)

    Koller, Michael; Ernstberger, Antonio; Zeman, Florian; Loss, Julika; Nerlich, Michael

    2016-07-11

    The aim of this study is to evaluate the performance of the first certified regional trauma network in Germany, the Trauma Network Eastern Bavaria (TNO), addressing the following specific research questions: Do standard and maximum care facilities produce comparable (risk-adjusted) levels of patient outcome? Does TNO outperform reference data provided by the German Trauma Register 2008? Does TNO comply with selected benchmarks derived from the S3 practice guideline? Which barriers and facilitators can be identified in the health care delivery processes for polytrauma patients? The design is based on a prospective multicenter cohort study comparing two cohorts of polytrauma patients: those treated in maximum care facilities and those treated in standard care facilities. Patient recruitment will take place in the 25 TNO clinics. It is estimated that n = 1,100 patients will be assessed for eligibility within a two-year period and n = 800 will be included into the study and analysed. Main outcome measures include the TraumaRegisterQM form, which has been implemented in the clinical routine since 2009 and is filled in via a web-based data management system in participating hospitals on a mandatory basis. Furthermore, patient-reported outcome is assessed using the EQ-5D at 6, 12 and 24 months after trauma. Comparisons will be drawn between the two cohorts. Further standards of comparison are secondary data derived from the German Trauma Registry as well as benchmarks from the German S3 guideline on polytrauma. The qualitative part of the study will be based on semi-standardized interviews and focus group discussions with health care providers within TNO. The goal of the qualitative analysis is to elucidate which facilitating and inhibiting forces influence cooperation and performance within the network. This is the first study to evaluate a certified trauma network within the German health care system using a unique combination of a quantitative (prospective cohort study) and a qualitative approach.

  6. A method for age-matched OCT angiography deviation mapping in the assessment of disease- related changes to the radial peripapillary capillaries.

    Science.gov (United States)

    Pinhas, Alexander; Linderman, Rachel; Mo, Shelley; Krawitz, Brian D; Geyman, Lawrence S; Carroll, Joseph; Rosen, Richard B; Chui, Toco Y

    2018-01-01

    To present a method for age-matched deviation mapping in the assessment of disease-related changes to the radial peripapillary capillaries (RPCs). We reviewed 4.5 × 4.5 mm en face peripapillary OCT-A scans of 133 healthy control eyes (133 subjects, mean 41.5 yrs, range 11-82 yrs) and 4 eyes with distinct retinal pathologies, obtained using spectral-domain optical coherence tomography angiography. Statistical analysis was performed to evaluate the impact of age on RPC perfusion densities. RPC density group mean and standard deviation maps were generated for each decade of life. Deviation maps were created for the diseased eyes based on these maps. Large peripapillary vessel (LPV; noncapillary vessel) perfusion density was also studied for impact of age. Average healthy RPC density was 42.5±1.47%. ANOVA and pairwise Tukey-Kramer tests showed that RPC density in the ≥60 yr group was significantly lower than RPC density in all younger decades of life. Deviation mapping enabled us to quantitatively and visually elucidate the significance of RPC density changes in disease. It is important to consider changes that occur with aging when analyzing RPC and LPV density changes in disease. RPC density, coupled with age-matched deviation mapping techniques, represents a potentially clinically useful method in detecting changes to peripapillary perfusion in disease.
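The age-matched deviation mapping described above amounts to a per-pixel z-score against normative mean and SD maps. A minimal sketch with invented 2x2 maps and an assumed -2 SD flagging threshold (both are illustrative choices, not the paper's parameters):

```python
import numpy as np

def deviation_map(patient, norm_mean, norm_sd):
    """Per-pixel z-score of a patient perfusion-density map against an
    age-matched normative mean/SD map (shapes and names illustrative)."""
    return (patient - norm_mean) / norm_sd

# Toy 2x2 'RPC density' maps (percent perfusion)
norm_mean = np.array([[42.0, 43.0], [41.0, 44.0]])
norm_sd   = np.array([[1.5, 1.5], [1.5, 1.5]])
patient   = np.array([[42.5, 38.0], [41.0, 39.5]])

z = deviation_map(patient, norm_mean, norm_sd)
dropout = z < -2.0      # pixels more than 2 SD below the age-matched mean
print(dropout.sum())    # number of flagged capillary-dropout pixels
```

Building one normative (mean, SD) pair per decade of life, as the abstract describes, simply means selecting the pair matching the patient's age before computing the map.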

  7. Aqua AIRS Level 3 Monthly Standard Physical Retrieval (AIRS-only) V006

    Data.gov (United States)

    National Aeronautics and Space Administration — The AIRS Only Level 3 Monthly Gridded Retrieval Product contains standard retrieval means, standard deviations and input counts. Each file covers a calendar month....

  8. AIRS/Aqua Level 3 Monthly standard physical retrieval (AIRS-only) V005

    Data.gov (United States)

    National Aeronautics and Space Administration — The AIRS Only Level 3 Monthly Gridded Retrieval Product contains standard retrieval means, standard deviations and input counts. Each file covers a calendar month....

  9. AIRS/Aqua Level 3 Monthly standard physical retrieval (AIRS+AMSU) V005

    Data.gov (United States)

    National Aeronautics and Space Administration — The AIRS Level 3 Monthly Gridded Retrieval Product contains standard retrieval means, standard deviations and input counts. Each file covers a calendar month. The...

  10. Aqua AIRS Level 3 Monthly Standard Physical Retrieval (AIRS+AMSU) V006

    Data.gov (United States)

    National Aeronautics and Space Administration — The AIRS Level 3 Monthly Gridded Retrieval Product contains standard retrieval means, standard deviations and input counts. Each file covers a calendar month. The...

  11. Quantum Gravity and Maximum Attainable Velocities in the Standard Model

    International Nuclear Information System (INIS)

    Alfaro, Jorge

    2007-01-01

    A main difficulty in the quantization of the gravitational field is the lack of experiments that discriminate among the theories proposed to quantize gravity. Recently we showed that the Standard Model (SM) itself contains tiny Lorentz invariance violation (LIV) terms coming from QG. All terms depend on one arbitrary parameter α that sets the scale of QG effects. In this talk we review the LIV for mesons, nucleons and leptons and apply it to study several effects, including the GZK anomaly

  12. Study of the Standard Model Higgs boson decaying to taus at CMS

    CERN Document Server

    Botta, Valeria

    2017-01-01

    The most recent search for the Standard Model Higgs boson decaying to a pair of $\\tau$ leptons is performed using proton-proton collision events at a centre-of-mass energy of 13~TeV, recorded by the CMS experiment at the LHC. The full 2016 dataset, corresponding to an integrated luminosity of 35.9~fb$^{-1}$, has been analysed. The Higgs boson signal in the $\\tau^{+}\\tau^{-}$ decay mode is observed with a significance of 4.9 standard deviations, to be compared to an expected significance of 4.7 standard deviations. This measurement is the first observation of the Higgs boson decay into fermions by a single experiment.

  13. Accurate Maximum Power Tracking in Photovoltaic Systems Affected by Partial Shading

    Directory of Open Access Journals (Sweden)

    Pierluigi Guerriero

    2015-01-01

    Full Text Available A maximum power tracking algorithm exploiting operating point information gained on individual solar panels is presented. The proposed algorithm recognizes the presence of multiple local maxima in the power-voltage curve of a shaded solar field and evaluates the coordinates of the absolute maximum. The effectiveness of the proposed approach is demonstrated by means of circuit-level simulation and experimental results. Experiments showed that, in comparison with a standard perturb and observe algorithm, we achieve faster convergence in normal operating conditions (when the solar field is uniformly illuminated) and accurately locate the absolute maximum power point in partial shading conditions, thus avoiding convergence on local maxima.
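The multi-maximum behaviour described above can be illustrated with a toy power-voltage curve. This is a sketch, not the authors' algorithm: the two-hump curve, start point, and step size are invented to show why a hill-climbing perturb-and-observe tracker stalls on a local maximum while a full-range scan finds the absolute one:

```python
import numpy as np

def pv_power(v):
    """Toy power-voltage curve of a partially shaded string: two humps,
    with the global maximum on the second (higher-voltage) hump."""
    return 30 * np.exp(-((v - 12) / 4) ** 2) + 45 * np.exp(-((v - 28) / 5) ** 2)

voltages = np.linspace(0, 40, 401)

# Hill-climbing (perturb & observe) started near the first hump:
v = 10.0
step = 0.1
while pv_power(v + step) > pv_power(v):
    v += step
local_mpp = v                                     # stalls on the first hump

# Global scan over the full operating range:
global_mpp = voltages[np.argmax(pv_power(voltages))]

print(round(local_mpp, 1), round(global_mpp, 1))
```

Per-panel operating-point information, as the abstract describes, serves the same purpose as the full sweep here: it reveals which hump carries the absolute maximum without scanning blindly.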

  14. Measurement and analysis of the thoracic patient setup deviations in routine radiotherapy

    International Nuclear Information System (INIS)

    Jia Mingxuan; Zou Huawei; Wu Rong; Sun Jian; Dong Xiaoqi

    2003-01-01

    Objective: To determine the magnitude of the setup deviations of thoracic patients in routine radiotherapy. Methods: Altogether 408 films for 21 thoracic patients were recorded using the electronic portal imaging device (EPID), and compared with the reference CT-simulator digitally-reconstructed radiograph (DRR) for anterior-posterior fields. The setup deviations of the 21 patients in the left-right (RL) and superior-inferior (SI) directions and the rotation about the anterior-posterior (AP) axis were measured and analyzed. Results: Without an immobilization device, the mean translational setup deviations were (0.7±3.1) mm and (1.5±4.1) mm in the RL and SI directions, respectively, and the mean rotational deviation was (0.3±2.4) degrees about the AP axis. With an immobilization device, the corresponding values were (0.5±2.4) mm and (0.8±2.7) mm in the RL and SI directions, respectively, and (0.2±1.6) degrees about the AP axis. Conclusion: The setup deviations in thoracic patient irradiation may be reduced with the use of an immobilization device. The setup deviation in the SI direction is greater than that in the RL direction. The setup deviations are mainly random errors
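The conclusion's distinction between systematic and random setup errors is commonly computed with the convention popularized by van Herk: the population systematic error Σ is the SD of per-patient mean deviations, and the random error σ is the RMS of per-patient SDs. A sketch with invented per-fraction deviations (not the study's measurements):

```python
import numpy as np

# Hypothetical SI-direction setup deviations (mm) for three patients,
# several imaged fractions each
deviations = {
    "pt1": [1.2, 0.8, 1.5, 1.0],
    "pt2": [-0.5, -0.2, -0.9, -0.4],
    "pt3": [0.3, 0.7, -0.1, 0.2],
}

patient_means = np.array([np.mean(v) for v in deviations.values()])
patient_sds   = np.array([np.std(v, ddof=1) for v in deviations.values()])

group_mean = patient_means.mean()                # overall systematic offset M
sigma_systematic = patient_means.std(ddof=1)     # Sigma: spread of patient means
sigma_random = np.sqrt((patient_sds ** 2).mean())  # sigma: RMS of per-patient SDs

print(f"M={group_mean:.2f} mm, Sigma={sigma_systematic:.2f} mm, "
      f"sigma={sigma_random:.2f} mm")
```

A study finding "mainly random errors" corresponds to σ dominating Σ in this decomposition.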

  15. Can Gait Deviation Index be used effectively for the evaluation of gait pathology in total hip arthroplasty An explorative randomized trial

    DEFF Research Database (Denmark)

    Jensen, Carsten; Rosenlund, Signe; Nielsen, Dennis Brandborg

    2014-01-01

    Gait Deviation Index (GDI), used to evaluate treatment in children with cerebral palsy, has been proposed as such a measure. The experience with GDI in osteoarthritis (OA) patients following total hip arthroplasty (THA) is unknown. The aim of our study was to use the GDI to evaluate post-operative gait quality. The mean and standard deviation (mean = 94.7; SD = 8.4) from our age-matched controls (n = 20) were used as reference. A fixed-effects multilevel regression model was employed to evaluate the treatment effects. Results: No interaction was observed between treatment and time (p = 0.33) or limb and time (p = 0.53). The pre-operative GDI mean value was 83.4 ± 10.9, showing patients had a moderate deviation from normative gait before surgical treatment. After surgical treatment, the GDI score improved significantly by 4.9 [95% CI: 2.1 to 7.9], equal to a 0.8 average increase in GDI per month of follow-up. There was...

  16. Sea Surface Height Deviation, Aviso, 0.25 degrees, Global, Science Quality

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Aviso Sea Surface Height Deviation is the deviation from the mean geoid as measured from 1993 - 1995. This is Science Quality data.

  17. 48 CFR 201.404 - Class deviations.

    Science.gov (United States)

    2010-10-01

    ..., and the Defense Logistics Agency, may approve any class deviation, other than those described in 201...) Diminish any preference given small business concerns by the FAR or DFARS; or (D) Extend to requirements imposed by statute or by regulations of other agencies such as the Small Business Administration and the...

  18. Large deviations in the presence of cooperativity and slow dynamics

    Science.gov (United States)

    Whitelam, Stephen

    2018-06-01

    We study simple models of intermittency, involving switching between two states, within the dynamical large-deviation formalism. Singularities appear in the formalism when switching is cooperative or when its basic time scale diverges. In the first case the unbiased trajectory distribution undergoes a symmetry breaking, leading to a change in shape of the large-deviation rate function for a particular dynamical observable. In the second case the symmetry of the unbiased trajectory distribution remains unbroken. Comparison of these models suggests that singularities of the dynamical large-deviation formalism can signal the dynamical equivalent of an equilibrium phase transition but do not necessarily do so.
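For a concrete two-state instance of the formalism above, the scaled cumulant generating function of a jump-counting observable is the largest eigenvalue of a tilted rate matrix. A sketch with illustrative rates (this is one standard construction, not the paper's specific models):

```python
import numpy as np

def scgf(s, k_plus=1.0, k_minus=2.0):
    """Scaled cumulant generating function theta(s) for the number of
    switches in a two-state Markov process: the largest real eigenvalue
    of the s-tilted generator, whose off-diagonal (jump) rates are
    weighted by exp(-s). Rates are illustrative."""
    W = np.array([[-k_plus,             k_minus * np.exp(-s)],
                  [k_plus * np.exp(-s), -k_minus            ]])
    return np.max(np.linalg.eigvals(W).real)

# theta(0) = 0 by probability conservation; theta(s) < 0 for s > 0
# (suppressed switching) and theta(s) > 0 for s < 0 (enhanced switching).
# A kink developing in theta(s) would signal the singularities discussed above.
print(round(scgf(0.0), 10), round(scgf(1.0), 4))
```

The Legendre transform of θ(s) gives the large-deviation rate function for the switching activity; cooperative switching can make this eigenvalue problem non-analytic.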

  19. CT Assessment of the axial deviation of the femoral and tibial prosthetic components in total knee arthroplasty

    International Nuclear Information System (INIS)

    Rimondi, E.; Molinari, M.; Moio, A.; Busacca, M.; Trentani, F.; Trentani, P.; Tigani, D.; Nigrosoli, M.

    2000-01-01

    CT assessment of the axial deviation of the femoral and tibial prosthetic components in total knee arthroplasty. From January to July 1999, 17 patients, 10 males and 7 females, mean age 66 years (standard deviation ± 4), were examined after total knee arthroplasty. Exclusion criteria were prosthesis loosening and severe (equal to or greater than 7°) varus or valgus deviation. All patients were examined with knee radiography in the standing position, completed by axial projection of the patella and by CT scanning. A modification of the Berger technique was used, with comparative CT scans of the extended lower limbs and acquisitions perpendicular to the mechanical axis of the knee, from the femoral supracondylar region down to the plane crossing the distal end of the tibial prosthetic component. Reference lines were then drawn electronically on given scanning planes to reckon the axial deviation of the femoral and tibial prosthetic components. Six patients, one female and 5 males, with normal rotational values of the femoral and tibial prosthetic components presented no clinical symptoms. Eight patients, 4 females and 4 males, with abnormal values presented the following clinical symptoms: medial impingement, (incomplete) dislocation of the patella, and lateral instability. One female patient with a normal rotational value of the femoral prosthetic component and an altered value of the tibial prosthetic component presented medial impingement. Finally, two patients, one female and one male, were absolutely asymptomatic although the rotational values of the two prosthetic components were beyond the normal range. Total knee arthroplasty is presently a standard treatment for many conditions involving this joint. There are several possible postoperative complications, namely fractures, dislocations, (a)septic loosening and femoropatellar instability. The latter condition is the most frequent complication among implant failures and is caused by bad orientation of the femoral and tibial

  20. Comparison of standard maximum likelihood classification and polytomous logistic regression used in remote sensing

    Science.gov (United States)

    John Hogland; Nedret Billor; Nathaniel Anderson

    2013-01-01

    Discriminant analysis, referred to as maximum likelihood classification within popular remote sensing software packages, is a common supervised technique used by analysts. Polytomous logistic regression (PLR), also referred to as multinomial logistic regression, is an alternative classification approach that is less restrictive, more flexible, and easy to interpret. To...

  1. The phonatory deviation diagram: a novel objective measurement of vocal function.

    Science.gov (United States)

    Madazio, Glaucya; Leão, Sylvia; Behlau, Mara

    2011-01-01

    To identify the discriminative characteristics of the phonatory deviation diagram (PDD) in rough, breathy and tense voices. One hundred and ninety-six samples of normal and dysphonic voices from adults were submitted to perceptual auditory evaluation, focusing on the predominant vocal quality and the degree of deviation. Acoustic analysis was performed with VoxMetria (CTS Informatica). Significant differences were observed between the dysphonic and normal groups (p < 0.001), and also between the breathy and rough samples (p = 0.044) and the breathy and tense samples (p < 0.001). All normal voices were positioned in the inferior left quadrant, 45% of the rough voices in the inferior right quadrant, 52.6% of the breathy voices in the superior right quadrant and 54.3% of the tense voices in the inferior left quadrant of the PDD. The inferior left quadrant contained 93.8% of voices with no deviation and 72.7% of voices with mild deviation; voices with moderate deviation were distributed across the inferior and superior right quadrants, the latter containing the most deviant voices and 80% of voices with severe deviation. The PDD was able to discriminate normal from dysphonic voices, and the distribution was related to the type and degree of voice alteration. Copyright © 2011 S. Karger AG, Basel.
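
    The quadrant classification used in the PDD can be sketched as a simple lookup. The axis variables and the cutoff values below are assumptions for illustration only; the record does not give the diagram's actual normality limits.

```python
# Hypothetical quadrant lookup for a PDD-style plot. The PDD's horizontal
# axis combines jitter/shimmer-type irregularity and the vertical axis a
# glottal-noise measure; the cutoffs x_cut and y_cut below are ASSUMED
# placeholder values, not the diagram's published limits.

def pdd_quadrant(irregularity, noise, x_cut=4.75, y_cut=2.0):
    horiz = "left" if irregularity <= x_cut else "right"
    vert = "inferior" if noise <= y_cut else "superior"
    return f"{vert} {horiz}"

# A voice with low irregularity and low noise falls in the inferior left
# quadrant, where the record locates all normal voices.
print(pdd_quadrant(1.0, 1.0))
```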

  2. AIRS/Aqua Level 3 Daily standard physical retrieval (AIRS-only) V005

    Data.gov (United States)

    National Aeronautics and Space Administration — The AIRS Only Level 3 Daily Gridded Product contains standard retrieval means, standard deviations and input counts. Each file covers a temporal period of 24 hours...

  3. AIRS/Aqua Level 3 Daily standard physical retrieval (AIRS+AMSU) V005

    Data.gov (United States)

    National Aeronautics and Space Administration — The AIRS Level 3 Daily Gridded Product contains standard retrieval means, standard deviations and input counts. Each file covers a temporal period of 24 hours for...

  4. Aqua AIRS Level 3 Daily Standard Physical Retrieval (AIRS+AMSU) V006

    Data.gov (United States)

    National Aeronautics and Space Administration — The AIRS Level 3 Daily Gridded Product contains standard retrieval means, standard deviations and input counts. Each file covers a temporal period of 24 hours for...

  5. Aqua AIRS Level 3 Daily Standard Physical Retrieval (AIRS-only) V006

    Data.gov (United States)

    National Aeronautics and Space Administration — The AIRS Only Level 3 Daily Gridded Product contains standard retrieval means, standard deviations and input counts. Each file covers a temporal period of 24 hours...

  6. Large deviation function for a driven underdamped particle in a periodic potential

    Science.gov (United States)

    Fischer, Lukas P.; Pietzonka, Patrick; Seifert, Udo

    2018-02-01

    Employing large deviation theory, we explore current fluctuations of underdamped Brownian motion for the paradigmatic example of a single particle in a one-dimensional periodic potential. Two different approaches to the large deviation function of the particle current are presented. First, we derive an explicit expression for the large deviation functional of the empirical phase space density, which replaces the level 2.5 functional used for overdamped dynamics. Using this approach, we obtain several bounds on the large deviation function of the particle current. We compare these to bounds for overdamped dynamics that have recently been derived, motivated by the thermodynamic uncertainty relation. Second, we provide a method to calculate the large deviation function via the cumulant generating function. We use this method to assess the tightness of the bounds in a numerical case study for a cosine potential.
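
    The cumulant-generating-function route mentioned above can be illustrated on a toy current: for i.i.d. steps that are +1 with probability 2/3 and -1 with probability 1/3 (a stand-in for the driven particle's current, not the paper's model), the scaled CGF is known in closed form and the large deviation (rate) function follows from a numerical Legendre transform.

```python
import math

# Scaled cumulant generating function for i.i.d. steps: +1 w.p. 2/3,
# -1 w.p. 1/3. lambda(k) = ln E[exp(k * step)].
def scgf(k):
    return math.log((2 * math.exp(k) + math.exp(-k)) / 3)

# Rate function via a numerical Legendre transform:
# I(j) = sup_k [k*j - lambda(k)], here approximated on a grid over [-5, 5].
def rate(j, k_grid=None):
    ks = k_grid or [i / 1000 - 5 for i in range(10001)]
    return max(k * j - scgf(k) for k in ks)

# The rate function vanishes at the typical current j = E[step] = 1/3
# and is strictly positive for atypical currents such as j = 0.9.
typical = rate(1 / 3)
atypical = rate(0.9)
```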

  7. Technology transfer through a network of standard methods and recommended practices - The case of petrochemicals

    Science.gov (United States)

    Batzias, Dimitris F.; Karvounis, Sotirios

    2012-12-01

    Technology transfer may take place in parallel with cooperative action between companies participating in the same organizational scheme or using one another as subcontractors (outsourcing). In this case, cooperation should be realized by means of Standard Methods and Recommended Practices (SRPs) to achieve (i) quality of intermediate/final products according to specifications and (ii) industrial process control as required to guarantee such quality with minimum deviation (corresponding to maximum reliability) from preset mean values of representative quality parameters. This work deals with the design of the network of SRPs needed in each case for successful cooperation, implying also the corresponding technology transfer, effectuated through a methodological framework developed in the form of an algorithmic procedure with 20 activity stages and 8 decision nodes. The functionality of this methodology is demonstrated by presenting the path leading from (and relating) a standard test method for toluene, as petrochemical feedstock in toluene diisocyanate production, to the (6 generations distance upstream) performance evaluation of industrial process control systems (i.e., from ASTM D5606 to BS EN 61003-1:2004 in the SRPs network).

  8. Adaptation requirements due to anatomical changes in free-breathing and deep-inspiration breath-hold for standard and dose-escalated radiotherapy of lung cancer patients

    DEFF Research Database (Denmark)

    Sibolt, Patrik; Ottosson, Wiviann; Sjöström, David

    2015-01-01

    to investigate the need for adaptation due to anatomical changes, for both standard (ST) and DE plans in free-breathing (FB) and DIBH. Material and methods. The effect of tumor shrinkage (TS), pleural effusion (PE) and atelectasis was investigated for patients and for a CIRS thorax phantom. Sixteen patients were...... volume. Results. Phantom simulations resulted in maximum deviations in mean dose to the GTV-T ( GTV-T ) of -1% for 3 cm PE and centrally located tumor, and + 3% for TS from 5 cm to 1 cm diameter for an anterior tumor location. For the majority of the patients, simulated PE resulted in a decreasing...

  9. A maximum power point tracking algorithm for buoy-rope-drum wave energy converters

    Science.gov (United States)

    Wang, J. Q.; Zhang, X. C.; Zhou, Y.; Cui, Z. C.; Zhu, L. S.

    2016-08-01

    The maximum power point tracking control is the key link for improving the energy conversion efficiency of wave energy converters (WEC). This paper presents a novel variable-step-size Perturb and Observe maximum power point tracking algorithm with a power classification standard for control of a buoy-rope-drum WEC. The algorithm and a simulation model of the buoy-rope-drum WEC are presented in detail, together with simulation experiment results. The results show that the algorithm tracks the maximum power point of the WEC quickly and accurately.
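
    The perturb-and-observe loop with a power-based step classification described above can be sketched as follows. The step sizes, the power threshold, and the toy power curve are assumed values for illustration, not the paper's parameters.

```python
# Sketch of a variable-step Perturb-and-Observe MPPT loop. The idea of the
# "power classification standard": use a coarse step while measured power is
# low (far from the maximum power point) and a fine step near it. All numeric
# values here are ASSUMED for demonstration.

def perturb_and_observe(read_power, step_small=0.02, step_large=0.2,
                        power_threshold=99.0, n_iters=200):
    """Track the maximum power point by perturbing a control variable
    (e.g., generator damping) and observing the resulting power change."""
    u = 1.0            # control variable, arbitrary starting point
    direction = 1      # current perturbation direction
    p_prev = read_power(u)
    for _ in range(n_iters):
        # Classify power to pick the step size: coarse far from the MPP,
        # fine near it.
        step = step_large if p_prev < power_threshold else step_small
        u += direction * step
        p = read_power(u)
        if p < p_prev:         # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return u

# Toy concave power curve peaking at u = 3.0; the loop settles near it.
u_mpp = perturb_and_observe(lambda u: 100.0 - (u - 3.0) ** 2)
```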

  10. Finite-time and finite-size scalings in the evaluation of large-deviation functions: Numerical approach in continuous time.

    Science.gov (United States)

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    2017-06-01

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated to the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which-as shown on the contact process-provides a significant improvement of the large deviation function estimators compared to the standard one.

  11. Finite-time and finite-size scalings in the evaluation of large-deviation functions: Numerical approach in continuous time

    Science.gov (United States)

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    2017-06-01

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated to the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which—as shown on the contact process—provides a significant improvement of the large deviation function estimators compared to the standard one.

  12. Full-field transmission-type angle-deviation optical microscope with reflectivity-height transformation.

    Science.gov (United States)

    Chiu, Ming-Hung; Tan, Chen-Tai; Tsai, Ming-Hung; Yang, Ya-Hsin

    2015-10-01

    This full-field transmission-type three-dimensional (3D) optical microscope is based on the angle deviation method (ADM) and the algorithm of reflectivity-height transformation (RHT). The surface height is proportional to the deviation angle of light passing through the object. The angle deviation and surface height can be measured from the reflectivity close to the critical angle using a parallelogram prism and two CCDs.

  13. A course on large deviations with an introduction to Gibbs measures

    CERN Document Server

    Rassoul-Agha, Firas

    2015-01-01

    This is an introductory course on the methods of computing asymptotics of probabilities of rare events: the theory of large deviations. The book combines large deviation theory with basic statistical mechanics, namely Gibbs measures with their variational characterization and the phase transition of the Ising model, in a text intended for a one semester or quarter course. The book begins with a straightforward approach to the key ideas and results of large deviation theory in the context of independent identically distributed random variables. This includes Cramér's theorem, relative entropy, Sanov's theorem, process level large deviations, convex duality, and change of measure arguments. Dependence is introduced through the interactions potentials of equilibrium statistical mechanics. The phase transition of the Ising model is proved in two different ways: first in the classical way with the Peierls argument, Dobrushin's uniqueness condition, and correlation inequalities and then a second time through the ...

  14. Role of mother’s perceptions on their child development on early detection of developmental deviation

    Directory of Open Access Journals (Sweden)

    Pudji Andayani

    2006-10-01

    Full Text Available This report aimed to assess mothers' perceptions of normal and deviant development in their children. The study was done in under-five children and their mothers who visited the Nutrition, Growth & Development Clinic of the Child Health Department, Sanglah Hospital, Denpasar, from May 1st to June 30th, 1999. A total of 76 children between 2 and 59 months of age and their mothers were enrolled. Data were collected by interviewing mothers on the following items: perception of their child's development, age of child, sex, mother's education, mother's job, number of siblings, and mother's ability to make referral decisions. The Denver II screening test was administered to each child to identify developmental status as a gold standard. Sixteen (21%) children were identified as having developmental deviation by mothers' perception and 21 (28%) by the authors using the Denver II screening test. The sensitivity of mothers' perception was 67% and the specificity was 97%. There were no significant differences in perception of developmental status according to child's age, mother's education, mother's job, or number of siblings. Most mothers perceived development as normal if body weight increased and the child had no disability. The most common source of information about development was relatives. Thirteen of the 21 children who had developmental deviation were referred by their mothers. We conclude that mothers' perceptions can be used for early detection of developmental problems. Mothers' concerns about their children's growth and development focused mainly on body weight gain, physical development, and gross motor skills.

  15. Maximum Entropy Fundamentals

    Directory of Open Access Journals (Sweden)

    F. Topsøe

    2001-09-01

    Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also serve here as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
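
    The Mean Energy Model mentioned above can be made concrete: over a finite set of energy levels, the maximum-entropy distribution under a mean-energy constraint is the Gibbs distribution, with the inverse temperature found numerically. The three-level example below is illustrative.

```python
import math

# Maximum entropy under a moment (mean-energy) constraint: the optimizer is
# the Gibbs distribution p_i proportional to exp(-beta * E_i). Here beta is
# found by bisection on the constraint <E> = target.

def gibbs(energies, beta):
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)
    return [x / z for x in w]

def solve_beta(energies, target_mean, lo=-50.0, hi=50.0, tol=1e-10):
    def mean_energy(beta):
        p = gibbs(energies, beta)
        return sum(pi * e for pi, e in zip(p, energies))
    # Mean energy decreases monotonically in beta, so bisection applies.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean_energy(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Toy example: three levels with energies 0, 1, 2 and target mean 0.5.
energies = [0.0, 1.0, 2.0]
beta = solve_beta(energies, target_mean=0.5)
p = gibbs(energies, beta)
```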

  16. Aqua AIRS Level 3 8-day Standard Physical Retrieval (AIRS+AMSU) V006

    Data.gov (United States)

    National Aeronautics and Space Administration — The AIRS Level 3 8-Day Gridded Retrieval Product contains standard retrieval means, standard deviations and input counts. Each file covers an 8-day period, or...

  17. AIRS/Aqua Level 3 8-day standard physical retrieval (AIRS+AMSU) V005

    Data.gov (United States)

    National Aeronautics and Space Administration — The AIRS Level 3 8-Day Gridded Retrieval Product contains standard retrieval means, standard deviations and input counts. Each file covers an 8-day period, or...

  18. Normabweichungen im Zeitungsdeutsch Ostbelgiens (Deviations from the Standard in the Newspaper German of East Belgium)

    Science.gov (United States)

    Nelde, Peter H.

    1974-01-01

    Concludes that the German used in east Belgian newspapers differs from standard High German. Proceeds to list these differences in the areas of lexicology, semantics and stylistics, morphology and syntax, orthography, etc. (Text is in German.) (DS)

  19. Last Glacial Maximum Salinity Reconstruction

    Science.gov (United States)

    Homola, K.; Spivack, A. J.

    2016-12-01

    It has previously been demonstrated that salinity can be reconstructed from sediment porewater. The goal of our study is to reconstruct high-precision salinity during the Last Glacial Maximum (LGM). Salinity is usually determined at high precision via conductivity, which requires a larger volume of water than can be extracted from a sediment core, or via chloride titration, which yields lower than ideal precision. It has been demonstrated for water column samples that high-precision density measurements can be used to determine salinity at the precision of a conductivity measurement using the equation of state of seawater. However, water column seawater has a relatively constant composition, in contrast to porewater, where variations from standard seawater composition occur. These deviations, which affect the equation of state, must be corrected for through precise measurements of each ion's concentration and knowledge of apparent partial molar density in seawater. We have developed a density-based method for determining porewater salinity that requires only 5 mL of sample, achieving density precisions of 10-6 g/mL. We have applied this method to porewater samples extracted from long cores collected along a N-S transect across the western North Atlantic (R/V Knorr cruise KN223). Density was determined to a precision of 2.3x10-6 g/mL, which translates to a salinity uncertainty of 0.002 g/kg if the effect of differences in composition is well constrained. Concentrations of anions (Cl-, and SO4-2) and cations (Na+, Mg+2, Ca+2, and K+) were measured. To correct salinities at the precision required to unravel LGM Meridional Overturning Circulation, our ion precisions must be better than 0.1% for SO4-2/Cl- and Mg+2/Na+, and 0.4% for Ca+2/Na+ and K+/Na+. Alkalinity, pH and Dissolved Inorganic Carbon of the porewater were determined to precisions better than 4% when ratioed to Cl-, and used to calculate HCO3- and CO3-2. Apparent partial molar densities in seawater were
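
    The density-to-salinity conversion above rests on the slope of the equation of state. A first-order propagation of the quoted density precision into a salinity uncertainty can be sketched as follows; the haline-contraction slope used here is an assumed round value, not taken from the record.

```python
# First-order uncertainty propagation: delta_S ~ delta_rho / (d rho / d S).
# The slope below (~7.8e-4 g/mL per g/kg near standard seawater conditions)
# is an ASSUMED round value for illustration.

DRHO_DS = 7.8e-4             # d(density)/d(salinity), g/mL per (g/kg), assumed
density_precision = 2.3e-6   # g/mL, the precision reported for KN223 samples

# Resulting salinity uncertainty in g/kg, of the same order as the
# 0.002 g/kg figure quoted in the record.
salinity_uncertainty = density_precision / DRHO_DS
```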

  20. A Generalized Polynomial Chaos-Based Approach to Analyze the Impacts of Process Deviations on MEMS Beams.

    Science.gov (United States)

    Gao, Lili; Zhou, Zai-Fa; Huang, Qing-An

    2017-11-08

    A microstructure beam is one of the fundamental elements in MEMS devices like cantilever sensors, RF/optical switches, varactors, resonators, etc. It is still difficult to precisely predict the performance of MEMS beams with the currently available simulators due to the inevitable process deviations. Feasible numerical methods are required and can be used to improve the yield and profits of MEMS devices. In this work, process deviations are treated as stochastic variables, and a newly-developed numerical method, i.e., generalized polynomial chaos (GPC), is applied to the simulation of the MEMS beam. A doubly-clamped polybeam has been utilized to verify the accuracy of GPC against our Monte Carlo (MC) approaches. Performance predictions have been made on the residual stress by obtaining its distributions in GaAs Monolithic Microwave Integrated Circuit (MMIC)-based MEMS beams. The results show that errors are within 1% for the GPC approximations compared with the MC simulations. Appropriate choices of 4th-order GPC expansions with orthogonal terms have also succeeded in reducing the MC simulation labor. The mean value of the residual stress, obtained from experimental tests, differs by about 1.1% from that of the 4th-order GPC method. The probability that the 4th-order GPC approximation attains the mean test value of the residual stress is around 54.3%. The corresponding yield exceeds 90 percent within two standard deviations of the mean.

  1. A Generalized Polynomial Chaos-Based Approach to Analyze the Impacts of Process Deviations on MEMS Beams

    Directory of Open Access Journals (Sweden)

    Lili Gao

    2017-11-01

    Full Text Available A microstructure beam is one of the fundamental elements in MEMS devices like cantilever sensors, RF/optical switches, varactors, resonators, etc. It is still difficult to precisely predict the performance of MEMS beams with the currently available simulators due to the inevitable process deviations. Feasible numerical methods are required and can be used to improve the yield and profits of MEMS devices. In this work, process deviations are treated as stochastic variables, and a newly-developed numerical method, i.e., generalized polynomial chaos (GPC), is applied to the simulation of the MEMS beam. A doubly-clamped polybeam has been utilized to verify the accuracy of GPC against our Monte Carlo (MC) approaches. Performance predictions have been made on the residual stress by obtaining its distributions in GaAs Monolithic Microwave Integrated Circuit (MMIC)-based MEMS beams. The results show that errors are within 1% for the GPC approximations compared with the MC simulations. Appropriate choices of 4th-order GPC expansions with orthogonal terms have also succeeded in reducing the MC simulation labor. The mean value of the residual stress, obtained from experimental tests, differs by about 1.1% from that of the 4th-order GPC method. The probability that the 4th-order GPC approximation attains the mean test value of the residual stress is around 54.3%. The corresponding yield exceeds 90 percent within two standard deviations of the mean.
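
    The yield figure quoted above (the fraction of samples within two standard deviations of the mean) can be illustrated with a plain Monte Carlo sketch. The residual-stress response model below is a made-up stand-in, not the paper's GPC surrogate.

```python
import random
import statistics

# Monte Carlo sketch: treat process deviations as stochastic variables,
# propagate them through a HYPOTHETICAL residual-stress response, and count
# the fraction of samples within two standard deviations of the mean.

random.seed(0)

def residual_stress(thickness_dev, stress_dev):
    # Hypothetical linear response: nominal 20 MPa plus assumed sensitivities.
    return 20.0 + 15.0 * thickness_dev + 8.0 * stress_dev

samples = [residual_stress(random.gauss(0, 0.05), random.gauss(0, 0.1))
           for _ in range(100_000)]
mu = statistics.mean(samples)
sigma = statistics.stdev(samples)

# For a near-normal response this yield is about 95%, comfortably above
# the >90% figure reported in the record.
yield_2sigma = sum(abs(s - mu) <= 2 * sigma for s in samples) / len(samples)
```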

  2. Analysis of ANSI N13.11: the performance algorithm

    International Nuclear Information System (INIS)

    Roberson, P.L.; Hadley, R.T.; Thorson, M.R.

    1982-06-01

    The method of performance testing for personnel dosimeters specified in draft ANSI N13.11, Criteria for Testing Personnel Dosimetry Performance, is evaluated. Points addressed are: (1) operational behavior of the performance algorithm; (2) dependence on the number of test dosimeters; (3) the basis for choosing an algorithm; and (4) other possible algorithms. The performance algorithm evaluated for each test category is formed by adding the calibration bias and its standard deviation. This algorithm is not optimal due to its high dependence on the standard deviation. The dependence of the calibration bias on the standard deviation is significant because of the low number of dosimeters (15) evaluated per category. For categories with large standard deviations, the uncertainty in determining the performance criterion is large. To have a reasonable chance of passing all categories in one test, we required a 95% probability of passing each category. Under this requirement, the maximum permissible standard deviation is 30%, even with zero bias. For test categories with standard deviations <10%, the bias can be as high as 35%. For intermediate standard deviations, the chance of passing a category is improved by using a 5-10% negative bias. Most multipurpose personnel dosimetry systems will probably require detailed calibration adjustments to pass all categories within two rounds of testing.
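
    The bias-plus-standard-deviation statistic described above can be sketched as follows. The tolerance level L = 0.5 and the dose values are assumed here for illustration; the draft standard's actual tolerance levels vary by category.

```python
import statistics

# Bias-plus-standard-deviation performance statistic for one test category:
# compute the relative deviation of each reported dose from the delivered
# dose, then combine the bias magnitude with the sample standard deviation.

def performance_quotient(delivered, reported):
    """|B| + S for a category of test dosimeters, as described above."""
    p = [(r - d) / d for d, r in zip(delivered, reported)]
    bias = statistics.mean(p)     # calibration bias B
    sd = statistics.stdev(p)      # standard deviation S
    return abs(bias) + sd         # statistic tested against a tolerance L

# 15 dosimeters per category, as in the draft standard; the delivered and
# reported doses below are invented, and L = 0.5 is an assumed tolerance.
delivered = [10.0] * 15
reported = [10.2, 9.8, 10.5, 9.7, 10.1, 10.3, 9.9, 10.0,
            10.4, 9.6, 10.2, 9.9, 10.1, 10.0, 9.8]
passed = performance_quotient(delivered, reported) <= 0.5
```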

  3. Analysis of form deviation in non-isothermal glass molding

    Science.gov (United States)

    Kreilkamp, H.; Grunwald, T.; Dambon, O.; Klocke, F.

    2018-02-01

    Especially in the markets for sensors, LED lighting and medical technologies, there is a growing demand for precise yet low-cost glass optics. This demand poses a major challenge for glass manufacturers, who are confronted with a trend towards ever-higher levels of precision combined with immense pressure on market prices. Since current manufacturing technologies, especially grinding and polishing as well as Precision Glass Molding (PGM), are not able to achieve the desired production costs, glass manufacturers are looking for alternative technologies. Non-isothermal Glass Molding (NGM) has been shown to have great potential for low-cost mass manufacturing of complex glass optics. However, the biggest drawback of this technology at the moment is the limited accuracy of the manufactured glass optics. This research addresses the specific challenges of non-isothermal glass molding with respect to form deviation of molded glass optics. Based on empirical models, the factors influencing form deviation, in particular form accuracy, waviness and surface roughness, are discussed. A comparison with the traditional isothermal glass molding process (PGM) points out the specific challenges of non-isothermal process conditions. Furthermore, the underlying physical principle leading to the formation of form deviations is analyzed in detail with the help of numerical simulation. In this way, this research contributes to a better understanding of form deviations in non-isothermal glass molding and is an important step towards new applications demanding precise yet low-cost glass optics.

  4. Aqua AIRS Level 3 8-day Standard Physical Retrieval (AIRS-only) V006

    Data.gov (United States)

    National Aeronautics and Space Administration — The AIRS Only Level 3 8-Day Gridded Retrieval Product contains standard retrieval means, standard deviations and input counts. Each file covers an 8-day period, or...

  5. AIRS/Aqua Level 3 8-day standard physical retrieval (AIRS-only) V005

    Data.gov (United States)

    National Aeronautics and Space Administration — The AIRS Only Level 3 8-Day Gridded Retrieval Product contains standard retrieval means, standard deviations and input counts. Each file covers an 8-day period, or...

  6. Influence of fuel-cladding system deviations from the model of continuous cylinders on the parameters of WWER fuel element working ability

    International Nuclear Information System (INIS)

    Scheglov, A.

    1994-01-01

    In programs for fuel rod computation, the fuel and cladding are usually represented as coaxial cylinders that can change their sizes and their mechanical and thermal-physical properties. A real fuel element has some typical deviations from this continuous coaxial cylinders (CCC) model, such as: axial asymmetry of the fuel-cladding system (due to the oval form of the cladding, cracking and other types of fuel pellet damage, and axial asymmetry of the volumetric heat release), gaps between the pellets (and heat release peaking in the fuel near the gap), and chambers in the pellets. As a result of these deviations, the actual fuel rod working-ability parameters - temperature, stresses, thermal fluxes relieved from the cladding, geometry changes - can in some locations differ greatly from those calculated according to the CCC model. The influence of these deviations is extremely important when calculating the fuel rod, because they form part of the mechanical excess coefficient. The author reviews the influence of these factors using specific examples, applying his own two-dimensional codes based on the Finite Element Method for calculations of temperature fields, stresses and deformation in the fuel rod elements. It is shown that consideration of these deviations, as a rule, leads to an increase in the maximum fuel temperature in the WWER pellets (characterized by a large central hole), the temperature of the cladding, the thermal flux relieved by the coolant from the cladding, and the stresses in the cladding. It is necessary to consider these factors both for validation of the fuel element working ability and for interpretation of experimental results. 4 tabs., 3 figs., 5 refs

  7. Influence of fuel-cladding system deviations from the model of continuous cylinders on the parameters of WWER fuel element working ability

    Energy Technology Data Exchange (ETDEWEB)

    Scheglov, A [Russian Research Centre Kurchatov Inst., Moscow (Russian Federation)

    1994-12-31

    In programs for fuel rod computation, the fuel and cladding are usually represented as coaxial cylinders that can change their sizes and their mechanical and thermal-physical properties. A real fuel element has some typical deviations from this continuous coaxial cylinders (CCC) model, such as: axial asymmetry of the fuel-cladding system (due to the oval form of the cladding, cracking and other types of fuel pellet damage, and axial asymmetry of the volumetric heat release), gaps between the pellets (and heat release peaking in the fuel near the gap), and chambers in the pellets. As a result of these deviations, the actual fuel rod working-ability parameters - temperature, stresses, thermal fluxes relieved from the cladding, geometry changes - can in some locations differ greatly from those calculated according to the CCC model. The influence of these deviations is extremely important when calculating the fuel rod, because they form part of the mechanical excess coefficient. The author reviews the influence of these factors using specific examples, applying his own two-dimensional codes based on the Finite Element Method for calculations of temperature fields, stresses and deformation in the fuel rod elements. It is shown that consideration of these deviations, as a rule, leads to an increase in the maximum fuel temperature in the WWER pellets (characterized by a large central hole), the temperature of the cladding, the thermal flux relieved by the coolant from the cladding, and the stresses in the cladding. It is necessary to consider these factors both for validation of the fuel element working ability and for interpretation of experimental results. 4 tabs., 3 figs., 5 refs.

  8. Management of obstructive sleep apnea in the indigent population: a deviation of standard of care?

    Science.gov (United States)

    Hamblin, John S; Sandulache, Vlad C; Alapat, Philip M; Takashima, Masayoshi

    2014-03-01

    Patients with obstructive sleep apnea (OSA) are typically best managed via a multidisciplinary approach involving otolaryngologists, sleep psychologists/psychiatrists, pulmonologists, neurologists, oral surgeons, and sleep-trained dentists. By utilizing these resources, one can fashion a treatment individualized to the patient, giving rise to the phrase "personalized medicine." Unfortunately, in situations and environments with limited resources, the treatment options in an otolaryngologist's armamentarium are restricted--typically to continuous positive airway pressure (CPAP) versus sleep surgery. A recent patient encounter highlighted here shows how a hospital's reimbursement policy effectively steered a patient's medical management toward sleep surgery, even though the current gold standard for the initial treatment of OSA is CPAP. Changing the course of medical/surgical management by selectively restricting funding is cause for concern, especially when it pushes patients toward a treatment option that is not considered the current standard of care.

  9. Incorporating assumption deviation risk in quantitative risk assessments: A semi-quantitative approach

    International Nuclear Information System (INIS)

    Khorsandi, Jahon; Aven, Terje

    2017-01-01

    Quantitative risk assessments (QRAs) of complex engineering systems are based on numerous assumptions and expert judgments, as there is limited information available for supporting the analysis. In addition to sensitivity analyses, the concept of assumption deviation risk has been suggested as a means for explicitly considering the risk related to inaccuracies and deviations in the assumptions, which can significantly impact the results of the QRAs. However, its practical implementation remains challenging, given the number of assumptions and the magnitude of the deviations to be considered. This paper presents an approach for integrating an assumption deviation risk analysis as part of QRAs. The approach begins with identifying the safety objectives that the QRA aims to support, and then identifies critical assumptions with respect to ensuring the objectives are met. Key issues addressed include the deviations required to violate the safety objectives, the uncertainties related to the occurrence of such events, and the strength of knowledge supporting the assessments. Three levels of assumptions are considered, covering the system's structural and operational characteristics, the effectiveness of the established barriers, and the consequence analysis process. The approach is illustrated for the case of an offshore installation. - Highlights: • An approach for assessing the risk of deviations in QRA assumptions is presented. • Critical deviations and uncertainties related to their occurrence are addressed. • The analysis promotes critical thinking about the foundation and results of QRAs. • The approach is illustrated for the case of an offshore installation.

  10. Deviations from tribimaximal mixing due to the vacuum expectation value misalignment in A4 models

    International Nuclear Information System (INIS)

    Barry, James; Rodejohann, Werner

    2010-01-01

    The addition of an A4 family symmetry and extended Higgs sector to the standard model can generate the tribimaximal mixing pattern for leptons, assuming the correct vacuum expectation value alignment of the Higgs scalars. Deviations from this alignment affect the predictions for the neutrino oscillation and neutrino mass observables. An attempt is made to classify the plethora of models in the literature with respect to the chosen A4 particle assignments. Of these models, two particularly popular examples have been analyzed for deviations from tribimaximal mixing by perturbing the vacuum expectation value alignments. The effect of the perturbations on the mixing angle observables is studied. However, only investigation of the mass-related observables (the effective mass for neutrinoless double beta decay and the sum of masses from cosmology) can lead to the exclusion of particular models by constraints from future data, which indicates the importance of neutrino mass in disentangling models. The models have also been tested for fine-tuning of the parameters. Furthermore, a well-known seesaw model is generalized to include additional scalars, which transform as representations of A4 not included in the original model.

  11. 24 CFR 982.508 - Maximum family share at initial occupancy.

    Science.gov (United States)

    2010-04-01

    ... URBAN DEVELOPMENT SECTION 8 TENANT BASED ASSISTANCE: HOUSING CHOICE VOUCHER PROGRAM Rent and Housing Assistance Payment § 982.508 Maximum family share at initial occupancy. At the time the PHA approves a... program, and where the gross rent of the unit exceeds the applicable payment standard for the family, the...

  12. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
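    The MP criterion described above, minimizing the number of character changes (Hamming distance) summed over tree edges, can be illustrated with Fitch's classical small-parsimony algorithm, which scores one fixed tree rather than performing the Steiner-tree approximation the paper develops. The tree topology and leaf states below are illustrative only.

```python
# Fitch's small-parsimony algorithm: the minimum number of character changes
# needed on a fixed binary tree, i.e. the MP objective for one character.
# Tree topology and leaf states are illustrative, not from the paper.

def fitch_score(tree, states):
    """Minimum number of state changes for one character on `tree`.

    tree  : nested 2-tuples; leaves are labels
    states: dict mapping leaf label -> character state
    """
    changes = 0

    def post(node):
        nonlocal changes
        if not isinstance(node, tuple):          # leaf: singleton state set
            return {states[node]}
        left, right = post(node[0]), post(node[1])
        if left & right:                         # intersection: no new change
            return left & right
        changes += 1                             # disjoint sets: one change
        return left | right

    post(tree)
    return changes

# ((A,B),(C,D)) with states T, T, G, A requires two changes.
tree = (("A", "B"), ("C", "D"))
print(fitch_score(tree, {"A": "T", "B": "T", "C": "G", "D": "A"}))  # 2
```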

  13. A correlational study of scoliosis and trunk balance in adult patients with mandibular deviation.

    Directory of Open Access Journals (Sweden)

    Shuncheng Zhou

    Full Text Available Previous studies have confirmed that patients with mandibular deviation often have abnormal morphology of their cervical vertebrae. However, the relationship between mandibular deviation, scoliosis, and trunk balance has not been studied. Currently, mandibular deviation is usually treated as a single pathology, which leads to poor clinical efficiency. We investigated the relationship of spine coronal morphology and trunk balance in adult patients with mandibular deviation, and compared the findings to those in healthy volunteers. 35 adult patients with skeletal mandibular deviation and 10 healthy volunteers underwent anterior X-ray films of the head and posteroanterior X-ray films of the spine. Landmarks and lines were drawn and measured on these films. The axis distance method was used to measure the degree of scoliosis and the balance angle method was used to measure trunk balance. The relationship of mandibular deviation, spine coronal morphology and trunk balance was evaluated with the Pearson correlation method. The spine coronal morphology of patients with mandibular deviation demonstrated an "S" type curve, while a straight line parallel with the gravity line was found in the control group (significant difference). Patients with mandibular deviation showed trunk imbalance (imbalance angle >1°), while the control group had normal trunk balance (imbalance angle <1°); there was a significant difference between the two groups (p<0.01). The degree of scoliosis and shoulder imbalance correlated with the degree of mandibular deviation, and presented a linear trend. The direction of mandibular deviation was the same as that of the lateral bending of thoracolumbar vertebrae, which was opposite to the direction of lateral bending of cervical vertebrae. Our study shows the degree of mandibular deviation has a high correlation with the degree of scoliosis and trunk imbalance; all three deformities should be clinically evaluated in the management of mandibular deviation.
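    The Pearson correlation used in the study can be sketched as follows; the paired deviation measurements are hypothetical values, not the study's data.

```python
# Pearson correlation coefficient, the statistic the study uses to relate
# mandibular deviation to scoliosis and trunk-imbalance measures.
# The paired values below are hypothetical, not the study's data.
from math import sqrt

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

menton_deviation = [1.2, 2.5, 3.1, 4.0, 4.8]   # mm, hypothetical
scoliosis_degree = [0.5, 1.1, 1.6, 2.2, 2.4]   # hypothetical axis distances
print(round(pearson_r(menton_deviation, scoliosis_degree), 2))  # 0.99
```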

  14. Biological bases of the maximum permissible exposure levels of the UK laser standard BS 4803 1983

    CERN Document Server

    MacKinlay, Alistair F

    1983-01-01

    The use of lasers has increased greatly over the past 15 years or so, to the extent that they are now used routinely in many occupational and public situations. There has been an increasing awareness of the potential hazards presented by lasers and substantial efforts have been made to formulate safety standards. In the UK the relevant Safety Standard is the British Standards Institution Standard BS 4803. This Standard was originally published in 1972 and a revision has recently been published (BS 4803: 1983). The revised standard has been developed using the American National Standards Institute Standard, ANSI Z136.1 (1973 onwards), as a model. In other countries, national standards have been similarly formulated, resulting in a large measure of international agreement through participation in the work of the International Electrotechnical Commission (IEC). The bases of laser safety standards are biophysical data on threshold injury effects, particularly on the retina, and the development of theoretical mode...

  15. Linear maps preserving maximal deviation and the Jordan structure of quantum systems

    International Nuclear Information System (INIS)

    Hamhalter, Jan

    2012-01-01

    In the algebraic approach to quantum theory, a quantum observable is given by an element of a Jordan algebra and a state of the system is modelled by a normalized positive functional on the underlying algebra. Maximal deviation of a quantum observable is the largest statistical deviation one can obtain in a particular state of the system. The main result of the paper shows that each linear bijective transformation between JBW algebras preserving maximal deviations is formed by a Jordan isomorphism or a minus Jordan isomorphism perturbed by a linear functional multiple of an identity. It shows that only one numerical statistical characteristic has the power to determine the Jordan algebraic structure completely. As a consequence, we obtain that only very special maps can preserve the diameter of the spectra of elements. Nonlinear maps preserving the pseudometric given by maximal deviation are also described. The results generalize hitherto known theorems on preservers of maximal deviation in the case of self-adjoint parts of von Neumann algebras proved by Molnár.

  16. Deviations in human gut microbiota

    DEFF Research Database (Denmark)

    Casén, C; Vebø, H C; Sekelja, M

    2015-01-01

    microbiome profiling. AIM: To develop and validate a novel diagnostic test using faecal samples to profile the intestinal microbiota and identify and characterise dysbiosis. METHODS: Fifty-four DNA probes targeting ≥300 bacteria on different taxonomic levels were selected based on ability to distinguish......, and potential clinically relevant deviation in the microbiome from normobiosis. This model was tested in different samples from healthy volunteers and IBS and IBD patients (n = 330) to determine the ability to detect dysbiosis. RESULTS: Validation confirms dysbiosis was detected in 73% of IBS patients, 70...

  17. Deviation from the Standard of Care for Early Breast Cancer in the Elderly: What are the Consequences?

    Science.gov (United States)

    Sun, Susie X; Hollenbeak, Christopher S; Leung, Anna M

    2015-08-01

    For elderly patients with early-stage breast cancer, the standards of care often are not strictly followed due to either clinician biases or patient preferences. The authors hypothesized that forgoing radiation and lymph node (LN) staging for elderly patients with early-stage breast cancer would have a negative impact on survival. From the Surveillance, Epidemiology, and End Results Program database, 53,619 women older than 55 years with stage 1 breast cancer who underwent breast conservation surgery were identified. Analyses were performed to compare the characteristics and outcomes of patients who received the standards of care with LN sampling and radiation and those of patients who did not, with control for confounders. To account for selection bias from covariate imbalance, propensity score matching was performed. Survival was analyzed using the Kaplan-Meier method. Older patients were less likely to receive radiation and LN sampling. These standards of care were associated with improved overall survival rates of 15.8% and 27.1% after 10 years, respectively (p ≤ 0.0001). This survival advantage persisted after propensity score matching, with a 7.4% higher survival rate for patients who received radiation and a 16.8% higher survival rate for those who underwent LN staging, supporting LN sampling and radiation as the standard of care for stage 1 breast cancer. Even after controlling for other factors, the study showed that failure to adhere to the standards of LN sampling and radiation therapy may have a negative impact on survival.

  18. Evaluation of robustness of maximum likelihood cone-beam CT reconstruction with total variation regularization

    International Nuclear Information System (INIS)

    Stsepankou, D; Arns, A; Hesser, J; Ng, S K; Zygmanski, P

    2012-01-01

    The objective of this paper is to evaluate an iterative maximum likelihood (ML) cone-beam computed tomography (CBCT) reconstruction with total variation (TV) regularization with respect to the robustness of the algorithm against data inconsistencies. Three different and (for clinical application) typical classes of errors are considered for simulated phantom and measured projection data: quantum noise, defect detector pixels and projection matrix errors. To quantify those errors we apply error measures like mean square error, signal-to-noise ratio, contrast-to-noise ratio and streak indicator. These measures are derived from linear signal theory and generalized and applied for nonlinear signal reconstruction. For quality check, we focus on resolution and CT-number linearity based on a Catphan phantom. All comparisons are made versus the clinical standard, the filtered backprojection algorithm (FBP). In our results, we confirm and substantially extend previous results on iterative reconstruction such as massive undersampling of the number of projections. Projection matrix errors of up to 1° deviation in projection angle are still within the tolerance level. Single defect pixels exhibit ring artifacts for each method. However, using defect pixel compensation allows up to 40% of defect pixels while still passing the standard clinical quality check. Further, the iterative algorithm is extraordinarily robust in the low photon regime (down to 0.05 mAs) when compared to FBP, allowing for extremely low-dose image acquisitions, a substantial issue when considering daily CBCT imaging for position correction in radiotherapy. We conclude that the ML method studied herein is robust under clinical quality assurance conditions. Consequently, low-dose regime imaging, especially for daily patient localization in radiation therapy, is possible without change to the current hardware of the imaging system. (paper)
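    The scalar error measures named in the abstract can be sketched for flattened images as below; the region-of-interest index sets are hypothetical, not the paper's Catphan regions.

```python
# Scalar image-quality measures from the abstract: mean square error (MSE),
# signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR). Images are
# flat lists of pixel values; the ROI index sets are hypothetical.
from math import log10, sqrt
from statistics import mean, pstdev

def mse(recon, ref):
    return mean((a - b) ** 2 for a, b in zip(recon, ref))

def snr_db(recon, ref):
    """SNR in dB: reference signal power over reconstruction error power."""
    return 10 * log10(mean(b * b for b in ref) / mse(recon, ref))

def cnr(img, roi_a, roi_b):
    """Contrast between two regions of interest over their pooled noise."""
    a = [img[i] for i in roi_a]
    b = [img[i] for i in roi_b]
    noise = sqrt((pstdev(a) ** 2 + pstdev(b) ** 2) / 2)
    return abs(mean(a) - mean(b)) / noise

img = [10.0, 10.0, 12.0, 20.0, 20.0, 22.0]
print(round(cnr(img, [0, 1, 2], [3, 4, 5]), 2))
```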

  19. 9 CFR 381.308 - Deviations in processing.

    Science.gov (United States)

    2010-01-01

    ...) must be handled according to: (1)(i) A HACCP plan for canned product that addresses hazards associated... (d) of this section. (c) [Reserved] (d) Procedures for handling process deviations where the HACCP... accordance with the following procedures: (a) Emergency stops. (1) When retort jams or breakdowns occur...

  20. Comparison of 3D Maximum A Posteriori and Filtered Backprojection algorithms for high resolution animal imaging in microPET

    International Nuclear Information System (INIS)

    Chatziioannou, A.; Qi, J.; Moore, A.; Annala, A.; Nguyen, K.; Leahy, R.M.; Cherry, S.R.

    2000-01-01

    We have evaluated the performance of two three-dimensional reconstruction algorithms with data acquired from microPET, a high resolution tomograph dedicated to small animal imaging. The first was a linear filtered-backprojection algorithm (FBP) with reprojection of the missing data and the second was a statistical maximum a posteriori probability algorithm (MAP). The two algorithms were evaluated in terms of their resolution performance, both in phantoms and in vivo. Sixty independent realizations of a phantom simulating the brain of a baby monkey were acquired, each containing 3 million counts. Each of these realizations was reconstructed independently with both algorithms. The ensemble of the sixty reconstructed realizations was used to estimate the standard deviation as a measure of the noise for each reconstruction algorithm. More detail was recovered in the MAP reconstruction without an increase in noise relative to FBP. Studies in a simple cylindrical compartment phantom demonstrated improved recovery of known activity ratios with MAP. Finally, in vivo studies also demonstrated a clear improvement in spatial resolution using the MAP algorithm. The quantitative accuracy of the MAP reconstruction was also evaluated by comparison with autoradiography and direct well counting of tissue samples and was shown to be superior.
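    The ensemble noise estimate described above, the per-pixel standard deviation across independently reconstructed realizations, can be sketched as follows; the tiny three-pixel "images" stand in for full reconstructions.

```python
# Ensemble noise estimate used in the abstract: the per-pixel sample standard
# deviation across independently reconstructed realizations.
# The 3-pixel "images" below are illustrative stand-ins.
from statistics import stdev

def ensemble_noise(realizations):
    """Per-pixel sample standard deviation across an ensemble of images."""
    return [stdev(pixels) for pixels in zip(*realizations)]

realizations = [
    [10.0, 20.0, 30.0],   # realization 1
    [12.0, 19.0, 33.0],   # realization 2
    [11.0, 21.0, 27.0],   # realization 3
]
print(ensemble_noise(realizations))  # [1.0, 1.0, 3.0]
```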

  1. Deviations from LTE in a stellar atmosphere

    International Nuclear Information System (INIS)

    Kalkofen, W.; Klein, R.I.; Stein, R.F.

    1979-01-01

    Deviations from LTE are investigated in an atmosphere of hydrogen atoms with one bound level, satisfying the equations of radiative, hydrostatic, and statistical equilibrium. The departure coefficient and the kinetic temperature as functions of the frequency dependence of the radiative cross section are studied analytically and numerically. Near the outer boundary of the atmosphere, the departure coefficient b is smaller than unity when the radiative cross section αsub(ν) grows with frequency ν faster than ν 2 ; b exceeds unity otherwise. Far from the boundary the departure coefficient tends to exceed unity for any frequency dependence of αsub(ν). Overpopulation (b > 1) always implies that the kinetic temperature in the statistical equilibrium atmosphere is higher than the temperature in the corresponding LTE atmosphere. Upper and lower bounds on the kinetic temperature are given for an atmosphere with deviations from LTE only in the optically shallow layers when the emergent intensity can be described by a radiation temperature. (author)

  2. Deviations from LTE in a stellar atmosphere

    Science.gov (United States)

    Kalkofen, W.; Klein, R. I.; Stein, R. F.

    1979-01-01

    Deviations from LTE are investigated in an atmosphere of hydrogen atoms with one bound level, satisfying the equations of radiative, hydrostatic, and statistical equilibrium. The departure coefficient and the kinetic temperature as functions of the frequency dependence of the radiative cross section are studied analytically and numerically. Near the outer boundary of the atmosphere, the departure coefficient is smaller than unity when the radiative cross section grows with frequency faster than with the square of frequency; it exceeds unity otherwise. Far from the boundary the departure coefficient tends to exceed unity for any frequency dependence of the radiative cross section. Overpopulation always implies that the kinetic temperature in the statistical-equilibrium atmosphere is higher than the temperature in the corresponding LTE atmosphere. Upper and lower bounds on the kinetic temperature are given for an atmosphere with deviations from LTE only in the optically shallow layers when the emergent intensity can be described by a radiation temperature.

  3. Process Measurement Deviation Analysis for Flow Rate due to Miscalibration

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Eunsuk; Kim, Byung Rae; Jeong, Seog Hwan; Choi, Ji Hye; Shin, Yong Chul; Yun, Jae Hee [KEPCO Engineering and Construction Co., Deajeon (Korea, Republic of)

    2016-10-15

    An analysis was initiated to identify the root cause, and the exemption of high static line pressure correction for differential pressure (DP) transmitters was identified as one of the major deviation factors. The miscalibrated DP transmitter range was identified as another major deviation factor. This paper presents considerations to be incorporated in the calibration of process flow measurement instrumentation. The analysis identified that the DP flow transmitter electrical output decreased by 3%; thereafter, the flow rate indication decreased by 1.9%, resulting from the high static line pressure correction exemption and the measurement range miscalibration. After re-calibration, the flow rate indication increased by 1.9%, which is consistent with the analysis result. This paper presents the brief calibration procedures for the Rosemount DP flow transmitter and analyzes three possible cases of measurement deviation, including the error and its cause. Generally, the DP transmitter is required to be calibrated with the precise process input range according to the calibration procedure provided for the specific DP transmitter. Especially in the case of a DP transmitter installed in high static line pressure, it is important to correct for the high static line pressure effect to avoid the inherent systematic error of the Rosemount DP transmitter. Otherwise, failure to apply the correction may lead to the indication deviating from the actual value.
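    The deviation mechanism described above hinges on square-root flow extraction: indicated flow varies with the square root of the measured DP, so calibration errors in the DP signal propagate nonlinearly into the flow indication. A minimal sketch, with hypothetical span values that do not reproduce the paper's 3%/1.9% figures:

```python
# Square-root flow extraction from a 4-20 mA differential-pressure (DP)
# signal: DP is linear in loop current, indicated flow goes as sqrt(DP).
# Span values are hypothetical; this does not reproduce the paper's
# 3% output / 1.9% flow figures.
from math import sqrt

def flow_percent(current_ma, lo=4.0, hi=20.0):
    """Indicated flow in % of full scale for a DP transmitter reading."""
    dp_fraction = (current_ma - lo) / (hi - lo)
    return 100.0 * sqrt(max(dp_fraction, 0.0))

# A miscalibrated span shifts the current read for the same true DP,
# and the square root maps that current error into a flow-indication error.
print(flow_percent(20.0))                  # 100.0 (full-scale DP)
print(round(flow_percent(16.0), 1))        # 86.6  (75% DP)
```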

  4. Standardization of depression measurement

    DEFF Research Database (Denmark)

    Wahl, Inka; Löwe, Bernd; Bjørner, Jakob

    2014-01-01

    OBJECTIVES: To provide a standardized metric for the assessment of depression severity to enable comparability among results of established depression measures. STUDY DESIGN AND SETTING: A common metric for 11 depression questionnaires was developed applying item response theory (IRT) methods. Data...... of 33,844 adults were used for secondary analysis including routine assessments of 23,817 in- and outpatients with mental and/or medical conditions (46% with depressive disorders) and a general population sample of 10,027 randomly selected participants from three representative German household surveys....... RESULTS: A standardized metric for depression severity was defined by 143 items, and scores were normed to a general population mean of 50 (standard deviation = 10) for easy interpretability. It covers the entire range of depression severity assessed by established instruments. The metric allows...
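    The norming described in the abstract, scores rescaled to a general-population mean of 50 with standard deviation 10, is the familiar T-score transformation. A minimal sketch, with hypothetical population parameters and latent-trait inputs:

```python
# T-score norming as described in the abstract: scores scaled to a
# general-population mean of 50 and standard deviation of 10.
# The population parameters and raw scores here are hypothetical.

def to_t_score(theta, pop_mean=0.0, pop_sd=1.0):
    """Map a latent depression-severity estimate onto the T metric."""
    return 50.0 + 10.0 * (theta - pop_mean) / pop_sd

print(to_t_score(0.0))   # 50.0 (population average)
print(to_t_score(1.5))   # 65.0 (1.5 SD above the population mean)
```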

  5. Status of conversion of NE standards to national consensus standards

    International Nuclear Information System (INIS)

    Jennings, S.D.

    1990-06-01

    One major goal of the Nuclear Standards Program is to convert existing NE standards into national consensus standards (where possible). This means replacing an NE standard with a national consensus standard in the same subject area, developed through the national consensus process. This report summarizes the activities that have evolved to effect conversion of NE standards to national consensus standards, and the status of current conversion activities. In some cases, not all requirements in an NE standard will be incorporated into the published national consensus standard, because some requirements may be considered too restrictive or too specific for broader application by the nuclear industry. If these requirements are considered necessary for nuclear reactor program applications, the program standard will be revised and issued as a supplement to the national consensus standard, containing only those necessary requirements not reflected in the national consensus standard. Therefore, while complete conversion of program standards may not always be realized, the standards policy of making maximum use of national consensus standards has been fully supported. 1 tab

  6. Advantages of heavy metal collars in directional drilling and deviation control

    International Nuclear Information System (INIS)

    Bradley, W.B.; Murphey, C.E.; McLamore, R.T.; Dickson, L.L.

    1976-01-01

    A heavy, stiff-bottom drill collar can substantially improve deviation performance, theoretically increasing penetration rates by 50 to 100 percent in deviation-prone areas. This paper presents the underlying theory, practical charts on performance characteristics, and Shell Development Co.'s experience in fabricating and field testing two depleted-uranium alloy, heavy metal collars

  7. The Analysis of a Deviation of Investment and Corporate Governance

    OpenAIRE

    Shoichi Hisa

    2008-01-01

    Firms' investment is affected not only by fundamental factors but also by liquidity constraints, ownership, and corporate structure. The information structure between manager and owner is a significant factor in determining the level of investment and the deviation of investment from the optimal condition. A reputation model between manager and owner suggests that the separation of ownership and management may induce deviation of investment, and indicates that governance structure is important in reducing it. In th...

  8. No-gold-standard evaluation of image-acquisition methods using patient data.

    Science.gov (United States)

    Jha, Abhinav K; Frey, Eric

    2017-02-11

    Several new and improved modalities, scanners, and protocols, together referred to as image-acquisition methods (IAMs), are being developed to provide reliable quantitative imaging. Objective evaluation of these IAMs on clinically relevant quantitative tasks is highly desirable. Such evaluation is most reliable and clinically decisive when performed with patient data, but that requires the availability of a gold standard, which is often rare. While no-gold-standard (NGS) techniques have been developed to clinically evaluate quantitative imaging methods, these techniques require that each of the patients be scanned using all the IAMs, which is expensive, time consuming, and could lead to increased radiation dose. A more clinically practical scenario is one where different sets of patients are scanned using different IAMs. We have developed an NGS technique that compares IAMs using patient data in which different patient sets are imaged with different IAMs. The technique posits a linear relationship, characterized by a slope, bias, and noise standard-deviation term, between the true and measured quantitative values. Under the assumption that the true quantitative values have been sampled from a unimodal distribution, a maximum-likelihood procedure was developed that estimates these linear relationship parameters for the different IAMs. Figures of merit can be estimated using these linear relationship parameters to evaluate the IAMs on the basis of accuracy, precision, and overall reliability. The proposed technique has several potential applications, such as protocol optimization, quantifying differences in system performance, and system harmonization using patient data.
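    The linear relationship the technique posits, measured = slope * true + bias + Gaussian noise, can be sanity-checked with synthetic data. The sketch below assumes the true values are known and recovers slope and bias by ordinary least squares; the actual NGS technique instead estimates these parameters by maximum likelihood without access to the truth. All numbers are synthetic.

```python
# Sanity check of the posited linear model: measured = slope*true + bias + noise.
# Here the true values are known, so ordinary least squares recovers the
# parameters; the real NGS technique estimates them by maximum likelihood
# WITHOUT access to the truth. All numbers are synthetic.
import random

def fit_line(xs, ys):
    """Ordinary least squares fit; returns (slope, bias)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

random.seed(0)
true_vals = [random.uniform(0.0, 10.0) for _ in range(500)]
measured = [1.3 * t + 0.5 + random.gauss(0.0, 0.2) for t in true_vals]
slope, bias = fit_line(true_vals, measured)
print(round(slope, 2), round(bias, 2))  # close to 1.3 and 0.5
```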

  9. Sample-path large deviations in credit risk

    NARCIS (Netherlands)

    Leijdekker, V.J.G.; Mandjes, M.R.H.; Spreij, P.J.C.

    2011-01-01

    The event of large losses plays an important role in credit risk. As these large losses are typically rare, and portfolios usually consist of a large number of positions, large deviation theory is the natural tool to analyze the tail asymptotics of the probabilities involved. We first derive a

  10. A spectrum standardization approach for laser-induced breakdown spectroscopy measurements

    Energy Technology Data Exchange (ETDEWEB)

    Wang Zhe, E-mail: zhewang@mail.tsinghua.edu.cn; Li Lizhi; West, Logan; Li Zheng, E-mail: lz-dte@tsinghua.edu.cn; Ni Weidou

    2012-02-15

    This paper follows and completes a previous presentation of a spectrum normalization method for laser-induced breakdown spectroscopy (LIBS) measurements by converting the experimentally recorded line intensity at varying operational conditions to the intensity that would be obtained under a 'standard state' condition, characterized by a standard plasma temperature, electron number density, and total number density of the interested species. At first, for each laser shot and corresponding spectrum, the line intensities of the interested species are converted to the intensity at a fixed plasma temperature and electron number density, but with varying total number density. Under this state, if the influence of changing plasma morphology is neglected, the sum of multiple spectral line intensities for the measured element is proportional to the total number density of the specific element. Therefore, the fluctuation of the total number density, or the variation of ablation mass, can be compensated for by applying the proportional relationship. The application of this method to Cu in 29 brass alloy samples showed an improvement over the commonly applied normalization method with regard to measurement precision and accuracy. The average relative standard deviation (RSD) value, average value of the error bar, R², root mean square error of prediction (RMSEP), and average value of the maximum relative error were: 5.29%, 0.68%, 0.98, 2.72%, 16.97%, respectively, while the above parameter values for normalization with the whole spectrum area were: 8.61%, 1.37%, 0.95, 3.28%, 29.19%, respectively. - Highlights: • Intensity converted into an ideal standard plasma state for uncertainty reduction. • Ablated mass fluctuations compensated by variation of sum of multiple intensities. • A spectrum standardization model established. • Results in both uncertainty
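    The precision and accuracy metrics quoted above, relative standard deviation (RSD) and root mean square error of prediction (RMSEP), can be computed as sketched below; the replicate intensities and reference concentrations are illustrative, not the paper's brass measurements.

```python
# Precision/accuracy metrics quoted in the abstract: relative standard
# deviation (RSD) of replicate measurements and root mean square error of
# prediction (RMSEP). All numbers below are illustrative.
from math import sqrt
from statistics import mean, stdev

def rsd_percent(values):
    """Relative standard deviation of replicates, in percent."""
    return 100.0 * stdev(values) / mean(values)

def rmsep(predicted, reference):
    """Root mean square error of prediction against reference values."""
    return sqrt(mean((p - r) ** 2 for p, r in zip(predicted, reference)))

replicates = [101.0, 99.0, 100.0, 98.0, 102.0]   # repeated line intensities
print(round(rsd_percent(replicates), 2))         # 1.58

pred = [62.1, 58.4, 60.9]                        # predicted Cu content, %
ref = [61.5, 59.0, 61.0]                         # certified values, %
print(round(rmsep(pred, ref), 3))                # 0.493
```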

  11. Gait Deviation Index, Gait Profile Score and Gait Variable Score in children with spastic cerebral palsy

    DEFF Research Database (Denmark)

    Rasmussen, Helle Mätzke; Nielsen, Dennis Brandborg; Pedersen, Niels Wisbech

    2015-01-01

    Abstract The Gait Deviation Index (GDI) and Gait Profile Score (GPS) are the most widely used summary measures of gait in children with cerebral palsy (CP). However, the reliability and agreement of these indices have not been investigated, limiting their clinimetric quality for research and clinical...... to good reliability with ICCs of 0.4–0.7. The agreement for the GDI and the logarithmically transformed GPS, in terms of the standard error of measurement as a percentage of the grand mean (SEM%) varied from 4.1 to 6.7%, whilst the smallest detectable change in percent (SDC%) ranged from 11.3 to 18...
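    The SEM% and SDC% figures quoted above follow from standard agreement formulas: SEM = SD * sqrt(1 - ICC) and SDC = 1.96 * sqrt(2) * SEM, each expressed as a percentage of the grand mean. A minimal sketch with illustrative inputs (not the study's values):

```python
# Agreement formulas behind SEM% and SDC%: SEM = SD*sqrt(1 - ICC) and
# SDC = 1.96*sqrt(2)*SEM, each as a percentage of the grand mean.
# The SD, ICC and grand mean below are illustrative, not the study's values.
from math import sqrt

def sem_and_sdc_percent(sd, icc, grand_mean):
    sem = sd * sqrt(1.0 - icc)                 # standard error of measurement
    sdc = 1.96 * sqrt(2.0) * sem               # smallest detectable change
    return 100.0 * sem / grand_mean, 100.0 * sdc / grand_mean

sem_pct, sdc_pct = sem_and_sdc_percent(sd=8.0, icc=0.7, grand_mean=80.0)
print(round(sem_pct, 1), round(sdc_pct, 1))    # 5.5 15.2
```

Note that SDC% is always about 2.77 times SEM%, consistent with the ranges the abstract reports.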

  12. Bodily Deviations and Body Image in Adolescence

    Science.gov (United States)

    Vilhjalmsson, Runar; Kristjansdottir, Gudrun; Ward, Dianne S.

    2012-01-01

    Adolescents with unusually sized or shaped bodies may experience ridicule, rejection, or exclusion based on their negatively valued bodily characteristics. Such experiences can have negative consequences for a person's image and evaluation of self. This study focuses on the relationship between bodily deviations and body image and is based on a…

  13. Analyzing Menton Deviation in Posteroanterior Cephalogram in Early Detection of Temporomandibular Disorder

    Directory of Open Access Journals (Sweden)

    Trelia Boel

    2017-01-01

    Full Text Available Introduction. Some clinicians believe that mandibular deviation leads to facial asymmetry and that it also has a correlation with temporomandibular disorders (TMDs). The posteroanterior (PA) cephalogram is widely reported as a regular record in treating facial asymmetry and craniofacial anomalies. The objective of this study was to analyze the relationship of menton deviation in the PA cephalogram with temporomandibular disorder (TMD) symptoms. Materials and Methods. TMJ function was initially screened based on the TMD-DI questionnaire. PA cephalograms of volunteer subjects with TMDs (n=37) and without TMDs (n=33), with a mean age of 21.61±2.08 years, were taken. The menton deviation was measured as the horizontal distance (mm) from the menton point to the midsagittal reference (MSR) using software digitized measurement, and categorized as asymmetric if the value is greater than 3 mm. The prevalence and difference of menton deviation in both groups were evaluated by unpaired t-test. Result. The prevalence in the symmetry group showed that 65.9% had no TMDs, with a mean of 1.815 ± 0.71 mm; in contrast, the prevalence in the asymmetry group showed that 95.5% reported TMDs, with a mean of 3.159 ± 1.053 mm. There was a significant difference in menton deviation with respect to TMDs (p=0.000) in subjects with and without TMDs. Conclusion. There was a significant relationship of menton deviation in the PA cephalogram with TMDs based on the TMD-DI index.

  14. Why do lesser toes deviate laterally in hallux valgus? A radiographic study.

    Science.gov (United States)

    Roan, Li-Yi; Tanaka, Yasuhito; Taniguchi, Akira; Tomiwa, Kiyonori; Kumai, Tsukasa; Cheng, Yuh-Min

    2015-06-01

    Hallux valgus foot with laterally deviated lesser toes is a complex condition to treat. Ignoring the laterally deviated lesser toes in hallux valgus might result in an unsatisfactory foot shape, and without lateral support of the lesser toes, the risk of recurrence of hallux valgus might increase. We sought to identify radiographic findings associated with lesser toes following the great toe in hallux valgus and deviating laterally. The weight-bearing, anteroposterior foot radiographs of 24 female hallux valgus feet with laterally deviated lesser toes (group L), 34 female hallux valgus feet with normal lesser toes (group H), and 43 normal female feet (group N) were selected for the study. A 2-dimensional coordinate system was used to analyze the shapes and angles of these feet by converting each dot made on the radiographs onto X and Y coordinates. Diagrams of the feet in each group were drawn for comparison. The hallux valgus angle, lateral deviation angle of the second toe, intermetatarsal angles, toe length, metatarsal length, and metatarsus adductus were calculated according to the coordinates of the corresponding points. The mapping showed the bases of the second, third, and fourth toes in group L shifted laterally away from their corresponding metatarsal heads, and group L had larger hallux valgus angles. A larger hallux valgus angle, a more adducted first metatarsal, and divergent lateral splaying of the lesser metatarsals were associated with lateral deviation of the lesser toes in hallux valgus. Level III, comparative study. © The Author(s) 2015.

  15. Comparison of direct numerical simulation databases of turbulent channel flow at $Re_{\\tau}$ = 180

    NARCIS (Netherlands)

    Vreman, A.W.; Kuerten, Johannes G.M.

    2014-01-01

    Direct numerical simulation (DNS) databases are compared to assess the accuracy and reproducibility of standard and non-standard turbulence statistics of incompressible plane channel flow at $Re_{\\tau}$ = 180. Two fundamentally different DNS codes are shown to produce maximum relative deviations

  16. Wind Power Fluctuation Smoothing Controller Based on Risk Assessment of Grid Frequency Deviation in an Isolated System

    DEFF Research Database (Denmark)

    Lin, Jin; Sun, Yuanzhang; Song, Yonghua

    2013-01-01

    Wind power fluctuation raises the security concern of grid frequency deviation, especially for an isolated power system. Thus, better control methodology needs to be developed to smooth the fluctuation without excessive spillage. Based on an actual industrial power system, this paper proposes a smoothing controller to suppress the power fluctuation from a doubly-fed induction generator (DFIG)-based wind farm. This controller consists of three main functional components: a risk assessment model, a wind turbine rotor speed optimizer, and a rotor speed upper limiter. In order to avoid unnecessary energy... curve with reduced output so that a trade-off between fluctuation smoothing and energy loss is achieved. Subsequently, the controller limits the maximum rotor speed to shift down the power curve of the wind power plant based on the optimal wind turbine rotor speed. Therefore, the power fluctuation...

  17. OBSERVABLE DEVIATIONS FROM HOMOGENEITY IN AN INHOMOGENEOUS UNIVERSE

    Energy Technology Data Exchange (ETDEWEB)

    Giblin, John T. Jr. [Department of Physics, Kenyon College, 201 N College Road Gambier, OH 43022 (United States); Mertens, James B.; Starkman, Glenn D. [CERCA/ISO, Department of Physics, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH 44106 (United States)

    2016-12-20

    How does inhomogeneity affect our interpretation of cosmological observations? It has long been wondered to what extent the observable properties of an inhomogeneous universe differ from those of a corresponding Friedmann–Lemaître–Robertson–Walker (FLRW) model, and how the inhomogeneities affect that correspondence. Here, we use numerical relativity to study the behavior of light beams traversing an inhomogeneous universe, and construct the resulting Hubble diagrams. The universe that emerges exhibits an average FLRW behavior, but inhomogeneous structures contribute to deviations in observables across the observer’s sky. We also investigate the relationship between angular diameter distance and the angular extent of a source, finding deviations that grow with source redshift. These departures from FLRW are important path-dependent effects, with implications for using real observables in an inhomogeneous universe such as our own.

  18. OBSERVABLE DEVIATIONS FROM HOMOGENEITY IN AN INHOMOGENEOUS UNIVERSE

    International Nuclear Information System (INIS)

    Giblin, John T. Jr.; Mertens, James B.; Starkman, Glenn D.

    2016-01-01

    How does inhomogeneity affect our interpretation of cosmological observations? It has long been wondered to what extent the observable properties of an inhomogeneous universe differ from those of a corresponding Friedmann–Lemaître–Robertson–Walker (FLRW) model, and how the inhomogeneities affect that correspondence. Here, we use numerical relativity to study the behavior of light beams traversing an inhomogeneous universe, and construct the resulting Hubble diagrams. The universe that emerges exhibits an average FLRW behavior, but inhomogeneous structures contribute to deviations in observables across the observer’s sky. We also investigate the relationship between angular diameter distance and the angular extent of a source, finding deviations that grow with source redshift. These departures from FLRW are important path-dependent effects, with implications for using real observables in an inhomogeneous universe such as our own.

  19. Tail-constraining stochastic linear–quadratic control: a large deviation and statistical physics approach

    International Nuclear Information System (INIS)

    Chertkov, Michael; Kolokolov, Igor; Lebedev, Vladimir

    2012-01-01

    The standard definition of stochastic risk-sensitive linear–quadratic (RS-LQ) control depends on the risk parameter, which is normally left to be set exogenously. We reconsider the classical approach and suggest two alternatives that resolve this spurious freedom naturally. One approach consists of seeking the minimum of the tail of the probability distribution function (PDF) of the cost functional at some large fixed value. Another option suggests minimizing the expectation value of the cost functional under a constraint on the value of the PDF tail. Under the assumption of resulting control stability, both problems are reduced to static optimizations over a stationary control matrix. The solutions are illustrated using the examples of scalar and 1D chain (string) systems. The large deviation self-similar asymptotic of the cost functional PDF is analyzed. (paper)

  20. Yoruba Writing: Standards and Trends

    Directory of Open Access Journals (Sweden)

    Tèmítọ́pẹ́ Olúmúyìwá Ph.D.

    2013-06-01

    Full Text Available This paper presents the state of Yorùbá orthography. The first effort at standardizing the Yorùbá writing system came in 1875, and there has been a great deal of refinement of the orthography since. Specifically, a great rush of standardization activity in written Yorùbá came in the years after independence, when efforts were made to introduce the teaching of Nigerian languages in schools and to apply those languages to official activities. The present standards were established in 1974; however, there remains a great deal of contention over writing conventions: spelling, grammar, and the use of tone marks. The paper explores examples from journalism, religious writing, education and literature, and advertising to demonstrate ongoing deviations from the approved orthography.

  1. Effect of implantoplasty on fracture resistance and surface roughness of standard diameter dental implants.

    Science.gov (United States)

    Costa-Berenguer, Xavier; García-García, Marta; Sánchez-Torres, Alba; Sanz-Alonso, Mariano; Figueiredo, Rui; Valmaseda-Castellón, Eduard

    2018-01-01

    To assess the effect of implantoplasty on the fracture resistance, surface roughness, and macroscopic morphology of standard diameter (4.1 mm) external connection dental implants. An in vitro study was conducted on 20 screw-shaped titanium dental implants with an external connection. In 10 implants, the threads and surface were removed and polished with high-speed burs (implantoplasty), while the remaining 10 implants were used as controls. The final implant dimensions were recorded. The newly polished surface quality was assessed by scanning electron microscopy (SEM) and by 3D surface roughness analysis using a confocal laser microscope. Finally, all the implants were subjected to a mechanical pressure resistance test. A descriptive analysis of the data was made, and Student's t tests were employed to detect differences in the compression tests. Implantoplasty was carried out for a mean time of 10 min 48 s (standard deviation (SD) 1 min 22 s). Macroscopically, the resulting surface had a smooth appearance, although small titanium shavings and silicon debris were present. The final surface roughness (Sa values 0.1 ± 0.02 μm) was significantly lower than that of the original surface (Sa 0.75 ± 0.08 μm) (p = .005). There was minimal reduction in the implant's inner body diameter (0.19 ± 0.03 mm), and no statistically significant differences were found between the test and control implants regarding the maximum resistance force (896 vs 880 N, respectively). Implantoplasty, although technically demanding and time-consuming, does not seem to significantly alter the fracture resistance of standard diameter external connection implants. A smooth surface with Sa values below 0.1 μm can be obtained through the use of silicon polishers. A larger sample is required to confirm that implantoplasty does not significantly affect the maximum resistance force of standard diameter external connection implants. © 2017 John Wiley & Sons A/S. Published

  2. Lower levels of insulin-like growth factor-1 standard deviation score are associated with histological severity of non-alcoholic fatty liver disease.

    Science.gov (United States)

    Sumida, Yoshio; Yonei, Yoshikazu; Tanaka, Saiyu; Mori, Kojiroh; Kanemasa, Kazuyuki; Imai, Shunsuke; Taketani, Hiroyoshi; Hara, Tasuku; Seko, Yuya; Ishiba, Hiroshi; Okajima, Akira; Yamaguchi, Kanji; Moriguchi, Michihisa; Mitsuyoshi, Hironori; Yasui, Kohichiroh; Minami, Masahito; Itoh, Yoshito

    2015-07-01

    Growth hormone (GH) deficiency may be associated with histological progression of non-alcoholic fatty liver disease (NAFLD) which includes non-alcoholic fatty liver (NAFL) and non-alcoholic steatohepatitis (NASH). Insulin-like growth factor 1 (IGF-1) is mainly produced by hepatocytes and its secretion is stimulated by GH. Our aim was to determine whether more histologically advanced NAFLD is associated with low circulating levels of IGF-1 in Japanese patients. Serum samples were obtained in 199 Japanese patients with biopsy-proven NAFLD and in 2911 sex- and age-matched healthy people undergoing health checkups. The serum levels of IGF-1 were measured using a commercially available immunoradiometric assay. The standard deviation scores (SDS) of IGF-1 according to age and sex were also calculated in NAFLD patients. The serum IGF-1 levels in NAFLD patients were significantly lower (median, 112 ng/mL) compared with the control population (median, 121 ng/mL, P < 0.0001). IGF-1 SDS less than -2.0 SD from median were found in 11.6% of 199 patients. NASH patients exhibited significantly lower levels of IGF-1 SDS (n = 130; median, -0.7) compared with NAFL patients (n = 69; median, -0.3; P = 0.026). The IGF-1 SDS values decreased significantly with increasing lobular inflammation (P < 0.001) and fibrosis (P < 0.001). In multiple regressions, the association between the IGF-1 SDS values and the severity of NAFLD persisted after adjusting for age, sex and insulin resistance. Low levels of circulating IGF-1 may have a role in the development of advanced NAFLD, independent of insulin resistance. Supplementation with GH/IGF-1 may be a candidate for the treatment of NASH. © 2014 The Japan Society of Hepatology.
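    The age- and sex-adjusted standard deviation score used in this record is a z-score style normalization against reference norms. A minimal sketch, where the reference median and SD are made-up placeholders rather than the assay's actual age/sex norms:

```python
def sd_score(value, ref_median, ref_sd):
    """Standard deviation score: how many reference SDs the
    measurement lies above (+) or below (-) the reference median."""
    return (value - ref_median) / ref_sd

# Hypothetical reference values for one age/sex stratum (placeholders).
ref_median, ref_sd = 121.0, 35.0

# Patient IGF-1 of 112 ng/mL sits slightly below the reference median.
sds = sd_score(112.0, ref_median, ref_sd)
print(round(sds, 2))
```

    Values below about -2.0 on this scale correspond to the "low IGF-1" group reported in the abstract.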

  3. Digital subtraction radiographic evaluation of the standardize periapical intraoral radiographs

    International Nuclear Information System (INIS)

    Cho, Bong Hae; Nah, Kyung Soo

    1993-01-01

    The geometrically standardized intraoral radiographs using 5 occlusal registration materials were taken serially immediately and 1 day, 2, 4, 8, 12, and 16 weeks after making the bite blocks. The quality of the subtracted images was evaluated to check the degree of reproducibility of each impression material. The results were as follows: 1. The standard deviations of the grey scales of the overall subtracted images were 4.9 for Exaflex, 7.2 for Pattern Resin, 9.0 for Tooth Shade Acrylic, 12.2 for XCP only, and 14.8 for Impregum. 2. The standard deviations of the grey scales of the overall subtracted images were broadly related to those of the localized horizontal line of interest. 3. Exaflex, which showed the best subtracted image quality, had 15 cases of straight, 14 cases of wave, and 1 case of canyon shape; Impregum, which showed the worst subtracted image quality, had 4 cases of straight, 8 cases of wave, and 18 cases of canyon shape.
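    The figure of merit in this record, the grey-level standard deviation of a subtraction image, can be sketched as follows. The pixel arrays here are synthetic stand-ins for a registered radiograph pair, not real image data:

```python
import numpy as np

def subtraction_sd(img_a, img_b):
    """SD of the grey-level difference image; perfectly reproducible
    geometry and exposure give a difference image with SD near 0."""
    diff = img_a.astype(float) - img_b.astype(float)
    return diff.std()

rng = np.random.default_rng(0)
baseline = rng.integers(0, 256, size=(64, 64))
# Follow-up differs only by small registration/exposure noise (SD 5).
followup = baseline + rng.normal(0, 5, size=(64, 64))
print(round(subtraction_sd(baseline, followup), 1))
```

    Larger SD values, as for Impregum above, indicate poorer geometric reproducibility of the bite registration.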

  4. Deviations from Vegard’s law in ternary III-V alloys

    KAUST Repository

    Murphy, S. T.

    2010-08-03

    Vegard’s law states that, at a constant temperature, the volume of an alloy can be determined from a linear interpolation of its constituents’ volumes. Deviations from this description occur such that volumes are both greater and smaller than the linear relationship would predict. Here we use special quasirandom structures and density functional theory to investigate such deviations for MxN1−xAs ternary alloys, where M and N are group III species (B, Al, Ga, and In). Our simulations predict a tendency, with the exception of AlxGa1−xAs, for the volume of the ternary alloys to be smaller than that determined from the linear interpolation of the volumes of the MAs and NAs binary alloys. Importantly, we establish a simple relationship linking the relative size of the group III atoms in the alloy and the predicted magnitude of the deviation from Vegard’s law.
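    The Vegard interpolation and the signed deviation from it can be sketched numerically. The volumes below are illustrative round numbers, not the DFT values from the paper:

```python
def vegard_volume(v_mas, v_nas, x):
    """Linear (Vegard) interpolation of the alloy volume for M_x N_{1-x} As."""
    return x * v_mas + (1.0 - x) * v_nas

def vegard_deviation(v_actual, v_mas, v_nas, x):
    """Signed percentage deviation of a computed alloy volume from Vegard's law."""
    v_lin = vegard_volume(v_mas, v_nas, x)
    return 100.0 * (v_actual - v_lin) / v_lin

# Illustrative volumes per formula unit (arbitrary units, assumed values).
v_gaas, v_inas = 45.2, 55.8
v_alloy = 50.1  # hypothetical computed volume at x = 0.5

# Negative result means the alloy is smaller than the Vegard prediction,
# the tendency the abstract reports for most of these ternaries.
print(round(vegard_deviation(v_alloy, v_gaas, v_inas, 0.5), 2))
```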

  5. Optimum detection for extracting maximum information from symmetric qubit sets

    International Nuclear Information System (INIS)

    Mizuno, Jun; Fujiwara, Mikio; Sasaki, Masahide; Akiba, Makoto; Kawanishi, Tetsuya; Barnett, Stephen M.

    2002-01-01

    We demonstrate a class of optimum detection strategies for extracting the maximum information from sets of equiprobable real symmetric qubit states of a single photon. These optimum strategies have been predicted by Sasaki et al. [Phys. Rev. A 59, 3325 (1999)]. The peculiar aspect is that the detections with at least three outputs suffice for optimum extraction of information regardless of the number of signal elements. The cases of ternary (or trine), quinary, and septenary polarization signals are studied where a standard von Neumann detection (a projection onto a binary orthogonal basis) fails to access the maximum information. Our experiments demonstrate that it is possible with present technologies to attain about 96% of the theoretical limit

  6. Multifield stochastic particle production: beyond a maximum entropy ansatz

    Energy Technology Data Exchange (ETDEWEB)

    Amin, Mustafa A.; Garcia, Marcos A.G.; Xie, Hong-Yi; Wen, Osmond, E-mail: mustafa.a.amin@gmail.com, E-mail: marcos.garcia@rice.edu, E-mail: hxie39@wisc.edu, E-mail: ow4@rice.edu [Physics and Astronomy Department, Rice University, 6100 Main Street, Houston, TX 77005 (United States)

    2017-09-01

    We explore non-adiabatic particle production for N_f coupled scalar fields in a time-dependent background with stochastically varying effective masses, cross-couplings and intervals between interactions. Under the assumption of weak scattering per interaction, we provide a framework for calculating the typical particle production rates after a large number of interactions. After setting up the framework, for analytic tractability, we consider interactions (effective masses and cross-couplings) characterized by series of Dirac-delta functions in time with amplitudes and locations drawn from different distributions. Without assuming that the fields are statistically equivalent, we present closed form results (up to quadratures) for the asymptotic particle production rates for the N_f = 1 and N_f = 2 cases. We also present results for the general N_f > 2 case, but with more restrictive assumptions. We find agreement between our analytic results and direct numerical calculations of the total occupation number of the produced particles, with departures that can be explained in terms of violation of our assumptions. We elucidate the precise connection between the maximum entropy ansatz (MEA) used in Amin and Baumann (2015) and the underlying statistical distribution of the self- and cross-couplings. We provide and justify a simple-to-use (MEA-inspired) expression for the particle production rate, which agrees with our more detailed treatment when the parameters characterizing the effective mass and cross-couplings between fields are all comparable to each other. However, deviations are seen when some parameters differ significantly from others. We show that such deviations become negligible for a broad range of parameters when N_f >> 1.

  7. Quality, precision and accuracy of the maximum No. 40 anemometer

    Energy Technology Data Exchange (ETDEWEB)

    Obermeir, J. [Otech Engineering, Davis, CA (United States); Blittersdorf, D. [NRG Systems Inc., Hinesburg, VT (United States)

    1996-12-31

    This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.
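    A cup anemometer transfer function maps pulse frequency to wind speed linearly, so the percentage differences quoted in this record can be reproduced for any pair of slope/offset choices. A sketch with made-up coefficients, not the actual Maximum No. 40 calibrations:

```python
def wind_speed(freq_hz, slope, offset):
    """Linear anemometer transfer function: speed = slope * frequency + offset."""
    return slope * freq_hz + offset

def percent_difference(a, b):
    """Relative difference of a with respect to b, in percent."""
    return 100.0 * (a - b) / b

# Two hypothetical transfer functions for the same anemometer
# (e.g. wind-tunnel vs open-atmosphere calibration); numbers are illustrative.
v_wind_tunnel = wind_speed(10.0, 0.765, 0.35)  # m/s at 10 Hz
v_field = wind_speed(10.0, 0.800, 0.30)

print(round(percent_difference(v_field, v_wind_tunnel), 2))
```

    Applying two such transfer functions to the same pulse record is how logger-default changes produce the output speed differences described above.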

  8. Solar radiation pressure and deviations from Keplerian orbits

    Energy Technology Data Exchange (ETDEWEB)

    Kezerashvili, Roman Ya. [Physics Department, New York City College of Technology, the City University of New York, Brooklyn, NY 11201 (United States); Vazquez-Poritz, Justin F. [Physics Department, New York City College of Technology, City University of New York, Brooklyn, NY 11201 (United States)], E-mail: jporitz@gmail.com

    2009-05-04

    Newtonian gravity and general relativity give exactly the same expression for the period of an object in circular orbit around a static central mass. However, when the effects of the curvature of spacetime and solar radiation pressure are considered simultaneously for a solar sail propelled satellite, there is a deviation from Kepler's third law. It is shown that solar radiation pressure affects the period of this satellite in two ways: by effectively decreasing the solar mass, thereby increasing the period, and by enhancing the effects of other phenomena, potentially rendering some of them detectable. In particular, we consider deviations from Keplerian orbits due to spacetime curvature, frame dragging from the rotation of the sun, the oblateness of the sun, a possible net electric charge of the sun, and a very small positive cosmological constant.

  9. Differential processing of melodic, rhythmic and simple tone deviations in musicians--an MEG study.

    Science.gov (United States)

    Lappe, Claudia; Lappe, Markus; Pantev, Christo

    2016-01-01

    Rhythm and melody are two basic characteristics of music. Performing musicians have to pay attention to both, and avoid errors in either aspect of their performance. To investigate the neural processes involved in detecting melodic and rhythmic errors from auditory input we tested musicians on both kinds of deviations in a mismatch negativity (MMN) design. We found that MMN responses to a rhythmic deviation occurred at shorter latencies than MMN responses to a melodic deviation. Beamformer source analysis showed that the melodic deviation activated superior temporal, inferior frontal and superior frontal areas whereas the activation pattern of the rhythmic deviation focused more strongly on inferior and superior parietal areas, in addition to superior temporal cortex. Activation in the supplementary motor area occurred for both types of deviations. We also recorded responses to similar pitch and tempo deviations in a simple, non-musical repetitive tone pattern. In this case, there was no latency difference between the MMNs and cortical activation was smaller and mostly limited to auditory cortex. The results suggest that prediction and error detection of musical stimuli in trained musicians involve a broad cortical network and that rhythmic and melodic errors are processed in partially different cortical streams. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. Effect of density deviations of concrete on its attenuation efficiency

    International Nuclear Information System (INIS)

    Szymendera, L.; Wincel, K.; Blociszewski, S.; Kordyasz, D.; Sobolewska, I.

    In this work, the influence of concrete density deviations on shield thickness and on the total dose ratio outside the reactor shield has been considered on the basis of numerical analysis. The possibility of introducing flexible corrections to the design thickness of the shield, without additional shielding calculations, has been noted. It has also been found that in common cases of shield design, where there is no necessity of minimizing the shield thickness, the tendency to minimize the value of this deviation is hard to justify.
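    The sensitivity involved can be sketched with the standard narrow-beam exponential attenuation model, in which the linear attenuation coefficient scales with density. The mass attenuation coefficient and densities below are illustrative assumptions, not the study's values:

```python
import math

def transmission(mu_mass, density, thickness_cm):
    """Narrow-beam photon transmission I/I0 = exp(-mu_m * rho * x)."""
    return math.exp(-mu_mass * density * thickness_cm)

def thickness_for_transmission(mu_mass, density, target):
    """Shield thickness needed to reach a target transmission."""
    return -math.log(target) / (mu_mass * density)

mu_m = 0.06        # cm^2/g, illustrative for ~1 MeV photons in concrete
rho_design = 2.35  # g/cm^3, nominal concrete density
rho_low = 2.25     # g/cm^3, after a -0.1 g/cm^3 density deviation

x = thickness_for_transmission(mu_m, rho_design, 1e-6)
# Same wall, lighter concrete: transmission rises above the design target.
print(transmission(mu_m, rho_low, x) / 1e-6)
```

    With these numbers a 4% density shortfall lets roughly 80% more radiation through the same wall, which is why the design thickness may need a flexible correction.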

  11. Improved iterative oscillation tests for first-order deviating differential equations

    Directory of Open Access Journals (Sweden)

    George E. Chatzarakis

    2018-01-01

    Full Text Available In this paper, improved oscillation conditions are established for the oscillation of all solutions of differential equations with non-monotone deviating arguments and nonnegative coefficients. They lead to a procedure that checks for oscillations by iteratively computing lim sup and lim inf on terms recursively defined from the equation's coefficients and deviating argument. This procedure significantly improves all known oscillation criteria. The results and the improvement achieved over the other known conditions are illustrated by two examples, numerically solved in MATLAB.

  12. The quantitative analysis of Bowen's kale by PIXE using the internal standard

    International Nuclear Information System (INIS)

    Navarrete, V.R.; Izawa, G.; Shiokawa, T.; Kamiya, M.; Morita, S.

    1978-01-01

    The internal standard method was used for the non-destructive quantitative determination of trace elements by PIXE. A uniform distribution of the internal standard element in the Bowen's kale powder sample was obtained by using a homogenization technique. Eleven elements were determined quantitatively; the samples prepared as self-supporting targets had lower relative standard deviations than non-self-supporting targets. (author)
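    Internal-standard quantification rests on a simple ratio: the analyte concentration is the peak-area ratio to the spiked standard, scaled by relative sensitivity and the known spike concentration. A sketch with hypothetical numbers (the spike element, peak areas, and sensitivities are invented for illustration):

```python
def internal_standard_conc(peak_analyte, peak_standard,
                           sens_analyte, sens_standard,
                           conc_standard):
    """Internal-standard quantification: analyte concentration from the
    peak-area ratio, relative sensitivity, and the known spike amount."""
    ratio = (peak_analyte / peak_standard) * (sens_standard / sens_analyte)
    return ratio * conc_standard

# Hypothetical: 500 ug/g spike, equal sensitivities assumed for simplicity.
conc = internal_standard_conc(1200.0, 3000.0, 1.0, 1.0, 500.0)
print(conc)  # ~200 ug/g
```

    Because the standard is homogenized into the same target, flux fluctuations and geometry errors cancel in the ratio, which is the point of the method.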

  13. Applications of phosphorus/silicon standards in quantitative autoradiography

    International Nuclear Information System (INIS)

    Treutler, H.Ch.; Freyer, K.

    1983-01-01

    Quantitative autoradiography requires a careful selection of suitable standard preparations. After several basic comments on the problems of standardization in autoradiography, an example is given of the autoradiographic study of semiconductor materials, and it is used to describe a system of standardization using silicon discs with diffused phosphorus. These standard samples are processed in the same manner as the evaluated samples, i.e., from activation to exposure of the sensitive material, whereby optimal comparability is obtained. All errors of the processing cycle caused by fluctuations of the neutron flux in the reactor, deviations in the time of activation, afterglow, etc. are eliminated by this standardization procedure. Experience obtained with the application of this procedure is presented. (author)

  14. Maximum Interconnectedness and Availability for Directional Airborne Range Extension Networks

    Science.gov (United States)

    2016-08-29

    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS. Thomas... I. INTRODUCTION. Tactical military networks both on land and at sea often have restricted transmission... a standard definition in graph-theoretic and networking literature that is related to, but different from, the metric we consider.

  15. Quality control and standardization of forearm X-ray osteodensitometry

    International Nuclear Information System (INIS)

    Boyanov, M.

    2000-01-01

    Quality control (QC) has an essential practical bearing on the proper functioning of the equipment used for bone density measurement. Special attention is likewise focused on the issue of standardization of the results afforded by different osteodensitometry instruments. The purpose of the study is to assess the QC of a single-energy X-ray forearm osteodensitometry unit, the DTX-100, over a 3-year period, and to compare the data on bone mineral density (BMD) produced by three different devices. Long-term BMD reproducibility in vitro, expressed as a coefficient of variation, amounts to 0.55 per cent. Except for a two-week period, no deviation from the normal functioning of the instrument is documented. Failing to comply with the manufacturer's instructions may compromise QC efficacy. On comparative assessment of the results produced by different osteodensitometers, differences in vivo may reach up to 1.2 standard deviations. Definite regions of special interest, feasible for comparison, are recommended. In conclusion, special emphasis is laid on the necessity of performing thorough QC, standardization of measurement results, and accreditation of a reference osteodensitometry center.
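    The long-term reproducibility figure quoted here is a coefficient of variation computed from repeated phantom scans. A minimal sketch with synthetic measurements (the BMD readings are invented, not DTX-100 data):

```python
import statistics

def coefficient_of_variation(values):
    """CV (%) = sample SD / mean * 100; the usual long-term QC metric
    for repeated scans of the same phantom."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Synthetic daily phantom BMD readings (g/cm^2), illustrative only.
phantom_bmd = [0.498, 0.501, 0.495, 0.503, 0.499, 0.500, 0.497, 0.502]
print(round(coefficient_of_variation(phantom_bmd), 2))
```

    A drift of the phantom values outside the band implied by this CV is the kind of deviation a QC chart would flag.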

  16. Geometry of river networks. I. Scaling, fluctuations, and deviations

    International Nuclear Information System (INIS)

    Dodds, Peter Sheridan; Rothman, Daniel H.

    2001-01-01

    This paper is the first in a series of three papers investigating the detailed geometry of river networks. Branching networks are a universal structure employed in the distribution and collection of material. Large-scale river networks mark an important class of two-dimensional branching networks, being not only of intrinsic interest but also a pervasive natural phenomenon. In the description of river network structure, scaling laws are uniformly observed. Reported values of scaling exponents vary, suggesting that no unique set of scaling exponents exists. To improve this current understanding of scaling in river networks and to provide a fuller description of branching network structure, here we report a theoretical and empirical study of fluctuations about and deviations from scaling. We examine data for continent-scale river networks such as the Mississippi and the Amazon and draw inspiration from a simple model of directed, random networks. We center our investigations on the scaling of the length of a subbasin's dominant stream with its area, a characterization of basin shape known as Hack's law. We generalize this relationship to a joint probability density, and provide observations and explanations of deviations from scaling. We show that fluctuations about scaling are substantial, and grow with system size. We find strong deviations from scaling at small scales which can be explained by the existence of a linear network structure. At intermediate scales, we find slow drifts in exponent values, indicating that scaling is only approximately obeyed and that universality remains indeterminate. At large scales, we observe a breakdown in scaling due to decreasing sample space and correlations with overall basin shape. The extent of approximate scaling is significantly restricted by these deviations, and will not be improved by increases in network resolution
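    Hack's law relates mainstream length to basin area as L ∝ A^h, so the scaling exponent can be estimated by a log-log least-squares fit. A sketch on synthetic data; the exponent 0.57 and the noise level are assumed inputs chosen near commonly reported values, not the paper's measurements:

```python
import numpy as np

def hack_exponent(areas, lengths):
    """Estimate h in L ~ A**h by linear regression in log-log space."""
    h, _log_prefactor = np.polyfit(np.log(areas), np.log(lengths), 1)
    return h

rng = np.random.default_rng(42)
areas = np.logspace(2, 6, 200)  # basin areas (km^2), spanning 4 decades
# Hack scaling with lognormal scatter standing in for the paper's fluctuations.
lengths = 1.4 * areas**0.57 * rng.lognormal(0.0, 0.1, 200)

print(round(hack_exponent(areas, lengths), 2))
```

    Systematic curvature of the residuals at small or large areas, rather than pure scatter, is what the paper identifies as deviations from scaling.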

  17. IMPROVING MANAGEMENT ACCOUNTING AND COST CALCULATION IN DAIRY INDUSTRY USING STANDARD COST METHOD

    Directory of Open Access Journals (Sweden)

    Bogdănoiu Cristiana-Luminiţa

    2013-04-01

    Full Text Available This paper aims to discuss issues related to the improvement of management accounting in the dairy industry by implementing the standard cost method. The methods used today do not provide informational satisfaction to managers for conducting production activities effectively, which is why we examined the standard cost method, as it responds to managers' need to obtain production efficiency in all economic entities. The method allows operative control of how manpower and material resources are consumed, by tracking deviations distinctly, permanently and completely during the activity and not at the end of the reporting period. Successful implementation of the standard cost method depends on the accuracy with which standards are developed; consistently promoting the anticipated calculation of production costs, together with the determination, tracking and control of deviations from them, leads to increased practical value of accounting information and business improvement.
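    The deviations the abstract refers to are the classic standard-cost variances, conventionally split into a price component and a quantity (usage) component. A minimal sketch with made-up dairy figures:

```python
def price_variance(actual_qty, actual_price, standard_price):
    """Unfavorable (>0) when the input cost more per unit than the standard."""
    return actual_qty * (actual_price - standard_price)

def quantity_variance(actual_qty, standard_qty, standard_price):
    """Unfavorable (>0) when more input was used than the standard allows."""
    return (actual_qty - standard_qty) * standard_price

# Hypothetical: raw milk for one batch of cheese (illustrative numbers).
actual_litres, actual_cost = 10_400, 0.52      # litres used, price per litre
standard_litres, standard_cost = 10_000, 0.50  # standard allowance and price

print(price_variance(actual_litres, actual_cost, standard_cost))
print(quantity_variance(actual_litres, standard_litres, standard_cost))
```

    Computing these per batch, rather than at period end, is what gives the method its operative-control character.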

  18. Assessment of freeway work zone safety with improved cellular automata model

    Directory of Open Access Journals (Sweden)

    Guohua Liang

    2014-08-01

    Full Text Available To accurately assess the safety of freeway work zones, this paper investigates the safety of vehicle lane-change maneuvers with an improved cellular automata model. Taking the traffic conflict count and the standard deviation of operating speed as the evaluation indexes, the study evaluates freeway work zone safety. With an improved deceleration probability in the car-following rules and the addition of lane-changing rules under the critical state, lane-changing behavior under the critical state is defined as a conflict count. Through 72 schemes of simulation runs, the possible states of the traffic flow are carefully studied. The results show that under the condition of constant saturation, the traffic conflict count and the vehicle speed standard deviation reach their maximums when the mix rate of heavy vehicles is 40%. Meanwhile, in the case of a constant heavy vehicle mix, the traffic conflict count and the vehicle speed standard deviation reach their maximum values when the saturation rate is 0.75. Integrating all simulation results, traffic safety in freeway work zones is classified into four levels: safe, relatively safe, relatively dangerous, and dangerous.
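    The model family used in such studies is a Nagel-Schreckenberg-type cellular automaton, and the speed standard deviation index can be measured directly from the simulation. A single-lane sketch with illustrative parameters, not the paper's calibrated 72-scheme work-zone setup:

```python
import random

def nasch_step(positions, speeds, road_len, v_max=5, p_slow=0.3):
    """One parallel Nagel-Schreckenberg update on a circular single-lane road."""
    n = len(positions)
    order = sorted(range(n), key=lambda i: positions[i])
    new_speeds = speeds[:]
    for k, i in enumerate(order):
        ahead = order[(k + 1) % n]
        gap = (positions[ahead] - positions[i] - 1) % road_len
        v = min(speeds[i] + 1, v_max, gap)      # accelerate, respect the gap
        if v > 0 and random.random() < p_slow:  # random deceleration
            v -= 1
        new_speeds[i] = v
    for i in range(n):
        positions[i] = (positions[i] + new_speeds[i]) % road_len
        speeds[i] = new_speeds[i]

def speed_sd(speeds):
    """Population SD of instantaneous vehicle speeds, the safety index above."""
    m = sum(speeds) / len(speeds)
    return (sum((v - m) ** 2 for v in speeds) / len(speeds)) ** 0.5

random.seed(1)
road_len, n_cars = 200, 50
positions = random.sample(range(road_len), n_cars)
speeds = [0] * n_cars
for _ in range(500):  # let the transient die out
    nasch_step(positions, speeds, road_len)
print(round(speed_sd(speeds), 2))
```

    At this density jams form, so the speed SD is well above zero; sweeping saturation and the heavy-vehicle mix and recording the SD plus conflict counts mirrors the evaluation described above.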

  19. The standard deviation of extracellular water/intracellular water is associated with all-cause mortality and technique failure in peritoneal dialysis patients.

    Science.gov (United States)

    Tian, Jun-Ping; Wang, Hong; Du, Feng-He; Wang, Tao

    2016-09-01

    The mortality rate of peritoneal dialysis (PD) patients is still high, and the predictive factors for PD patient mortality remain to be determined. This study aimed to explore the relationship between the standard deviation (SD) of extracellular water/intracellular water (E/I) and all-cause mortality and technique failure in continuous ambulatory PD (CAPD) patients. All 152 patients came from the PD Center between January 1st 2006 and December 31st 2007. Clinical data and E/I ratios from at least five visits, defined by bioelectrical impedance analysis, were collected. The patients were followed up till December 31st 2010. The primary outcomes were death from any cause and technique failure. Kaplan-Meier analysis and Cox proportional hazards models were used to identify risk factors for mortality and technique failure in CAPD patients. All patients were followed up for 59.6 ± 23.0 months. The patients were divided into two groups according to their SD of E/I values: a lower SD of E/I group (≤0.126) and a higher SD of E/I group (>0.126). The patients with higher SD of E/I showed higher all-cause mortality (log-rank χ² = 10.719, P = 0.001) and technique failure (log-rank χ² = 9.724, P = 0.002) than those with lower SD of E/I. Cox regression analysis found that the SD of E/I independently predicted all-cause mortality (HR 3.551, 95% CI 1.442-8.746, P = 0.006) and technique failure (HR 2.487, 95% CI 1.093-5.659, P = 0.030) in CAPD patients after adjustment for confounders, except when sensitive C-reactive protein was added into the model. The SD of E/I was a strong independent predictor of all-cause mortality and technique failure in CAPD patients.
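    The predictor in this record is simply the standard deviation of a patient's serial E/I measurements, dichotomized at 0.126. A minimal sketch; the five visit values are invented for illustration:

```python
import statistics

def ei_risk_group(ei_ratios, cutoff=0.126):
    """Classify a patient by the SD of serial E/I measurements,
    using the study's 0.126 cutoff by default."""
    sd = statistics.stdev(ei_ratios)
    group = "higher SD of E/I" if sd > cutoff else "lower SD of E/I"
    return sd, group

# Hypothetical five-visit E/I ratios for one CAPD patient.
sd, group = ei_risk_group([0.95, 1.10, 0.88, 1.22, 0.90])
print(round(sd, 3), group)
```

    The study's finding is that patients landing in the higher-SD group, i.e. with unstable fluid status over visits, had worse outcomes.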

  20. A Study of the Standard Model Higgs Boson Decaying to a Pair of Tau Leptons with the CMS Detector at the LHC

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00398807; Smith, Wesley H.; Herndon, Matthew F.

    This thesis presents a 5.5 standard deviation observation of the Higgs boson decaying to fermions using the data collected at the LHC at 13 TeV center-of-mass energy. The studied dataset corresponds to an integrated luminosity of 35.9 fb⁻¹. The best-fit signal strength for the H→ττ process is measured to be μ = 1.24 (+0.29, −0.27), consistent with standard model predictions. Unique event categories are used targeting the leading Higgs boson production processes: gluon fusion, vector boson fusion, and associated production. This provides signal regions sensitive to Higgs boson couplings to both fermions and vector bosons. These two Higgs boson couplings are measured and are consistent with standard model predictions within one standard deviation. This 5.5 standard deviation observation of the H→ττ process and the consistency of the Higgs boson couplings with the standard model provide confirmation of the Higgs boson Yukawa couplings to fermions. This is evidence that the Higgs field provides mass f...

  1. Prediction of Pressing Quality for Press-Fit Assembly Based on Press-Fit Curve and Maximum Press-Mounting Force

    Directory of Open Access Journals (Sweden)

    Bo You

    2015-01-01

    Full Text Available In order to predict pressing quality of precision press-fit assembly, press-fit curves and maximum press-mounting force of press-fit assemblies were investigated by finite element analysis (FEA. The analysis was based on a 3D Solidworks model using the real dimensions of the microparts and the subsequent FEA model that was built using ANSYS Workbench. The press-fit process could thus be simulated on the basis of static structure analysis. To verify the FEA results, experiments were carried out using a press-mounting apparatus. The results show that the press-fit curves obtained by FEA agree closely with the curves obtained using the experimental method. In addition, the maximum press-mounting force calculated by FEA agrees with that obtained by the experimental method, with the maximum deviation being 4.6%, a value that can be tolerated. The comparison shows that the press-fit curve and maximum press-mounting force calculated by FEA can be used for predicting the pressing quality during precision press-fit assembly.

  2. Control of deviations and prediction of surface roughness from micro machining of THz waveguides using acoustic emission signals

    Science.gov (United States)

    Griffin, James M.; Diaz, Fernanda; Geerling, Edgar; Clasing, Matias; Ponce, Vicente; Taylor, Chris; Turner, Sam; Michael, Ernest A.; Patricio Mena, F.; Bronfman, Leonardo

    2017-02-01

    By using acoustic emission (AE) it is possible to control deviations and surface quality during micro milling operations. The method of micro milling is used to manufacture a submillimetre waveguide where micro machining is employed to achieve the required superior finish and geometrical tolerances. Submillimetre waveguide technology is used in deep space signal retrieval where the highest detection efficiencies are needed; therefore every possible signal loss in the receiver has to be avoided and stringent tolerances achieved. With a sub-standard surface finish the signals travelling along the waveguides dissipate away faster than with perfect surfaces, where the residual roughness becomes comparable with the electromagnetic skin depth. Therefore, the higher the radio frequency the more critical this becomes. The method of time-frequency analysis (STFT) is used to transfer raw AE into more meaningful salient signal features (SF). This information was then correlated against the measured geometrical deviations and the onset of catastrophic tool wear. Such deviations can be offset from different AE signals (different deviations from subsequent tests) and fed back for a final spring cut ensuring the geometrical accuracies are met. Geometrical differences can impact on the required transfer of AE signals (change in cut-off frequencies and diminished SNR at the interface) and therefore errors have to be minimised to within 1 μm. Rules based on both Classification and Regression Trees (CART) and Neural Networks (NN) were used to implement a simulation displaying how such a control regime could be used as a real-time controller, be it corrective measures (via spring cuts) over several initial machining passes or, with a micron cut, introducing a level-plane measure allowing setup corrections (similar to a spirit level).

  3. Status of conversion of DOE standards to non-Government standards

    Energy Technology Data Exchange (ETDEWEB)

    Moseley, H.L.

    1992-07-01

    One major goal of the DOE Technical Standards Program is to convert existing DOE standards into non-Government standards (NGS's) where possible. This means that a DOE standard may form the basis for a standards-writing committee to produce a standard in the same subject area using the non-Government standards consensus process. This report is a summary of the activities that have evolved to effect conversion of DOE standards to NGSs, and the status of current conversion activities. In some cases, all requirements in a DOE standard will not be incorporated into the published non-Government standard because these requirements may be considered too restrictive or too specific for broader application by private industry. If requirements in a DOE standard are not incorporated in a non-Government standard and the requirements are considered necessary for DOE program applications, the DOE standard will be revised and issued as a supplement to the non-Government standard. The DOE standard will contain only those necessary requirements not reflected by the non-Government standard. Therefore, while complete conversion of DOE standards may not always be realized, the Department's technical standards policy as stated in Order 1300.2A has been fully supported in attempting to make maximum use of the non-Government standard.

  5. Relationship and significance of gait deviations associated with limb length discrepancy: A systematic review.

    Science.gov (United States)

    Khamis, Sam; Carmeli, Eli

    2017-09-01

    Controversy still exists as to the clinical significance of leg length discrepancy (LLD) in spite of the fact that further evidence has been emerging regarding the relationship between several clinical conditions and LLD. The objectives of our study were to review the available research with regard to LLD as a cause of clinically significant gait deviations, to determine if there is a relationship between the magnitude of LLD and the presence of gait deviations, and to identify the most common gait deviations associated with LLD. In line with the PRISMA guidelines, a literature search was carried out throughout the Medline, CINAHL and EMBASE databases. Twelve articles met the predetermined inclusion criteria and were included in the review. Quality assessment using the Methodological Index for Non-Randomized Studies (MINORS) scale was completed for all included studies. Two main methodologies were found: 4 studies evaluating gait asymmetry in patients or healthy participants with anatomic LLD, and 8 studies evaluating gait deviations while simulating LLD by employing artificial lifts of 1-5 cm on healthy subjects. A significant relationship was found between anatomic LLD and gait deviation. Evidence suggests that gait deviations may occur with discrepancies of >1 cm, with greater impact seen as the discrepancy increases. Compensatory strategies were found to occur in both the shorter and longer limb, throughout the lower limb. As the discrepancy increases, more compensatory strategies occur. Sagittal plane deviations seem to be the most effective deviations, although frontal plane compensations also occur in the pelvis, hip and foot. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. FEMORAL NECK FRACTURES GARDEN I AND II: EVALUATION OF THE DEVIATION IN LATERAL VIEW.

    Science.gov (United States)

    Leonhardt, Natália Zalc; Melo, Lucas da Ponte; Nordon, David Gonçalves; Silva, Fernando Brandão de Andrade E; Kojima, Kodi Edson; Silva, Jorge Santos

    2017-01-01

    To evaluate the rate of deviation in the lateral radiographic view in patients with femoral neck fractures classified as non-deviated in the anteroposterior view (Garden I and II). Nineteen selected patients with femoral neck fractures classified as Garden I and II were retrospectively evaluated, estimating the degree of deviation in the lateral view. Fifteen cases (79%) presented deviations in the lateral view, with a mean of 18.6 degrees (±15.5). Most fractures of the femoral neck classified as Garden I and II present some degree of posterior deviation in the lateral X-ray view. Level of Evidence III, Retrospective Comparative Study.

  7. Urinary growth hormone level and insulin-like growth factor-1 standard deviation score (IGF-SDS) can discriminate adult patients with severe growth hormone deficiency.

    Science.gov (United States)

    Hirohata, Toshio; Saito, Nobuhito; Takano, Koji; Yamada, So; Son, Jae-Hyun; Yamada, Shoko M; Nakaguchi, Hiroshi; Hoya, Katsumi; Murakami, Mineko; Mizutani, Akiko; Okinaga, Hiroko; Matsuno, Akira

    2013-01-01

    Adult growth hormone (GH) deficiency (AGHD) in Japan is diagnosed based on peak GH concentrations during GH provocative tests such as the GHRP-2 stimulation test. In this study, we aimed to evaluate the ability of serum insulin-like growth factor-1 (sIGF-1) and urinary GH (uGH) at the time of awakening to diagnose AGHD. Fifty-nine patients with pituitary disease (32 men and 27 women; age 20-85 y (57.5 ± 15.5, mean ± SD)) underwent GHRP-2 stimulation and sIGF-1 testing. Thirty-six and 23 patients were diagnosed with and without severe AGHD, respectively, based on peak GH response; serum IGF-1 was expressed as a standard deviation score (IGF-1 SDS) based on age and sex. We determined whether uGH levels in urine samples from 42 of the 59 patients at awakening were above or below the sensitivity limit. We evaluated IGF-1 SDS and uGH levels in a control group of 15 healthy volunteers. Values for IGF-1 SDS were significantly lower in patients with than without severe AGHD (-2.07 ± 1.77 vs. -0.03 ± 0.92, mean ± SD) at a cutoff of -1.4. IGF-1 SDS discriminated AGHD more effectively in patients aged ≤60 years. The χ² test revealed a statistical relationship between uGH and AGHD (test statistic: 7.0104 ≥ χ²(1; 0.01) = 6.6349). When IGF-1 SDS is < -1.4 or uGH is below the sensitivity limit, AGHD can be detected with high sensitivity.
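An IGF-1 SDS is a z-score against age- and sex-matched reference values. A minimal sketch of the score and of the screening rule quoted in the abstract (SDS < -1.4 or uGH below the sensitivity limit); the reference mean and SD in the test values are placeholders, not actual normative data:

```python
def igf1_sds(value, ref_mean, ref_sd):
    """IGF-1 standard deviation score relative to age- and sex-matched norms."""
    return (value - ref_mean) / ref_sd

def flags_aghd(sds, ugh_below_limit, sds_cutoff=-1.4):
    """Screening rule from the abstract: SDS < -1.4 or uGH below the sensitivity limit."""
    return sds < sds_cutoff or ugh_below_limit
```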

  8. Surface metal standards produced by ion implantation through a removable layer

    International Nuclear Information System (INIS)

    Schueler, B.W.; Granger, C.N.; McCaig, L.; McKinley, J.M.; Metz, J.; Mowat, I.; Reich, D.F.; Smith, S.; Stevie, F.A.; Yang, M.H.

    2003-01-01

    Surface metal concentration standards were produced by ion implantation and investigated for their suitability to calibrate surface metal measurements by secondary ion mass spectrometry (SIMS). Single isotope implants were made through a 100 nm oxide layer on silicon. The implant energies were chosen to place the peak of the implanted species at a depth of 100 nm. Subsequent removal of the oxide layer was used to expose the implant peak and to produce controlled surface metal concentrations. Surface metal concentration measurements by time-of-flight SIMS (TOF-SIMS) with an analysis depth of 1 nm agreed with the expected surface concentrations of the implant standards with a mean relative standard deviation of 20%. Since the TOF-SIMS relative sensitivity factors (RSFs) were originally derived from surface metal measurements of surface contaminated silicon wafers, the agreement implies that the implant standards can be used to measure RSF values. The homogeneity of the surface metal concentration was typically <10%. The dopant dose remaining in silicon after oxide removal was measured using the surface-SIMS protocol. The measured implant dose agreed with the expected dose with a mean relative standard deviation of 25%.

  9. Method for solving fully fuzzy linear programming problems using deviation degree measure

    Institute of Scientific and Technical Information of China (English)

    Haifang Cheng; Weilai Huang; Jianhu Cai

    2013-01-01

    A new fully fuzzy linear programming (FFLP) problem with fuzzy equality constraints is discussed. Using deviation degree measures, the FFLP problem is transformed into a crisp δ-parametric linear programming (LP) problem. Given the value of the deviation degree in each constraint, the δ-fuzzy optimal solution of the FFLP problem can be obtained by solving this LP problem. An algorithm is also proposed to find a balance-fuzzy optimal solution between two goals in conflict: to improve the values of the objective function and to decrease the values of the deviation degrees. A numerical example is solved to illustrate the proposed method.

  10. Performance Evaluation of Five Turbidity Sensors in Three Primary Standards

    Science.gov (United States)

    Snazelle, Teri T.

    2015-10-28

    Open-File Report 2015-1172 is temporarily unavailable. Five commercially available turbidity sensors were evaluated by the U.S. Geological Survey Hydrologic Instrumentation Facility (HIF) for accuracy and precision in three types of turbidity standards: formazin, StablCal, and AMCO Clear (AMCO–AEPA). The U.S. Environmental Protection Agency (EPA) recognizes all three turbidity standards as primary standards, meaning they are acceptable for reporting purposes. The Forrest Technology Systems (FTS) DTS-12, the Hach SOLITAX sc, the Xylem EXO turbidity sensor, the Yellow Springs Instrument (YSI) 6136 turbidity sensor, and the Hydrolab Series 5 self-cleaning turbidity sensor were evaluated to determine if turbidity measurements in the three primary standards are comparable to each other, and to ascertain if the primary standards are truly interchangeable. A formazin 4000 nephelometric turbidity unit (NTU) stock was purchased and dilutions of 40, 100, 400, 800, and 1000 NTU were made fresh the day of testing. StablCal and AMCO Clear (for Hach 2100N) standards with corresponding concentrations were also purchased for the evaluation. Sensor performance was not evaluated in turbidity levels less than 40 NTU due to the unavailability of polymer-bead turbidity standards rated for general use. The percent error was calculated as the true (not absolute) difference between the measured turbidity and the standard value, divided by the standard value. The sensors that demonstrated the best overall performance in the evaluation were the Hach SOLITAX and the Hydrolab Series 5 turbidity sensor when the operating range (0.001–4000 NTU for the SOLITAX and 0.1–3000 NTU for the Hydrolab) was considered in addition to sensor accuracy and precision. The average percent error in the three standards was 3.80 percent for the SOLITAX and -4.46 percent for the Hydrolab. The DTS-12 also demonstrated good accuracy with an average percent error of 2.02 percent and a maximum relative standard
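The report's error metric is the signed (not absolute) percent error against the standard value, averaged over readings. A minimal sketch with hypothetical readings:

```python
def percent_error(measured, standard):
    """Signed percent error: (measured - standard) / standard * 100."""
    return (measured - standard) / standard * 100.0

def mean_percent_error(readings):
    """Average signed percent error over (measured, standard) pairs."""
    errors = [percent_error(m, s) for m, s in readings]
    return sum(errors) / len(errors)
```

Because the sign is kept, over- and under-readings can cancel in the average, which is why the report quotes precision (spread) alongside accuracy.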

  11. A user-operated audiometry method based on the maximum likelihood principle and the two-alternative forced-choice paradigm

    DEFF Research Database (Denmark)

    Schmidt, Jesper Hvass; Brandt, Christian; Pedersen, Ellen Raben

    2014-01-01

    with standard deviation of differences from 3.9 dB to 5.2 dB in the frequency range of 250-8000 Hz. User-operated 2AFC audiometry gave thresholds 1-2 dB lower at most frequencies compared to traditional audiometry. Conclusions: User-operated 2AFC audiometry does not require specific operating skills...

  12. Comparison of direct numerical simulation databases of turbulent channel flow at Re = 180

    NARCIS (Netherlands)

    Vreman, A.W.; Kuerten, J.G.M.

    2014-01-01

    Direct numerical simulation (DNS) databases are compared to assess the accuracy and reproducibility of standard and non-standard turbulence statistics of incompressible plane channel flow at Re_τ = 180. Two fundamentally different DNS codes are shown to produce maximum relative deviations below 0.2%
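A cross-database comparison of this kind reduces to a maximum relative deviation between two statistics profiles. A sketch under the assumption that pointwise deviations are normalized by the peak magnitude of the reference profile (the paper's exact normalization may differ):

```python
def max_relative_deviation(profile_a, profile_b):
    """Maximum pointwise deviation between two statistics profiles,
    relative to the peak magnitude of the first (reference) profile."""
    peak = max(abs(x) for x in profile_a)
    return max(abs(x - y) for x, y in zip(profile_a, profile_b)) / peak
```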

  13. Hazards and preventive measures of well deviation in well construction of in-situ leaching

    International Nuclear Information System (INIS)

    Zou Wenjie; Chen Shihe

    2006-01-01

    Whether the in-situ leaching method is successful depends to a great extent on the quality of borehole engineering. There are many factors that affect this quality, and well deviation is one of the notable problems. The hazards and causes of well deviation are analyzed, and preventive measures and methods for rectifying the deviation are put forward. (authors)

  14. Graphite calorimeter, the primary standard of absorbed dose at BNM-LNHB

    International Nuclear Information System (INIS)

    Daures, J.; Ostrowsky, A.; Chauvenet, B.

    2002-01-01

    The graphite calorimeter is the standard for absorbed dose to water at BNM-LNHB. The transfer from absorbed dose to graphite to absorbed dose to water is then performed by means of chemical dosimeters and ionisation chamber measurements. Therefore the quality of graphite calorimeter measurements is essential. The present graphite calorimeter is described. The characteristics of this calorimeter are pointed out. Special attention is given to the thermal feedback of the core, which is the main difference with the Domen-type calorimeter. The repeatability and reproducibility of the mean absorbed dose in the calorimeter core are presented in detail. As an example, individual measurements in the 20 MV photon beam from our Saturne 43 linac are given. The y-axis quantity is the mean absorbed dose in the core divided by the reference ionisation chamber charge. Both are normalised to the monitor ionisation chamber charge. The standard deviation (of the distribution itself) is 0.12% for the first set of measurements performed in 1999. In 2002, for each different series, the standard deviation is 0.03%. The improvement on the 2002 standard deviation is mainly due to the change of the ionisation chamber used for the beam monitoring of the linac. Some benefit also comes from changes to the thermal control and measuring systems (nanovoltmeters, Wheatstone bridges, power supplies, determination of the measuring bridge sensitivity (V/Ω)). The maximum difference between the means of the three series is 0.08%. This difference is due to the variation not only of the calorimetric measurements but also of the reference ionisation chamber response, of the position of the assembly and of the monitoring of the beam. The stability of the linac (electron energy, photon beam shape) has to be very good too in order to obtain this global performance. The correction factors necessary to determine the absorbed dose to graphite at the reference point in a homogeneous phantom from the

  15. Consensus values for NIST biological and environmental Standard Reference Materials

    International Nuclear Information System (INIS)

    Roelandts, I.; Gladney, E.S.

    1998-01-01

    The National Institute of Standards and Technology (NIST, formerly the National Bureau of Standards or NBS) has produced numerous Standard Reference Materials (SRM) for use in biological and environmental analytical chemistry. The value listed on the "NIST Certificate of Analysis" is the present best estimate of the "true" concentration of that element and is not expected to deviate from that concentration by more than the stated uncertainty. However, NIST does not certify the elemental concentration of every constituent, and the number of elements reported in the NIST programs tends to be limited. Numerous analysts have published concentration data on these reference materials. Major journals in analytical chemistry, books, proceedings and technical reports have been surveyed to collect these available literature values. A standard statistical approach has been employed to evaluate the compiled data. Our methodology has been developed in a series of previous papers. Some subjective criteria are first used to reject aberrant data. Following these eliminations, an initial arithmetic mean and standard deviation (S.D.) are computed from the remaining data for each element. All data now outside two S.D. from the initial mean are dropped and a second mean and S.D. recalculated. These final means and associated S.D. are reported as "consensus values" in our tables. (orig.)
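The two-pass procedure described (initial mean and SD, rejection of values outside two SD, then recomputation) can be sketched directly; the subjective pre-screening step is omitted here:

```python
import statistics

def consensus_value(results):
    """Two-pass consensus: compute the mean and sample SD, drop values lying
    more than two SD from the initial mean, then report the mean and SD of
    the surviving data."""
    m = statistics.mean(results)
    s = statistics.stdev(results)
    kept = [x for x in results if abs(x - m) <= 2 * s]
    return statistics.mean(kept), statistics.stdev(kept)
```

For example, on hypothetical literature values with one aberrant result, the outlier is rejected and the consensus mean settles near the cluster of agreeing determinations.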

  16. Search for the Standard Model Higgs boson produced in the decay ...

    Indian Academy of Sciences (India)

    2012-10-06

    √s = 7 TeV. No evidence is found for a significant deviation from Standard Model expectations anywhere in the ZZ mass range considered in this analysis. An upper limit at 95% CL is placed on the product of the cross-section and decay branching ratio for the Higgs boson decaying with Standard Model-like ...

  17. g-2 and α(M_Z²): Status of the Standard Model predictions

    International Nuclear Information System (INIS)

    Teubner, T.; Hagiwara, K.; Liao, R.; Martin, A.D.; Nomura, D.

    2012-01-01

    We review the status of the Standard Model prediction of the anomalous magnetic moment of the muon and the electromagnetic coupling at the scale M_Z. Recent progress in the evaluation of the hadronic contributions has consolidated the prediction of both quantities. For g-2, the discrepancy between the measurement from BNL and the Standard Model prediction stands at a level of more than three standard deviations.
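A discrepancy quoted "in standard deviations" is the difference between measurement and prediction divided by their combined uncertainty. A sketch with purely illustrative numbers (not the actual g-2 values):

```python
import math

def discrepancy_in_sd(measured, sigma_meas, predicted, sigma_pred):
    """Separation between a measurement and a prediction in combined standard
    deviations, with the two uncertainties added in quadrature."""
    return abs(measured - predicted) / math.hypot(sigma_meas, sigma_pred)
```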

  18. Analysis to determine the maximum dimensions of flexible apertures in sensored security netting products.

    Energy Technology Data Exchange (ETDEWEB)

    Murton, Mark; Bouchier, Francis A.; vanDongen, Dale T.; Mack, Thomas Kimball; Cutler, Robert P; Ross, Michael P.

    2013-08-01

    Although technological advances provide new capabilities to increase the robustness of security systems, they also potentially introduce new vulnerabilities. New capability sometimes requires new performance requirements. This paper outlines an approach to establishing a key performance requirement for an emerging intrusion detection sensor: the sensored net. Throughout the security industry, the commonly adopted standard for maximum opening size through barriers is a requirement based on square inches, typically 96 square inches. Unlike a standard rigid opening, the dimensions of a flexible aperture are not fixed, but variable and conformable. It is demonstrably simple for a human intruder to move through a 96-square-inch opening that is conformable to the human body. The longstanding 96-square-inch requirement itself, though firmly embedded in policy and best practice, lacks a documented empirical basis. This analysis concluded that the traditional 96-square-inch standard for openings is insufficient for flexible openings that are conformable to the human body. Instead, a circumference standard is recommended for these newer types of sensored barriers. The recommended maximum circumference for a flexible opening should be no more than 26 inches, as measured on the inside of the netting material.
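The case for a circumference rule can be illustrated numerically: openings of equal area can have very different perimeters, and a 26-inch circumference bounds the enclosable area well below 96 square inches. A sketch (dimensions illustrative, not from the report):

```python
import math

def rect_perimeter(width_in, height_in):
    """Perimeter of a rectangular opening, in inches."""
    return 2.0 * (width_in + height_in)

def circle_area_from_circumference(circ_in):
    """Largest area any opening of a given circumference can enclose (a circle)."""
    return circ_in * circ_in / (4.0 * math.pi)

# A 12 x 8 in opening and a 24 x 4 in slot both have 96 in^2 of area,
# yet their perimeters are 40 in and 56 in respectively, so an area-only
# rule says nothing about how a flexible net can deform. A 26 in
# circumference encloses at most about 53.8 in^2, well under 96 in^2.
```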

  19. The calculation of maximum permissible exposure levels for laser radiation

    International Nuclear Information System (INIS)

    Tozer, B.A.

    1979-01-01

    The maximum permissible exposure data of the revised standard BS 4803 are presented as a set of decision charts which ensure that the user automatically takes into account such details as pulse length and pulse pattern, limiting angular subtense, combinations of multiple wavelength and/or multiple pulse lengths, etc. The two decision charts given are for the calculation of radiation hazards to skin and eye respectively. (author)

  20. Halogens determination in vegetable NBS standard reference materials

    International Nuclear Information System (INIS)

    Stella, R.; Genova, N.; Di Casa, M.

    1977-01-01

    Levels of all four halogens in the Orchard Leaves, Pine Needles and Tomato Leaves NBS reference standards were determined. For fluorine, a spiking isotope dilution method was used, followed by HF absorption on glass beads. Instrumental nuclear activation analysis was adopted for chlorine and bromine determination. Radiochemical separation by a distillation procedure was necessary for iodine nuclear activation analysis after irradiation. Activation parameters of Cl, Br and I are reported. Results of five determinations for each halogen in the Orchard Leaves, Pine Needles and Tomato Leaves NBS Standard Materials and standard deviations of the mean are reported. (T.I.)