Smith, Gary
2015-01-01
Did you know that having a messy room will make you racist? Or that human beings possess the ability to postpone death until after important ceremonial occasions? Or that people live three to five years longer if they have positive initials, like ACE? All of these 'facts' have been argued with a straight face by researchers and backed up with reams of data and convincing statistics. As Nobel Prize-winning economist Ronald Coase once cynically observed, 'If you torture data long enough, it will confess.' Lying with statistics is a time-honoured con. In Standard Deviations, ec...
Standard Deviation for Small Samples
Joarder, Anwar H.; Latif, Raja M.
2006-01-01
Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…
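A sketch of the kind of representation the abstract describes, using the standard pairwise-difference identity for the n = 3 sample variance (the paper's exact formulas are not given in the abstract, so this identity is an assumption):

```python
from statistics import variance

def variance3(a, b, c):
    """Sample variance for n = 3 without computing the mean.

    Uses the identity sum_{i<j} (x_i - x_j)^2 = n * sum_i (x_i - mean)^2,
    which for n = 3 gives s^2 = [(a-b)^2 + (b-c)^2 + (a-c)^2] / 6.
    """
    return ((a - b) ** 2 + (b - c) ** 2 + (a - c) ** 2) / 6

# With integer observations the numerator stays an integer, so the
# calculation is easy to do without a calculator: 4 + 25 + 49 = 78; 78/6 = 13.
print(variance3(2, 4, 9))    # 13.0
print(variance([2, 4, 9]))   # 13.0 (agrees with the textbook formula)
```

The pairwise form also makes a range-based upper bound immediate: each squared difference is at most the squared range R², so for n = 3, s² ≤ 3R²/6 and s ≤ R/√2.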
Beat the Deviations in Estimating Maximum Power of Thermoelectric Modules
Gao, Junling; Chen, Min
2013-01-01
Under a certain temperature difference, the maximum power of a thermoelectric module can be estimated by the open-circuit voltage and the short-circuit current. In practical measurement, there exist two switch modes, either from open to short or from short to open, but the two modes can give different estimations of the maximum power. Using TEG-127-2.8-3.5-250 and TEG-127-1.4-1.6-250 as two examples, the difference is about 10%, leading to some deviations with the temperature change. This paper analyzes such differences by means of a nonlinear numerical model of thermoelectricity, and finds out...
A Maximum Likelihood Approach to Least Absolute Deviation Regression
Yinbo Li
2004-09-01
Least absolute deviation (LAD) regression is an important tool used in numerous applications throughout science and engineering, mainly due to the intrinsic robustness of LAD. In this paper, we show that the optimization needed to solve the LAD regression problem can be viewed as a sequence of maximum likelihood estimates (MLEs) of location. The derived algorithm reduces to an iterative procedure where a simple coordinate transformation is applied during each iteration to direct the optimization along edge lines of the cost surface, followed by an MLE of location executed by a weighted median operation. Requiring only weighted medians, the new algorithm can be easily modularized for hardware implementation, as opposed to most other existing LAD methods, which require complicated operations such as matrix entry manipulations. One exception is Wesolowsky's direct descent algorithm, which, among the top algorithms, is also based on weighted median operations. Simulation shows that the new algorithm is superior in speed to Wesolowsky's algorithm, which is also simple in structure. The new algorithm provides a better trade-off between convergence speed and implementation complexity.
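The weighted median operation at the core of the algorithm can be sketched as follows (the full coordinate-transformation iteration is not reproduced here, and the function name is illustrative, not from the paper):

```python
def weighted_median(values, weights):
    """Return the weighted median: the value m minimizing
    sum_i weights[i] * |m - values[i]|, i.e. the MLE of location
    under Laplacian errors.  Found by sorting by value and
    accumulating weights until half of the total weight is reached."""
    pairs = sorted(zip(values, weights))
    half = sum(weights) / 2
    acc = 0.0
    for value, weight in pairs:
        acc += weight
        if acc >= half:
            return value

# With equal weights this is the ordinary median:
print(weighted_median([3, 1, 7, 2, 9], [1, 1, 1, 1, 1]))  # 3
# A dominant weight pulls the solution to its point:
print(weighted_median([0, 10], [9, 1]))                   # 0
```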
Comparison of estimators of standard deviation for hydrologic time series.
Tasker, Gary D.; Gilroy, E.J.
1982-01-01
Unbiasing factors as a function of serial correlation, rho, and sample size, n for the sample standard deviation of a lag one autoregressive model were generated by random number simulation. Monte Carlo experiments were used to compare the performance of several alternative methods for estimating the standard deviation sigma of a lag one autoregressive model in terms of bias, root mean square error, probability of underestimation, and expected opportunity design loss. -from Authors
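The kind of Monte Carlo experiment described can be sketched in a few lines (a simplified illustration, not the authors' code; the series is generated in its stationary distribution with true sigma = 1):

```python
import random
import statistics

def ar1_series(n, rho, seed):
    """Lag-one autoregressive series x_t = rho * x_{t-1} + e_t, with the
    innovation variance set to 1 - rho^2 so the marginal variance is 1."""
    rng = random.Random(seed)
    innov_sd = (1.0 - rho ** 2) ** 0.5
    x = rng.gauss(0.0, 1.0)          # start in the stationary distribution
    out = []
    for _ in range(n):
        out.append(x)
        x = rho * x + rng.gauss(0.0, innov_sd)
    return out

def mean_sample_sd(n, rho, reps=2000):
    """Monte Carlo estimate of E[s] when the true sigma is 1."""
    return sum(statistics.stdev(ar1_series(n, rho, seed=r))
               for r in range(reps)) / reps

# The usual estimator is biased low even for iid data (rho = 0),
# and positive serial correlation makes the underestimation worse:
print(mean_sample_sd(20, 0.0))   # slightly below 1
print(mean_sample_sd(20, 0.6))   # noticeably further below 1
```

Unbiasing factors of the kind the abstract mentions are the reciprocals of such simulated E[s]/sigma ratios, tabulated over rho and n.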
The missing ingredient in effective-medium theories: Standard deviations
Bohren, Craig F; Lakhtakia, Akhlesh
2012-01-01
Effective-medium theories for electromagnetic constitutive parameters of particulate composite materials are theories of averages. Standard deviations are absent because of the lack of rigorous theories. But ensemble averages and standard deviations can be calculated from a rigorous theory of reflection by planar multilayers. Average reflectivities at all angles of incidence and two orthogonal polarization states for a multilayer composed of two kinds of electrically thin layers agree well with reflectivities for a single layer with the same overall thickness and a volume-weighted average of the relative permittivities of these two components. But the relative standard deviation can be appreciable depending on the angle of incidence and the polarization state of the incident illumination, and increases with increasing difference between the constitutive parameters of the two layers. This suggests that average constitutive parameters obtained from effective-medium theories do not have uniform validity for all ...
史海芳; 李树有; 姬永刚
2008-01-01
For two normal populations with unknown means μi and variances σ²i > 0, i = 1, 2, assume that there is a semi-order restriction between the ratios of means to standard deviations and that the sample sizes of the two normal populations are different. A procedure for obtaining the maximum likelihood estimators of the μi's and σi's under the semi-order restrictions is proposed. For the i = 3 case, some connected results and simulations are given.
Estimation of amplitude and standard deviation of noisy sinusoidal signals
Juarez-Salazar, Rigoberto; Diaz-Ramirez, Victor H.
2017-01-01
A simple method to estimate the amplitude and standard deviation of sinusoidal signals corrupted with additive Gaussian noise is proposed. For this, a two-parameter model is developed by sorting the samples of the signal. This reduced parametric model allows robust parameter estimation, even if the phase function of the sinusoid is nonlinear, discontinuous, and unknown. The functionality and performance of the proposed method are analyzed by several computer simulations; the used GNU Octave program is provided. The proposed method can be useful for unbiased envelope estimation in fringe pattern normalization among other potential applications.
New g-2 measurement deviates further from Standard Model
2004-01-01
"The latest result from an international collaboration of scientists investigating how the spin of a muon is affected as this type of subatomic particle moves through a magnetic field deviates further than previous measurements from theoretical predictions" (1 page).
Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates
Laurence, T; Chromy, B
2009-11-10
Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the use of the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice - which requires a large number of events. It has been well-known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper providing extensive characterization of these biases in exponential fitting is given. The more appropriate measure based on the maximum likelihood estimator (MLE
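The Poisson objective at the heart of the method can be illustrated without the Levenberg-Marquardt machinery; a coarse grid search stands in for the minimizer here (an illustration of the estimator, not the paper's algorithm, with simulated data):

```python
import numpy as np

def poisson_nll(amp, tau, t, counts):
    """Poisson negative log-likelihood (up to a constant) for bin counts
    under an exponential-decay model amp * exp(-t / tau) -- the MLE
    objective that replaces the least-squares sum of squared residuals."""
    model = amp * np.exp(-t / tau)
    return np.sum(model - counts * np.log(model))

# Simulated fluorescence-decay histogram with Poisson counting noise:
rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 50)
counts = rng.poisson(100.0 * np.exp(-t / 1.5))

# Coarse grid search over (amplitude, lifetime); the paper's contribution
# is performing this minimization efficiently with Levenberg-Marquardt.
amps = np.linspace(50.0, 150.0, 201)
taus = np.linspace(0.5, 3.0, 251)
_, amp_hat, tau_hat = min((poisson_nll(a, b, t, counts), a, b)
                          for a in amps for b in taus)
print(amp_hat, tau_hat)   # near the true values 100 and 1.5
```

Minimizing this objective avoids the bias that least squares introduces for low-count Poisson data, as the abstract notes.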
Timothy J. Fullman; Erin L. Bunting
2014-01-01
Northern Botswana is influenced by various socio-ecological drivers of landscape change. The African elephant (Loxodonta africana) is one of the leading sources of landscape shifts in this region. Developing the ability to assess elephant impacts on savanna vegetation is important to promote effective management strategies. The Moving Standard Deviation Index (MSDI) applies a standard deviation calculation to remote sensing imagery to assess degradation of vegetation. Used previously for as...
Set standard deviation, repeatability and offset of absolute gravimeter A10-008
Schmerge, D.; Francis, O.
2006-01-01
The set standard deviation, repeatability and offset of absolute gravimeter A10-008 were assessed at the Walferdange Underground Laboratory for Geodynamics (WULG) in Luxembourg. Analysis of the data indicates that the instrument performed within the specifications of the manufacturer. For A10-008, the average set standard deviation was (1.6 ± 0.6) μGal (1 Gal = 1 cm s⁻²), the average repeatability was (2.9 ± 1.5) μGal, and the average offset compared to absolute gravimeter FG5-216 was (3.2 ± 3.5) μGal. © 2006 BIPM and IOP Publishing Ltd.
American Society for Testing and Materials. Philadelphia
2009-01-01
1.1 This photographic practice determines the optical distortion and deviation of a line of sight through a simple transparent part, such as a commercial aircraft windshield or a cabin window. This practice applies to essentially flat or nearly flat parts and may not be suitable for highly curved materials. 1.2 Test Method F 801 addresses optical deviation (angular deviation) and Test Method F 2156 addresses optical distortion using grid line slope. These test methods should be used instead of Practice F 733 whenever practical. 1.3 This standard does not purport to address the safety concerns associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Use of Standard Deviations as Predictors in Models Using Large-Scale International Data Sets
Austin, Bruce; French, Brian; Adesope, Olusola; Gotch, Chad
2017-01-01
Measures of variability are successfully used in predictive modeling in research areas outside of education. This study examined how standard deviations can be used to address research questions not easily addressed using traditional measures such as group means based on index variables. Student survey data were obtained from the Organisation for…
Deviating from the standard: effects on labor continuity and career patterns
Roman, A.A.
2006-01-01
Deviating from a standard career path is increasingly becoming an option for individuals to combine paid labor with other important life domains. These career detours emerge in diverse labor forms such as part-time jobs, temporary working hour reductions, and labor force time-outs, used to alleviate
Isolating the Systematic Component of a Single Stock’s (or Portfolio’s) Standard Deviation
Cara Marshall
2008-01-01
This paper revisits the roots of modern portfolio theory and the recognition that a stock’s (or a stock portfolio’s) risk can be decomposed into a systematic component and an unsystematic component, and, further, that only the former should contribute to expected return. However, instead of isolating the systematic component of risk by recasting the risk in terms of a stock’s beta coefficient, I choose to decompose the standard deviation, or variance if one prefers the original risk measure, ...
National Oceanic and Atmospheric Administration, Department of Commerce — Standard deviation of depth was calculated from the bathymetry surface for each cell using the ArcGIS Spatial Analyst Focal Statistics "STD" parameter. Standard...
Wang, Bin; Shi, Wenzhong; Miao, Zelang
2015-01-01
Standard deviational ellipse (SDE) has long served as a versatile GIS tool for delineating the geographic distribution of features of concern. This paper first summarizes two existing models for calculating the SDE, and then proposes a novel approach to constructing the same SDE based on spectral decomposition of the sample covariance, by which the SDE concept is naturally generalized into higher-dimensional Euclidean space as the standard deviational hyper-ellipsoid (SDHE). Rigorous recursion formulas are then derived for calculating the confidence levels of scaled SDHEs with arbitrary magnification ratios in any dimensional space. In addition, an inexact-Newton iterative algorithm is proposed for solving the magnification ratio of a scaled SDHE when the confidence probability and space dimensionality are pre-specified. These results provide an efficient way to supersede the traditional lookup of tabulated chi-square distributions. Finally, synthetic data are employed to generate multiple (1-3) SDEs and SDHEs, and exploratory analyses by means of SDEs and SDHEs are conducted to measure the spread concentrations of Hong Kong's H1N1 cases in 2009.
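The spectral-decomposition construction the abstract refers to can be sketched for the planar case (variable names are illustrative; the confidence-level scaling and recursion formulas are omitted):

```python
import numpy as np

def standard_deviational_ellipse(points):
    """Centre, semi-axes and axis directions of the SDE, obtained from an
    eigendecomposition of the sample covariance matrix.  Because nothing
    here is specific to 2-D, the same code yields the standard deviational
    hyper-ellipsoid (SDHE) in d dimensions."""
    pts = np.asarray(points, dtype=float)
    centre = pts.mean(axis=0)
    cov = np.cov(pts, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    return centre, np.sqrt(eigvals), eigvecs  # one standard deviation per axis

# Anisotropic point cloud: sd 3 along x, sd 1 along y
rng = np.random.default_rng(1)
pts = rng.normal(size=(4000, 2)) * np.array([3.0, 1.0])
centre, semi_axes, directions = standard_deviational_ellipse(pts)
print(semi_axes)   # approximately [1, 3]
```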
Timothy J. Fullman
2014-01-01
Northern Botswana is influenced by various socio-ecological drivers of landscape change. The African elephant (Loxodonta africana) is one of the leading sources of landscape shifts in this region. Developing the ability to assess elephant impacts on savanna vegetation is important to promote effective management strategies. The Moving Standard Deviation Index (MSDI) applies a standard deviation calculation to remote sensing imagery to assess degradation of vegetation. Used previously for assessing impacts of livestock on rangelands, we evaluate the ability of the MSDI to detect elephant-modified vegetation along the Chobe riverfront in Botswana, a heavily elephant-impacted landscape. At broad scales, MSDI values are positively related to elephant utilization. At finer scales, using data from 257 sites along the riverfront, MSDI values show a consistent negative relationship with intensity of elephant utilization. We suggest that these differences are due to varying effects of elephants across scales. Elephant utilization of vegetation may increase heterogeneity across the landscape, but decrease it within heavily used patches, resulting in the observed MSDI pattern of divergent trends at different scales. While significant, the low explanatory power of the relationship between the MSDI and elephant utilization suggests the MSDI may have limited use for regional monitoring of elephant impacts.
A standard deviation selection in evolutionary algorithm for grouper fish feed formulation
Cai-Juan, Soong; Ramli, Razamin; Rahman, Rosshairy Abdul
2016-10-01
Malaysia is one of the major producer countries for fishery production due to its location in the equatorial environment. Grouper fish is one of the potential markets contributing to the income of the country due to its desirable taste, high demand and high price. However, the supply of grouper fish from the wild catch is still insufficient to meet demand, so there is a need to farm grouper fish. Farming grouper fish requires prior knowledge of the proper nutrients needed, because no exact data are available. Therefore, in this study, primary and secondary data are collected, despite the limited number of related papers, and 30 samples are investigated using standard deviation selection in an evolutionary algorithm. This study would thus unlock frontiers for extensive research on grouper fish feed formulation. Results show that standard deviation selection in an evolutionary algorithm is applicable: feasible, low-fitness solutions can be obtained quickly. These solutions can be further used to minimize the cost of farming grouper fish.
Zagoris Konstantinos
2011-01-01
A text localization technique is required to successfully exploit document images such as technical articles and letters. The proposed method detects and extracts text areas from document images. Initially, a connected components analysis technique detects blocks of foreground objects. Then, a descriptor consisting of a set of suitable document structure elements is extracted from the blocks. This is achieved by incorporating an algorithm called Standard Deviation Analysis of Structure Elements (SDASE), which maximizes the separability between the blocks. Another feature of the SDASE is that its length adapts according to the requirements of the application. Finally, the descriptor of each block is used as input to a trained support vector machine that classifies the block as text or not. The proposed technique is also capable of adjusting to the text structure of the documents. Experimental results on benchmarking databases demonstrate the effectiveness of the proposed method.
Xu Meng-Long; Yang Chang-Bao; Wu Yan-Gang; Chen Jing-Yi; Huan Heng-Fei
2015-01-01
Most edge-detection methods rely on calculating gradient derivatives of the potential field, a process that is easily affected by noise and is therefore of low stability. We propose a new edge-detection method named correlation coefficient of multidirectional standard deviations (CCMS) that is solely based on statistics. First, we prove the reliability of the proposed method using a single model and then a combination of models. The proposed method is evaluated by comparing the results with those obtained by other edge-detection methods. The CCMS method offers outstanding recognition, retains the sharpness of details, and has low sensitivity to noise. We also applied the CCMS method to Bouguer anomaly data of a potash deposit in Laos. The applicability of the CCMS method is shown by comparing the inferred tectonic framework to that inferred from remote sensing (RS) data.
Muon’s (g-2): the obstinate deviation from the Standard Model
Antonella Del Rosso
2011-01-01
It’s been 50 years since a small group at CERN measured the muon (g-2) for the first time. Several other experiments have followed over the years. The latest measurement at Brookhaven (2004) gave a value that obstinately remains about 3 standard deviations away from the prediction of the Standard Model. Francis Farley, one of the fathers of the (g-2) experiments, argues that a statement such as “everything we observe is accounted for by the Standard Model” is not acceptable. Francis J. M. Farley. Francis J. M. Farley, Fellow of the Royal Society since 1972 and the 1980 winner of the Hughes Medal "for his ultra-precise measurements of the muon magnetic moment, a severe test of quantum electrodynamics and of the nature of the muon", is among the scientists who still look at the (g-2) anomaly as one of the first proofs of the existence of new physics. “Although it seems to be generally believed that all experiments agree with the Stan...
Veissid, N. (Instituto de Pesquisas Espaciais, Sao Jose dos Campos (Brazil)); Cruz, M.T.F. da (Universidade de Sao Paulo, SP (Brazil). Inst. de Fisica); Andrade, A.M. de (Universidade de Sao Paulo, SP (Brazil). Lab. de Microeletronica)
1990-05-01
A method for the determination of the standard deviations of the solar cell characteristic curve fitting parameters is presented for the first time. In this method, a Taylor series expansion of the parameters, around their best values, is made resulting in linear functions which permit the determination of the standard deviations with the least-squares method. The parameters, with the respective standard deviations, were determined from the experimental I-V characteristic curves obtained under illuminated and dark conditions. For the studied experimental I-V curves, the diode saturation currents, the diode factor and the shunt resistance showed smaller standard deviations in the dark condition, and the series resistance appeared to be more precise in the illuminated I-V characteristic. (orig.).
Reyes, Melissa Lopez
2003-01-01
A structure for learning the connections among standard deviations, z-scores, and normal distributions is presented. The components of this structure are classified into intuitive or previously learned conceptual knowledge, computational knowledge, and formalized conceptual knowledge. (Contains 1 figure.)
Y. Song; Z. Gui; H. Wu; Y. Wei
2017-01-01
.... The framework uses standard deviational ellipse (SDE) and shifting route of gravity centers to show the spatial distribution and yearly developing trends of different enterprise types according to their industry categories...
Scatter-Reducing Sounding Filtration Using a Genetic Algorithm and Mean Monthly Standard Deviation
Mandrake, Lukas
2013-01-01
Retrieval algorithms like that used by the Orbiting Carbon Observatory (OCO)-2 mission generate massive quantities of data of varying quality and reliability. A computationally efficient, simple method of labeling problematic datapoints or predicting soundings that will fail is required for basic operation, given that only 6% of the retrieved data may be operationally processed. This method automatically obtains a filter designed to reduce scatter based on a small number of input features. Most machine-learning filter construction algorithms attempt to predict error in the CO2 value. By using a surrogate goal of Mean Monthly STDEV, the goal is to reduce the retrieved CO2 scatter rather than solving the harder problem of reducing CO2 error. This lends itself to improved interpretability and performance. This software reduces the scatter of retrieved CO2 values globally based on a minimum number of input features. It can be used as a prefilter to reduce the number of soundings requested, or as a post-filter to label data quality. The use of the MMS (Mean Monthly Standard deviation) provides a much cleaner, clearer filter than the standard ABS(CO2-truth) metrics previously employed by competitor methods. The software's main strength lies in a clearer (i.e., fewer features required) filter that more efficiently reduces scatter in retrieved CO2 rather than focusing on the more complex (and easily removed) bias issues.
Edjabou, Maklawe Essonanawe; Martín-Fernández, Josep Antoni; Scheutz, Charlotte; Astrup, Thomas Fruergaard
2017-09-04
Data for fractional solid waste composition provide relative magnitudes of individual waste fractions, the percentages of which always sum to 100, thereby connecting them intrinsically. Due to this sum constraint, waste composition data represent closed data, and their interpretation and analysis require statistical methods other than classical statistics, which are suitable only for non-constrained data such as absolute values. However, the closed character of waste composition data is often ignored in analyses. The results of this study showed, for example, that unavoidable animal-derived food waste amounted to 2.21±3.12% with a confidence interval of (-4.03; 8.45), which highlights the problem of biased negative proportions. A Pearson's correlation test, applied to waste fraction generation (kg mass), indicated a positive correlation between avoidable vegetable food waste and plastic packaging. However, correlation tests applied to waste fraction compositions (percentage values) showed a negative association in this regard, thus demonstrating that statistical analyses applied to compositional waste fraction data, without addressing their closed character, have the potential to generate spurious or misleading results. Therefore, compositional data should be transformed adequately prior to any statistical analysis, such as computing means, standard deviations and correlation coefficients. Copyright © 2017 Elsevier Ltd. All rights reserved.
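One standard way to "transform adequately", as the abstract concludes, is Aitchison's centred log-ratio (clr) transform; a minimal sketch with hypothetical percentages (not data from the study):

```python
import numpy as np

def clr(composition):
    """Centred log-ratio transform: log of each part over the geometric
    mean of all parts.  Opens closed (constant-sum) compositional data so
    that means, standard deviations and correlations behave sensibly."""
    x = np.asarray(composition, dtype=float)   # parts must be > 0
    geometric_mean = np.exp(np.mean(np.log(x)))
    return np.log(x / geometric_mean)

# A hypothetical four-fraction waste composition in percent:
z = clr([55.0, 25.0, 15.0, 5.0])
print(z.sum())   # clr coordinates sum to 0 (up to floating-point error)
```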
A better detection of 2LSB steganography via standard deviation of the extended pairs of values
Khalind, Omed; Aziz, Benjamin
2015-05-01
This paper proposes a modification to the Extended Pairs of Values (EPoV) method of 2LSB steganalysis in digital still images. In EPoV, the detection and the estimation of the hidden message length are performed in two separate processes, as it considers automated detection. The newly proposed method instead uses the standard deviation of the EPoV to measure the amount of distortion made in the stego image by 2LSB replacement embedding, which is directly proportional to the embedding rate. It is shown that it can accurately estimate the length of the hidden message and outperform the other targeted 2LSB steganalysis methods in the literature. The proposed method is also more consistent with the steganalysis methods in the literature in that it gives the amount of difference from the expected clean image. According to the experimental results, based on analysing 3000 never-compressed images, the proposed method is more accurate than current targeted 2LSB steganalysis methods for low embedding rates.
Falabino, Simona; Trini Castelli, Silvia
2017-02-01
In air quality practice, observed data are often input to air pollution models to simulate the pollutants dispersion and to estimate their concentration. When the area of interest includes urban sites, observed data collected at urban or suburban stations can be available, and it can happen to use them for estimating surface layer parameters given in input to the models. In such case, roughness sublayer quantities may enter the parameterizations of the turbulence variables as if they were representative of the inertial sublayer, possibly leading to a not appropriate application of the Monin-Obukhov similarity theory. We investigate whether it is possible to derive suitable values of the wind velocity standard deviations for the inertial sublayer using the friction velocity and stability parameter observed in the roughness sublayer, inside a similarity-like analytical function. For this purpose, an analysis of sonic anemometer data sets collected in suburban and urban sites is proposed. The values derived through this approach are compared to actual observations in the inertial sublayer. The transferability of the empirical coefficients estimated for the similarity functions between different sites, characterized by similar or different morphologies, is also addressed. The derived functions proved to be a reasonable approximation of the actual data. This method was found to be feasible and generally reliable, and can be a reference to keep using, in air pollution models, the similarity theory parameterizations when measurements are available only in the roughness sublayer.
Berendes, Todd A.; Mecikalski, John R.; MacKenzie, Wayne M.; Bedka, Kristopher M.; Nair, U. S.
2008-10-01
This paper describes a statistical clustering approach toward the classification of cloud types within meteorological satellite imagery, specifically, visible and infrared data. The method is based on the Standard Deviation Limited Adaptive Clustering (SDLAC) procedure, which has been used to classify a variety of features within both polar orbiting and geostationary imagery, including land cover, volcanic ash, dust, and clouds of various types. In this study, the focus is on classifying cumulus clouds of various types (e.g., "fair weather," "towering," and newly glaciated cumulus, in addition to cumulonimbus). The SDLAC algorithm is demonstrated by showing examples using Geostationary Operational Environmental Satellite (GOES) 12, Meteosat Second Generation's (MSG) Spinning Enhanced Visible and Infrared Imager (SEVIRI), and the Moderate Resolution Infrared Spectrometer (MODIS). Results indicate that the method performs well, classifying cumulus similarly between MODIS, SEVIRI, and GOES, despite the obvious channel and resolution differences between these three sensors. The SDLAC methodology has been used in several research activities related to convective weather forecasting, which offers some proof of concept for its value.
Liu, Weidong
2009-01-01
In this paper, Cramér-type moderate deviations for the maximum of the periodogram and its studentized version are derived. The results are then applied to a simultaneous testing problem in gene expression time series. It is shown that the level of the simultaneous tests is accurate provided that the number of genes $G$ and the sample size $n$ satisfy $G=\exp(o(n^{1/3}))$.
Chiuh Cheng Chyu
2012-06-01
Full Text Available This paper studies the unrelated parallel machine scheduling problem with three minimization objectives – makespan, maximum earliness, and maximum tardiness (MET-UPMSP). The last two objectives combined relate to the just-in-time (JIT) performance of a solution. Three hybrid algorithms are presented to solve the MET-UPMSP: reactive GRASP with path relinking, a dual-archived memetic algorithm (DAMA), and SPEA2. In order to improve the solution quality, min-max matching is included in the decoding scheme of each algorithm. An experiment is conducted to evaluate the performance of the three algorithms, using 100 (jobs) x 3 (machines) and 200 x 5 problem instances with three combinations of two due-date factors – tightness and range. The numerical results indicate that DAMA performs best and GRASP second-best for most problem instances on three performance metrics: HVR, GD, and Spread. The experimental results also show that incorporating min-max matching into the decoding scheme significantly improves the solution quality for the two population-based algorithms. It is worth noting that the solutions produced by DAMA with matching decoding can be used as benchmarks to evaluate the performance of other algorithms.
Joustra, S.D.; Plas, E.M. van der; Goede, J.; Oostdijk, W.; Delemarre-van de Waal, H.A.; Hack, W.W.M.; Buuren, S. van; Wit, J.M.
2015-01-01
Aim Accurate calculations of testicular volume standard deviation (SD) scores are not currently available. We constructed LMS-smoothed age-reference charts for testicular volume in healthy boys. Methods The LMS method was used to calculate reference data, based on testicular volumes from ultrasonography…
Nelde, Peter H.
1974-01-01
Concludes that the German used in the east Belgium newspaper differs from standard High German. Proceeds to list these differences in the areas of lexicology, semantics and stylistics, morphology and syntax, orthography, etc. (Text is in German.) (DS)
75 FR 67093 - Iceberg Water Deviating From Identity Standard; Temporary Permit for Market Testing
2010-11-01
...), Canada J3Z 1G4. This permit covers limited interstate marketing tests of products identified as ``GLACE... requirements of the standard with the exception of the source definition. The purpose of this temporary permit... problems, and assess commercial feasibility. This permit provides for the temporary marketing of...
Kang, Namgoo; Jung, Min-Ho; Jeong, Hyun-Cheol; Lee, Yung-Seop
2015-06-01
The general sample standard deviation and Monte-Carlo methods are frequently used to estimate confidence intervals for uncertainties in greenhouse gas emission, based on the critical assumption that a given data set follows a normal (Gaussian) or statistically known probability distribution. However, uncertainties estimated using those methods are severely limited in practical applications where it is challenging to assume the probability distribution of a data set, or where the real data distribution appears to deviate significantly from statistically known probability distribution models. In order to solve these issues, encountered especially in the reasonable estimation of uncertainty about the average of greenhouse gas emission, we present two statistical methods, the pooled standard deviation method (PSDM) and the standardized-t bootstrap method (STBM), grounded in statistical theory. We also report interesting results on the uncertainties about the average of a data set of methane (CH4) emission from rice cultivation under four different irrigation conditions in Korea, measured by gas sampling and subsequent gas analysis. Results from the applications of the PSDM and the STBM to these rice cultivation methane emission data sets clearly demonstrate that the uncertainties estimated by the PSDM were significantly smaller than those by the STBM. We found that the PSDM should be adopted in the many cases where the data probability distribution appears to follow an assumed normal distribution, with both spatial and temporal variations taken into account. The STBM, however, is the more appropriate method, and widely applicable, in practical situations where the given data set shows evidence of a fairly asymmetric distribution, deviates severely from known probability distribution models, and makes it realistically impossible to reasonably assume or determine a probability distribution model.
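As a rough illustration of the two methods named above, here is a minimal sketch of a pooled standard deviation and a standardized-t (bootstrap-t) confidence interval for the mean; the sampling setup is synthetic and the implementation details are assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def pooled_sd(groups):
    """Pooled standard deviation over several groups of replicate
    measurements (e.g. emission measurements from different plots)."""
    num = sum((len(g) - 1) * np.var(g, ddof=1) for g in groups)
    den = sum(len(g) - 1 for g in groups)
    return np.sqrt(num / den)

def bootstrap_t_ci(x, n_boot=5000, alpha=0.05):
    """Standardized-t (bootstrap-t) confidence interval for the mean;
    it does not assume a normal data distribution."""
    x = np.asarray(x, float)
    n, m, s = len(x), x.mean(), x.std(ddof=1)
    t_stars = np.empty(n_boot)
    for i in range(n_boot):
        b = rng.choice(x, size=n, replace=True)
        sb = b.std(ddof=1) or 1e-12          # guard against zero spread
        t_stars[i] = (b.mean() - m) / (sb / np.sqrt(n))
    lo, hi = np.percentile(t_stars, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return m - hi * s / np.sqrt(n), m - lo * s / np.sqrt(n)

# A right-skewed stand-in for asymmetric emission data.
skewed = rng.lognormal(mean=0.0, sigma=1.0, size=40)
print(pooled_sd([skewed[:20], skewed[20:]]))
print(bootstrap_t_ci(skewed))
```

On skewed data the bootstrap-t interval is asymmetric around the mean, which is exactly the behavior a normal-theory interval cannot reproduce.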
Wage and Labor Standards Administration (DOL), Washington, DC.
This report describes the 1966 amendments to the Fair Labor Standards Act and summarizes the findings of three 1969 studies of the economic effects of these amendments. The studies found that economic growth continued through the third phase of the amendments, beginning February 1, 1969, despite increased wage and hours restrictions for recently…
Tscherning, Carl Christian
2015-01-01
The method of Least-Squares Collocation (LSC) may be used for the modeling of the anomalous gravity potential (T) and for the computation (prediction) of quantities related to T by a linear functional. Errors may also be estimated. However, when using an isotropic covariance function or equivalent … outside the data area. On the other hand, a comparison of predicted quantities with observed values shows that the error also varies depending on the local data standard deviation. This quantity may be (and has been) estimated using the GOCE second order vertical derivative, Tzz, in the area covered … on gravity anomalies (at 10 km altitude) predicted from GOCE Tzz. This has given an improved agreement between errors based on the differences between values derived from EGM2008 (to degree 512) and predicted gravity anomalies.
Song, Y.; Gui, Z.; Wu, H.; Wei, Y.
2017-09-01
Full Text Available Analysing the spatiotemporal distribution patterns of different industries and their dynamics can help us learn the macro-level developing trends of those industries, and in turn provides references for industrial spatial planning. However, the analysis process is a challenging task which requires an easy-to-understand information presentation mechanism and a powerful computational technology to support the visual analytics of big data on the fly. For this reason, this research proposes a web-based framework to enable such a visual analytics requirement. The framework uses the standard deviational ellipse (SDE) and the shifting route of gravity centers to show the spatial distribution and yearly developing trends of different enterprise types according to their industry categories. The calculation of gravity centers and ellipses is parallelized using Apache Spark to accelerate the processing. In the experiments, we use the enterprise registration dataset in Mainland China from year 1960 to 2015, which contains fine-grained location information (i.e., the coordinates of each individual enterprise), to demonstrate the feasibility of this framework. The experiment result shows that the developed visual analytics method is helpful for understanding the multi-level patterns and developing trends of different industries in China. Moreover, the proposed framework can be used to analyse any natural or social spatiotemporal point process with large data volume, such as crime and disease.
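The standard deviational ellipse at the core of the framework can be sketched as follows (without the Apache Spark parallelization); this eigendecomposition-based formulation is an assumed variant, since the record does not spell out the exact formula used:

```python
import numpy as np

def standard_deviational_ellipse(x, y):
    """Mean center, one-sigma semi-axes, and orientation of the standard
    deviational ellipse (SDE), via the eigendecomposition of the
    coordinate covariance matrix (an assumed variant of the classical
    formulation, equivalent up to axis labelling)."""
    pts = np.column_stack([x, y]).astype(float)
    center = pts.mean(axis=0)
    cov = np.cov(pts, rowvar=False, bias=True)   # population covariance
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues ascending
    minor, major = np.sqrt(eigvals)              # one-sigma semi-axes
    # Orientation of the major axis, in degrees from the x-axis.
    angle = np.degrees(np.arctan2(eigvecs[1, 1], eigvecs[0, 1]))
    return center, major, minor, angle

rng = np.random.default_rng(1)
# Synthetic point cloud elongated along the x-axis, standing in for
# enterprise coordinates of one industry category in one year.
x = rng.normal(0, 3, 1000)
y = rng.normal(0, 1, 1000)
center, major, minor, angle = standard_deviational_ellipse(x, y)
print(center, major, minor, angle)
```

Computing one such ellipse per industry category per year, and one gravity center (the mean center above) per year, yields exactly the per-year summaries the framework visualizes; the Spark step only parallelizes this loop over categories and years.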
Optimal reinsurance under the standard deviation principle
宋立新; 黄玉洁; 周娟
2011-01-01
This paper concerns how to purchase reinsurance so as to minimize the risk fluctuation of both the insurer and the reinsurer under the standard deviation principle. Sufficient conditions for the optimal reinsurance contract are obtained within the restricted class of admissible contracts. Assuming that the reinsurer's risk is less than a given threshold, we find the optimal reinsurance contract that minimizes the insurer's risk. Here the insurance company can take three of the most general and effective risk measures.
Martya Rahmaniati
2014-06-01
Full Text Available Dengue fever is still regarded as an endemic disease in Banjar City. Information is still required to map the dengue fever case distribution, the mean center of case distribution, and the direction of dengue fever case dispersion, in order to support the surveillance program within the vast area of the dengue fever disease control program. The objective of the research is to obtain information regarding the area of dengue fever distribution in Banjar City by utilizing the Standard Deviational Ellipse (SDE) model. The research is an observational study with Exploratory Spatial Data Analysis (ESDA). Data analysis uses the SDE model with the scope of the entire sub-district area in Banjar City. The data analyzed are dengue fever cases from the 2007-2013 period, with a sample of 315 cases. The social-demographic overview of dengue fever patients in Banjar City shows that most of the patients are within the productive age range, with 39.7% within the school age and 45.7% within the work age. Most of the dengue fever patients are men (58.1%). Distribution of dengue fever cases from 2007 until 2012 mostly occurred at 25-37.5 meters above sea level (MASL) (55.8%). The SDE models of dengue fever cases in Banjar City generally form dispersion patterns following the x-axis and clustered by physiographic boundaries. The SDE model can be used to discover dispersion patterns and directions of dengue fever cases; therefore, the dengue fever disease control program can be conducted based on local-specific information, in order to support health decision-making.
Varadhan, S R S
2016-01-01
The theory of large deviations deals with rates at which probabilities of certain events decay as a natural parameter in the problem varies. This book, which is based on a graduate course on large deviations at the Courant Institute, focuses on three concrete sets of examples: (i) diffusions with small noise and the exit problem, (ii) large time behavior of Markov processes and their connection to the Feynman-Kac formula and the related large deviation behavior of the number of distinct sites visited by a random walk, and (iii) interacting particle systems, their scaling limits, and large deviations from their expected limits. For the most part the examples are worked out in detail, and in the process the subject of large deviations is developed. The book will give the reader a flavor of how large deviation theory can help in problems that are not posed directly in terms of large deviations. The reader is assumed to have some familiarity with probability, Markov processes, and interacting particle systems.
Hollander, Frank den
2008-01-01
This book is an introduction to the theory and applications of large deviations, a branch of probability theory that describes the probability of rare events in terms of variational problems. By focusing the theory, in Part A of the book, on random sequences, the author succeeds in conveying the main ideas behind large deviations without a need for technicalities, thus providing a concise and accessible entry to this challenging and captivating subject. The selection of modern applications, described in Part B of the book, offers a good sample of what large deviation theory is able to achieve.
Plotting positions via maximum-likelihood for a non-standard situation
D. A. Jones
1997-01-01
Full Text Available A new approach is developed for the specification of the plotting positions used in the frequency analysis of extreme flows, rainfalls or similar data. The approach is based on the concept of maximum likelihood estimation and it is applied here to provide plotting positions for a range of problems which concern non-standard versions of annual-maximum data. This range covers the inclusion of incomplete years of data and also the treatment of cases involving regional maxima, where the number of sites considered varies from year to year. These problems, together with a not-to-be-recommended approach to using historical information, can be treated as special cases of a non-standard situation in which observations arise from different statistical distributions which vary in a simple, known, way.
Veenhoven, Ruut
2012-01-01
Inequality of happiness in nations can be measured using the standard deviation of responses to survey questions. The standard deviation is not quite independent of the mean: it is zero when everybody is maximally happy or unhappy, while its highest possible value occurs when the mean is in the middle of the response scale. Delhey and Kohler see this intrinsic dependency as a problem and propose two ways to compute 'corrected' standard deviations. I advise against this medicine. One reason is that there is no real disease, since the presumed problem does not occur with commonly used numerical rating scales of 10 or more steps. The second reason is that Delhey and Kohler's medicines have side effects: their first correction affects the mean, and their second correction is based on implausible assumptions. A third reason is that there are better ways to estimate the effect of happiness-inequality net of happiness-level. Partialling out mean happiness did not affect the non-correlation between inequality of income and inequality of happiness in an analysis of 116 nations. Copyright © 2011 Elsevier Inc. All rights reserved.
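The intrinsic mean-dependence described above is easy to verify: on a bounded 0-10 response scale, the largest achievable standard deviation for a given mean m is sqrt(m * (10 - m)), attained when responses are split between the two endpoints. A quick check (a generic illustration, not taken from the paper):

```python
import numpy as np

def max_sd(mean, lo=0.0, hi=10.0):
    """Largest possible population standard deviation of values on
    [lo, hi] with the given mean; it peaks when the mean is mid-scale
    and vanishes at the endpoints."""
    return np.sqrt((mean - lo) * (hi - mean))

for m in [0, 2.5, 5, 7.5, 10]:
    print(m, max_sd(m))

# A population split 50/50 between the endpoints realizes the bound at m = 5:
extreme = np.array([0.0] * 50 + [10.0] * 50)
print(extreme.std())   # 5.0
```

This is the whole "disease": the bound shrinks as the mean approaches either end of the scale, so very happy (or very unhappy) nations mechanically cannot show large dispersion.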
U.S. Geological Survey, Department of the Interior — This part of the data release contains a grid of standard deviations of bathymetric soundings within each 0.5 m x 0.5 m grid cell. The bathymetry was collected on...
Encarnación Álvarez
2015-07-01
Full Text Available Statistical quality control (SQC) is used by companies and industries for many reasons. For example, the process capability of machines is an important aspect of SQC, which consists of evaluating the ability of a production process to perform within the required specifications. In other words, process capability measures the ability of a process to produce acceptable products according to the established specifications. The most common indicator used to measure process capability is the process capability index, which depends on the process standard deviation. In practice, the standard deviation is unknown, and the process capability index is thus estimated by using an estimator of the process standard deviation. In this paper, we describe the most common estimators of the process standard deviation, and define the corresponding estimators of the process capability index. A bound for the bias ratio of the various estimators is obtained. Monte Carlo simulation studies are carried out to analyze the empirical performance of the various estimators of the process capability index. Empirical results indicate that biases can arise, especially in the presence of small samples. We also observe that the estimators of the process capability index based on sample ranges are less accurate than the alternative estimators.
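A minimal sketch of two common standard deviation estimators plugged into the capability index Cp = (USL - LSL) / (6 * sigma): the overall sample standard deviation, and the subgroup-range estimator R-bar / d2. The simulation setup is illustrative, not the paper's study design:

```python
import numpy as np

def cp_from_sample_sd(samples, lsl, usl):
    """Cp estimate using the overall sample standard deviation."""
    s = np.std(np.concatenate(samples), ddof=1)
    return (usl - lsl) / (6 * s)

def cp_from_ranges(samples, lsl, usl, d2=2.326):
    """Cp estimate using the mean subgroup range, R-bar / d2;
    d2 = 2.326 is the standard constant for subgroups of size n = 5."""
    r_bar = np.mean([np.ptp(s) for s in samples])
    return (usl - lsl) / (6 * (r_bar / d2))

rng = np.random.default_rng(2)
# 50 subgroups of 5 observations from an in-control process with sigma = 1;
# specification limits 7 and 13 make the true Cp equal to 1.
subgroups = [rng.normal(10, 1, 5) for _ in range(50)]
print(cp_from_sample_sd(subgroups, lsl=7, usl=13))  # ~1.0
print(cp_from_ranges(subgroups, lsl=7, usl=13))     # ~1.0
```

Rerunning the simulation with few subgroups shows the larger sampling variability of the range-based estimate, in line with the paper's observation that range-based capability estimators are the less accurate ones.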
Henning Grosse Ruse-Khan
2009-07-01
Full Text Available International intellectual property (IP) protection is at the heart of controversies over the impact of economic interests on social or environmental concerns. Some see IP rights as unduly encroaching upon human rights and societal interests; others argue for stronger enforcement and additional exclusivity to incentivize new innovations and creations. Underlying these debates is the perception that international IP treaties set out minimum standards of protection - which presumably allow for additional protection, with only the sky being the limit. This article challenges this view and explores the idea of maximum standards, or ceilings, within the existing body of international IP law. It looks at the relation between IP treaties and subsequent agreements or national laws which offer stronger protection. In particular, within the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS), an important qualification may serve as a door opener for ceilings: while additional IP protection may not go beyond mandatory limits within TRIPS, the qualification not to "contravene" TRIPS is unlikely to safeguard TRIPS flexibilities against TRIPS-plus norms. The article further identifies and examines the rationales for maximum standards in international IP protection as: (1) legal security and predictability about the boundaries of protection; (2) the global protection of users' rights; and (3) the free movement of goods, services and information. Mandatory limits in the existing IP treaties and in ongoing initiatives provide examples of how these can be implemented. However, most of the relevant treaty norms are optional. The article concludes with some observations on the need for more comprehensive and precise maximum standards.
Jansen, Rob T P; Laeven, Mark; Kardol, Wim
2002-06-01
The analytical processes in clinical laboratories should be considered to be non-stationary, non-ergodic and probably non-stochastic processes. Both the process mean and the process standard deviation vary. The variation can be different at different levels of concentration. This behavior is shown in five examples of different analytical systems: alkaline phosphatase on the Hitachi 911 analyzer (Roche), vitamin B12 on the Access analyzer (Beckman), prothrombin time and activated partial thromboplastin time on the STA Compact analyzer (Roche) and PO2 on the ABL 520 analyzer (Radiometer). A model is proposed to assess the status of a process. An exponentially weighted moving average and standard deviation were used to estimate the process mean and standard deviation. Process means were estimated overall and for each control level. The process standard deviation was estimated in terms of the within-run standard deviation. Limits were defined in accordance with state-of-the-art or biological-variance-derived cut-offs. The examples given are real, not simulated, data. Individual control sample results were normalized to a target value and target standard deviation. The normalized values were used in the exponentially weighted algorithm. The weighting factor was based on a process time constant, which was estimated from the period between two calibration or maintenance procedures. The proposed system was compared with the Westgard rules. The Westgard rules perform well, despite the underlying presumption of ergodicity. This is mainly due to the introduction of the 1-2s starting rule, which proves essential to prevent a large number of rule violations. The probability of reporting a test result with an analytical error that exceeds the total allowable error was calculated for the proposed system as well as for the Westgard rules. The proposed method performed better. The proposed algorithm was implemented in a computer program running on computers to which the analyzers were
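A toy version of the exponentially weighted tracking of process mean and standard deviation described above might look like this; the weighting factor and the normalization are assumptions for illustration, not the time-constant-derived values used by the authors:

```python
import numpy as np

def ewma_mean_sd(z_values, lam=0.05):
    """Track the process mean and standard deviation of normalized
    control results z = (x - target) / target_sd with exponentially
    weighted moving averages. lam is a placeholder weighting factor,
    not the authors' time-constant-derived value."""
    mean_hat, var_hat = 0.0, 1.0        # start at the target values
    history = []
    for z in z_values:
        mean_hat = lam * z + (1 - lam) * mean_hat
        var_hat = lam * (z - mean_hat) ** 2 + (1 - lam) * var_hat
        history.append((mean_hat, np.sqrt(var_hat)))
    return history

rng = np.random.default_rng(3)
# 200 in-control results followed by a 2-SD upward shift of the process mean.
z = np.concatenate([rng.normal(0, 1, 200), rng.normal(2, 1, 200)])
trace = ewma_mean_sd(z)
print(trace[199][0], trace[399][0])   # near 0, then near 2
```

Because the estimates decay old information exponentially, the tracker follows a drifting (non-stationary) process mean instead of assuming it fixed, which is the point of contrast with classical Westgard-style rules.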
Ren, Hongwu; Ding, Zhihua; Zhao, Yonghua; Miao, Jianjun; Nelson, J. Stuart; Chen, Zhongping
2002-10-01
We describe a phase-resolved functional optical coherence tomography system that can simultaneously yield in situ images of tissue structure, blood flow velocity, standard deviation, birefringence, and the Stokes vectors in human skin. Multifunctional images were obtained by processing of analytical interference fringe signals derived from two perpendicular polarization-detection channels. The blood flow velocity and standard deviation images were obtained by comparison of the phases from pairs of analytical signals in neighboring A-lines in the same polarization state. The analytical signals from two polarization-diversity detection channels were used to determine the four Stokes vectors for four reference polarization states. From the four Stokes vectors, the birefringence image, which is not sensitive to the orientation of the optical axis in the sample, was obtained. Multifunctional in situ images of a port wine stain birthmark in human skin are presented.
肖枝洪; 朱强
2009-01-01
In this paper, we study a kind of truncated and censored data model. Using the Taylor asymptotic expansion method, it is shown that the maximum likelihood estimator of the unknown parameter θ obeys a moderate deviation principle under certain regularity conditions, a result finer than asymptotic normality. We obtain an accurate expression for the rate function.
Halil Karahan
2013-03-01
Full Text Available Knowing properties of precipitation such as its amount, duration, intensity, and spatial and temporal variation is required for the planning, design, construction and operation studies of various sectors like water resources, agriculture, urbanization, drainage, flood control and transportation, precipitation being the primary input of water resources. For executing the mentioned practices, reliable and realistic estimations based on existing observations should be made. The first step of making a reliable estimation is to test the reliability of the existing observations. In this study, the Kolmogorov-Smirnov, Anderson-Darling and Chi-square goodness-of-fit tests were applied to determine which distribution the measured standard-duration maximum precipitation values (in the years 1929-2005) fit at the meteorological stations operated by the Turkish State Meteorological Service (DMİ) which are located in the city and town centers of the Aegean Region. While all the observations fit the GEV distribution according to the Anderson-Darling test, it was seen that short, mid-term and long duration precipitation observations generally fit the GEV, Gamma and Log-normal distributions according to the Kolmogorov-Smirnov and Chi-square tests. To determine the parameters of the chosen probability distribution, the maximum likelihood (LN2, LN3, EXP2, Gamma3), probability-weighted moments (LP3, Gamma2), L-moments (GEV) and least squares (Weibull2) methods were used according to the different distributions.
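The fit-and-test workflow described above can be sketched with SciPy (assuming its availability); the synthetic series and parameter values are placeholders, not the DMİ observations:

```python
import numpy as np
from scipy import stats

# Synthetic annual-maximum precipitation series (mm), GEV-distributed,
# with length and values chosen only for illustration.
annual_max = stats.genextreme.rvs(c=-0.1, loc=40, scale=10,
                                  size=77, random_state=42)

# Fit candidate distributions by maximum likelihood and compare the
# Kolmogorov-Smirnov goodness-of-fit statistic for each.
candidates = {
    "GEV": stats.genextreme,
    "Gamma": stats.gamma,
    "Log-normal": stats.lognorm,
}
for name, dist in candidates.items():
    params = dist.fit(annual_max)
    ks = stats.kstest(annual_max, dist.cdf, args=params)
    print(f"{name:10s} KS statistic = {ks.statistic:.3f}")
```

One caveat worth noting: KS p-values computed this way are optimistic because the parameters were estimated from the same data, which is one reason the study also applies Anderson-Darling and Chi-square tests.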
Belyakov A. V.
2016-01-01
Full Text Available The newest Large Hadron Collider experiments targeting the search for New Physics have manifested the possibility of new heavy particles. Such particles are not predicted in the framework of the Standard Model; however, their existence is lawful in the framework of another model based on J. A. Wheeler's geometrodynamics.
Applications of non-standard maximum likelihood techniques in energy and resource economics
Moeltner, Klaus
Two important types of non-standard maximum likelihood techniques, Simulated Maximum Likelihood (SML) and Pseudo-Maximum Likelihood (PML), have only recently found consideration in the applied economic literature. The objective of this thesis is to demonstrate how these methods can be successfully employed in the analysis of energy and resource models. Chapter I focuses on SML. It constitutes the first application of this technique in the field of energy economics. The framework is as follows: Surveys on the cost of power outages to commercial and industrial customers usually capture multiple observations on the dependent variable for a given firm. The resulting pooled data set is censored and exhibits cross-sectional heterogeneity. We propose a model that addresses these issues by allowing regression coefficients to vary randomly across respondents and by using the Geweke-Hajivassiliou-Keane simulator and Halton sequences to estimate high-order cumulative distribution terms. This adjustment requires the use of SML in the estimation process. Our framework allows for a more comprehensive analysis of outage costs than existing models, which rely on the assumptions of parameter constancy and cross-sectional homogeneity. Our results strongly reject both of these restrictions. The central topic of the second Chapter is the use of PML, a robust estimation technique, in count data analysis of visitor demand for a system of recreation sites. PML has been popular with researchers in this context, since it guards against many types of mis-specification errors. We demonstrate, however, that estimation results will generally be biased even if derived through PML if the recreation model is based on aggregate, or zonal data. To countervail this problem, we propose a zonal model of recreation that captures some of the underlying heterogeneity of individual visitors by incorporating distributional information on per-capita income into the aggregate demand function. This adjustment
Pesticide food safety standards as companions to tolerances and maximum residue limits
Carl K Winter; Elizabeth A Jara
2015-01-01
Allowable levels for pesticide residues in foods, known as tolerances in the US and as maximum residue limits (MRLs) in much of the world, are widely yet inappropriately perceived as levels of safety concern. A novel approach to develop scientifically defensible levels of safety concern is presented, and an example determining acute and chronic pesticide food safety standard (PFSS) levels for the fungicide captan on strawberries is provided. Using this approach, the chronic PFSS level for captan on strawberries was determined to be 2000 mg kg–1 and the acute PFSS level was determined to be 250 mg kg–1. Both levels are far above the existing tolerance and MRLs, which commonly range from 3 to 20 mg kg–1, and provide evidence that captan residues detected at levels greater than the tolerance or MRLs are not of acute or chronic health concern even though they represent violative residues. The benefits of developing the PFSS approach to serve as a companion to existing tolerances/MRLs include a greater understanding concerning the health significance, if any, from exposure to violative pesticide residues. In addition, the PFSS approach can be universally applied to all potential pesticide residues on all food commodities, can be modified by specific jurisdictions to take into account differences in food consumption practices, and can help prioritize food residue monitoring by identifying the pesticide/commodity combinations of the greatest potential food safety concern and guiding the development of field-level analytical methods to detect pesticide residues on prioritized pesticide/commodity combinations.
Osmar Abílio de Carvalho Júnior
2015-05-01
Full Text Available Typically, digital image processing for burned-area detection combines the use of a spectral index with the seasonal differencing method. However, seasonal differencing produces many errors when applied to a long-term time series. This article aims to develop and test two methods as alternatives to the traditional seasonal difference. The study area is the Chapada dos Veadeiros National Park (Central Brazil), which comprises different vegetation types of the Cerrado biome. We used the MODIS/Terra Surface Reflectance 8-Day composite data, considering a 12-year period. The normalized burn ratio was calculated from band 2 (250-meter resolution) and band 7 (500-meter resolution, resampled to 250-meter). In this context, the normalization methods aim to eliminate all possible sources of spectral variation and highlight the burned-area features. The proposed normalization methods were the standardized time series and the interannual phenological deviation. The standardized time series calculates, for each pixel, the z-scores of its temporal curve, obtaining a mean of 0 and a standard deviation of 1. The second method establishes a reference curve for each pixel from the average interannual phenology, which is subtracted from every year of its respective time series. The optimal threshold value between burned and unburned area for each method was determined from accuracy assessment curves, which compare different threshold values and their accuracy indices against a reference classification using Landsat TM. The different methods have similar accuracy for the burning event, where the standardized method has slightly better results. However, the seasonal difference method has a very high false-positive error, especially in the period between the rainy and dry seasons. The interannual phenological deviation method minimizes false-positive errors, but some remain. In contrast, the standardized time series shows excellent results, not containing this type of error.
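The two normalization methods can be sketched directly from their descriptions; the toy series below is illustrative, not MODIS data:

```python
import numpy as np

def standardize_series(series):
    """Standardized time series: z-scores of one pixel's temporal curve
    (mean 0, standard deviation 1), so burn-induced drops stand out."""
    return (series - series.mean()) / series.std()

def interannual_phenological_deviation(series, steps_per_year):
    """Subtract the pixel's mean annual (phenological) curve from every
    year of its time series."""
    years = series.reshape(-1, steps_per_year)
    reference = years.mean(axis=0)           # average annual phenology
    return (years - reference).ravel()

# Toy 12-year NBR-like series for one pixel, with 46 8-day composites per
# year, a seasonal sinusoid, and a simulated burn scar (sharp drop) in year 6.
t = np.arange(12 * 46)
nbr = 0.4 + 0.2 * np.sin(2 * np.pi * t / 46)
nbr[6 * 46 + 20] -= 0.5                      # simulated burn event
z = standardize_series(nbr)
dev = interannual_phenological_deviation(nbr, 46)
print(z.argmin() == dev.argmin() == 6 * 46 + 20)  # True
```

Both normalizations place their minimum exactly at the burn event while the seasonal cycle is flattened, which is the property that lets a single threshold separate burned from unburned observations across the whole 12-year record.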
Biological bases of the maximum permissible exposure levels of the UK laser standard BS 4803 1983
MacKinlay, Alistair F
1983-01-01
The use of lasers has increased greatly over the past 15 years or so, to the extent that they are now used routinely in many occupational and public situations. There has been an increasing awareness of the potential hazards presented by lasers and substantial efforts have been made to formulate safety standards. In the UK the relevant Safety Standard is the British Standards Institution Standard BS 4803. This Standard was originally published in 1972 and a revision has recently been published (BS 4803: 1983). The revised standard has been developed using the American National Standards Institute Standard, ANSI Z136.1 (1973 onwards), as a model. In other countries, national standards have been similarly formulated, resulting in a large measure of international agreement through participation in the work of the International Electrotechnical Commission (IEC). The bases of laser safety standards are biophysical data on threshold injury effects, particularly on the retina, and the development of theoretical mode...
Segmentation Using Symmetry Deviation
Hollensen, Christian; Højgaard, L.; Specht, L.
2011-01-01
…and evaluate the method. The method uses deformable registration on computed tomography (CT) to find anatomical symmetry deviations of Head & Neck squamous cell carcinoma, combining it with positron emission tomography (PET) images. The method allows the use of anatomical and symmetry information from CT scans… to improve automatic delineations. Materials: PET/CT scans from 30 patients were used for this study, 20 without cancer in the hypopharyngeal volume and 10 with hypopharyngeal carcinoma. A head and neck atlas was created from the 20 normal patients. The atlas was created using affine and non-rigid registration… of the CT scans into a single atlas. Afterwards, the standard deviation of anatomical symmetry for the 20 normal patients was evaluated using non-rigid registration and registered onto the atlas to create an atlas of normal anatomical symmetry deviation. The same non-rigid registration was used on the 10…
29 CFR 553.230 - Maximum hours standards for work periods of 7 to 28 days-section 7(k).
2010-07-01
...-section 7(k). 553.230 Section 553.230 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR... Compensation Rules § 553.230 Maximum hours standards for work periods of 7 to 28 days—section 7(k). (a) For... 28 consecutive days, no overtime compensation is required under section 7(k) until the number...
John Hogland; Nedret Billor; Nathaniel Anderson
2013-01-01
Discriminant analysis, referred to as maximum likelihood classification within popular remote sensing software packages, is a common supervised technique used by analysts. Polytomous logistic regression (PLR), also referred to as multinomial logistic regression, is an alternative classification approach that is less restrictive, more flexible, and easy to interpret. To...
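A minimal sketch of the polytomous (multinomial) logistic classifier the abstract describes, fit by gradient ascent on synthetic two-band data; the class means, learning rate, and iteration count are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D data: three well-separated classes (a hypothetical
# stand-in for remote-sensing band values labelled by an analyst).
n = 100
X = np.vstack([rng.normal(c, 0.5, size=(n, 2)) for c in ([0, 0], [4, 0], [2, 4])])
y = np.repeat([0, 1, 2], n)

# Add an intercept column.
Xb = np.hstack([np.ones((X.shape[0], 1)), X])

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Fit polytomous logistic regression by gradient ascent on the
# multinomial log-likelihood.
W = np.zeros((Xb.shape[1], 3))
Y = np.eye(3)[y]                          # one-hot targets
for _ in range(2000):
    P = softmax(Xb @ W)
    W += 0.1 * Xb.T @ (Y - P) / len(y)    # gradient step

accuracy = (softmax(Xb @ W).argmax(axis=1) == y).mean()
```

Unlike discriminant analysis, this model places no Gaussian assumption on the class-conditional band distributions, which is the flexibility the abstract refers to.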
Nielsen, Hanne-Marie; Groen, A F; Østergaard, Søren
2006-01-01
The objective of this paper was to present a model of a dairy cattle production system for the derivation of economic values and their standard deviations for both production and functional traits under Danish production circumstances. The stochastic model used is dynamic, and simulates production...... and health in a dairy herd. Because of indirect effects between traits, the phenotypic levels of (related) traits can change as a result of genetic changes. Economic values for milk production and body weight were 0.28 and -0.76 €/kg per cow-year respectively. For incidence of milk fever, mastitis, retained...... placenta and laminitis economic values were -402.1, -162.5, -79.0 and -210.2 €/incidence per cow-year. The economic values for involuntary culling rate, stillbirth and conception rate were -6.66, -1.63, and 1.98 €/% per cow-year, respectively and the economic value for days from calving to first heat...
Abdu. M. A. Atta
2011-12-01
In many statistical process control (SPC) applications, the ease of use of control charts leads to ignoring the fact that the process population of the quality characteristic being measured may be highly skewed, and in many situations the normality assumption is violated. Among the recent heuristic charts proposed in the literature for skewed distributions are those based on the weighted standard deviation (WSD) method. This paper therefore compares the performance of several WSD charts, namely the WSD X̄, WSD exponentially weighted moving average (WSD-EWMA), and WSD cumulative sum (WSD-CUSUM) charts, for skewed distributions. The skewed distributions considered are the Weibull, gamma, and lognormal. False alarm and mean-shift detection rates were computed to evaluate the performance of the WSD charts. The WSD X̄ chart was found to have the lowest false alarm rate in cases of both known and unknown parameters, while for known and unknown parameters the WSD-CUSUM chart provided the highest mean-shift detection rates. The chart with the lowest false alarm rate and the highest mean-shift detection rate for most levels of skewness and sample size n is considered to have the better performance.
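The WSD idea can be sketched as follows: the 3σ control width is split in proportion to the probability mass on each side of the mean, so a long upper tail receives a wider upper limit. The sketch below follows one common formulation of the WSD X̄ limits; the exact constants and subgroup handling vary across papers, and the exponential phase-I data and subgroup size are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Phase-I data from a right-skewed process (exponential, a common
# stand-in for skewed quality characteristics).
data = rng.exponential(scale=2.0, size=5000)

mu = data.mean()
sigma = data.std(ddof=1)
p = (data <= mu).mean()        # estimated P(X <= mu)

n = 5                          # assumed subgroup size
# Weighted-standard-deviation limits: the 3-sigma width is split in
# proportion to the probability mass on each side of the mean, so a
# long upper tail gets a wider upper control limit.
ucl = mu + 3 * (sigma / np.sqrt(n)) * 2 * p
lcl = mu - 3 * (sigma / np.sqrt(n)) * 2 * (1 - p)
```

For a symmetric process p ≈ 0.5 and the limits collapse to the ordinary Shewhart X̄ limits, which is the design intent of the WSD family.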
Construction of 3D Seabed Terrain Model based on the Standard Deviation Criterion
韩富江; 潘胜玲; 王德刚; 来向华
2011-01-01
At present, existing triangulation must be done in the projection plane, which causes the loss of attribute information in the LOP (Local Optimization Procedure). In this paper, a new triangulation criterion based on standard deviation is used. The definition of the standard deviation, its calculation, and the description of the standard deviation criterion are investigated. The construction algorithm for a 3D seabed terrain model based on standard deviation is then presented according to the standard deviation criterion. Experimental results show that this method improves the rationality of the triangulation, that the detail and precision of the seabed terrain model are better than those of other methods, and that it handles special terrain better than the algorithm based on the empty circumcircle criterion.
黄巍; 吴俊勇; 鲁思棋; 郝亮亮
2016-01-01
This paper interconnects distributed photovoltaic generation at several locations in several unbalanced three-phase distribution systems and calculates the node voltage deviation and harmonic voltage distortion after PV interconnection using the OpenDSS power-flow software. Maximum photovoltaic penetration levels at the different locations are derived under the steady-state voltage constraints of ANSI C84.1-2006 and the harmonic voltage distortion constraints of IEEE 519-1992. The relationships between the step voltage regulator (SVR), the short-circuit capacity at the PV connection point, and photovoltaic penetration are also analyzed. The simulation results show that the photovoltaic penetration limit is related to the topology and line parameters of the actual distribution network: the closer the PV connection point is to the feeder source, the larger the penetration. Considering both voltage deviation and harmonic constraints, the maximum photovoltaic penetration reaches at least 20% of peak load, and coordinating the SVR with downstream PV generation can increase the penetration significantly.
Dissociated Vertical Deviation
What is Dissociated Vertical Deviation (DVD)? DVD is a condition in which ...
Tomatsuri, K. (Taisei Corp., Tokyo (Japan))
1991-10-30
Concrete strength varies with material proportioning and age, and concrete is usually mixed and designed to manifest the specified strength at a determined age. For high-strength concrete with design strength over 360 kg/cm², however, there is no clear provision for estimating the increase and deviation of strength when either age or cumulative temperature varies. In this study, the strength and its distribution for standard-cured concrete and for concrete after a long period of time were measured and analyzed statistically for 14 kinds of high-strength concrete, with nominal strengths between 360 and 465 kg/cm², from three construction projects. Considering that the strength ratio of concrete at two different cumulative temperatures followed a normal distribution, a method to predict the strength distribution of concrete after a long period of time was presented. In this method, parameters such as the standard deviation of strength at 28 days of age and a strength index make it possible to predict the average strength and the standard deviation at different ages. 9 refs., 15 figs., 6 tabs.
Er, Hale Çolakoğlu; Erden, Ayşe; Küçük, N Özlem; Geçim, Ethem
2014-01-01
The aim of this study was to retrospectively assess the correlation between minimum apparent diffusion coefficient (ADCmin) values obtained from diffusion-weighted magnetic resonance imaging (MRI) and maximum standardized uptake values (SUVmax) obtained from positron emission tomography-computed tomography (PET-CT) in rectal cancer. Forty-one patients with pathologically confirmed rectal adenocarcinoma were included in this study. For preoperative staging, PET-CT and pelvic MRI with diffusion-weighted imaging were performed within one week (mean time interval, 3±1 day). For ADC measurements, the region of interest (ROI) was manually drawn along the border of each hyperintense tumor on b=1000 s/mm2 images. After repeating this procedure on each consecutive tumor-containing slice to cover the entire tumoral area, ROIs were copied to ADC maps. ADCmin was determined as the lowest ADC value among all ROIs in each tumor. For SUVmax measurements, whole-body images were assessed visually on transaxial, sagittal, and coronal images. ROIs were determined from the lesions observed on each slice, and SUVmax values were calculated automatically. The mean values of ADCmin and SUVmax were compared using Spearman's test. The mean ADCmin was 0.62±0.19×10-3 mm2/s (range, 0.368-1.227×10-3 mm2/s), the mean SUVmax was 20.07±9.3 (range, 4.3-49.5). A significant negative correlation was found between ADCmin and SUVmax (r=-0.347; P = 0.026). There was a significant negative correlation between the ADCmin and SUVmax values in rectal adenocarcinomas.
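The correlation analysis the authors describe can be reproduced in outline with SciPy; the ADCmin and SUVmax values below are synthetic stand-ins chosen to show a negative monotone relationship, not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical ADCmin (x10^-3 mm^2/s) and SUVmax values for six tumours,
# illustrative only -- not the study's measurements.
adc_min = np.array([0.40, 0.52, 0.61, 0.70, 0.95, 1.20])
suv_max = np.array([42.0, 28.0, 21.0, 18.0, 9.0, 4.5])

# Spearman's rank correlation: monotone association, robust to the
# non-normality typical of imaging parameters.
rho, p_value = stats.spearmanr(adc_min, suv_max)
# A perfectly monotone decreasing relationship gives rho = -1.
```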
Chung, Hyun Hoon; Kim, Jae Weon; Park, Noh-Hyun; Song, Yong-Sang; Kang, Soon-Beom [Seoul National University College of Medicine, Department of Obstetrics and Gynecology, Cancer Research Institute, Seoul (Korea); Nam, Byung-Ho [National Cancer Center, Division of Cancer Epidemiology and Management, Research Institute, Seoul (Korea); Kang, Keon Wook; Chung, June-Key [Seoul National University College of Medicine, Department of Nuclear Medicine, Seoul (Korea)
2010-08-15
To determine if preoperative [¹⁸F]FDG-PET/CT imaging has prognostic significance in patients with uterine cervical cancer. Patients with FIGO stage IB to IIA cervical cancer were imaged with integrated FDG PET/CT before radical surgery. The relationship between the maximum standardized uptake value (SUVmax) of FDG in the primary tumour during PET/CT and recurrence was examined. Included in the study were 75 patients. Medical records including clinical data, treatment modalities, and treatment results were retrospectively reviewed. The median duration of follow-up was 13 months (range 3 to 58 months) after treatment. Median preoperative SUVmax values in the primary tumours were significantly higher in patients with higher FIGO stages (p = 0.0149), pelvic lymph node metastasis (p = 0.0068), parametrial involvement (p = 0.0002), large (>4 cm) tumour size (p = 0.0022), presence of lymphovascular space invasion (p = 0.0055), and deep cervical stromal invasion (p < 0.0001). In univariate analysis, lymph node metastasis, parametrial invasion, presence of lymphovascular space invasion, and preoperative SUVmax (uncategorized values) in the primary tumour were significantly associated with recurrence. However, in multivariate analysis, preoperative SUVmax (p = 0.014, HR 1.178, 95% CI 1.034-1.342), age (p = 0.021, HR 0.87, 95% CI 0.772-0.980), and parametrial involvement (p = 0.040, HR 27.974, 95% CI 1.156-677.043) by primary tumour were significantly associated with recurrence. Preoperative FDG uptake by the primary tumour showed a significant association with recurrence in patients with uterine cervical cancer. (orig.)
The Strip Scratch Analysis Method Based on Sample Standard Deviation
陈代兵; 纪马力; 杨子秀; 连旭东
2014-01-01
To distinguish strip scratches caused by roll contact on a continuous line, and to determine whether a roll speed deviation can affect the strip surface, this paper proposes a new analysis method. It introduces the process by which strip scratches arise from roll contact and, in particular, identifies the factors that cause speed deviations. It then applies the sample standard deviation method to the analysis of those deviations and determines the deviation value that leads to scratching. Finally, it describes the procedure for resolving speed deviations and shares practical troubleshooting experience. With this method in wide use in strip scratch analysis, the sources of scratches can be identified accurately and speed deviations resolved quickly.
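The sample standard deviation underlying the method is straightforward to compute; the roll-speed deviations and the 2s flagging threshold below are hypothetical illustrations, not values from the paper:

```python
import math

def sample_std(values):
    """Sample standard deviation (n - 1 denominator)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return math.sqrt(var)

# Hypothetical roll-speed deviations (m/min) from the line set point;
# flagging rolls beyond 2 sample standard deviations is an assumed
# threshold for illustration, not the paper's criterion.
deviations = [0.1, -0.2, 0.4, 1.5, -0.1, 0.2]
s = sample_std(deviations)
suspect = [d for d in deviations if abs(d) > 2 * s]
```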
Shoari, Niloofar; Dubé, Jean-Sébastien; Chenouri, Shoja'eddin
2015-11-01
In environmental studies, concentration measurements frequently fall below detection limits of measuring instruments, resulting in left-censored data. Some studies employ parametric methods such as the maximum likelihood estimator (MLE), robust regression on order statistic (rROS), and gamma regression on order statistic (GROS), while others suggest a non-parametric approach, the Kaplan-Meier method (KM). Using examples of real data from a soil characterization study in Montreal, we highlight the need for additional investigations that aim at unifying the existing literature. A number of studies have examined this issue; however, those considering data skewness and model misspecification are rare. These aspects are investigated in this paper through simulations. Among other findings, results show that for low skewed data, the performance of different statistical methods is comparable, regardless of the censoring percentage and sample size. For highly skewed data, the performance of the MLE method under lognormal and Weibull distributions is questionable; particularly, when the sample size is small or censoring percentage is high. In such conditions, MLE under gamma distribution, rROS, GROS, and KM are less sensitive to skewness. Related to model misspecification, MLE based on lognormal and Weibull distributions provides poor estimates when the true distribution of data is misspecified. However, the methods of rROS, GROS, and MLE under gamma distribution are generally robust to model misspecifications regardless of skewness, sample size, and censoring percentage. Since the characteristics of environmental data (e.g., type of distribution and skewness) are unknown a priori, we suggest using MLE based on gamma distribution, rROS and GROS.
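A minimal sketch of the MLE approach for left-censored data under a lognormal model: detected values contribute the density, while censored values contribute the probability mass below the detection limit. The simulated sample, detection limit, and optimizer settings are assumptions for illustration:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(42)

# Simulated concentrations: lognormal with log-mean 0 and log-sd 1.
true_mu, true_sigma = 0.0, 1.0
x = rng.lognormal(true_mu, true_sigma, size=1000)

dl = 0.4                        # assumed detection limit
detected = x[x >= dl]
n_censored = (x < dl).sum()

def neg_log_lik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)   # parameterized to keep sigma > 0
    # Detected values: lognormal density (normal density of log x,
    # minus the Jacobian term log x).
    ll = stats.norm.logpdf(np.log(detected), mu, sigma).sum() - np.log(detected).sum()
    # Left-censored values: probability of falling below the limit.
    ll += n_censored * stats.norm.logcdf(np.log(dl), mu, sigma)
    return -ll

res = optimize.minimize(neg_log_lik, x0=[0.5, 0.5], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
```

Misspecification, as the abstract notes, enters through the assumed lognormal form: the same censored-likelihood construction applies with a Weibull or gamma density substituted.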
Calculation of the Within-Batch Standard Deviation of the Blank and the Method Detection Limit
祝旭初
2014-01-01
In environmental analytical chemistry, especially trace analysis, the method detection limit is extremely important for reporting monitoring results. Determining the detection limit often involves calculating the standard deviation of the blank. Based on the analysis of a typical case, it is pointed out that some newly revised environmental standard methods use the between-batch standard deviation of blanks to calculate the detection limit, which is unreasonable. Before calculating the detection limit, it is necessary to determine whether there are significant differences between batches.
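For context, the classic single-batch formulation computes the detection limit as the Student's t quantile at n − 1 degrees of freedom times the within-batch standard deviation of blank measurements, which is why mixing between-batch variation into s matters. The blank values below are hypothetical:

```python
from scipy import stats

def detection_limit(blank_results, confidence=0.99):
    """Method detection limit: t(n-1, confidence) times the sample
    standard deviation of replicate blank measurements from ONE batch
    (the classic single-batch formulation)."""
    n = len(blank_results)
    mean = sum(blank_results) / n
    s = (sum((b - mean) ** 2 for b in blank_results) / (n - 1)) ** 0.5
    return stats.t.ppf(confidence, n - 1) * s

# Seven blank determinations from a single batch (hypothetical values,
# arbitrary concentration units).
blanks = [0.012, 0.015, 0.011, 0.014, 0.013, 0.016, 0.012]
mdl = detection_limit(blanks)
```

For seven replicates the multiplier t(6, 0.99) ≈ 3.143; pooling blanks across batches with significant between-batch differences inflates s and hence the reported detection limit.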
Large Deviations and Metastability
Olivieri, Enzo; Eulália Vares, Maria
2005-02-01
This self-contained account of the main results in large deviation theory includes recent developments and emphasizes the Freidlin-Wentzell results on small random perturbations. Metastability is described on physical grounds, followed by the development of more exacting approaches to its description. The first part of the book then develops such pertinent tools as the theory of large deviations which is used to provide a physically relevant dynamical description of metastability. Written for graduate students, this book affords an excellent route into contemporary research as well.
Moderate deviations of maximum likelihood estimators under alternatives
Inglot, T.; Kallenberg, W.C.M.
2000-01-01
Since statistical models are simplifications of reality, it is important in estimation theory to study the behavior of estimators also under distributions (slightly) different from the proposed model. In testing theory, when dealing with test statistics where nuisance parameters are estimated,
Comparative study on maximum residue limits standards of pesticides in peanuts
丁小霞; 李培武; 周海燕; 李娟; 白艺珍
2011-01-01
Developing and implementing scientific and applicable maximum residue limit (MRL) standards for pesticides is an essential means of protecting the health of consumers and regulating the international trade of agricultural products. A comparative study of maximum residue limit standards for pesticides in peanuts was carried out among China, the Codex Alimentarius Commission (CAC), the United States, Japan, and the European Union. Corresponding suggestions are put forward after analyzing the problems in China's maximum residue limit standards for pesticides in peanuts.
Fan Aihua
2004-01-01
The vertices of an infinite locally finite tree T are labelled by a collection of i.i.d. real random variables {X_σ}_{σ∈T}, which defines a tree-indexed walk S_σ = ∑_{θ<r≤σ} X_r. We introduce and study the oscillations of the walk; the exact Hausdorff dimension of the set of such ξ's is calculated. An application is given to the study of the local variation of Brownian motion. A general limsup deviation problem on trees is also studied.
Marianna Rakszegi
2016-06-01
An assessment was previously made of the effects of organic and low-input field management systems on the physical, grain compositional and processing quality of wheat and on the performance of varieties developed using different breeding methods ("Comparison of quality parameters of wheat varieties with different breeding origin under organic and low-input conventional conditions" [1]). Here, accompanying data are provided on the performance and stability analysis of the genotypes using the coefficient of variation and the 'ranking' and 'which-won-where' plots of GGE biplot analysis for the most important quality traits. Broad-sense heritability was also evaluated and is given for the most important physical and quality properties of the seed in organic and low-input management systems, while mean values and standard deviation of the studied properties are presented separately for organic and low-input fields.
Rakszegi, Marianna; Löschenberger, Franziska; Hiltbrunner, Jürg; Vida, Gyula; Mikó, Péter
2016-06-01
An assessment was previously made of the effects of organic and low-input field management systems on the physical, grain compositional and processing quality of wheat and on the performance of varieties developed using different breeding methods ("Comparison of quality parameters of wheat varieties with different breeding origin under organic and low-input conventional conditions" [1]). Here, accompanying data are provided on the performance and stability analysis of the genotypes using the coefficient of variation and the 'ranking' and 'which-won-where' plots of GGE biplot analysis for the most important quality traits. Broad-sense heritability was also evaluated and is given for the most important physical and quality properties of the seed in organic and low-input management systems, while mean values and standard deviation of the studied properties are presented separately for organic and low-input fields.
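The coefficient of variation used for the stability analysis is simply the sample standard deviation expressed as a percentage of the mean, with lower values indicating a genotype that is more stable across environments. A sketch with hypothetical yields, not the published data:

```python
import numpy as np

# Hypothetical yields (t/ha) of three wheat genotypes across four
# environments -- illustrative values only.
yields = {
    "G1": [4.1, 4.3, 4.0, 4.2],   # consistent across environments
    "G2": [3.0, 5.5, 2.8, 5.9],   # strongly environment-dependent
    "G3": [5.0, 4.6, 5.3, 4.8],
}

def cv_percent(values):
    """Coefficient of variation: sample SD as a percentage of the mean."""
    v = np.asarray(values, dtype=float)
    return 100 * v.std(ddof=1) / v.mean()

# Lower CV = more stable; sort genotypes from most to least stable.
ranking = sorted(yields, key=lambda g: cv_percent(yields[g]))
```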
Wage and Labor Standards Administration (DOL), Washington, DC.
The Fair Labor Standards Act's 1966 amendments extended coverage to all non-Federal hospitals. Using data on employment, hours, wages, and supplementary benefits from one payroll period in March 1969, this report describes the impact of the increased coverage. Although 19 percent of the nonsupervisory employees were earning less than $1.30 an hour…
Large deviations and portfolio optimization
Sornette, Didier
Risk control and optimal diversification constitute a major focus in the finance and insurance industries as well as, more or less consciously, in our everyday life. We present a discussion of the characterization of risks and of the optimization of portfolios that starts from a simple illustrative model and ends by a general functional integral formulation. A major item is that risk, usually thought of as one-dimensional in the conventional mean-variance approach, has to be addressed by the full distribution of losses. Furthermore, the time-horizon of the investment is shown to play a major role. We show the importance of accounting for large fluctuations and use the theory of Cramér for large deviations in this context. We first treat a simple model with a single risky asset that exemplifies the distinction between the average return and the typical return and the role of large deviations in multiplicative processes, and the different optimal strategies for the investors depending on their size. We then analyze the case of assets whose price variations are distributed according to exponential laws, a situation that is found to describe daily price variations reasonably well. Several portfolio optimization strategies are presented that aim at controlling large risks. We end by extending the standard mean-variance portfolio optimization theory, first within the quasi-Gaussian approximation and then using a general formulation for non-Gaussian correlated assets in terms of the formalism of functional integrals developed in the field theory of critical phenomena.
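The distinction between the average and the typical return in a multiplicative process can be seen in a few lines of simulation: with factors 2.0 and 0.5 at equal probability the expected per-step factor is 1.25, yet the expected log-growth is 0.5 ln 2 + 0.5 ln 0.5 = 0, so the mean is driven by rare, exponentially lucky paths while the median path goes nowhere. The path and step counts below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(7)

# Multiplicative wealth process: each period, wealth is multiplied by
# 2.0 or 0.5 with equal probability.
n_paths, n_steps = 20000, 50
factors = rng.choice([2.0, 0.5], size=(n_paths, n_steps))
wealth = factors.prod(axis=1)

average = wealth.mean()      # dominated by a handful of lucky paths
typical = np.median(wealth)  # what most investors actually experience
```

The gap between the two grows exponentially with the horizon, which is why the time-horizon and the full distribution of outcomes, not just the mean, matter for risk control.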
Chad Pope; Larry L. Taylor; Soon Sam Kim
2007-02-01
This document represents a summary version of the criticality analysis done to support loading SNF in a Type 1a basket/standard canister combination. Specifically, this engineering design file (EDF) captures the information pertinent to the intact condition of four fuel types with different fissile loads and their calculated reactivities. These fuels are then degraded into various configurations inside a canister without the presence of significant moderation. The important aspect of this study is the portrayal of the fuel degradation and its effect on the reactivity of a single canister, given the supposition that there will be continued moderation exclusion from the canister. Subsequent analyses also investigate the most reactive 'dry' canister in a nine-canister array inside a hypothetical transport cask, both dry and with partial to complete flooding of the transport cask. The analyses also include a comparison of the most reactive configuration to other benchmarked fuels using a software package called TSUNAMI, which is part of the SCALE 5.0 suite of software.
Maximum-Likelihood Estimation of the Entropy of an Attractor
Schouten, J.C.; Takens, F.; van den Bleek, C.M.
1994-01-01
In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it is dependent on the entropy and on the number of samples used in the est
Large deviations from freeness
Kargin, Vladislav
2010-01-01
Let H = A + UBU* where A and B are two N-by-N Hermitian matrices and U is a Haar-distributed random unitary matrix, and let μ_H, μ_A, and μ_B be the empirical measures of the eigenvalues of the matrices H, A, and B, respectively. Then it is known (see, for example, Pastur-Vasilchuk, CMP, 2000, v. 214, pp. 249-286) that for large N, the measure μ_H is close to the free convolution of the measures μ_A and μ_B, where the free convolution is a non-linear operation on probability measures. The large deviations of the cumulative distribution function of μ_H from its expectation have been studied by Chatterjee in JFA, 2007, v. 245, pp. 379-389. In this paper we improve Chatterjee's estimate and show that P{sup_x |F_H(x) − F_+(x)| > δ} < exp[−f(δ)N²], where F_H(x) and F_+(x) denote the cumulative distribution functions of μ_H and of the free convolution of μ_A and μ_B, respectively, and where f(δ) is a specific function.
Large deviations of the maximal eigenvalue of random matrices
Borot, Gaëtan; Majumdar, Satya; Nadal, Céline
2011-01-01
We present detailed computations of the 'at least finite' terms (three dominant orders) of the free energy in a one-cut matrix model with a hard edge a, in beta-ensembles, with any polynomial potential. Beta is a positive number, so not restricted to the standard values beta = 1 (Hermitian matrices), beta = 1/2 (symmetric matrices), beta = 2 (quaternionic self-dual matrices). This model allows one to study the statistics of the maximum eigenvalue of random matrices. We compute the large deviation function to the left of the expected maximum. We specialize our results to the Gaussian beta-ensembles and check them numerically. Our method is based on general results and procedures already developed in the literature to solve the Pastur equations (also called "loop equations"). It allows one to compute the left tail of the analog of Tracy-Widom laws for any beta, including the constant term.
Large deviations and idempotent probability
Puhalskii, Anatolii
2001-01-01
In the view of many probabilists, author Anatolii Puhalskii's research results stand among the most significant achievements in the modern theory of large deviations. In fact, his work marked a turning point in the depth of our understanding of the connections between the large deviation principle (LDP) and well-known methods for establishing weak convergence results. Large Deviations and Idempotent Probability expounds upon the recent methodology of building large deviation theory along the lines of weak convergence theory. The author develops an idempotent (or maxitive) probability theory, introduces idempotent analogues of martingales (maxingales), Wiener and Poisson processes, and Ito differential equations, and studies their properties. The large deviation principle for stochastic processes is formulated as a certain type of convergence of stochastic processes to idempotent processes. The author calls this large deviation convergence. The approach to establishing large deviation convergence uses novel com...
Explorations in Statistics: Standard Deviations and Standard Errors
Curran-Everett, Douglas
2008-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This series in "Advances in Physiology Education" provides an opportunity to do just that: we will investigate basic concepts in statistics using the free software package R. Because this series uses R solely as a vehicle…
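The distinction the series explores, standard deviation as the spread of individual observations versus standard error as the uncertainty of a sample mean, can be checked numerically. The series itself uses R; this sketch uses Python with assumed population parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

# One sample of n observations from a population with mean 50, SD 10.
n = 100
sample = rng.normal(loc=50, scale=10, size=n)

sd = sample.std(ddof=1)        # spread of individual observations
se = sd / np.sqrt(n)           # uncertainty of the sample MEAN

# Check against the sampling distribution: the spread of many sample
# means should match the standard error (true SE = 10/sqrt(100) = 1),
# not the sample SD.
means = rng.normal(loc=50, scale=10, size=(2000, n)).mean(axis=1)
spread_of_means = means.std(ddof=1)
```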
Semantic Deviation in Oliver Twist
康艺凡
2016-01-01
Dickens, with his adeptness with language, skillfully applies semantic deviation in his realistic novel Oliver Twist. However, most studies and comments on it, at home and abroad, mainly focus on aspects such as humanity, society, and characters. This thesis therefore takes a stylistic approach to Oliver Twist from the perspective of semantic deviation, which is achieved through the use of irony, hyperbole, and pun, and analyzes how the application of the technique makes the novel attractive.
Paroxysmal upgaze deviation: case report
Echeverría-Palacio CM; Benavidez-Fierro MA
2012-01-01
Paroxysmal upgaze deviation is a syndrome first described in infants in 1988; only about 50 cases have been reported worldwide since. Its etiology is unclear and its prognosis variable, though most case reports indicate that during growth the episodes tend to decrease in frequency and duration until they disappear. This report describes a 16-month-old male child who, from the age of 11 months, presented many episodes of variable conjugate upward deviation of the eyes, with compensatory neck...
Gronewold, A. D.; Alameddine, I.; Anderson, R.; Wolpert, R.; Reckhow, K.
2008-12-01
The United States Environmental Protection Agency (USEPA) total maximum daily load (TMDL) program requires that individual states assess the condition of surface waters and identify those which fail to meet ambient water quality standards. Waters failing to meet those standards must have a TMDL assessment conducted to determine the maximum allowable pollutant load which can enter the water without violating water quality standards. While most of the nearly 30,000 TMDL assessments completed since 1995 use mechanistic or empirical water quality models to forecast water quality conditions under alternative pollutant loading reduction scenarios, few, if any, also simulate water quality conditions under alternative climate change scenarios. As a result, model-based loading reduction requirements (which serve as the cornerstone for implementing water resource management plans, and initiating environmental management infrastructure projects), believed to improve water quality in impaired waters and reinstate their designated use, may misrepresent the actual required reduction when future climate change scenarios are considered. For example, recent research indicates a potential long term future increase in both the number of days between, and the intensity of, individual precipitation events. In coastal terrestrial and aquatic ecosystems, such climate conditions could lead to an increased accumulation of pollutants on the landscape between precipitation events, followed by a washoff event with a relatively high pollutant load. On the other hand, anticipated increases in average temperature and evaporation rate might not only reduce effective rainfall rates (resulting in less energy for transporting pollutants from the landscape) but also reduce the tidal exchange ratio in shallow estuaries (many of which are valuable recreational, commercial, and aesthetic natural resources). Here, we develop and apply a comprehensive watershed-scale model for simulating water quality in
Angle-deviation optical profilometer
Chen-Tai Tan; Yuan-Sheng Chan; Zhen-Chin Lin; Ming-Hung Chiu
2011-01-01
We propose a new optical profilometer for three-dimensional (3D) surface profile measurement in real time. The deviation angle is based on geometrical optics and is proportional to the apex angle of a test plate. Measuring the reflectivity of a parallelogram prism allows detection of the deviation angle when the beam is incident near the critical angle. The reflectivity is inversely proportional to the deviation angle and proportional to the apex angle and surface height. We use a charge-coupled device (CCD) camera at the image plane to capture the reflectivity profile and obtain the 3D surface profile directly.
Qi Shi
To find the most valuable parameter of ¹⁸F-fluorodeoxyglucose positron emission tomography for predicting distant metastasis in nasopharyngeal carcinoma. From June 2007 through December 2010, 43 non-metastatic NPC patients who underwent ¹⁸F-fluorodeoxyglucose positron emission tomography/computed tomography (PET/CT) before radical intensity-modulated radiation therapy were enrolled and reviewed retrospectively. PET parameters including maximum standardized uptake value (SUVmax), mean standardized uptake value (SUVmean), metabolic tumor volume (MTV), and total lesion glycolysis (TLG) of both the primary tumor and cervical lymph nodes were calculated. Total SUVmax was recorded as the sum of the SUVmax of the primary tumor and cervical lymph nodes; Total SUVmean, Total MTV, and Total TLG were calculated in the same way. The median follow-up was 32 months (range, 23-68 months). Distant metastasis was the main pattern of treatment failure. Univariate analysis showed that higher SUVmax, SUVmean, MTV, and TLG of the primary tumor, Total SUVmax, Total MTV, Total TLG, and stage T3-4 were factors predicting significantly poorer distant metastasis-free survival (p = 0.042, p = 0.008, p = 0.023, p = 0.023, p = 0.024, p = 0.033, p = 0.016, p = 0.015). In multivariate analysis, Total SUVmax was the independent predictive factor for distant metastasis (p = 0.046). Spearman rank correlation analysis showed moderate to strong correlation between Total SUVmax and SUVmax-T, and between Total SUVmax and SUVmax-N (Spearman coefficients: 0.568 and 0.834; p = 0.000 and p = 0.000). Preliminary results indicated that Total SUVmax was an independent predictive factor for distant metastasis in patients with nasopharyngeal carcinoma treated with intensity-modulated radiation therapy.
Schmidt, Matthias; Dietlein, Markus; Kobe, Carsten; Eschner, Wolfgang; Schicha, Harald [University of Cologne, Department of Nuclear Medicine, Cologne (Germany); Bollschweiler, Elfriede; Moenig, Stefan P.; Vallboehmer, Daniel; Hoelscher, Arnulf [University of Cologne, Department of General-, Visceral and Cancer Surgery, Cologne (Germany)
2009-05-15
To evaluate the potential of [18F]fluorodeoxyglucose positron emission tomography (FDG-PET) for the assessment of histopathological response and survival after neoadjuvant radiochemotherapy in patients with oesophageal cancer. In 2005 and 2006, 55 patients (43 men, 12 women; median age 60 years) with locally advanced oesophageal cancer (cT3-4 Nx M0; 24 with squamous cell carcinoma, 31 with adenocarcinoma) underwent transthoracic en bloc oesophagectomy after completion of treatment with cisplatin, 5-fluorouracil, and radiotherapy to 36 Gy in a prospective clinical trial. Of the 55 patients, 21 (38%) were classified as histopathological responders (<10% vital residual tumour cells) and 34 (62%) as nonresponders. FDG-PET was performed before (PET 1) and 3-4 weeks after the end (PET 2) of radiochemotherapy, with assessment of maximum and average standardized uptake values (SUV) for correlation with histopathological response and survival. Histopathological responders had a slightly higher baseline SUV than nonresponders (p<0.0001 between PET 1 and PET 2 for both responders and nonresponders), and the decrease was more prominent in responders. Except for SUVmax in patients with squamous cell carcinoma, neither baseline nor preoperative SUV nor percentage SUV reduction correlated significantly with histopathological response. Histopathological responders had a 2-year overall survival of 91 ± 9% and nonresponders a survival of 53 ± 10% (p = 0.007). Our study does not support recent reports that FDG-PET predicts histopathological response and survival in patients with locally advanced oesophageal cancer treated by neoadjuvant radiochemotherapy. (orig.)
张凯; 董华英
2013-01-01
A new method for discriminating inrush current from internal fault current in transformers is presented, based on extensive simulation and testing. Because the magnetizing inrush current leans to one side of the time axis while the internal fault current is close to a sine wave, the difference between the maximum and minimum values and the normalized area within a cycle are used as a general criterion to identify unsymmetrical inrush current. For symmetrical inrush current, a second criterion is applied, which calculates the average standard deviation of the 2-norm between part of the data in the half cycle and a sine wave of the same length. The validity of the new method is verified by simulation and testing.
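The first (asymmetry) criterion can be sketched numerically. The snippet below is a minimal illustration only: it computes a peak-to-peak range and a peak-normalized area over one cycle, and contrasts a symmetric fault-like sine with a crude one-sided inrush shape. The feature definitions, waveforms and any decision thresholds are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def asymmetry_features(cycle):
    """Features of one cycle of current used to flag asymmetric inrush.

    cycle : samples covering exactly one fundamental period.
    Returns (peak-to-peak range, area normalized by peak and length),
    in the spirit of the paper's first criterion; real thresholds would
    be tuned on simulation and test records.
    """
    cycle = np.asarray(cycle, dtype=float)
    p2p = cycle.max() - cycle.min()          # max-min difference in the cycle
    peak = np.abs(cycle).max()
    norm_area = np.abs(cycle).mean() / peak  # unitized area of |i(t)|
    return p2p, norm_area

t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
sine = np.sin(t)                       # internal-fault-like, symmetric
inrush = np.clip(np.sin(t), 0, None)   # crude one-sided inrush shape

s_p2p, s_area = asymmetry_features(sine)
i_p2p, i_area = asymmetry_features(inrush)
```

The one-sided inrush shape yields both a smaller peak-to-peak range relative to its peak and a smaller normalized area than the symmetric sine, which is what the criterion exploits.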
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data? (2) Goodness-of-fit: How concordant is this distribution with the observed data? (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions, called "maximum fidelity", is presented. Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
Kus T
2015-12-01
Full Text Available Tulay Kus,1 Gokmen Aktas,1 Alper Sevinc,1 Mehmet Emin Kalender,1 Mustafa Yilmaz,2 Seval Kul,3 Serdar Oztuzcu,4 Cemil Oktay,5 Celaletdin Camci1 1Department of Internal Medicine, Division of Medical Oncology, Gaziantep Oncology Hospital, 2Department of Nuclear Medicine, 3Department of Biostatistics, Faculty of Medicine, 4Department of Medical Biology, Faculty of Medicine, University of Gaziantep, Gaziantep, 5Department of Radiology, Faculty of Medicine, University of Akdeniz, Antalya, Turkey Purpose: To investigate whether the initial maximum standardized uptake value (SUVmax) on fluorine-18 fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) has prognostic significance in metastatic lung adenocarcinoma. Patients and methods: Sixty patients (24 females, mean age: 57.9±12 years) with metastatic stage lung adenocarcinoma who used erlotinib and underwent 18F-FDG PET/CT at the time of diagnosis between May 2010 and May 2014 were enrolled in this retrospective study. The patients were stratified according to the median SUVmax value, which was found to be 11. Progression-free survival (PFS) rates for 3, 6, and 12 months were examined for SUVmax values and epidermal growth factor receptor (EGFR) mutation status. Results: The number of EGFR-sensitizing mutation positive/negative/unknown patients was 26/17/17, respectively, and the number of patients using erlotinib as first-line, second-line, and third-line therapy was 15, 31, and 14, respectively. The PFS rates of EGFR mutation positive, negative, and unknown patients for 3 months were 73.1%, 35.3%, and 41.2% (P=0.026, odds ratio [OR]=4.39; 95% confidence interval [CI]: 1.45–13.26), respectively. The PFS rates of EGFR positive, negative, and unknown patients for 6 months were 50%, 29.4%, and 29.4% (P=0.267, OR: 2.4; 95% CI: 0.82–6.96), respectively. The PFS rates of EGFR positive, negative, and unknown patients for 12 months were 42.3%, 29.4%, and 23.5% (P=0.408, OR: 2.0; 95% CI: 0.42
Sürer Budak, Evrim; Toptaş, Tayfun; Aydın, Funda; Öner, Ali Ozan; Çevikol, Can; Şimşek, Tayup
2017-02-05
To explore the correlation of the primary tumor's maximum standardized uptake value (SUVmax) and minimum apparent diffusion coefficient (ADCmin) with clinicopathologic features, and to determine their predictive power in endometrial cancer (EC). A total of 45 patients who had undergone staging surgery after a preoperative evaluation with (18)F-fluorodeoxyglucose (FDG) positron emission tomography/computerized tomography (PET/CT) and diffusion-weighted magnetic resonance imaging (DW-MRI) were included in a prospective case-series study with planned data collection. Multiple linear regression analysis was used to determine the correlations between the study variables. The mean ADCmin and SUVmax values were determined as 0.72±0.22 and 16.54±8.73, respectively. A univariate analysis identified age, myometrial invasion (MI) and lymphovascular space involvement (LVSI) as the potential factors associated with ADCmin while it identified age, stage, tumor size, MI, LVSI and number of metastatic lymph nodes as the potential variables correlated to SUVmax. In multivariate analysis, on the other hand, MI was the only significant variable that correlated with ADCmin (p=0.007) and SUVmax (p=0.024). Deep MI was best predicted by an ADCmin cutoff value of ≤0.77 [93.7% sensitivity, 48.2% specificity, and 93.0% negative predictive value (NPV)] and SUVmax cutoff value of >20.5 (62.5% sensitivity, 86.2% specificity, and 81.0% NPV); however, the two diagnostic tests were not significantly different (p=0.266). Among clinicopathologic features, only MI was independently correlated with SUVmax and ADCmin. However, the routine use of (18)F-FDG PET/CT or DW-MRI cannot be recommended at the moment due to less than ideal predictive performances of both parameters.
Nakajo, Masatoyo [Nanpuh Hospital, Department of Radiology, Kagoshima (Japan); Kagoshima University, Department of Radiology, Graduate School of Medical and Dental Sciences, Kagoshima (Japan); Kajiya, Yoriko; Tani, Atsushi; Ueno, Masako [Nanpuh Hospital, Department of Radiology, Kagoshima (Japan); Kaneko, Tomoyo; Kaneko, Youichi [Kaneko Clinic, Department of Breast Surgery, Kagoshima (Japan); Takasaki, Takashi [Department of Pathology, Clinical Pathology Laboratory, Kagoshima (Japan); Koriyama, Chihaya [Kagoshima University, Department of Epidemiology and Preventive Medicine, Graduate School of Medical and Dental Sciences, Kagoshima (Japan); Nakajo, Masayuki [Kagoshima University, Department of Radiology, Graduate School of Medical and Dental Sciences, Kagoshima (Japan)
2010-11-15
To correlate both primary lesion 18F-fluorodeoxyglucose (FDG) maximum standardized uptake value (SUVmax) and diffusion-weighted imaging (DWI) apparent diffusion coefficient (ADC) with clinicopathological prognostic factors, and to compare the prognostic value of these indexes in breast cancer. The study population consisted of 44 patients with 44 breast cancers visible on both preoperative FDG PET/CT and DWI images. The breast cancers included 9 ductal carcinoma in situ (DCIS) and 35 invasive ductal carcinomas (IDC). The relationships between both SUVmax and ADC and clinicopathological prognostic factors were evaluated by univariate and multivariate regression analysis, and the degree of correlation was determined by Spearman's rank test. The patients were divided into a better prognosis group (n = 24) and a worse prognosis group (n = 20) based upon invasiveness (DCIS or IDC) and upon their prognostic group (good, moderate or poor) determined from the modified Nottingham prognostic index. Their prognostic values were examined by receiver operating characteristic analysis. Both SUVmax and ADC were significantly associated (p<0.05) with histological grade (independently), nodal status and vascular invasion. Significant associations were also noted between SUVmax and tumour size (independently), oestrogen receptor status and human epidermal growth factor receptor-2 status, and between ADC and invasiveness. SUVmax and ADC were negatively correlated (ρ = -0.486, p = 0.001) and were positively and negatively associated, respectively, with increasing histological grade. The threshold values for predicting a worse prognosis were ≥4.2 for SUVmax (with a sensitivity, specificity and accuracy of 80%, 75% and 77%, respectively) and ≤0.98 for ADC (with a sensitivity, specificity and accuracy of 90%, 67% and 77%, respectively). SUVmax and ADC correlated with several pathological prognostic factors and both indexes may have the same potential for predicting the
Carkaci, Selin; Adrada, Beatriz E; Rohren, Eric; Wei, Wei; Quraishi, Mohammad A; Mawlawi, Osama; Buchholz, Thomas A; Yang, Wei
2012-05-01
The aim of this study was to determine an optimum standardized uptake value (SUV) threshold for identifying regional nodal metastasis on 18F-fluorodeoxyglucose (FDG) positron emission tomographic (PET)/computed tomographic (CT) studies of patients with inflammatory breast cancer. A database search was performed of patients newly diagnosed with inflammatory breast cancer who underwent 18F-FDG PET/CT imaging at the time of diagnosis at a single institution between January 1, 2001, and September 30, 2009. Three radiologists blinded to the histopathology of the regional lymph nodes retrospectively analyzed all 18F-FDG PET/CT images by measuring the maximum SUV (SUVmax) in visually abnormal nodes. The accuracy of 18F-FDG PET/CT image interpretation was correlated with histopathology when available. Receiver-operating characteristic curve analysis was performed to assess the diagnostic performance of PET/CT imaging. Sensitivity, specificity, positive predictive value, and negative predictive value were calculated using three different SUV cutoff values (2.0, 2.5, and 3.0). A total of 888 regional nodal basins, including bilateral axillary, infraclavicular, internal mammary, and supraclavicular lymph nodes, were evaluated in 111 patients (mean age, 56 years). Of the 888 nodal basins, 625 (70%) were negative and 263 (30%) were positive for metastasis. Malignant lymph nodes had significantly higher SUVmax than benign lymph nodes (P lymph nodes on 18F-FDG PET/CT imaging may help differentiate benign and malignant lymph nodes in patients with inflammatory breast cancer. An SUV cutoff of 2 provided the best accuracy in identifying regional nodal metastasis in this patient population. Copyright © 2012 AUR. Published by Elsevier Inc. All rights reserved.
Rosewarne, P J; Wilson, J M; Svendsen, J C
2016-01-01
Metabolic rate is one of the most widely measured physiological traits in animals and may be influenced by both endogenous (e.g. body mass) and exogenous factors (e.g. oxygen availability and temperature). Standard metabolic rate (SMR) and maximum metabolic rate (MMR) are two fundamental physiological variables providing the floor and ceiling in aerobic energy metabolism. The total amount of energy available between these two variables constitutes the aerobic metabolic scope (AMS). A laboratory exercise aimed at an undergraduate level physiology class, which details the appropriate data acquisition methods and calculations to measure oxygen consumption rates in rainbow trout Oncorhynchus mykiss, is presented here. Specifically, the teaching exercise employs intermittent flow respirometry to measure SMR and MMR, derives AMS from the measurements and demonstrates how AMS is affected by environmental oxygen. Students' results typically reveal a decline in AMS in response to environmental hypoxia. The same techniques can be applied to investigate the influence of other key factors on metabolic rate (e.g. temperature and body mass). Discussion of the results develops students' understanding of the mechanisms underlying these fundamental physiological traits and the influence of exogenous factors. More generally, the teaching exercise outlines essential laboratory concepts in addition to metabolic rate calculations, data acquisition and unit conversions that enhance competency in quantitative analysis and reasoning. Finally, the described procedures are generally applicable to other fish species or aquatic breathers such as crustaceans (e.g. crayfish) and provide an alternative to using higher (or more derived) animals to investigate questions related to metabolic physiology.
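The exercise's core calculation, AMS = MMR − SMR, can be sketched as below. The low-quantile convention for SMR, the variable names and the numbers are illustrative assumptions for teaching purposes, not values prescribed by the protocol.

```python
import numpy as np

def aerobic_scope(mo2_rest, mo2_max):
    """Aerobic metabolic scope (AMS) from oxygen-consumption rates.

    mo2_rest : repeated resting MO2 measurements; SMR is taken as a low
    quantile, one common convention for filtering out activity spikes.
    mo2_max : MO2 measurements right after exhaustive exercise; MMR is
    the highest value. AMS = MMR - SMR.
    """
    smr = float(np.quantile(mo2_rest, 0.2))
    mmr = float(np.max(mo2_max))
    return mmr - smr, smr, mmr

# Hypothetical rainbow trout readings in mg O2 kg^-1 h^-1 (illustrative only)
rest = [82, 75, 90, 78, 74, 85, 76, 73]
post_chase = [410, 395, 388]
ams, smr, mmr = aerobic_scope(rest, post_chase)
```

Repeating the same calculation under hypoxia would show MMR, and hence AMS, shrinking while SMR stays roughly constant, which is the pattern the students' results typically reveal.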
Iskender, Ilker; Kadioglu, Salih Zeki; Kosar, Altug; Atasalihi, Ali; Kir, Altan
2011-06-01
The maximum standardized uptake value (SUV(max)) varies among positron emission tomography-integrated computed tomography (PET/CT) centers in the staging of non-small cell lung cancer. We evaluated the ratio of the optimum SUV(max) cut-off for the lymph nodes to the median SUV(max) of the primary tumor (ratioSUV(max)) to determine SUV(max) variations between PET/CT scanners. The previously described PET predictive ratio (PPR) was also evaluated. PET/CT and mediastinoscopy and/or thoracotomy were performed on 337 consecutive patients between September 2005 and March 2009. Thirty-six patients were excluded from the study. The pathological results were correlated with the PET/CT findings. Histopathological examination was performed on 1136 N2 lymph nodes imaged at 10 different PET/CT centers. The majority of patients (group A: 240) used the same PET/CT scanner at four different centers. The other patients were categorized as group B. The ratioSUV(max) for groups A and B was 0.18 and 0.22, respectively. The same ratio for centers 1, 2, 3 and 4 was 0.2, 0.21, 0.21, and 0.23, respectively. The optimal cut-off value of the PPR to predict mediastinal lymph node malignancy was 0.49 (likelihood ratio +2.02; sensitivity 70%, specificity 65%). We conclude that the ratioSUV(max) was similar across different scanners. Thus, ratioSUV(max) is a valuable cut-off for comparing centers.
Amos JM Ela Bella; Ya-Rui Zhang; Wei Fan; Kong-Jia Luo; Tie-Hua Rong; Peng Lin; Hong Yang; Jian-Hua Fu
2014-01-01
The presence of lymph node metastasis is an important prognostic factor for patients with esophageal cancer. Accurate assessment of lymph nodes in thoracic esophageal carcinoma is essential for selecting appropriate treatment and forecasting disease progression. Positron emission tomography combined with computed tomography (PET/CT) is becoming an important tool in the workup of esophageal carcinoma. Here, we evaluated the effectiveness of the maximum standardized uptake value (SUVmax) in assessing lymph node metastasis in esophageal squamous cell carcinoma (ESCC) prior to surgery. Fifty-nine surgical patients with pathologically confirmed thoracic ESCC were retrospectively studied. These patients underwent radical esophagectomy with pathologic evaluation of lymph nodes. They all had 18F-FDG PET/CT scans in their preoperative staging procedures. None had a prior history of cancer. The pathologic status and PET/CT SUVmax of lymph nodes were collected to calculate the receiver operating characteristic (ROC) curve and to determine the best cutoff value of the PET/CT SUVmax to distinguish benign from malignant lymph nodes. Lymph node data from 27 other patients were used for the validation. A total of 323 lymph nodes including 39 metastatic lymph nodes were evaluated in the training cohort, and 117 lymph nodes including 32 metastatic lymph nodes were evaluated in the validation cohort. The cutoff point of the SUVmax for lymph nodes was 4.1, as calculated by ROC curve (sensitivity, 80%; specificity, 92%; accuracy, 90%). When this cutoff value was applied to the validation cohort, a sensitivity, a specificity, and an accuracy of 81%, 88%, and 86%, respectively, were obtained. These results suggest that the SUVmax of lymph nodes predicts malignancy. Indeed, when an SUVmax of 4.1 was used instead of 2.5, FDG-PET/CT was more accurate in assessing nodal metastasis.
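Choosing a cutoff from an ROC curve can be illustrated with a small sketch that maximizes Youden's J (sensitivity + specificity − 1) over candidate thresholds. The SUVmax values and node labels below are hypothetical; the study does not specify its selection rule beyond "calculated by ROC curve", so Youden's J is one common choice, not necessarily theirs.

```python
import numpy as np

def youden_cutoff(scores, labels):
    """Threshold maximizing Youden's J = sensitivity + specificity - 1.

    scores : SUVmax values; labels : 1 = metastatic node, 0 = benign.
    A node is called positive when its score exceeds the threshold.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    best_t, best_j = None, -np.inf
    for t in np.unique(scores):
        pred = scores > t
        sens = (pred & (labels == 1)).sum() / max((labels == 1).sum(), 1)
        spec = (~pred & (labels == 0)).sum() / max((labels == 0).sum(), 1)
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return float(best_t), float(best_j)

# Hypothetical SUVmax readings and node pathology (not the study's data)
suv = [1.2, 2.0, 3.0, 3.9, 4.1, 5.5, 6.8, 9.0]
lab = [0, 0, 0, 0, 1, 1, 1, 1]
t_opt, j_opt = youden_cutoff(suv, lab)
```

In this toy example the classes separate perfectly, so the chosen threshold sits just below the smallest malignant value; real data would trade sensitivity against specificity as the abstract's 80%/92% figures do.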
Large deviations in Taylor dispersion
Kahlen, Marcel; Engel, Andreas; Van den Broeck, Christian
2017-01-01
We establish a link between the phenomenon of Taylor dispersion and the theory of empirical distributions. Using this connection, we derive, upon applying the theory of large deviations, an alternative and much more precise description of the long-time regime for Taylor dispersion.
Rodrigues, Elsa Teresa; Pardal, Miguel Ângelo; Gante, Cristiano; Loureiro, João; Lopes, Isabel
2017-02-01
The main goal of the present study was to determine and validate an aquatic Maximum Acceptable Concentration-Environmental Quality Standard (MAC-EQS) value for the agricultural fungicide azoxystrobin (AZX). Assessment factors were applied to short-term toxicity data using the lowest EC50 and the Species Sensitivity Distribution (SSD) method. Both ways of EQS generation were applied to a freshwater toxicity dataset for AZX based on available data, and to marine toxicity datasets for AZX and Ortiva(®) (a commercial formulation of AZX) obtained in the present study. High interspecific variability in AZX sensitivity was observed in all datasets, with the copepod Eudiaptomus graciloides (LC50,48h = 38 μg L⁻¹) and the gastropod Gibbula umbilicalis (LC50,96h = 13 μg L⁻¹) being the most sensitive freshwater and marine species, respectively. MAC-EQS values derived using the lowest EC50 (≤0.38 μg L⁻¹) were more protective than those derived using the SSD method (≤3.2 μg L⁻¹). After comparing the MAC-EQS values estimated in the present study to the smallest AA-EQS available, which protects against prolonged exposure to AZX, the MAC-EQS values derived using the lowest EC50 were considered overprotective, and a MAC-EQS of 1.8 μg L⁻¹ was validated and recommended for AZX for the water column. This value was derived from marine toxicity data, which highlights the importance of testing marine organisms. Moreover, Ortiva affects the most sensitive marine species to a greater extent than AZX, and marine species are more sensitive than freshwater species to AZX. A risk characterization ratio higher than one led to the conclusion that AZX might pose a high risk to the aquatic environment. More broadly, before new pesticides are approved, we suggest improving the Tier 1 prospective Ecological Risk Assessment by increasing the number of short-term data and applying the SSD approach, in order to ensure the safety of
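The SSD step can be sketched as fitting a log-normal distribution to the toxicity values and reading off its 5th percentile (the HC5); regulatory EQS derivation then divides by an assessment factor. The toxicity dataset below is hypothetical, and the omission of goodness-of-fit checks is a deliberate simplification.

```python
import math
import statistics
from statistics import NormalDist

def hc5(toxicity_values):
    """5th percentile (HC5) of a log-normal species sensitivity distribution.

    Fits a log-normal SSD to short-term EC50/LC50 values and returns the
    concentration hazardous to 5% of species. Regulatory MAC-EQS derivation
    additionally divides by an assessment factor and requires goodness-of-fit
    checks; both are omitted in this sketch.
    """
    logs = [math.log10(c) for c in toxicity_values]
    mu = statistics.fmean(logs)
    sd = statistics.stdev(logs)
    return 10 ** NormalDist(mu, sd).inv_cdf(0.05)

# Hypothetical short-term toxicity data in ug/L (not the study's dataset)
tox = [13, 38, 120, 250, 900, 1500]
h = hc5(tox)
```

Because the fit uses log-transformed concentrations, the HC5 lands below the most sensitive species when interspecific variability is high, which mirrors why SSD-based values in the abstract differ from lowest-EC50 values.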
Kus, Tulay; Aktas, Gokmen; Sevinc, Alper; Kalender, Mehmet Emin; Yilmaz, Mustafa; Kul, Seval; Oztuzcu, Serdar; Oktay, Cemil; Camci, Celaletdin
2015-01-01
Purpose To investigate whether the initial maximum standardized uptake value (SUVmax) on fluorine-18 fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) has a prognostic significance in metastatic lung adenocarcinoma. Patients and methods Sixty patients (24 females, mean age: 57.9±12 years) with metastatic stage lung adenocarcinoma who used erlotinib and underwent 18F-FDG PET/CT at the time of diagnosis between May 2010 and May 2014 were enrolled in this retrospective study. The patients were stratified according to the median SUVmax value, which was found as 11. Progression-free survival (PFS) rates for 3, 6, and 12 months were examined for SUVmax values and epidermal growth factor receptor (EGFR) mutation status. Results The number of EGFR-sensitizing mutation positive/negative/unknown was 26/17/17, respectively, and the number of patients using erlotinib at first-line, second-line, and third-line therapy was 15, 31, and 14 consecutively. The PFS rates of EGFR mutation positive, negative, and unknown patients for 3 months were 73.1%, 35.3%, and 41.2% (P=0.026, odds ratio [OR]=4.39; 95% confidence interval [CI]: 1.45–13.26), respectively. The PFS rates of EGFR positive, negative, and unknown patients for 6 months were 50%, 29.4%, and 29.4% (P=0.267, OR: 2.4; 95% CI: 0.82–6.96), respectively. The PFS rates of EGFR positive, negative, and unknown patients for 12 months were 42.3%, 29.4%, 23.5% (P=0.408, OR: 2.0; 95% CI: 0.42–5.26), respectively. Thirty-one of 60 patients had SUVmax values ≤11. The PFS rates for 3, 6, and 12 months were 70.5%/28% (P=0.001, OR=9.0; 95% CI: 2.79–29.04), 61.7%/8% (P11) group, respectively. Conclusion Initial SUVmax value on 18F-FDG PET/CT is found to be a prognostic factor anticipating the response to erlotinib for 3, 6, and 12-month rates of PFS in both EGFR-sensitizing mutation and wild-type tumor group. PMID:26719702
Hoeij, F.B. van; Stadhouders, P.H.G.M.; Weusten, B.L.A.M. [St Antonius Ziekenhuis, Department of Gastroenterology, Nieuwegein (Netherlands); Keijsers, R.G.M. [St Antonius Ziekenhuis, Department of Nuclear Medicine, Nieuwegein (Netherlands); Loffeld, B.C.A.J. [Zuwe Hofpoort Ziekenhuis, Department of Internal Medicine, Woerden (Netherlands); Dun, G. [Ziekenhuis Rivierenland, Department of Internal Medicine, Tiel (Netherlands)
2015-01-15
In patients undergoing 18F-FDG PET/CT, incidental colonic focal lesions can be indicative of inflammatory, premalignant or malignant lesions. The maximum standardized uptake value (SUVmax) of these lesions, representing the FDG uptake intensity, might be helpful in differentiating malignant from benign lesions, and thereby be helpful in determining the urgency of colonoscopy. The aim of our study was to assess the incidence and underlying pathology of incidental PET-positive colonic lesions in a large cohort of patients, and to determine the usefulness of the SUVmax in differentiating benign from malignant pathology. The electronic records of all patients who underwent FDG PET/CT from January 2010 to March 2013 in our hospital were retrospectively reviewed. The main indications for PET/CT were: characterization of an indeterminate mass on radiological imaging, suspicion or staging of malignancy, and suspicion of inflammation. In patients with incidental focal FDG uptake in the large bowel, data regarding subsequent colonoscopy were retrieved, if performed within 120 days. The final diagnosis was defined using colonoscopy findings, combined with additional histopathological assessment of the lesion, if applicable. Of 7,318 patients analysed, 359 (5 %) had 404 foci of unexpected colonic FDG uptake. In 242 of these 404 lesions (60 %), colonoscopy follow-up data were available. Final diagnoses were: adenocarcinoma in 25 (10 %), adenoma in 90 (37 %), and benign in 127 (53 %). The median [IQR] SUVmax was significantly higher in adenocarcinoma (16.6 [12 - 20.8]) than in benign lesions (8.2 [5.9 - 10.1]; p < 0.0001), non-advanced adenoma (8.3 [6.1 - 10.5]; p < 0.0001) and advanced adenoma (9.7 [7.2 - 12.6]; p < 0.001). The receiver operating characteristic curve of SUVmax for malignant versus nonmalignant lesions had an area under the curve of 0.868 (SD ± 0.038), the optimal cut-off value being 11.4 (sensitivity 80 %, specificity 82
Modified maximum likelihood registration based on information fusion
Yongqing Qi; Zhongliang Jing; Shiqiang Hu
2007-01-01
The bias estimation of passive sensors is considered based on information fusion in a multi-platform multi-sensor tracking system. The unobservability problem of bearing-only tracking in the blind spot is analyzed. A modified maximum likelihood method, which uses the redundant information of the multi-sensor system to calculate the target position, is investigated to estimate the biases. Monte Carlo simulation results show that the modified method eliminates the effect of the unobservability problem in the blind spot and can estimate the biases more rapidly and accurately than the maximum likelihood method. It is statistically efficient, since the standard deviation of the bias estimation errors meets the theoretical lower bounds.
Paroxysmal upgaze deviation: case report
Echeverría-Palacio CM
2012-05-01
Full Text Available Paroxysmal upgaze deviation is a syndrome first described in infants in 1988; only about 50 cases have been reported worldwide since then. Its etiology is unclear and its prognosis is variable; most case reports indicate that during growth the episodes tend to decrease in frequency and duration until they disappear. We describe a 16-month-old male child who since the age of 11 months had presented many episodes of variable conjugate upward deviation of the eyes, compensatory neck flexion and down-beat saccades on attempted downgaze. These events are predominantly diurnal, are exacerbated by stressful situations such as fasting or insomnia, and improve with sleep. Neurologic and ophthalmologic examinations were normal, and neuroimaging and EEG findings were unremarkable.
Perception of Aircraft Deviation Cues
Martin, Lynne; Azuma, Ronald; Fox, Jason; Verma, Savita; Lozito, Sandra
2005-01-01
To begin to address the need for new displays, required by a future airspace concept to support new roles that will be assigned to flight crews, a study of potentially informative display cues was undertaken. Two cues were tested on a simple plan display - aircraft trajectory and flight corridor. Of particular interest was the speed and accuracy with which participants could detect an aircraft deviating outside its flight corridor. Presence of the trajectory cue significantly reduced participant reaction time to a deviation, while the flight corridor cue did not. Although non-significant, the flight corridor cue appeared to be related to the accuracy of participants' judgments rather than their speed. As this is the second of a series of studies, these issues will be addressed further in future studies.
48 CFR 2001.403 - Individual deviations.
2010-10-01
... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Individual deviations. 2001... Individual deviations. In individual cases, deviations from either the FAR or the NRCAR will be authorized... deviations clearly in the best interest of the Government. Individual deviations must be authorized...
48 CFR 801.403 - Individual deviations.
2010-10-01
... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Individual deviations. 801... Individual deviations. (a) Authority to authorize individual deviations from the FAR and VAAR is delegated to... nature of the deviation. (d) The DSPE may authorize individual deviations from the FAR and VAAR when...
48 CFR 1301.403 - Individual deviations.
2010-10-01
... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Individual deviations... DEPARTMENT OF COMMERCE ACQUISITION REGULATIONS SYSTEM Deviations From the FAR 1301.403 Individual deviations. The designee authorized to approve individual deviations from the FAR is set forth in CAM 1301.70....
48 CFR 401.403 - Individual deviations.
2010-10-01
... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Individual deviations. 401... AGRICULTURE ACQUISITION REGULATION SYSTEM Deviations From the FAR and AGAR 401.403 Individual deviations. In individual cases, deviations from either the FAR or the AGAR will be authorized only when essential to...
48 CFR 2801.403 - Individual deviations.
2010-10-01
... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Individual deviations. 2801... OF JUSTICE ACQUISITION REGULATIONS SYSTEM Deviations From the FAR and JAR 2801.403 Individual deviations. Individual deviations from the FAR or the JAR shall be approved by the head of the...
48 CFR 301.403 - Individual deviations.
2010-10-01
... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Individual deviations. 301... ACQUISITION REGULATION SYSTEM Deviations From the FAR 301.403 Individual deviations. Contracting activities shall prepare requests for individual deviations to either the FAR or HHSAR in accordance with 301.470....
48 CFR 1501.403 - Individual deviations.
2010-10-01
... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Individual deviations. 1501.403 Section 1501.403 Federal Acquisition Regulations System ENVIRONMENTAL PROTECTION AGENCY GENERAL GENERAL Deviations 1501.403 Individual deviations. Requests for individual deviations from the FAR and...
48 CFR 501.403 - Individual deviations.
2010-10-01
... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Individual deviations. 501... Individual deviations. (a) An individual deviation affects only one contract action. (1) The Head of the Contracting Activity (HCA) must approve an individual deviation to the FAR. The authority to grant...
48 CFR 2401.403 - Individual deviations.
2010-10-01
... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Individual deviations. 2401... DEVELOPMENT GENERAL FEDERAL ACQUISITION REGULATION SYSTEM Deviations 2401.403 Individual deviations. In individual cases, proposed deviations from the FAR or HUDAR shall be submitted to the Senior...
Deviation of the statistical fluctuation in heterogeneous anomalous diffusion
Itto, Yuichi
2016-01-01
The exponent of anomalous diffusion of virus in cytoplasm of a living cell is experimentally known to fluctuate depending on localized areas of the cytoplasm, indicating heterogeneity of diffusion. In a recent paper (Itto, 2012), a maximum-entropy-principle approach has been developed in order to propose an Ansatz for the statistical distribution of such exponent fluctuations. Based on this approach, here the deviation of the statistical distribution of the fluctuations from the proposed one is studied from the viewpoint of Einstein's theory of fluctuations (of the thermodynamic quantities). This may present a step toward understanding the statistical property of the deviation. It is shown in a certain class of small deviations that the deviation obeys the multivariate Gaussian distribution.
王红; 金煜炜; 陈晓波; 曹延延; 白晋丽; 瞿宇晋; 宋昉
2015-01-01
Objective: To analyze the distribution of common chromosomal karyotypes in patients with Turner syndrome (TS), and to explore the correlation between age and height standard deviation score (HSDS) at diagnosis. Methods: A retrospective investigation was performed on the age and HSDS at diagnosis of 273 TS girls (≤18.0 years old) diagnosed by chromosomal karyotyping. The main statistical analyses were the t-test and Pearson correlation test, using SPSS 18.0. Results: (1) There were 4 common chromosomal karyotypes in TS: 45,X (87/273 cases, 31.9%), 46,X,i(Xq) (43/273 cases, 15.7%), 45,X/46,X,i(Xq) (36/273 cases, 13.2%) and 45,X/46,XX (23/273 cases, 8.4%); all adolescent TS patients had delayed puberty. Among the cases with 45,X karyotypes, 3 presented mental retardation and 2 had organ deformities. (2) The patients with 45,X/46,X,i(Xq) karyotypes had the maximum mean age at diagnosis (12.56 years) and those with 46,X,i(Xq) karyotypes the minimum (9.70 years), a significant difference between the 2 groups (t = 3.019, P = 0.004). The maximum deviation from normal height was found in patients with 46,X,i(Xq) karyotypes (HSDS = -4.04) and the minimum in patients with 45,X/46,XX karyotypes (HSDS = -3.16), also a significant difference between the 2 groups (t = -2.95, P = 0.004). (3) More than 75.7% of TS patients were diagnosed when their heights deviated by more than 3 SD, and their mean age at diagnosis was 12.10 years, 3 years later than in patients within 2 SD. (4) There was a significant negative correlation between age and HSDS at diagnosis in the groups with common chromosomal karyotypes [45,X; 46,X,i(Xq); and 45,X/46,XX] (r = -0.551, -0.560, -0.622, all P < 0.01), except for the group with 45,X/46,X,i(Xq). Conclusions: (1) In this study, the constituent ratios of these 4 common chromosomal karyotypes were different from those in
Maximum likelihood positioning for gamma-ray imaging detectors with depth of interaction measurement
Lerche, Ch.W. [Grupo de Sistemas Digitales, ITACA, Universidad Politecnica de Valencia, 46022 Valencia (Spain)], E-mail: lerche@ific.uv.es; Ros, A. [Grupo de Fisica Medica Nuclear, IFIC, Universidad de Valencia-Consejo Superior de Investigaciones Cientificas, 46980 Paterna (Spain); Monzo, J.M.; Aliaga, R.J.; Ferrando, N.; Martinez, J.D.; Herrero, V.; Esteve, R.; Gadea, R.; Colom, R.J.; Toledo, J.; Mateo, F.; Sebastia, A. [Grupo de Sistemas Digitales, ITACA, Universidad Politecnica de Valencia, 46022 Valencia (Spain); Sanchez, F.; Benlloch, J.M. [Grupo de Fisica Medica Nuclear, IFIC, Universidad de Valencia-Consejo Superior de Investigaciones Cientificas, 46980 Paterna (Spain)
2009-06-01
The center of gravity algorithm leads to strong artifacts for gamma-ray imaging detectors that are based on monolithic scintillation crystals and position sensitive photo-detectors. This is a consequence of using the centroids as position estimates. The fact that charge division circuits can also be used to compute the standard deviation of the scintillation light distribution opens a way out of this drawback. We studied the feasibility of maximum likelihood estimation for computing the true gamma-ray photo-conversion position from the centroids and the standard deviation of the light distribution. The method was evaluated on a test detector that consists of the position sensitive photomultiplier tube H8500 and a monolithic LSO crystal (42 mm × 42 mm × 10 mm). Spatial resolution was measured for the centroids and the maximum likelihood estimates. The results suggest that the maximum likelihood positioning is feasible and partially removes the strong artifacts of the center of gravity algorithm.
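The two observables the abstract feeds into the maximum-likelihood estimate, the centroid and the standard deviation of the light distribution, can be sketched in one dimension. The anode grid, the Gaussian light spread and all its parameters below are illustrative assumptions, not the paper's detector data:

```python
import numpy as np

# Hypothetical 1-D light distribution sampled on an 8-anode row
# (assumed Gaussian light spread; numbers are made up for illustration)
pos = np.arange(8.0)                                   # anode positions
signal = np.exp(-0.5 * ((pos - 2.3) / 1.1) ** 2)       # collected light per anode

# Centroid (first moment): the standard center-of-gravity position estimate
centroid = np.sum(pos * signal) / np.sum(signal)

# Second central moment -> standard deviation of the light distribution,
# the extra observable charge-division circuits can provide
sigma = np.sqrt(np.sum((pos - centroid) ** 2 * signal) / np.sum(signal))
print(centroid, sigma)
```

Near the crystal edges the truncation of the light distribution biases both moments, which is exactly the artifact the maximum-likelihood step is meant to correct.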
Allan deviation analysis of financial return series
Hernández-Pérez, R.
2012-05-01
We perform a scaling analysis of the return series of different financial assets using the Allan deviation (ADEV), a quantity from time and frequency metrology used to characterize the stability of frequency standards; it has proven to be robust for analyzing fluctuations of non-stationary time series over different observation intervals. The data are daily opening-price series for assets from different markets spanning roughly ten years. We find that the ADEV results for the return series at short scales resemble those expected for an uncorrelated series, consistent with the efficient market hypothesis. For the absolute return series, on the other hand, the ADEV at short scales (the first one or two decades) decreases approximately according to a scaling relation up to a point that differs for almost every asset, after which the ADEV departs from scaling; this suggests that clustering, long-range dependence and non-stationarity signatures in the series drive the results at large observation intervals.
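The ADEV statistic itself is simple to state. A minimal sketch of the non-overlapping estimator follows; the synthetic white-noise series stands in for the paper's price data, which are not reproduced here:

```python
import numpy as np

def allan_deviation(x, m):
    """Non-overlapping Allan deviation of the series x at averaging length m."""
    n = len(x) // m
    # average the series over consecutive, non-overlapping blocks of length m
    block_means = x[:n * m].reshape(n, m).mean(axis=1)
    # ADEV(m) = sqrt( 0.5 * mean of squared differences of successive block means )
    return np.sqrt(0.5 * np.mean(np.diff(block_means) ** 2))

rng = np.random.default_rng(0)
returns = rng.standard_normal(10_000)   # stand-in for a daily return series
# for an uncorrelated series the ADEV falls off roughly as m**-0.5
print(allan_deviation(returns, 1), allan_deviation(returns, 100))
```

Repeating this for a grid of averaging lengths m and plotting log ADEV against log m gives the scaling picture the abstract describes; departures from the m**-0.5 slope signal correlation or non-stationarity.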
颜云荞; 吴婷婷; 冉昇; 蒋葭蒹; 朱双良
2012-01-01
The maximum levels for nitrite in food are compared between the standards of the international Codex Alimentarius Commission (CAC) and the Chinese national standards, and the nitrite limits of five CAC member regions and of China are listed. The comparison proceeds on three levels: whether the standard addresses the natural occurrence of nitrite in food, the specific food categories covered, and the maximum levels themselves. The results show that the Chinese national standards set 16 maximum-level indexes for nitrite, covering 16 food categories. Of these, 8 indexes (covering 8 food categories) treat nitrite as a contaminant, for which the CAC standards have no counterpart; 8 indexes (covering 8 food categories) treat nitrite as a food additive, versus 5 such indexes (covering 5 food categories) in the CAC standards, with 3 food categories covered by both. Of the 16 food categories in the Chinese standards, 13 are regulated only in China, far more than the 5 categories covered by the CAC standards. Within the comparable range, all 3 Chinese maximum levels are stricter than the CAC's, and the degree of consistency with the CAC is high.
Deformation behavior of A6063 tube with initial thickness deviation in free hydraulic bulging
YANG Lian-fa; GUO Cheng; DENG Yang
2006-01-01
Experiments on seamless tubes of aluminum alloy A6063 with initial thickness deviations of 0-20% were conducted through free hydraulic bulging with the tube ends free. The influence of the initial thickness deviation on the cross-section profile, thickness distribution, maximum internal pressure and maximum radial expansion was investigated. FEM simulation was also performed to examine and help explain the experimental results. The results indicate that the internal pressure and the maximum internal pressure are little influenced by the initial thickness deviation, and that the cross-section profile of the bulged tube changes diversely and cannot be a perfect circle. The results also suggest that an increase in initial thickness deviation may lead to a remarkable decrease in maximum radial expansion, and to a rapid increase in thickness deviation and in the eccentricity between the inner and outer profiles.
48 CFR 2501.403 - Individual deviations.
2010-10-01
... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Individual deviations. 2501.403 Section 2501.403 Federal Acquisition Regulations System NATIONAL SCIENCE FOUNDATION GENERAL FEDERAL ACQUISITION REGULATIONS SYSTEM Deviations From the FAR 2501.403 Individual deviations....
48 CFR 1901.403 - Individual deviations.
2010-10-01
... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Individual deviations. 1901.403 Section 1901.403 Federal Acquisition Regulations System BROADCASTING BOARD OF GOVERNORS GENERAL... Individual deviations. Deviations from the IAAR or the FAR in individual cases shall be authorized by...
48 CFR 201.403 - Individual deviations.
2010-10-01
... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Individual deviations. 201.403 Section 201.403 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM... Individual deviations. (1) Individual deviations, except those described in 201.402(1) and paragraph (2)...
48 CFR 1.403 - Individual deviations.
2010-10-01
... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Individual deviations. 1.403 Section 1.403 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION GENERAL FEDERAL ACQUISITION REGULATIONS SYSTEM Deviations from the FAR 1.403 Individual deviations....
48 CFR 601.403 - Individual deviations.
2010-10-01
... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Individual deviations. 601.403 Section 601.403 Federal Acquisition Regulations System DEPARTMENT OF STATE GENERAL DEPARTMENT OF STATE ACQUISITION REGULATIONS SYSTEM Deviations from the FAR 601.403 Individual deviations....
48 CFR 3401.403 - Individual deviations.
2010-10-01
... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Individual deviations. 3401.403 Section 3401.403 Federal Acquisition Regulations System DEPARTMENT OF EDUCATION ACQUISITION REGULATION GENERAL ED ACQUISITION REGULATION SYSTEM Deviations 3401.403 Individual deviations. An...
Teaching Standard Deviation by Building from Student Invention
Day, James; Nakahara, Hiroko; Bonn, Doug
2010-11-01
First-year physics laboratories are often driven by a mix of goals that includes the illustration or discovery of basic physics principles and a myriad of technical skills involving specific equipment, data analysis, and report writing. The sheer number of such goals seems guaranteed to produce cognitive overload, even when highly detailed "cookbook" instructions are given. Recent studies indicate that this approach leaves students with a poor conceptual understanding of one of the most important features of laboratory physics and of the real world of science, in general: the development of an understanding of the nature of measurement and its attendant uncertainty. While students might be able to reproduce certain technical manipulations of data, as novice thinkers they lack the mental scaffolding that allows an expert to organize and apply this knowledge.2,3 Our goal is to put novices on the path to expertise, so that they will be able to transfer their knowledge to novel situations.
LARGE DEVIATIONS AND MODERATE DEVIATIONS FOR SUMS OF NEGATIVELY DEPENDENT RANDOM VARIABLES
Liu Li; Wan Chenggao; Feng Yanqin
2011-01-01
In this article, we obtain the large deviations and moderate deviations for negatively dependent (ND) and non-identically distributed random variables defined on (-∞, +∞). The results show that for some non-identical random variables, precise large deviations and moderate deviations remain insensitive to negative dependence structure.
Wage and Labor Standards Administration (DOL), Washington, DC.
The 1966 amendments to the Fair Labor Standards Act extended enterprise coverage to all public and private educational institutions. In October 1968, one out of seven of the 2 million nonsupervisory nonteaching employees working in schools was paid below the $1.30 minimum wage which became effective on February 1, 1969. Three-fifths of those below…
Ensemble standard deviation of wind speed and direction of the FDDA input to WRF
U.S. Environmental Protection Agency — NetCDF file of the SREF standard deviation of wind speed and direction that was used to inject variability in the FDDA input. variable U_NDG_OLD contains standard...
48 CFR 1201.403 - Individual deviations.
2010-10-01
...) 48 CFR 1.405(e) applies). However, see TAM 1201.403. ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Individual deviations... FEDERAL ACQUISITION REGULATIONS SYSTEM 70-Deviations From the FAR and TAR 1201.403 Individual...
48 CFR 1401.403 - Individual deviations.
2010-10-01
... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Individual deviations. 1401.403 Section 1401.403 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL DEPARTMENT OF THE INTERIOR ACQUISITION REGULATION SYSTEM Deviations from the FAR and DIAR 1401.403...
48 CFR 3001.403 - Individual deviations.
2010-10-01
.... 3001.403 Section 3001.403 Federal Acquisition Regulations System DEPARTMENT OF HOMELAND SECURITY... from the FAR and HSAR 3001.403 Individual deviations. Unless precluded by law, executive order, or..., including complete documentation of the justification for the deviation (See HSAM 3001.403)....
2010-07-01
... 41 Public Contracts and Property Management 2 2010-07-01 2010-07-01 true Deviation. 101-1.110 Section 101-1.110 Public Contracts and Property Management Federal Property Management Regulations System FEDERAL PROPERTY MANAGEMENT REGULATIONS GENERAL 1-INTRODUCTION 1.1-Regulation System § 101-1.110 Deviation...
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Deviations. 435.4 Section 435.4 Employees' Benefits SOCIAL SECURITY ADMINISTRATION UNIFORM ADMINISTRATIVE REQUIREMENTS FOR GRANTS AND AGREEMENTS WITH... General § 435.4 Deviations. The Office of Management and Budget (OMB) may grant exceptions for classes...
On the applicability of the geodesic deviation equation in General Relativity
Philipp, Dennis; Laemmerzahl, Claus
2016-01-01
Within the theory of General Relativity we study the solution and range of applicability of the standard geodesic deviation equation in highly symmetric spacetimes. The deviation equation is used to model satellite orbit constellations around the earth. In particular, we reconsider the deviation equation in Newtonian gravity and then determine relativistic effects within the theory of General Relativity. The deviation of nearby orbits, as constructed from exact solutions of the underlying geodesic equation, is compared to the solution of the geodesic deviation equation to assess the accuracy of the latter. Furthermore, we comment on the so-called Shirokov effect in Schwarzschild spacetime.
Large Deviations in Quantum Spin Chain
Ogata, Yoshiko
2008-01-01
We show the full large deviation principle for KMS-states and $C^*$-finitely correlated states on a quantum spin chain. We cover general local observables. Our main tool is Ruelle's transfer operator method.
Large deviations for a random speed particle
Lefevere, Raphael; Zambotti, Lorenzo
2011-01-01
We investigate large deviations for the empirical measure of the position and momentum of a particle traveling in a box with hot walls. The particle travels with uniform speed from left to right, until it hits the right boundary. Then it is absorbed and re-emitted from the left boundary with a new random speed, taken from an i.i.d. sequence. It turns out that this simple model, often used to simulate a heat bath, displays unusually complex large deviations features, that we explain in detail. In particular, if the tail of the update distribution of the speed is sufficiently oscillating, then the empirical measure does not satisfy a large deviations principle, and we exhibit optimal lower and upper large deviations functionals.
On geodesic deviation in Schwarzschild spacetime
Philipp, Dennis; Laemmerzahl, Claus; Deshpande, Kaustubh
2015-01-01
For metrology, geodesy and gravimetry in space, satellite based instruments and measurement techniques are used and the orbits of the satellites as well as possible deviations between nearby ones are of central interest. The measurement of this deviation itself gives insight into the underlying structure of the spacetime geometry, which is curved and therefore described by the theory of general relativity (GR). In the context of GR, the deviation of nearby geodesics can be described by the Jacobi equation that is a result of linearizing the geodesic equation around a known reference geodesic with respect to the deviation vector and the relative velocity. We review the derivation of this Jacobi equation and restrict ourselves to the simple case of the spacetime outside a spherically symmetric mass distribution and circular reference geodesics to find solutions by projecting the Jacobi equation on a parallel propagated tetrad as done by Fuchs. Using his results, we construct solutions of the Jacobi equation for...
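For reference, the Jacobi equation discussed above can be written compactly. Sign and index conventions vary between textbooks, so this is one common form rather than the paper's exact notation:

```latex
\frac{D^{2}\xi^{\mu}}{d\tau^{2}} = R^{\mu}{}_{\nu\alpha\beta}\, u^{\nu}\, \xi^{\alpha}\, u^{\beta}
```

Here \(u^{\mu}\) is the four-velocity along the reference geodesic, \(\xi^{\mu}\) the deviation vector to the nearby geodesic, \(R^{\mu}{}_{\nu\alpha\beta}\) the Riemann tensor, and \(D/d\tau\) the covariant derivative along the curve; the equation is the linearization of the geodesic equation in \(\xi\) and the relative velocity.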
Deviation from the superparamagnetic behaviour of fine-particle systems
Malaescu, I
2000-01-01
Studies concerning the superparamagnetic behaviour of fine magnetic particle systems were performed using static and radiofrequency measurements in the range 1-60 MHz. The samples were: a ferrofluid with magnetite particles dispersed in kerosene (sample A), magnetite powder (sample B) and the same magnetite powder dispersed in a polymer (sample C). Radiofrequency measurements indicated a maximum in the imaginary part of the complex magnetic susceptibility for each of the samples, at frequencies of the order of tens of MHz, the origin of which was assigned to Néel-type relaxation processes. The static measurements showed a Langevin-type dependence of the magnetisation M and the susceptibility χ on the magnetic field for sample A. For samples B and C, deviations from this type of dependence were found; these deviations were analysed qualitatively and explained in terms of interparticle interactions, the influence of the dispersion medium and surface effects.
Deviations in delineated GTV caused by artefacts in 4DCT
Persson, Gitte Fredberg; Nygaard, Ditte Eklund; Brink, Carsten;
2010-01-01
BACKGROUND AND PURPOSE: Four-dimensional computed tomography (4DCT) is used for breathing-adapted radiotherapy planning. Irregular breathing, large tumour motion or interpolation of images can cause artefacts in the 4DCT. This study evaluates the impact of artefacts on gross tumour volume (GTV) size. MATERIAL AND METHODS: In 19 4DCT scans of patients with peripheral lung tumours, GTV was delineated in all bins. Variations in GTV size between bins in each 4DCT scan were analysed and correlated to tumour motion and to variations in breathing signal amplitude and breathing signal period. End-expiration GTV size (GTVexp) was taken as the reference GTV size. Intra-session delineation error was estimated by re-delineation of GTV in eight of the 4DCT scans. RESULTS: In 16 of the 4DCT scans the maximum deviations from GTVexp were larger than could be explained by delineation error. The deviations…
Large deviations for fractional Poisson processes
Beghin, Luisa
2012-01-01
We present large deviation results for two versions of fractional Poisson processes: the main version which is a renewal process, and the alternative version where all the random variables are weighted Poisson distributed. We also present a sample path large deviation result for suitably normalized counting processes; finally we show how this result can be applied to the two versions of fractional Poisson processes considered in this paper.
The large deviations theorem and ergodicity
Gu Rongbao [School of Finance, Nanjing University of Finance and Economics, Nanjing 210046 (China)
2007-12-15
In this paper, some relationships between stochastic and topological properties of dynamical systems are studied. For a continuous map f from a compact metric space X into itself, we show that if f satisfies the large deviations theorem then it is topologically ergodic. Moreover, we introduce the topologically strong ergodicity, and prove that if f is a topologically strongly ergodic map satisfying the large deviations theorem then it is sensitively dependent on initial conditions.
Bernstein, Eric F; Civiok, Jennifer M
2013-12-01
Laser beam diameter affects the depth of laser penetration. Q-switched lasers tend to have smaller maximum spot sizes than other dermatologic lasers, making beam diameter a potentially more significant factor in treatment outcomes. To compare the clinical effect of using the maximum-size treatment beam available for each delivered fluence during laser tattoo removal to a standard 4-mm-diameter treatment beam. Thirteen tattoos were treated in 12 subjects using a Q-switched Nd:YAG laser equipped with a treatment beam diameter that was adjustable in 1 mm increments and a setting that would enable the maximally achievable diameter ("MAX-ON" setting) with any fluence. Tattoos were randomly bisected and treated on one side with the MAX-ON setting and on the contralateral side with a standard 4-mm-diameter spot ("MAX-OFF" setting). Photographs were taken 8 weeks following each treatment and each half-tattoo was evaluated for clearance on a 10-point scale by physicians blinded to the treatment conditions. Tattoo clearance was greater on the side treated with the MAX-ON setting in a statistically significant manner following the 1st through 4th treatments, with the MAX-OFF treatment site approaching the clearance of the MAX-ON treatment site after the 5th and 6th treatments. This high-energy, Q-switched Nd:YAG laser with a continuously variable spot-size safely and effectively removes tattoos, with greater removal when using a larger spot-size. © 2013 Wiley Periodicals, Inc.
Search for SM deviations in top precision studies at CMS
Skovpen, Kirill
2017-01-01
Precision studies of top quark properties provide a unique playground for testing the predictions of the standard model and searching for new physics. The reviewed results from the CMS experiment, obtained with data collected at 8 TeV, include studies of top quark Wtb anomalous and FCNC couplings, polarization, CP violation and spin correlation effects. No significant deviations from the SM predictions are observed.
Anterior septal deviation and contralateral alar collapse.
Schalek, P; Hahn, A
2011-01-01
Septal deviation is often found in conjunction with other pathological conditions that adversely affect nasal patency. Anterior septal deviation, together with contralateral alar collapse, is a relatively rare type of anatomical and functional incompetence. In our experience, it can often be resolved with septoplasty, without the necessity of surgery involving the external valve. The aim of this paper was to verify this hypothesis prospectively. Twelve patients with anterior septal deviation and simultaneous alar collapse on the opposite side were prospectively enrolled in the study. Subjective assessment of nasal patency was made on post-operative day 1, and again 6 months after surgery, using a subjective evaluation of nasal breathing. The width of the nostril (alar-columellar distance) on the side with the alar collapse was measured during inspiration pre-operatively, 1 day after surgery and again 6 months after surgery. Immediately after surgery, all patients reported improved or excellent nasal breathing on the side of the original septal deviation. On the collapsed side, one patient reported no change in condition. With the exception of one patient, all measurements showed some degree of improvement in the extension of the alar-columellar distance. The average benefit 6 months after surgery was an improvement of 4.54 mm. In our group of patients (anterior septal deviation and simultaneous contralateral alar collapse and no obvious structural changes of the alar cartilage) we found septoplasty to be entirely suitable and we recommend it as the treatment of choice in such cases.
A study on the deviation aspects of the poem “The Eightieth Stage”
Soghra Salmaninejad Mehrabadi
2016-02-01
…'s innovation. New expressions are also used in other parts of the deviation in "The Eightieth Stage". Stylistic deviation: sometimes Akhavan uses local and slang words, and words with different songs and music, which also produces deviation; this usage is one kind of abnormality. Words such as "han, hey, by the truth, pity, hoome, kope, meydanak and …" are of this type. Ancient deviation: one way to break out of the habits of poetry is attention to ancient words and actions. Archaism is one of the factors producing deviation and helps to make the old sp. According to Leech, archaism is the survival of old language in the present. Syntactic factors and the type of music and words are effective in the escape from the standard language. "Sowrat (sharpness), hamgenan (counterparts), parine (last year), pour (son), pahlaw (champion)" are words that show Akhavan's attention to archaism. Ancient pronunciation is another part of his work; furthermore, the use of mythology and allusion has created deviation of this type. Cases such as anagram adjectival compounds, the use of two prepositions for one word, and the use of adjective and noun in the plural form are signs of archaism in grammar and syntax. He is interested in the grammatical elements of the Khorasani style, and most elements of this style are used in the poetry of "The Eightieth Stage". Semantic deviation: semantic deviation is caused by imagery. The poet frequently uses literary figures; in this way he produces new meanings and thereby highlights his poem. Simile, metaphor, personification and irony are the most important examples of this deviation. Apparently the maximum deviation from the norm in this poem is the periodic (ancient, or archaic) deviation; the second place belongs to semantic deviation, in which metaphor is the most meaningful, and the effect of metaphor in this poem is quite strong. In general, the poet's attention to the different deviations is one of his techniques and the key…
Investigating deviations from norms in court interpreting
Dubslaff, Friedel; Martinsen, Bodil
…, in some cases, all professional users involved (judges, lawyers, prosecutors). As far as the non-Danish-speaking users are concerned, it has, with one notable exception, unfortunately not been possible to obtain data from this group via questionnaires. As this type of data, however, is important… behaviour, explore why the deviations in question occur, and find out what happens if deviations are perceived as such by the other participants involved in the interpreted event. We will reconstruct the norms in question by examining interpreters' and (mainly) professional users' behaviour in the course of… deviations and sanctions in every case. By way of example: several judges who had given their consent to recordings of authentic data in connection with the research project reported that they had experienced problems with insufficient language proficiency on the part of untrained interpreters speaking…
On large deviations for ensembles of distributions
Khrychev, D. A.
2013-11-01
The paper is concerned with the large deviations problem in the Freidlin-Wentzell formulation without the assumption of uniqueness of the solution to the equation involving white noise. In other words, it is assumed that for each ε > 0 the nonempty set 𝒫_ε of weak solutions is not necessarily a singleton. Analogues of a number of concepts in the theory of large deviations are introduced for the set {𝒫_ε, ε > 0}, hereafter referred to as an ensemble of distributions. The ensembles of weak solutions of an n-dimensional stochastic Navier-Stokes system and of a stochastic wave equation with power-law nonlinearity are shown to be uniformly exponentially tight. An idempotent Wiener process in a Hilbert space and idempotent partial differential equations are defined. The accumulation points, in the sense of large deviations, of the ensembles in question are shown to be weak solutions of the corresponding idempotent equations. Bibliography: 14 titles.
李太平
2012-01-01
Under information asymmetry about agri-food quality and inadequate market supervision, excessively rigorous maximum residue limit (MRL) standards for pesticides in agri-food not only fail to protect consumer health and the agricultural ecological environment, but can aggravate the problem of pesticide residues in agri-food. Taking the national standard Maximum Residue Limits for Pesticides in Food (GB2763-2005) as a case, the Theoretical Daily Intake (TDI) of the foods covered by 439 residue indexes for 126 pesticides was calculated from the quantitative relation between the MRLs, the acceptable daily intake (ADI) and the TDI, and compared with consumers' Real Daily Intake (RDI) of the corresponding foods. It was found that for 111 residue indexes the TDI of the related agri-foods was far above the RDI of Chinese residents, accounting for 23.22% of the 478 pesticide residue indexes in this national standard. This evidence shows that parts of the national standard are indeed excessively rigorous, and it is suggested that the government revise the standard promptly in order to eliminate this food-safety management trap.
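The comparison the abstract performs can be sketched arithmetically. TDI is taken here in the common "theoretical maximum daily intake" sense (sum of MRL × consumption over the covered foods); the abstract's exact definition may differ, and every food, MRL, consumption figure and flag threshold below is invented purely for illustration, none comes from GB2763-2005:

```python
# Hypothetical sketch of the screening logic: compute the Theoretical Daily
# Intake implied by the MRLs and compare it with the Real Daily Intake.
mrl = {"rice": 0.5, "cabbage": 1.0, "apple": 0.2}             # MRL, mg residue / kg food
consumption = {"rice": 0.25, "cabbage": 0.10, "apple": 0.05}  # daily intake, kg food / day

# TDI: residue intake if every covered food carried residues at the legal maximum
tdi = sum(mrl[f] * consumption[f] for f in mrl)               # mg / day -> 0.235
rdi = 0.03   # assumed measured Real Daily Intake, mg / day

# flag indexes whose theoretical intake is far above residents' actual intake
print(tdi, tdi > 3 * rdi)
```

The abstract's finding corresponds to 111 of 478 indexes tripping a flag of this kind.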
F. Topsøe
2001-09-01
In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also serve here as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over…
PoDMan: Policy Deviation Management
Aishwarya Bakshi
2017-07-01
Whenever an unexpected or exceptional situation occurs, complying with the existing policies may not be possible. The main objective of this work is to assist individuals and organizations in deciding whether to deviate from policies and perform a non-complying action. The paper proposes utilizing software agents as supportive tools to provide the best non-complying action while deviating from policies. The article also introduces a process by which the decision on the choice of non-complying action can be made. The work is motivated by a real scenario observed in a hospital in Norway and demonstrated through the same settings.
NSGIC State | GIS Inventory — Environmental Modeling dataset current as of 1999. Florida Adopted TMDLs. What is a TMDL (Total Maximum Daily Load)? A scientific determination of the maximum amount...
21 CFR 600.14 - Reporting of biological product deviations by licensed manufacturers.
2010-04-01
... 21 Food and Drugs 7 2010-04-01 2010-04-01 false Reporting of biological product deviations by... HEALTH AND HUMAN SERVICES (CONTINUED) BIOLOGICS BIOLOGICAL PRODUCTS: GENERAL Establishment Standards § 600.14 Reporting of biological product deviations by licensed manufacturers. (a) Who must report...
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from...
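The MAF idea itself can be sketched numerically: maximize spatial autocorrelation by solving a generalized eigenproblem on the covariance of the data and the covariance of its spatial increments. The two-band synthetic series and all parameters below are made up for illustration, and SciPy is assumed available:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n = 2000
t = np.linspace(0, 8 * np.pi, n)
smooth = np.sin(t)                       # spatially smooth (autocorrelated) signal
# two hypothetical "bands": the smooth signal buried under different noise levels
X = np.column_stack([smooth + 0.3 * rng.standard_normal(n),
                     0.5 * smooth + 1.0 * rng.standard_normal(n)])
Xc = X - X.mean(axis=0)

# MAF: solve cov(increments) v = lambda * cov(data) v; the smallest eigenvalue
# yields the linear combination with maximal lag-1 autocorrelation (1 - lambda/2)
S = np.cov(Xc, rowvar=False)
Sd = np.cov(np.diff(Xc, axis=0), rowvar=False)
lam, V = eigh(Sd, S)                     # eigenvalues in ascending order
maf1 = Xc @ V[:, 0]                      # first maximum autocorrelation factor
print(1 - lam[0] / 2)                    # autocorrelation of the first MAF
```

For irregularly sampled spatial data, as in the abstract, the increments are taken over a chosen spatial shift rather than consecutive samples, and the resulting factors are then interpolated by kriging.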
Bodily Deviations and Body Image in Adolescence
Vilhjalmsson, Runar; Kristjansdottir, Gudrun; Ward, Dianne S.
2012-01-01
Adolescents with unusually sized or shaped bodies may experience ridicule, rejection, or exclusion based on their negatively valued bodily characteristics. Such experiences can have negative consequences for a person's image and evaluation of self. This study focuses on the relationship between bodily deviations and body image and is based on a…
2010-10-01
... 45 Public Welfare 4 2010-10-01 2010-10-01 false Deviations. 2543.4 Section 2543.4 Public Welfare Regulations Relating to Public Welfare (Continued) CORPORATION FOR NATIONAL AND COMMUNITY SERVICE GRANTS AND AGREEMENTS WITH INSTITUTIONS OF HIGHER EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS General...
Voice Deviations and Coexisting Communication Disorders.
St. Louis, Kenneth O.; And Others
1992-01-01
This study examined the coexistence of other communicative disorders with voice disorders in about 3,400 children in grades 1-12 at 100 sites throughout the United States. The majority of voice-disordered children had coexisting articulation deviations and also differed from controls on two language measures and mean pure-tone hearing thresholds.…
41 CFR 109-1.5304 - Deviations.
2010-07-01
... Secretary for Procurement and Assistance Management. A HFO's decision not to provide life-cycle control... through the cognizant HFO to the Deputy Assistant Secretary for Procurement and Assistance Management. ... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false Deviations....
2010-10-01
... 43 Public Lands: Interior 1 2010-10-01 2010-10-01 false Deviations. 12.904 Section 12.904 Public Lands: Interior Office of the Secretary of the Interior ADMINISTRATIVE AND AUDIT REQUIREMENTS AND COST PRINCIPLES FOR ASSISTANCE PROGRAMS Uniform Administrative Requirements for Grants and Agreements...
Exact Moderate and Large Deviations for Linear Processes
Peligrad, Magda; Zhong, Yunda; Wu, Wei Biao
2011-01-01
Large and moderate deviation probabilities play an important role in many applied areas, such as insurance and risk analysis. This paper studies the exact moderate and large deviation asymptotics in non-logarithmic form for linear processes with independent innovations. The linear processes we analyze are general and therefore they include the long memory case. We give an asymptotic representation for probability of the tail of the normalized sums and specify the zones in which it can be approximated either by a standard normal distribution or by the marginal distribution of the innovation process. The results are then applied to regression estimates, moving averages, fractionally integrated processes, linear processes with regularly varying exponents and functions of linear processes. We also consider the computation of value at risk and expected shortfall, fundamental quantities in risk theory and finance.
Moderate Deviation Principle for dynamical systems with small random perturbation
ma, Yutao; Wu, Liming
2011-01-01
Consider the stochastic differential equation in $\mathbb{R}^d$: $dX^{\varepsilon}_t = b(X^{\varepsilon}_t)\,dt + \sqrt{\varepsilon}\,\sigma(X^{\varepsilon}_t)\,dB_t$, $X^{\varepsilon}_0 = x_0$, $x_0 \in \mathbb{R}^d$, where $b:\mathbb{R}^d \to \mathbb{R}^d$ is $C^1$ and satisfies a growth condition bounded by $C(1+|x|^2)$, $\sigma:\mathbb{R}^d \to M(d\times n)$ (the $d\times n$ matrices) is locally Lipschitzian with linear growth, and $B_t$ is a standard Brownian motion taking values in $\mathbb{R}^n$. Freidlin-Wentzell's theorem gives the large deviation principle for $X^{\varepsilon}$ for small $\varepsilon$. In this paper we establish its moderate deviation principle.
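The small-noise setup above can be illustrated numerically. The following is a minimal one-dimensional sketch (the drift b(x) = -x, the constant diffusion, and all numerical choices are hypothetical, not from the paper): an Euler-Maruyama path of the equation concentrates around the deterministic flow dx/dt = b(x) as ε shrinks.

```python
import numpy as np

def euler_maruyama(b, sigma, x0, eps, T=1.0, n=1000, seed=0):
    """Simulate dX = b(X) dt + sqrt(eps) * sigma(X) dB on [0, T]."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dB = rng.normal(0.0, np.sqrt(dt))
        x[k + 1] = x[k] + b(x[k]) * dt + np.sqrt(eps) * sigma(x[k]) * dB
    return x

# Linear drift b(x) = -x with constant diffusion; the deterministic
# solution of dx/dt = -x, x(0) = 1 is exp(-t), and for small eps the
# stochastic path stays close to it over the whole interval.
path = euler_maruyama(b=lambda x: -x, sigma=lambda x: 1.0, x0=1.0, eps=1e-4)
```

With ε = 1e-4 the endpoint lies within a few percent of exp(-1); moderate-deviation theory quantifies fluctuations on scales between this law-of-large-numbers regime and the Freidlin-Wentzell large-deviation regime.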
Association between septal deviation and sinonasal papilloma.
Nomura, Kazuhiro; Ogawa, Takenori; Sugawara, Mitsuru; Honkura, Yohei; Oshima, Hidetoshi; Arakawa, Kazuya; Oshima, Takeshi; Katori, Yukio
2013-12-01
Sinonasal papilloma is a common benign epithelial tumor of the sinonasal tract and accounts for 0.5% to 4% of all nasal tumors. The etiology of sinonasal papilloma remains unclear, although human papilloma virus has been proposed as a major risk factor. Other etiological factors, such as anatomical variations of the nasal cavity, may be related to the pathogenesis of sinonasal papilloma, because a deviated nasal septum is seen in patients with chronic rhinosinusitis. We therefore investigated the involvement of a deviated nasal septum in the development of sinonasal papilloma. Preoperative computed tomography or magnetic resonance imaging findings of 83 patients with sinonasal papilloma were evaluated retrospectively. The side of the papilloma and the direction of septal deviation showed a significant correlation. The septum deviated toward the intact side in 51 of 83 patients (61.4%) and toward the affected side in 18 of 83 patients (21.7%); a straight or S-shaped septum was observed in 14 of 83 patients (16.9%). Even after excluding 27 patients who underwent revision surgery and 15 patients in whom the papilloma touched the concave portion of the nasal septum, the concave side of septal deviation was associated with the development of sinonasal papilloma (p = 0.040). The high incidence of sinonasal papilloma on the concave side may reflect the traumatic effects of the wall shear stress of high-velocity airflow and an increased chance of inhaling viruses and pollutants. The present study supports the causative role of human papilloma virus and toxic chemicals in the occurrence of sinonasal papilloma.
邱桂华
2013-01-01
Cloud computing is a current focus in the field of information technology, and the cloud-platform covert channel is a new security problem caused by the infrastructure of the cloud computing platform. A covert channel in a cloud platform can leak the confidential information of cloud customers and seriously damage the safety of the platform. We summarize related work on covert channel detection and, for the cloud-platform covert channel based on CPU response time, abstract its model and propose, for the first time, a detection method based on a hybrid indicator combining entropy rate and standard deviation. Experimental results show that this detection method achieves a false positive rate below 5% and therefore has good detection performance.
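As a rough illustration of a hybrid entropy-plus-standard-deviation indicator of the kind this abstract describes (a sketch only: the paper's actual feature is the entropy rate of CPU response times, whereas this uses plain Shannon entropy, and both traffic models below are invented for illustration):

```python
import numpy as np

def hybrid_indicator(samples, bins=16):
    """Return (empirical Shannon entropy in bits, standard deviation)
    of a series of response times. Simplification: plain histogram
    entropy stands in for the entropy rate used in the paper."""
    hist, _ = np.histogram(samples, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum()), float(np.std(samples))

rng = np.random.default_rng(1)
benign = rng.normal(10.0, 2.0, size=5000)               # noisy timings
covert = np.where(rng.random(5000) < 0.5, 8.0, 12.0)    # two-level signalling
h_benign, s_benign = hybrid_indicator(benign)
h_covert, s_covert = hybrid_indicator(covert)
# The modulated channel collapses onto two histogram bins, so its
# empirical entropy (at most 1 bit) falls far below the benign traffic's.
```

A detector would threshold such indicators jointly; the point of combining two statistics is that a channel engineered to mimic one of them tends to betray itself in the other.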
李为喜; 孙娟; 董晓丽; 杨秀兰; 李静梅; 宋敬可; 王步军
2011-01-01
The toxicity of the main mycotoxins in food is introduced, and the maximum levels (MLs) in the Chinese national food safety standards are compared with the Codex Alimentarius Commission (CAC) mycotoxin standards. The differences between the revised and previous Chinese mycotoxin standards, as well as between the revised Chinese standards and the latest CAC limits, are studied and confirmed. These results provide a basis for improving food safety supervision in China, protecting public health, and effectively responding to international trade and foreign technical barriers to trade.
Habibollah Ghassemzadeh
1994-06-01
The Bender-Gestalt Test was given to thirty mentally retarded psychiatric patients. The mean, standard deviation, and standard error were 56.73, 26.25, and 4.80, respectively. Rotation was the most frequent major deviation and occurred in all the designs. Design #7 was the most difficult one to reproduce in this sample: this design by itself was subject to 47% distortion, 79% omission, and 21% rotation.
Large Deviation Strategy for Inverse Problem
Ojima, Izumi
2011-01-01
Traditionally taken as a no-go theorem against the theorization of inductive processes, the Duhem-Quine thesis may interfere with the essence of statistical inference. This difficulty can be resolved by "Micro-Macro duality" [Oj03, Oj05], which clarifies the importance of specifying the pertinent aspects and accuracy relevant to concrete contexts of scientific discussion, and which ensures the matching between what is to be described and what describes it in the form of the validity of duality relations. This consolidates the foundations of the inverse problem, the induction method, and statistical inference, crucial for sound relations between theory and experiment. To achieve this purpose, we propose a Large Deviation Strategy (LDS for short) on the basis of Micro-Macro duality, the quadrality scheme, and the large deviation principle. According to the quadrality scheme, emphasizing the basic roles played by the dynamics, the algebra of observables together with its representations and ...
Deviations from LTE in a stellar atmosphere
Kalkofen, W.; Klein, R. I.; Stein, R. F.
1979-01-01
Deviations from LTE are investigated in an atmosphere of hydrogen atoms with one bound level, satisfying the equations of radiative, hydrostatic, and statistical equilibrium. The departure coefficient and the kinetic temperature as functions of the frequency dependence of the radiative cross section are studied analytically and numerically. Near the outer boundary of the atmosphere, the departure coefficient is smaller than unity when the radiative cross section grows with frequency faster than with the square of frequency; it exceeds unity otherwise. Far from the boundary the departure coefficient tends to exceed unity for any frequency dependence of the radiative cross section. Overpopulation always implies that the kinetic temperature in the statistical-equilibrium atmosphere is higher than the temperature in the corresponding LTE atmosphere. Upper and lower bounds on the kinetic temperature are given for an atmosphere with deviations from LTE only in the optically shallow layers, when the emergent intensity can be described by a radiation temperature.
Large deviations for tandem queueing systems
Roland L. Dobrushin
1994-01-01
The crude asymptotics of the large delay probability in a tandem queueing system is considered. The main result states that one of the two channels in the tandem system defines the crude asymptotics. The constant that determines the crude asymptotics is given. The results obtained are based on the large deviation principle for random processes with independent increments on an infinite interval recently established by the authors.
On large deviations for ensembles of distributions
Khrychev, D A [Moscow State Institute of Radio-Engineering, Electronics and Automation (Technical University), Moscow (Russian Federation)
2013-11-30
The paper is concerned with the large deviations problem in the Freidlin-Wentzell formulation without the assumption of the uniqueness of the solution to the equation involving white noise. In other words, it is assumed that for each ε>0 the nonempty set P_ε of weak solutions is not necessarily a singleton. Analogues of a number of concepts in the theory of large deviations are introduced for the set (P_ε, ε>0), hereafter referred to as an ensemble of distributions. The ensembles of weak solutions of an n-dimensional stochastic Navier-Stokes system and a stochastic wave equation with power-law nonlinearity are shown to be uniformly exponentially tight. An idempotent Wiener process in a Hilbert space and idempotent partial differential equations are defined. The accumulation points in the sense of large deviations of the ensembles in question are shown to be weak solutions of the corresponding idempotent equations. Bibliography: 14 titles.
Stochastic gene expression conditioned on large deviations
Horowitz, Jordan M.; Kulkarni, Rahul V.
2017-06-01
The intrinsic stochasticity of gene expression can give rise to large fluctuations and rare events that drive phenotypic variation in a population of genetically identical cells. Characterizing the fluctuations that give rise to such rare events motivates the analysis of large deviations in stochastic models of gene expression. Recent developments in non-equilibrium statistical mechanics have led to a framework for analyzing Markovian processes conditioned on rare events and for representing such processes by conditioning-free driven Markovian processes. We use this framework, in combination with approaches based on queueing theory, to analyze a general class of stochastic models of gene expression. Modeling gene expression as a Batch Markovian Arrival Process (BMAP), we derive exact analytical results quantifying large deviations of time-integrated random variables such as promoter activity fluctuations. We find that the conditioning-free driven process can also be represented by a BMAP that has the same form as the original process, but with renormalized parameters. The results obtained can be used to quantify the likelihood of large deviations, to characterize system fluctuations conditional on rare events and to identify combinations of model parameters that can give rise to dynamical phase transitions in system dynamics.
Analysis of Road Base Uniformity via the Deviation of Modulus of Asphalt Mixtures
ZHI Yufeng; ZHANG Xiaoning
2007-01-01
The modulus deviation of the base material, calculated from falling weight deflectometer (FWD) data, was used to evaluate the uniformity of the road base and thereby reflect construction quality. Four parameters were introduced for the construction uniformity analysis: the repeatability standard deviation of the data in the same driveway, the reproducibility standard deviation of the data in different driveways, the consistency statistic of the data in different driveways, and the consistency statistic of the data in the same driveway. The experimental results show that the material modulus calculated from FWD correlates strongly with the uniformity of the road base.
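The repeatability and reproducibility standard deviations named in the abstract can be sketched with the standard pooled within-group and between-group definitions (an assumption: the paper's exact formulas are not given in the abstract, and the moduli values below are hypothetical):

```python
import numpy as np

def repeatability_sd(groups):
    """Pooled within-group standard deviation (scatter inside one lane)."""
    ss = sum(float(((np.asarray(g) - np.mean(g)) ** 2).sum()) for g in groups)
    dof = sum(len(g) - 1 for g in groups)
    return (ss / dof) ** 0.5

def reproducibility_sd(groups):
    """Standard deviation of the per-lane means (scatter between lanes)."""
    return float(np.std([np.mean(g) for g in groups], ddof=1))

lane_a = [310.0, 305.0, 298.0, 312.0]   # hypothetical FWD moduli (MPa)
lane_b = [290.0, 285.0, 300.0, 295.0]
s_repeat = repeatability_sd([lane_a, lane_b])
s_reproduce = reproducibility_sd([lane_a, lane_b])
```

A uniform road base would show both statistics small; a large between-lane value with small within-lane values points at systematic differences between construction passes rather than measurement noise.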
On the physical chemistry of seawater with deviating ion composition
Feistel, R. [Rostock Univ., Warnemuende (Germany). Inst. fuer Ostseeforschung]
1998-04-01
The salt composition in natural seawaters is not strictly conservative. Physico-chemical properties of the mixed electrolyte "standard seawater" and their variations with ionic composition are briefly reviewed. It is shown that a "same absolute salinity" rule, known from seawater densities, may lead to good results for sound speeds, too. Refractive index measurements have now become sufficiently precise to detect local deviations of ion abundances along with routine ocean profiling. The question is discussed which quantities/formulas still need to be quantitatively determined for this purpose, and which theoretical, empirical or experimental aids can be applied. (orig.) 52 refs.
邵懿; 王君; 吴永宁
2014-01-01
Objective: To explore the extent to which China's maximum levels for lead in food align with international standards, and to provide a reference for improving China's standards for contaminant limits in food. Methods: Food categories and concentration limits for lead in China were compared with those of the Codex Alimentarius Commission (CAC), the European Union, and Australia and New Zealand. Results: Considering international risk assessment results, China has set maximum levels (MLs) for essentially all foods that contribute to dietary lead exposure, so the food categories covered by the Chinese standard exceed those in the CAC, European Union, and Australia/New Zealand standards. However, some MLs for lead in China are still looser than those of CAC or other countries. Conclusion: Measures to control the major contributing sources of lead in food and a comprehensive national survey of lead contamination in food should be undertaken, in order to lay the foundation for further improvement of China's food contaminant standards.
Karan, Belgin; Pourbagher, Aysin; Torun, Nese
2016-06-01
To evaluate the correlations of the apparent diffusion coefficient (ADC) value and the standardized uptake value (SUV) with prognostic factors in breast cancer. Seventy women with invasive breast cancer (56 cases of invasive ductal carcinoma, four of mixed ductal and lobular invasive carcinoma, three of lobular invasive carcinoma, two of micropapillary carcinoma, and one each of mixed ductal and mucinous carcinoma, mucinous carcinoma, medullary carcinoma, metaplastic carcinoma, and tubular carcinoma) were included in this study. All patients underwent presurgical breast magnetic resonance imaging (MRI) with diffusion-weighted imaging (DWI) at 1.5T and whole-body 18F-fluorodeoxyglucose (18F-FDG) positron emission tomography (PET)/computed tomography (CT). For all invasive breast cancers and invasive ductal carcinomas, we assessed the relationships among ADC, SUV, and pathological prognostic factors. Both the median ADC value and maximum SUV (SUVmax) were significantly associated with vascular invasion (P = 0.008 and P = 0.026, respectively). SUVmax was also significantly correlated with tumor size (P = 0.001), histological grade (P = 0.001), lymph node status (P = 0.0015), estrogen receptor status (P = 0.010), and human epidermal growth factor receptor 2 status (P = 0.020), whereas ADC values were not. The correlation between ADC and SUVmax was not significant (P = 0.356; R = -0.112). Mucinous carcinoma showed high ADC and relatively low SUVmax; medullary carcinoma showed low ADC and high SUVmax. When we evaluated the relationships among ADC, SUVmax, and prognostic factors in the 56 invasive ductal carcinomas, the statistical results were not significantly changed, except that SUVmax was also significantly associated with progesterone receptor status (P = 0.034), but not with lymph node status. SUVmax may be valuable for predicting the prognosis of breast cancer, and both ADC and SUVmax are useful for predicting vascular invasion. J. Magn. Reson. Imaging 2016
LARGE DEVIATIONS AND MODERATE DEVIATIONS FOR m-NEGATIVELY ASSOCIATED RANDOM VARIABLES
Hu Yijun; Ming Ruixing; Yang Wenquan
2007-01-01
M-negatively associated random variables, which generalize the classical notion of negatively associated random variables and include m-dependent sequences as a particular case, are introduced and studied. Large deviation principles and moderate deviation upper bounds for stationary m-negatively associated random variables are proved. Kolmogorov-type and Marcinkiewicz-type strong laws of large numbers, as well as the three-series theorem for m-negatively associated random variables, are also given.
Meiosis and its deviations in polyploid plants.
Grandont, L; Jenczewski, E; Lloyd, A
2013-01-01
Meiosis is a fundamental process in all sexual organisms that ensures fertility and genome stability and creates genetic diversity. For each of these outcomes, the exclusive formation of crossovers between homologous chromosomes is needed. This is more difficult to achieve in polyploid species which have more than 2 sets of chromosomes able to recombine. In this review, we describe how meiosis and meiotic recombination 'deviate' in polyploid plants compared to diploids, and give an overview of current knowledge on how they are regulated. See also the sister article focusing on animals by Stenberg and Saura in this themed issue.
Guessing Revisited: A Large Deviations Approach
Hanawal, Manjesh Kumar
2010-01-01
The problem of guessing a random string is revisited. A close relation between guessing and compression is first established. Then it is shown that if the sequence of distributions of the information spectrum satisfies the large deviation property with a certain rate function, then the limiting guessing exponent exists and is a scalar multiple of the Legendre-Fenchel dual of the rate function. Other sufficient conditions related to certain continuity properties of the information spectrum are briefly discussed. This approach highlights the importance of the information spectrum in determining the limiting guessing exponent. All known prior results are then re-derived as example applications of our unifying approach.
Optimal VaR Portfolio: A Comparison between the Markowitz and Mean Absolute Deviation Methods
R. Agus Sartono
2009-05-01
The portfolio selection method introduced by Harry Markowitz (1952) used variance or standard deviation as the measure of risk. Konno and Yamazaki (1991) introduced another method that uses mean absolute deviation as the measure of risk instead of variance. Value-at-Risk (VaR) is a relatively new risk measure used by financial institutions. The aim of this research is to compare the mean-variance and mean-absolute-deviation approaches on two portfolios, and then to assess the VaR of the two portfolios using the delta-normal method and historical simulation. We use secondary data from the Jakarta Stock Exchange LQ45 index during 2003. We find a weak positive correlation between standard deviation and return in both portfolios. The delta-normal VaR based on the mean-absolute-deviation method is higher than the delta-normal VaR based on the mean-variance method; based on historical simulation, however, the difference between the two methods is statistically insignificant. Thus, standard deviation is a sufficient measure of portfolio risk. Keywords: portfolio optimization, mean-variance, mean absolute deviation, value-at-risk, delta-normal method, historical simulation method.
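The two VaR estimators compared in this abstract can be sketched minimally as follows (the return series, seed, and the 95% confidence level are hypothetical; z = 1.645 is the one-sided 95% standard-normal quantile):

```python
import numpy as np

Z95 = 1.645  # one-sided 95% standard-normal quantile

def var_delta_normal(returns, z=Z95):
    """Delta-normal VaR: loss quantile under a normality assumption."""
    r = np.asarray(returns)
    return float(-(r.mean() - z * r.std(ddof=1)))

def var_historical(returns, alpha=0.95):
    """Historical-simulation VaR: empirical loss quantile, no
    distributional assumption."""
    return float(-np.quantile(np.asarray(returns), 1.0 - alpha))

rng = np.random.default_rng(2)
daily = rng.normal(0.0005, 0.02, size=2000)   # hypothetical daily returns
v_dn = var_delta_normal(daily)
v_hs = var_historical(daily)
# For normally distributed returns the two estimates nearly coincide;
# they diverge when the empirical distribution has fat tails.
```

This mirrors the comparison in the abstract: under (near-)normal returns the choice of risk proxy matters little, which is why the historical-simulation results there were statistically indistinguishable.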
Statistical properties of the deviations of foF2 from monthly medians
Y. Tulunay
2002-06-01
The deviations of hourly foF2 from monthly medians for 20 stations in Europe during the period 1958-1998 are studied. Spectral analysis is used to show that, both for the original data (for each hour) and for the deviations from monthly medians, the deterministic components are the harmonics of 11 years (solar cycle), 1 year and its harmonics, 27 days, and 12 h 50.49 min (the 2nd harmonic of the lunar rotation period, L2). Using histograms of one-year samples, it is shown that the deviations from monthly medians are nearly zero-mean (mean < 0.5) and approximately Gaussian (relative difference in the range 10% to 20%), and that their standard deviations are larger for daylight hours (in the range 5-7). The amplitude distribution of the positive and negative deviations is nearly symmetrical at night hours but asymmetrical during day hours. The positive and negative deviations are then studied separately; the positive deviations are nearly independent of R12 except at high latitudes, but the negative deviations are modulated by R12. The 90% confidence interval for negative deviations for each station and each hour is computed as a linear model in terms of R12. After correction for local time, it is shown that for all hours the confidence intervals increase with latitude but decrease above 60° N. Long-term trend analysis showed an increase in the amplitude of positive deviations from monthly medians irrespective of solar conditions. Spectral analysis also shows that the seasonal dependence of negative deviations is more accentuated than that of positive deviations, especially at low latitudes. At certain stations the 4th harmonic of 1 year, corresponding to a periodicity of 3 months, which is missing in the foF2 data, appears in the spectra of negative deviations.
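The basic preprocessing step used throughout this study, subtracting each month's median from that month's hourly values, can be sketched as follows (a toy series, not the foF2 data):

```python
import numpy as np

def deviations_from_monthly_medians(values, months):
    """Subtract each month's median from that month's hourly values."""
    values = np.asarray(values, dtype=float)
    months = np.asarray(months)
    out = np.empty_like(values)
    for m in np.unique(months):
        mask = months == m
        out[mask] = values[mask] - np.median(values[mask])
    return out

# Toy example: two "months" of three hourly values each.
months = [1, 1, 1, 2, 2, 2]
foF2 = [5.0, 6.0, 9.0, 4.0, 7.0, 8.0]
dev = deviations_from_monthly_medians(foF2, months)
```

Using the median rather than the mean makes the reference robust to the occasional extreme ionospheric disturbance, which is why the resulting deviation series comes out nearly zero-mean.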
Marasek, K; Nowicki, A
1994-01-01
The performance of three spectral techniques (FFT, Burg AR, and ARMA) for maximum-frequency estimation of Doppler spectra is described. Different definitions of fmax were used: the frequency at which spectral power decreases to 0.1 of its maximum value, a modified threshold crossing method (MTCM), and a novel geometrical method. The "goodness" and efficiency of the estimators were determined by calculating the bias and the standard deviation of the estimated maximum frequency of simulated Doppler spectra with known statistics. The power of the analysed signals was assumed to follow an exponential distribution. SNR was varied over the range 0 to 20 dB, and different spectrum envelopes were generated: a Gaussian envelope approximated narrow-band spectral processes (P.W. Doppler), and rectangular spectra were used to simulate a parabolic flow insonified with C.W. Doppler. The simulated signals were generated from 3072-point records with a sampling frequency of 20 kHz. The AR and ARMA model orders were selected independently according to the Akaike Information Criterion (AIC) and Singular Value Decomposition (SVD). It was found that the ARMA model, computed according to the SVD criterion, had the best overall performance and produced results with the smallest bias and standard deviation; in general, AR(SVD) was better than AR(AIC). The geometrical method of fmax estimation was found to be more accurate than the other tested methods, especially for narrow-band signals.
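The first fmax definition above, the frequency at which spectral power falls to 0.1 of its maximum, can be sketched directly (the rectangular test spectrum is hypothetical, in the spirit of the C.W. Doppler simulation the abstract describes):

```python
import numpy as np

def fmax_threshold(freqs, power, frac=0.1):
    """Highest frequency at which spectral power still reaches
    `frac` of its peak value."""
    above = np.nonzero(power >= frac * power.max())[0]
    return float(freqs[above[-1]])

# Hypothetical rectangular spectrum (C.W. Doppler of a parabolic flow):
freqs = np.linspace(0.0, 10000.0, 1001)        # Hz, 10 Hz resolution
power = np.where(freqs <= 4000.0, 1.0, 0.01)   # band-limited at 4 kHz
f_est = fmax_threshold(freqs, power)
```

On real spectra the noise floor makes this simple threshold biased upward at low SNR, which is the motivation for the modified threshold crossing and geometrical methods the paper compares.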
Miura Takeshi
2010-12-01
Background: In this era of molecular targeting therapy, when various systemic treatments can be selected, prognostic biomarkers are required for risk-directed therapy selection. Numerous reports on various malignancies have revealed that 18-fluoro-2-deoxy-D-glucose (18F-FDG) accumulation, as evaluated by positron emission tomography, can be used to predict patient prognosis. The purpose of this study was to evaluate the impact of the maximum standardized uptake value (SUVmax) from 18F-FDG positron emission tomography/computed tomography (18F-FDG PET/CT) on survival in patients with advanced renal cell carcinoma (RCC). Methods: A total of 26 patients with advanced or metastatic RCC were enrolled in this study. The FDG uptake of all RCC lesions diagnosed by conventional CT was evaluated by 18F-FDG PET/CT, and the impact of SUVmax on patient survival was analyzed prospectively. Results: FDG uptake was detected in 230 of 243 lesions (94.7%), excluding lung or liver metastases with diameters of less than 1 cm. The SUVmax of the 26 patients ranged between 1.4 and 16.6 (mean 8.8 ± 4.0). Patients whose RCC tumors showed high SUVmax had a poor prognosis (P = 0.005; hazard ratio 1.326, 95% CI 1.089-1.614). Survival differed significantly between patients with SUVmax greater than or equal to the mean of 8.8 and patients with SUVmax less than 8.8 (P = 0.0012). This is the first report to evaluate the impact of SUVmax on advanced RCC patient survival; however, the number of patients and the follow-up period were not extensive enough to settle this important question conclusively. Conclusions: The survival of patients with advanced RCC can be predicted by evaluating their SUVmax using 18F-FDG PET/CT, which has potential as an "imaging biomarker" to provide helpful information for clinical decision-making.
Jin F
2016-05-01
Feng Jin,1,2 Hui Zhu,2 Zheng Fu,3 Li Kong,2 Jinming Yu2; 1School of Medicine and Life Sciences, University of Jinan-Shandong Academy of Medical Sciences; 2Department of Radiation Oncology, Shandong Cancer Hospital Affiliated to Shandong University, Shandong Academy of Medical Sciences; 3Department of Nuclear Medicine, Shandong Cancer Hospital Affiliated to Shandong University, Shandong Academy of Medical Sciences, Jinan, People's Republic of China. Purpose: The purpose of this study was to investigate the prognostic value of the change in maximum standardized uptake value (SUVmax) calculated by dual-time-point 18F-fluorodeoxyglucose positron emission tomography (PET) imaging in patients with advanced non-small-cell lung cancer (NSCLC). Patients and methods: We conducted a retrospective review of 115 patients with advanced NSCLC who underwent pretreatment dual-time-point 18F-fluorodeoxyglucose PET acquired at 1 and 2 hours after injection. The SUVmax from early images (SUVmax1) and from delayed images (SUVmax2) were recorded and used to calculate the SUVmax changes, including the SUVmax increment (ΔSUVmax) and the percent change of SUVmax (%ΔSUVmax). Progression-free survival (PFS) and overall survival (OS) were determined by the Kaplan-Meier method and compared with the studied PET parameters and the clinicopathological prognostic factors in univariate analyses; multivariate analyses were constructed using Cox proportional hazards regression. Results: One hundred and fifteen consecutive patients were reviewed, and the median follow-up time was 12.5 months. The estimated median PFS and OS were 3.8 and 9.6 months, respectively. In univariate analysis, SUVmax1, SUVmax2, ΔSUVmax, %ΔSUVmax, clinical stage, and Eastern Cooperative Oncology Group (ECOG) scores were significant prognostic factors for PFS. Similar results were significantly correlated with OS, except for %ΔSUVmax. In multivariate analysis, ΔSUVmax and %ΔSUVmax were significant
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
Avoiding moving obstacles by deviation from a mobile robot's nominal path
Tsoularis, A.; Kambhampati, C. [Univ. of Reading, Whiteknights (United Kingdom). Dept. of Cybernetics]
1999-05-01
This paper deals with the problem of obstacle avoidance by deviation from the nominal path. Deviation is the only option available to the robot when the acceleration or deceleration plan on the nominal path fails to produce a viable avoidance strategy. Obstacle avoidance on the nominal path was dealt with in the authors' previous development, where the robot's motion was subject only to an upper bound on its speed. When the robot has to deviate, its motion is subject to a maximum steering constraint and a maximum deviation constraint in addition to the maximum speed constraint. The problem is solved geometrically by identifying final states for the robot that are reachable, satisfy all the constraints, and guarantee collision avoidance. The final-state reachability conditions obtained in the process ensure that no unnecessary deviation plan is initiated. These conditions, along with the simplicity of the geometric arguments they employ, make the scheme an attractive option for on-line implementation. The only significant complexity arises when minimizing the performance index; the authors suggest dynamic programming as an optimization tool, but any other nonlinear optimization technique can be adopted.
Theory of Deviation and Its Application in College English Teaching
Xu Yanqiu
2008-01-01
Deviation is an important concept in stylistics. Besides Shklovskij and Mukarovsky, who made a theoretical generalization of deviational phenomena, Leech is the one who studied deviation systematically and categorized it into groups. Applying the theory of deviation to College English teaching is an effective way to cultivate students' interest in English texts and their ability to appreciate them aesthetically.
Hypotropic Dissociated Vertical Deviation; a Case Report
Zhale Rajavi
2013-01-01
Purpose: To report the clinical features of a rare case of hypotropic dissociated vertical deviation (DVD). Case report: A 25-year-old female was referred with unilateral esotropia, hypotropia, and a slow, variable downward drift of her left eye. She had a history of esotropia since 3-4 months of age. Best corrected visual acuity was 20/20 in her right eye and 20/40 in the left with hyperopic correction. She underwent bimedial rectus recession of 5.25 mm for 45 prism diopters (PD) of esotropia. She was orthophoric 3 months after surgery, and no further operation was planned for correction of the hypotropic DVD. Conclusion: This rare case of hypotropic DVD showed only mild amblyopia in the non-fixating eye. The etiology was most probably acquired, considering hyperopia as a sign of early-onset accommodative esotropia.
Spotting deviations from R^2 inflation
de la Cruz-Dombriz, Alvaro; Odintsov, Sergei D; Saez-Gomez, Diego
2016-01-01
We discuss the soundness of inflationary scenarios in theories beyond the Starobinsky model, namely a class of theories described by arbitrary functions of the Ricci scalar and the K-essence field. We discuss the pathologies associated with higher-order equations of motion, which will be shown to constrain the stability of this class of theories. We provide a general framework to calculate the slow-roll parameters and the corresponding mappings to the theory parameters. For paradigmatic gravitational models within the class of theories under consideration, we illustrate the power of the latest Planck/BICEP2 results to constrain such gravitational Lagrangians. Finally, bounds for potential deviations from Starobinsky-like inflation are derived.
Large Deviations and Asymptotic Methods in Finance
Gatheral, Jim; Gulisashvili, Archil; Jacquier, Antoine; Teichmann, Josef
2015-01-01
Topics covered in this volume (large deviations, differential geometry, asymptotic expansions, central limit theorems) give a full picture of the current advances in the application of asymptotic methods in mathematical finance, and thereby provide rigorous solutions to important mathematical and financial issues, such as implied volatility asymptotics, local volatility extrapolation, systemic risk and volatility estimation. This volume gathers together ground-breaking results in this field by some of its leading experts. Over the past decade, asymptotic methods have played an increasingly important role in the study of the behaviour of (financial) models. These methods provide a useful alternative to numerical methods in settings where the latter may lose accuracy (in extremes such as small and large strikes, and small maturities), and lead to a clearer understanding of the behaviour of models, and of the influence of parameters on this behaviour. Graduate students, researchers and practitioners will find th...
A Discussion on Graphological Deviation in Oliver Twist
肖潇
2016-01-01
In stylistic analysis, deviation serves as an important sign when identifying the stylistic features of literary works. According to Leech, there are eight types of deviation in poetry: lexical deviation, grammatical deviation, phonological deviation, graphological deviation, semantic deviation, dialectal deviation, deviation of register, and deviation of historical period. Realism marks a significant development in the history of fiction for its success in exposing the truth of people's real lives and fierce social problems, and foregrounded features are an inevitable part of what constitutes Dickens's language style. We focus on Oliver Twist because it is presented in a unique writing style that is worth investigating.
Large Deviations for Random Matricial Moment Problems
Nagel, Jan; Gamboa, Fabrice; Rouault, Alain
2010-01-01
We consider the moment space $\mathcal{M}_n^{K}$ corresponding to $p \times p$ complex matrix measures defined on $K$ ($K=[0,1]$ or $K=\mathbb{D}$). We endow this set with the uniform law. We are mainly interested in large deviation principles (LDPs) when $n \rightarrow \infty$. First we fix an integer $k$ and study the vector of the first $k$ components of a random element of $\mathcal{M}_n^{K}$. We obtain an LDP in the set of $k$-arrays of $p\times p$ matrices. Then we lift a random element of $\mathcal{M}_n^{K}$ into a random measure and prove an LDP at the level of random measures. We end with an LDP on Carath\'eodory and Schur random functions. These last functions are well connected to the above random measures. In all these problems, we take advantage of the so-called canonical moments technique by introducing new (matricial) random variables that are independent and have explicit distributions.
Meiosis and its deviations in polyploid animals.
Stenberg, P; Saura, A
2013-01-01
We review the different modes of meiosis and its deviations encountered in polyploid animals. Bisexual reproduction involving normal meiosis occurs in some allopolyploid frogs with variable degrees of polyploidy. Aberrant modes of bisexual reproduction include gynogenesis, where a sperm stimulates the egg to develop. The sperm may enter the egg but there is no fertilization and syngamy. In hybridogenesis, a genome is eliminated to produce haploid or diploid eggs or sperm. Ploidy can be elevated by fertilization with a haploid sperm in meiotic hybridogenesis, which elevates the ploidy of hybrid offspring such that they produce diploid gametes. Polyploids are then produced in the next generation. In kleptogenesis, females acquire full or partial genomes from their partners. In pre-equalizing hybrid meiosis, one genome is transmitted in the Mendelian fashion, while the other is transmitted clonally. Parthenogenetic animals have a very wide range of mechanisms for restoring or maintaining the mother's ploidy level, including gamete duplication, terminal fusion, central fusion, fusion of the first polar nucleus with the product of the first division, and premeiotic duplication followed by a normal meiosis. In apomictic parthenogenesis, meiosis is replaced by what is effectively mitotic cell division. The above modes have different evolutionary consequences, which are discussed. See also the sister article by Grandont et al. in this themed issue.
Large deviations in the random sieve
Grimmett, Geoffrey
1997-05-01
The proportion $\rho_k$ of gaps with length $k$ between square-free numbers is shown to satisfy $\log \rho_k = -(1+o(1))(6/\pi^2)\, k \log k$ as $k \rightarrow \infty$. Such asymptotics are consistent with Erdős's challenge to prove that the gap following the square-free number $t$ is smaller than $c \log t/\log\log t$, for all $t$ and some constant $c$ satisfying $c > \pi^2/12$. The results of this paper are achieved by studying the probabilities of large deviations in a certain 'random sieve', for which the proportions $\rho_k$ have representations as probabilities. The asymptotic form of $\rho_k$ may be obtained in situations of greater generality, when the squared primes are replaced by an arbitrary sequence $(s_r)$ of relatively prime integers satisfying $\sum_r 1/s_r < \infty$, subject to two further conditions of regularity on this sequence.
A Historical Study of Contemporary Human Rights: Deviation or Extinction?
Tanel Kerikmäe
2016-10-01
Full Text Available Human rights is a core issue of continuing political, legal and economic relevance. The current article discusses the historical perceptions of the very essence of human rights standards and poses the question whether the Realpolitik of the changed world and Europe can justify the deviation from the “purist” approach to human rights. The EU Charter, as the most eminent and contemporary “bill of rights”, is chosen as an example of the divergence from “traditional values”. The article does not offer solutions but rather focuses on the expansive development in the doctrinal approach of interpreting human rights that has not been conceptually agreed upon by historians, philosophers and legal scholars.
Deviation Optimal Learning using Greedy Q-aggregation
Dai, Dong; Zhang, Tong
2012-01-01
Given a finite family of functions, the goal of model selection is to construct a procedure that mimics the function from this family that is the closest to an unknown regression function. More precisely, we consider a general regression model with fixed design and measure the distance between functions by the mean squared error at the design points. While procedures based on exponential weights are known to solve the problem of model selection in expectation, they are, surprisingly, sub-optimal in deviation. We propose a new formulation called Q-aggregation that addresses this limitation; namely, its solution leads to sharp oracle inequalities that are optimal in a minimax sense. Moreover, based on the new formulation, we design greedy Q-aggregation procedures that produce sparse aggregation models achieving the optimal rate. The convergence and performance of these greedy procedures are illustrated and compared with other standard methods on simulated examples.
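As a concrete reference point, the exponential-weights procedure that Q-aggregation improves upon can be sketched in a few lines. The model family, the loss values, and the temperature below are illustrative assumptions, not the authors' construction:

```python
import math

def exponential_weights(losses, temperature=1.0):
    # Weight model j proportionally to exp(-loss_j / T): a softmax over
    # negative losses, computed with a max-shift for numerical stability.
    scaled = [-loss / temperature for loss in losses]
    shift = max(scaled)
    exps = [math.exp(s - shift) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical mean-squared errors of three candidate models at the design points
losses = [0.9, 0.2, 0.5]
weights = exponential_weights(losses)  # the best model (index 1) gets the largest weight
```

An aggregate predictor then mixes the candidates with these weights; the abstract's point is that such mixtures are optimal in expectation yet sub-optimal in deviation, which is what Q-aggregation fixes.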
A study on the deviation aspects of the poem "The Eightieth Stage"
Soghra Salmaninejad Mehrabadi
2016-01-01
of synergistic base has helped the poet's innovation. New expressions are also used in other parts of abnormality in "The Eightieth Stage". Stylistic deviation: Sometimes Akhavan uses local and slang words, and words with different songs and music produce deviation as well; this application is one kind of abnormality. Words such as "han, hey, by the truth, pity, hoome, kope, meydanak and ..." are of this type of abnormality. Ancient deviation: One way to break out of the habit of poetry is attention to ancient words and actions. Archaism is one of the factors affecting the deviation; archaic deviation helps to make the old sp. According to Leech, the ancient is the survival of the old language in the now. Syntactic factors, and the type of music and words, are effective in the escape from the standard language. "Sowrat (sharpness), hamgenan (counterparts), parine (last year), pour (son), pahlaw (champion)" are words that show Akhavan's attention to archaism. The ancient pronunciation is another part of his work. Furthermore, the use of mythology and allusion has created deviation of this type. Cases such as anagram adjectival compounds, the use of two prepositions for a word, and the use of the adjective and noun in the plural form are signs of archaism in grammar and syntax. He is interested in the grammatical elements of the Khorasani style, and most elements of this style are used in "The Eightieth Stage". Semantic deviation: Semantic deviation is caused by imagery. The poet frequently uses literary figures; by this means he produces new meaning and thereby highlights his poem. Simile, metaphor, personification and irony are the most important examples of this deviation. Apparently the maximum deviation from the norm in this poem is periodic deviation (ancient, or archaism). The second rank belongs to semantic deviation, in which metaphor is the most meaningful; the effect of metaphor in this poem is quite strong. In
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et. al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Sensitivity analysis of the channel estimation deviation to the MAP decoding algorithm
WAN Ke; FAN Ping-zhi
2006-01-01
As a necessary input parameter for the maximum a-posteriori (MAP) decoding algorithm, the SNR is normally obtained from the channel estimation unit. Previous research indicated that SNR estimation deviation degrades the performance of Turbo decoding significantly. In this paper, the MAP decoding algorithm with SNR estimation deviation is investigated in detail, and the degradation mechanism of Turbo decoding is explained analytically. The theoretical analysis and computer simulation disclose the specific reasons for the performance degradation when the SNR estimate is less than the actual value, and for the higher sensitivity of SNR estimation for long-frame Turbo codes.
赵凤霞; 王正平; 宋学立; 朱景伟; 孙卉卉; 高相彬; 王海涛
2014-01-01
The heavy metal maximum residue limit (MRL) standards for agricultural products in China and the EU were compared, to protect the health of consumers and to meet the demands of the export trade in Chinese agricultural products, and the main hazards of Pb, Cd, Hg, Sn, As, Cr and Ni to the human body are discussed. The Pb MRLs of most agricultural products such as cereals, fruits and vegetables in China do not differ from those in the EU, but the Pb MRLs of products such as poultry and meat, aquatic animals and dairy products in China are higher than in the EU. The Cd MRLs of cereals, beans, fruits, vegetables, and livestock liver and kidney do not differ from the EU, but the Cd MRLs of poultry and meat and aquatic products in China are higher than in the EU. The Hg MRLs of aquatic products basically accord with the EU's, and the Hg content in cereals, vegetables, meats, dairy products, eggs and edible mushrooms is regulated in detail. The Sn MRLs of beverages in China are a little higher than in the EU. Total As and inorganic As content in most agricultural products is regulated in GB 2762-2012. The Ni MRL of grease and grease products, such as hydrogenated vegetable oil and products containing hydrogenated vegetable oil, is 1.0 mg/kg. Suggestions to reduce heavy metal content in agricultural products are proposed in light of the currently high heavy metal content of some Chinese agricultural products compared with developed countries.
Assessment of gait deviation on the Babinski-Weill test in healthy Brazilians
Camila Souza Miranda
2013-09-01
Full Text Available Objective: The aim of this study was to validate a simple and reproducible method for assessing gait deviation on the Babinski-Weill test in a representative sample of healthy Brazilians. Methods: Gait deviations were measured in 75 individuals (median = 30 years, 41 women) for forward, backward, and Babinski-Weill steps. The test entailed blindfolded individuals walking 10 paces at a frequency of 1 Hz, with deviations subsequently measured using a protractor. Results: Mean gait deviation forward was 0.53° with standard deviation (SD) = 4.22, and backwards was 2.14° with SD = 4.29. No significant difference in deviation was detected between genders (t test, p = 0.40 forward and p = 0.77 backwards) or for age (ANOVA, p = 0.33 forward and p = 0.63 backwards). On the Babinski-Weill test, mean gait deviation was 5.26° (SD = 16.32) in women and -3.11° (SD = 12.41) in men, with no significant difference between genders (t test, p = 0.056). Discussion: Defining normative gait patterns helps distinguish pathological states.
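The gender comparison reported above is a standard two-sample t test on deviation angles. A minimal sketch using Welch's unpooled-variance form; the angle samples below are made up for illustration, not the study's data:

```python
import math
import statistics

def welch_t(a, b):
    # Welch's t statistic: difference of sample means over the unpooled standard error.
    mean_a, mean_b = statistics.fmean(a), statistics.fmean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (mean_a - mean_b) / math.sqrt(var_a / len(a) + var_b / len(b))

# Hypothetical Babinski-Weill deviation angles in degrees (positive = rightward)
women = [4.0, 6.5, 5.0, 7.1, 3.8]
men = [-2.0, -4.1, -3.5, -2.6, -3.0]
t_stat = welch_t(women, men)
```

The p-value then comes from the t distribution with Welch's degrees of freedom; with only the t statistic one can still see the sign and rough size of the group difference.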
9 CFR 318.308 - Deviations in processing.
2010-01-01
... AGENCY ORGANIZATION AND TERMINOLOGY; MANDATORY MEAT AND POULTRY PRODUCTS INSPECTION AND VOLUNTARY...) Deviations in processing (or process deviations) must be handled according to: (1)(i) A HACCP plan for canned...) of this section. (c) (d) Procedures for handling process deviations where the HACCP plan...
21 CFR 330.11 - NDA deviations from applicable monograph.
2010-04-01
... 21 Food and Drugs 5 2010-04-01 2010-04-01 false NDA deviations from applicable monograph. 330.11... EFFECTIVE AND NOT MISBRANDED Administrative Procedures § 330.11 NDA deviations from applicable monograph. A new drug application requesting approval of an OTC drug deviating in any respect from a monograph that...
Large deviations for Glauber dynamics of continuous gas
2008-01-01
This paper is devoted to the large deviation principles of the Glauber-type dynamics of finite or infinite volume continuous particle systems. We prove that the level-2 empirical process satisfies the large deviation principle in the weak convergence topology, while it does not satisfy the large deviation principle in the T-topology.
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
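For a uniform binary source observed through a memoryless binary symmetric channel with crossover probability below 1/2, maximum likelihood retrieval reduces to nearest-neighbor search in Hamming distance. A minimal sketch (the stored words below are made up):

```python
def hamming(a, b):
    # Number of positions at which two equal-length binary words differ
    return sum(x != y for x, y in zip(a, b))

def ml_retrieve(stored, probe):
    # With i.i.d. bit flips of probability p < 1/2, the likelihood of a stored
    # word decreases monotonically in its Hamming distance to the probe, so the
    # maximum likelihood estimate is the stored word at minimum distance.
    return min(stored, key=lambda word: hamming(word, probe))

memory = [(0, 0, 0, 0), (1, 1, 1, 1), (1, 0, 1, 0)]
recalled = ml_retrieve(memory, (1, 1, 0, 1))  # probe is (1, 1, 1, 1) with one bit flipped
```

Practical associative memories avoid this brute-force scan, but the minimum-distance rule is the ML principle the abstract builds on.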
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
I. I. Kravchenko
2016-01-01
Full Text Available There is a variety of objectives for measuring deviations of flatness, size and mutual arrangement of flat surfaces, namely: processing accuracy control, machinery condition monitoring, treatment process control in terms of shape deviation, and comparative analysis of machine rigidity. While for processing accuracy control it is sufficient to obtain the flatness deviation as the maximum deviation from the adjoining surface, the choice of the adjoining surface as a zero reference datum leads to considerable difficulties in creating devices, in particular devices for measuring size and shape variations. A flat surface is characterized by the mutual arrangement of its points and can be represented by an equation in the selected coordinate system. The objective of this work is to provide an analytical construction of the vector field F, which describes the real surface with an appropriate approximation upon modelling the face milling of the flat surfaces of body parts under conditions of anisotropic rigidity of the technological system. To determine the numerical values of shape and size deviation characteristics, the average surfaces can serve as a basis for the zero reference values of the vectors. A mean value theorem allows us to obtain measurement information about deviations in shape, size and arrangement of processed flat surfaces in terms of metrology, as well as about process parameters such as depth of cut, feed, cutting speed, and the anisotropic rigidity of the technological system, which characterize the specific processing conditions. The machining center MS 12-250 was used to carry out a number of experiments processing the surfaces of prism-shaped body parts (300x300x250), with subsequent measurements of flatness on the IS-49 optical line, to prove the correlation between expected and observed values of the vectors of flatness deviations.
Evaluation of dynamic electromagnetic tracking deviation
Hummel, Johann; Figl, Michael; Bax, Michael; Shahidi, Ramin; Bergmann, Helmar; Birkfellner, Wolfgang
2009-02-01
Electromagnetic tracking systems (EMTSs) are widely used in clinical applications. Many reports have evaluated their static behavior and the errors caused by metallic objects. Although some publications concern the dynamic behavior of EMTSs, the measurement protocols are either difficult to reproduce with respect to the movement path or can be accomplished only at high technical effort. Because dynamic behavior is of major interest with respect to clinical applications, we established a simple but effective measurement protocol that is easy to repeat at other laboratories. We built a simple pendulum on which the sensor of our EMTS (Aurora, NDI, CA) could be mounted. The pendulum was mounted on a special bearing to guarantee that the pendulum path is planar; this assumption was tested before starting the measurements. All relevant parameters defining the pendulum motion, such as rotation center and length, were determined by static measurement at satisfactory accuracy. Then position and orientation data were gathered over a time period of 8 seconds and timestamps were recorded. Data analysis provided a positioning error and an overall error combining both position and orientation. All errors were calculated by means of the well-known equations for pendulum movement. Additionally, latency - the elapsed time from input motion until the immediate consequences of that input are available - was calculated using well-known equations for mechanical pendulums at different velocities. We repeated the measurements with different metal objects (rods made of stainless steel types 303 and 416) between the field generator and the pendulum. We found a root mean square error (eRMS) of 1.02 mm with respect to the distance of the sensor position to the fit plane (maximum error emax = 2.31 mm, minimum error emin = -2.36 mm). The eRMS for positional error amounted to 1.32 mm, while the overall error was 3.24 mm. The latency at a pendulum angle of 0° (vertical) was 7.8 ms.
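The planarity check above, i.e. the RMS distance of sensor positions to a best-fit plane, can be reproduced on any point cloud. A minimal pure-Python sketch (least-squares plane z = ax + by + c via the normal equations; this is not the authors' analysis code, and it assumes the plane is not vertical):

```python
import math

def fit_plane(points):
    # Solve the 3x3 normal equations for z = a*x + b*y + c by Gauss-Jordan elimination.
    sxx = sum(x * x for x, y, z in points); sxy = sum(x * y for x, y, z in points)
    syy = sum(y * y for x, y, z in points); sx = sum(x for x, y, z in points)
    sy = sum(y for x, y, z in points); n = len(points)
    sxz = sum(x * z for x, y, z in points); syz = sum(y * z for x, y, z in points)
    sz = sum(z for x, y, z in points)
    m = [[sxx, sxy, sx, sxz], [sxy, syy, sy, syz], [sx, sy, n, sz]]
    for i in range(3):
        pivot = m[i][i]
        m[i] = [v / pivot for v in m[i]]
        for j in range(3):
            if j != i:
                factor = m[j][i]
                m[j] = [vj - factor * vi for vj, vi in zip(m[j], m[i])]
    return m[0][3], m[1][3], m[2][3]

def rms_plane_error(points):
    # RMS of perpendicular distances to the fitted plane a*x + b*y - z + c = 0
    a, b, c = fit_plane(points)
    norm = math.sqrt(a * a + b * b + 1.0)
    return math.sqrt(sum(((a * x + b * y - z + c) / norm) ** 2
                         for x, y, z in points) / len(points))
```

Feeding the tracked sensor positions to `rms_plane_error` yields the planarity figure the authors report as eRMS.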
Temperature deviation index and elderly mortality in Japan
Lim, Youn-Hee; Reid, Colleen E.; Honda, Yasushi; Kim, Ho
2016-07-01
Few studies have examined how the precedence of abnormal temperatures in previous neighboring years affects the population's health. In the present study, we attempted to quantify the health effects of abnormal weather patterns by creating a metric called the temperature deviation index (TDI) and estimated the effects of TDI on mortality in Japan. We used data from 47 prefectures in Japan to compute the TDI on days between May and September from 1966 to 2010. The TDI is a summed product of an indicator of the absence of high temperatures in the neighboring years, with more weight assigned to the years closest to the current year. To estimate the TDI effects on elderly mortality, we used generalized linear modeling with a Poisson distribution after adjusting for apparent temperature, barometric pressure, day of the week, and time trend. For each prefecture, we estimated the TDI effects and pooled the estimates to yield a national average for 1991-2010 in Japan. The estimated effects of TDI in middle- or high-latitude prefectures were greater than in low-latitude prefectures. The estimated national average of TDI effects was a 0.5 % (95 % confidence interval [CI], 0.1, 1.0) increase in elderly mortality per 1-unit (around 1 standard deviation) increase in the TDI. The significant pooled estimate of TDI effects was mainly due to the TDI effects on summer days with moderate temperature (25th-49th percentile, mean temperature 22.9 °C): a 1.9 % (95 % CI, 1.1, 2.6) increase in elderly mortality per 1-unit increase in the TDI. However, TDI effects were insignificant in other temperature ranges. These findings suggest that elderly deaths increased on moderate-temperature summer days that differed substantially from days during that time window in the neighboring years. Therefore, not only high temperature itself but also temperature deviation compared to previous years could be considered a risk factor for elderly mortality in the summer.
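A toy version of such an index makes the construction concrete. The exact indicator and weighting scheme below are assumptions for illustration, not the paper's definition:

```python
def temperature_deviation_index(current_temp, history):
    # history: per-year lists of daily temperatures, ordered oldest -> newest.
    # The indicator is 1 when a neighboring year never reached today's temperature;
    # linearly increasing weights make the most recent years count the most.
    n = len(history)
    weights = [(i + 1) / n for i in range(n)]
    return sum(w * (max(year) < current_temp)
               for year, w in zip(history, weights))

history = [[30.2, 31.0], [28.4, 29.1], [25.0, 26.3]]  # oldest -> newest
tdi_hot_day = temperature_deviation_index(32.0, history)  # high: hotter than all neighboring years
```

A day whose temperature was matched or exceeded in every neighboring year scores zero, so a large index flags exactly the "abnormal relative to recent years" days the study is about.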
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the usage of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
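The contrast with a loss applied equally to every sample can be seen by computing empirical correntropy with a Gaussian kernel: a single outlying label barely moves it. A minimal sketch with made-up labels (the kernel bandwidth is an assumption):

```python
import math

def correntropy(y_true, y_pred, sigma=1.0):
    # Average Gaussian-kernel similarity between labels and predictions.
    # Residuals much larger than sigma contribute almost nothing, which is
    # what makes maximizing correntropy robust to outlying labels.
    return sum(math.exp(-(t - p) ** 2 / (2.0 * sigma ** 2))
               for t, p in zip(y_true, y_pred)) / len(y_true)

clean = correntropy([1, 1, -1, -1], [0.9, 1.1, -1.0, -0.8])
noisy = correntropy([1, 1, -1, 10], [0.9, 1.1, -1.0, -0.8])  # one outlying label
```

A squared loss on the same data would be dominated by the single outlier; correntropy simply saturates on it, which is the robustness the abstract exploits.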
49 CFR 192.943 - When can an operator deviate from these reassessment intervals?
2010-10-01
...) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Gas... 49 Transportation 3 2010-10-01 2010-10-01 false When can an operator deviate from these reassessment intervals? 192.943 Section 192.943 Transportation Other Regulations Relating to Transportation...
无
2010-01-01
In this paper, we study an even order neutral differential equation with deviating arguments, and obtain new oscillation results without the assumptions which were required for related results given before. Our results extend and improve many known oscillation criteria, based on the standard integral averaging technique.
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector used to mitigate the intersymbol interference introduced by bandlimited channels. This detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer and a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer but worse than that of the near maximum likelihood detector.
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
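In the classic point-constraint approach, the MaxEnt distribution has exponential-family form and the Lagrange multiplier can be found by a one-dimensional search. A sketch for the standard die-with-given-mean example (an illustration of classic MaxEnt, not the paper's generalized density):

```python
import math

def maxent_die(target_mean, faces=6, tol=1e-12):
    # MaxEnt pmf over faces 1..n with a fixed mean: p_k proportional to exp(lam * k).
    # The mean is strictly increasing in lam, so bisection finds the multiplier.
    def mean_of(lam):
        w = [math.exp(lam * k) for k in range(1, faces + 1)]
        return sum(k * wk for k, wk in zip(range(1, faces + 1), w)) / sum(w)
    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if mean_of(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    w = [math.exp(lam * k) for k in range(1, faces + 1)]
    total = sum(w)
    return [wk / total for wk in w]

p_fair = maxent_die(3.5)  # a mean of 3.5 recovers the uniform distribution
```

The paper's generalized approach would instead place a density on `target_mean` and propagate it through this map, producing a density over MaxEnt distributions rather than a single pmf.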
Mechanism Modeling and Simulation Based on Dimensional Deviation
无
2008-01-01
To analyze the effects of dimensional variations on the motion characteristics of mechanisms, a study on random dimensional deviation generation techniques for 3D models, based on present mechanical modeling software, was carried out; it utilized the redeveloped interfaces provided by the modeling software to develop a random dimensional deviation generation system with specified probability distribution characteristics. This system has been used to perform modeling and simulation of a specific mechanical time-delay mechanism under multiple deviation varieties. Simulation results indicate that the dynamic characteristics of the mechanism are influenced significantly by dimensional deviation within the tolerance distribution range, which should be emphasized in design.
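The general idea, drawing each dimension from a distribution inside its tolerance band and propagating the deviations through the model, can be sketched without any CAD software. The two-bar linkage, nominal sizes and sigmas below are hypothetical, not the paper's time-delay mechanism:

```python
import random
import statistics

def simulate_linkage(n_trials=10000, seed=42):
    # Draw each link length from a normal distribution whose spread represents
    # its dimensional tolerance, and record the resulting total reach.
    rng = random.Random(seed)
    reaches = []
    for _ in range(n_trials):
        link_a = rng.gauss(100.0, 0.05)  # nominal 100 mm
        link_b = rng.gauss(50.0, 0.03)   # nominal 50 mm
        reaches.append(link_a + link_b)
    return reaches

reaches = simulate_linkage()
mean_reach = statistics.fmean(reaches)
spread = statistics.pstdev(reaches)  # approximately sqrt(0.05**2 + 0.03**2) for a sum
```

In the paper's setting the "reach" is replaced by a dynamic response of the mechanism, but the Monte Carlo structure, sample tolerances then simulate, is the same.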
Deviation and rotation of the larynx in computer tomography
Shibusawa, Mitsunobu (Tokyo Medical and Dental Univ., Tokyo (Japan). Medical Research Institute); Yano, Kazuhiko
1990-01-01
Many authors have described the clinical importance of asymmetry of the laryngeal framework; however, its pathogenesis is generally unknown. In this study, CT images of 315 Japanese subjects were investigated to define the laryngeal position relative to the midline of the cervical vertebra. The CT slice of each subject within 5 mm cephalad of the cricoarytenoid joint was traced, and the deviation and rotation angles were measured using our method. Seventy-one percent of the subjects' larynges deviated and/or rotated to the right side, while 17% deviated to the left side. Six percent showed neither deviation nor rotation; in the remaining 6%, deviation and rotation were in opposite directions. In addition, the lengths of the thyroid alae were measured in 282 subjects: the left ala was longer in 55%, the right in 23%, and they were almost equal in 22%. The conclusions are as follows. The majority of the subjects' CT images showed deviation and/or rotation of the laryngeal framework to the right side. So-called idiopathic laryngeal deviation refers to cases with remarkable deviation and/or rotation of the laryngeal framework. Aging seemed to be an important factor in acceleration of the laryngeal deviation and rotation. The type of disease and the side of mass lesions had no statistical significance for deviation and rotation of the larynx. (author).
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson's equation.
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification.
Ramalingam, S; Jayaprakash, A; Mohan, S; Karabacak, M
2011-11-01
FT-IR and FT-Raman (4000-100 cm(-1)) spectral measurements of 3-methyl-1,2-butadiene (3M12B) have been attempted in the present work. Ab-initio HF and DFT (LSDA/B3LYP/B3PW91) calculations have been performed, giving energies, optimized structures, harmonic vibrational frequencies, IR intensities and Raman activities. Complete vibrational assignments of the observed spectra are made with vibrational frequencies obtained by HF and DFT (LSDA/B3LYP/B3PW91) at the 6-31G(d,p) and 6-311G(d,p) basis sets. The results of the calculations have been used to simulate IR and Raman spectra for the molecule, which showed good agreement with the observed spectra. The potential energy distribution (PED) corresponding to each of the observed frequencies is calculated, which confirms the reliability and precision of the assignment and analysis of the fundamental vibrational modes. The shift of the vibrational frequencies of butadiene due to the coupling of the methyl group is also discussed. A study of the electronic properties, such as HOMO and LUMO energies, was performed by the time-dependent DFT (TD-DFT) approach. The calculated HOMO and LUMO energies show that charge transfer occurs within the molecule. The thermodynamic properties of the title compound at different temperatures reveal the correlations between standard heat capacities (C), standard entropies (S), and standard enthalpy changes (H).
Refraction in Terms of the Deviation of the Light.
Goldberg, Fred M.
1985-01-01
Discusses refraction in terms of the deviation of light. Points out that in physics courses where very little mathematics is used, it might be more suitable to describe refraction entirely in terms of the deviation, rather than by introducing Snell's law. (DH)
Downhole control of deviation with steerable straight-hole turbodrills
Gaynor, T.M.
1988-03-01
Advances in directional drilling have until recently been confined to issues that are peripheral to the central problem of controlling assembly behavior downhole. Examples of these advances are measurement while drilling (MWD) and the increasing use of computer assistance in well planning. These were significant steps forward, but the major problem remained: changes in formation deviation tendencies led to trips to change bottomhole assemblies (BHA's) to cope with the new conditions, and there was almost no direct control of deviation behavior. The steerable straight-hole turbodrill (SST) addresses this problem directly, allowing alteration of the well course without the need to trip. The availability of such a system radically changes the way in which directional well planning may be approached. This paper describes the equipment used and the equipment's construction and operational requirements. It discusses the capabilities and current limitations of the system. Field results are presented for some 300,000 ft (91 500 m) of deviated drilling carried out over 2 years in Alaska and the North Sea. A series of four highly deviated wells totaling 35,000 ft (10 700 m) with only three deviation trips is included. The SST is the first deviation drilling system to achieve deviation control over long sections without tripping to change BHA's. Bits and downhole equipment are now more reliable and long-lived than ever; deviation trips are therefore becoming a major target for well cost saving.
7 CFR 3015.3 - Conflicting policies and deviations.
2010-01-01
... Conflicting policies and deviations. (a) Statutory provisions. Federal statutes that apply to some USDA grant..., when permissible under existing laws. In those instances where a program receives an exception to...
Large Deviations without Principle: Join the Shortest Queue
Ridder, Ad; Shwartz, Adam
2004-01-01
We develop a methodology for studying "large deviations type" questions. Our approach does not require that the large deviations principle holds, and is thus applicable to a large class of systems. We study a system of queues with exponential servers, which share an arrival stream. Arrivals are routed to the shortest queue.
38 CFR 36.4304 - Deviations; changes of identity.
2010-07-01
... A deviation of more than 5 percent between the estimates upon which a... change in the identity of the property upon which the original appraisal was based, will invalidate...
Large deviations for stochastic flows and their applications
高付清; 任佳刚
2001-01-01
Large deviations for stochastic flow solutions to SDEs containing a small parameter are studied. The obtained results are applied to establish a $C_{p,r}$-large deviation principle for stochastic flows and for solutions to anticipating SDEs. The recent results of Millet-Nualart-Sanz and Yoshida are improved and refined.
Large Deviations and a Fluctuation Symmetry for Chaotic Homeomorphisms
Maes, Christian; Verbitskiy, Evgeny
2003-01-01
We consider expansive homeomorphisms with the specification property. We give a new simple proof of a large deviation principle for Gibbs measures corresponding to a regular potential and we establish a general symmetry of the rate function for the large deviations of the antisymmetric part, under time-reversal, of the potential. This generalizes the Gallavotti-Cohen fluctuation theorem to a larger class of chaotic systems.
[The crooked nose: correction of dorsal and caudal septal deviations].
Foda, H M T
2010-09-01
The deviated nose represents a complex cosmetic and functional problem. Septal surgery plays a central role in the successful management of the externally deviated nose. This study included 800 patients seeking rhinoplasty to correct external nasal deviations; 71% of these suffered from variable degrees of nasal obstruction. Septal surgery was necessary in 736 (92%) patients, not only to improve breathing, but also to achieve a straight, symmetric external nose. A graduated surgical approach was adopted to allow correction of the dorsal and caudal deviations of the nasal septum without weakening its structural support to the nasal dorsum or nasal tip. The approach depended on full mobilization of deviated cartilage, followed by straightening of the cartilage and its fixation in the corrected position by using bony splinting grafts through an external rhinoplasty approach.
Structure of Turbulence in Katabatic Flows Below and Above the Wind-Speed Maximum
Grachev, Andrey A.; Leo, Laura S.; Sabatino, Silvana Di; Fernando, Harindra J. S.; Pardyjak, Eric R.; Fairall, Christopher W.
2016-06-01
Measurements of small-scale turbulence made in the atmospheric boundary layer over complex terrain during the Mountain Terrain Atmospheric Modeling and Observations (MATERHORN) Program are used to describe the structure of turbulence in katabatic flows. Turbulent and mean meteorological data were continuously measured on four towers deployed along the east lower slope (2-4°) of Granite Mountain near Salt Lake City in Utah, USA. The multi-level (up to seven) observations made during a 30-day long MATERHORN field campaign in September-October 2012 allowed the study of temporal and spatial structure of katabatic flows in detail, and herein we report turbulence statistics (e.g., fluxes, variances, spectra, and cospectra) and their variations in katabatic flow. Observed vertical profiles show steep gradients near the surface, but in the layer above the slope jet the vertical variability is smaller. It is found that the vertical (normal to the slope) momentum flux and horizontal (along-slope) heat flux in a slope-following coordinate system change their sign below and above the wind maximum of a katabatic flow. The momentum flux is directed downward (upward) whereas the along-slope heat flux is downslope (upslope) below (above) the wind maximum. This suggests that the position of the jet-speed maximum can be obtained by linear interpolation between positive and negative values of the momentum flux (or the along-slope heat flux) to derive the height where the flux becomes zero. It is shown that the standard deviations of all wind-speed components (and therefore of the turbulent kinetic energy) and the dissipation rate of turbulent kinetic energy have a local minimum, whereas the standard deviation of air temperature has an absolute maximum at the height of wind-speed maximum. We report several cases when the destructive effect of vertical heat flux is completely cancelled by the generation of turbulence due to the along-slope heat flux. Turbulence above the wind
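The interpolation step described in this abstract — locating the jet-speed maximum as the height where the slope-normal momentum flux changes sign — is easy to state concretely. A minimal sketch in Python, with made-up flux values rather than MATERHORN data:

```python
import numpy as np

def zero_crossing_height(z, flux):
    """Estimate the height where `flux` changes sign by linear
    interpolation between the last level of one sign and the first
    level of the other.  `z` and `flux` are ordered by height."""
    for i in range(len(flux) - 1):
        if flux[i] * flux[i + 1] < 0:  # sign change between levels i, i+1
            return z[i] + (z[i + 1] - z[i]) * (-flux[i]) / (flux[i + 1] - flux[i])
    raise ValueError("no sign change found")

# synthetic slope-normal momentum flux: downward (negative) below the
# jet maximum, upward (positive) above it, as reported in the abstract
z = np.array([0.5, 2.0, 5.0, 10.0, 20.0])          # heights, m
uw = np.array([-0.08, -0.05, -0.01, 0.02, 0.04])    # flux, m^2/s^2

print(zero_crossing_height(z, uw))   # → ~6.67 m
```

The same interpolation applies unchanged to the along-slope heat flux, which the abstract notes also changes sign at the jet maximum.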
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal O}(300\,\text{GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by \begin{equation} v_{h}\sim\frac{T_{BBN}^{2}}{M_{pl}y_{e}^{5}}, \end{equation} where $T_{BBN}$ is the temperature at big bang nucleosynthesis, $M_{pl}$ is the Planck scale, and $y_{e}$ is the electron Yukawa coupling.
Yu-Erh Huang (Dept. of Nuclear Medicine, Chang Gung Memorial Hospital-Kaohsiung Medical Center, Kaohsiung, Taiwan (China)); Chih-Feng Chen (Dept. of Radiology, Chang Gung Memorial Hospital-Kaohsiung Medical Center, Kaohsiung, Taiwan (China)); Yu-Jie Huang (Dept. of Radiation Oncology, Chang Gung Memorial Hospital-Kaohsiung Medical Center, Kaohsiung, Taiwan (China)); Konda, Sheela D.; Appelbaum, Daniel E.; Yonglin Pu (Dept. of Radiology, Univ. of Chicago, Chicago, IL (United States)), e-mail: ypu@radiology.bsd.uchicago.edu
2010-09-15
Background: 18F-fluoro-2-deoxyglucose positron emission tomography (18F-FDG PET) imaging has been shown to be an accurate method for diagnosing pulmonary lesions, and the standardized uptake value (SUV) has been shown to be useful in differentiating benign from malignant lesions. Purpose: To survey the interobserver variability of SUVmax and SUVmean measurements on 18F-FDG PET/CT scans and compare them with tumor size measurements on diagnostic CT scans in the same group of patients with focal pulmonary lesions. Material and Methods: Forty-three pulmonary nodules were measured on both 18F-FDG PET/CT and diagnostic chest CT examinations. Four independent readers measured the SUVmax and SUVmean of the 18F-FDG PET images, and the unidimensional nodule size of the diagnostic CT scans (UDCT) in all nodules. The region of interest (ROI) for the SUV measurements was drawn manually around each tumor on all consecutive slices that contained the nodule. The interobserver reliability and variability, represented by the intraclass correlation coefficient (ICC) and coefficient of variation (COV), respectively, were compared among the three parameters. The correlation between the SUVmax and SUVmean was also analyzed. Results: There was 100% agreement in the SUVmax measurements among the 4 readers in the 43 pulmonary tumors. The ICCs for the SUVmax, SUVmean, and UDCT by the four readers were 1.00, 0.97, and 0.97, respectively. The root-mean-square values of the COVs for the SUVmax, SUVmean, and UDCT by the four readers were 0%, 13.56%, and 11.03%, respectively. There was a high correlation observed between the SUVmax and SUVmean (Pearson's r=0.958; P <0.01). Conclusion: This study has shown that the SUVmax of lung nodules can be calculated without any interobserver variation. These findings indicate that SUVmax is a more valuable parameter than the SUVmean or UDCT for the evaluation of therapeutic effects of chemotherapy or radiation therapy on serial studies
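The interobserver-variability statistic used above (root-mean-square of per-lesion coefficients of variation across readers) is mechanical to compute. A sketch with hypothetical readings, not the study's data:

```python
import numpy as np

def rms_cov(measurements):
    """Root-mean-square of the per-lesion coefficient of variation
    (sample std / mean across readers), one row per lesion."""
    m = np.asarray(measurements, dtype=float)
    cov = m.std(axis=1, ddof=1) / m.mean(axis=1)
    return float(np.sqrt((cov ** 2).mean()))

# hypothetical SUV readings: 3 nodules x 4 readers
suvmax  = [[7.1, 7.1, 7.1, 7.1],   # identical across readers -> COV 0,
           [3.4, 3.4, 3.4, 3.4],   # matching the 0% reported for SUVmax
           [9.8, 9.8, 9.8, 9.8]]
suvmean = [[4.0, 4.4, 3.8, 4.2],   # reader-dependent ROI means
           [2.0, 2.3, 1.9, 2.1],
           [6.1, 5.6, 6.4, 5.9]]

print(rms_cov(suvmax))     # → 0.0
print(rms_cov(suvmean))    # nonzero, as for the study's SUVmean
```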
Honguero Martínez, A F; García Jiménez, M D; García Vicente, A; López-Torres Hidalgo, J; Colon, M J; van Gómez López, O; Soriano Castrejón, Á M; León Atance, P
2016-01-01
F-18 fluorodeoxyglucose integrated PET-CT scan is commonly used in the work-up of lung cancer to improve preoperative disease staging. The aim of the study was to analyze the ratio between the SUVmax of N1 lymph nodes and that of the primary lung cancer to establish prediction of mediastinal disease (N2) in patients operated on for non-small cell lung cancer. This is a retrospective study of a prospective database. Patients operated on for non-small cell lung cancer (NSCLC) with N1 disease by PET-CT scan were included. None of them had previous induction treatment, but they underwent standard surgical resection plus systematic lymphadenectomy. There were 51 patients with FDG-PET-CT scan N1 disease. 44 (86.3%) patients were male, with a mean age of 64.1±10.8 years. Type of resection: pneumonectomy=4 (7.9%), lobectomy/bilobectomy=44 (86.2%), segmentectomy=3 (5.9%). Histological type: adenocarcinoma=26 (51.0%), squamous=23 (45.1%), adenosquamous=2 (3.9%). Lymph node status after surgical resection: N0=21 (41.2%), N1=12 (23.5%), N2=18 (35.3%). The mean ratio of the SUVmax of the N1 lymph node to the SUVmax of the primary lung tumor (SUVmax N1/T ratio) was 0.60 (range 0.08-2.80). ROC curve analysis was performed to obtain the optimal cut-off value of the SUVmax N1/T ratio to predict N2 disease. At multivariate analysis, we found that a ratio of 0.46 or greater was an independent predictor of N2 mediastinal lymph node metastases, with a sensitivity and specificity of 77.8% and 69.7%, respectively. The SUVmax N1/T ratio in NSCLC patients correlates with mediastinal lymph node metastasis (N2 disease) after surgical resection. When the SUVmax N1/T ratio on integrated PET-CT scan is equal or superior to 0.46, special attention should be paid to the higher probability of N2 disease. Copyright © 2015 Elsevier España, S.L.U. and SEMNIM. All rights reserved.
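The 0.46 cut-off rule from this study translates directly into code. The sketch below uses hypothetical SUV values (not the study's cases) to show the ratio, the threshold decision, and the sensitivity/specificity bookkeeping:

```python
def n1_t_ratio(suv_n1, suv_tumor):
    """Ratio of N1 lymph-node SUVmax to primary-tumour SUVmax."""
    return suv_n1 / suv_tumor

def predict_n2(suv_n1, suv_tumor, cutoff=0.46):
    """Flag probable N2 disease when the ratio meets the cut-off."""
    return n1_t_ratio(suv_n1, suv_tumor) >= cutoff

# hypothetical cases: (SUVmax N1, SUVmax primary tumour, true N2 status)
cases = [(4.2, 6.0, True), (1.0, 9.5, False),
         (3.8, 8.0, True), (4.0, 8.0, False)]

preds = [predict_n2(n1, t) for n1, t, _ in cases]
tp = sum(p and y for p, (_, _, y) in zip(preds, cases))
fn = sum((not p) and y for p, (_, _, y) in zip(preds, cases))
tn = sum((not p) and (not y) for p, (_, _, y) in zip(preds, cases))
fp = sum(p and (not y) for p, (_, _, y) in zip(preds, cases))
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(sensitivity, specificity)   # → 1.0 0.5 on these toy cases
```

On the toy cases the fourth patient is a false positive, illustrating why the study reports a specificity well below 100%.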
Measurement of z-axis deviation angle of electro-optic crystal by conoscopic interference
Li, Dong; Liu, Yong; Liu, Xu; Jiang, Hongzhen; Zheng, Fanglan
2016-09-01
Properties of a plasma electrode Pockels cell are directly affected by the Z-axis deviation angle of the electro-optic crystal. Therefore, high-precision measurement of the Z-axis deviation angle is indispensable. Using the conoscopic interference technique, a measurement system for the Z-axis deviation angle of an electro-optic crystal is introduced. The principle of the conoscopic interference method is described in detail, and a series of techniques is applied in this measurement system to improve the accuracy. A high-precision positioning method for the crystal based on Michelson interference is proposed to determine the normal consistency of the crystal, which can ensure high positioning repeatability of the crystal in the measurement process. A positioning comparison experiment on the crystal shows that the standard deviation of our method is less than 1 pixel, which is much better than the traditional method (nearly 4 pixels). Moreover, a melatope extraction algorithm for the optical axis based on an image matching technique is proposed to ensure that the melatope can be extracted with high precision. A calibration method for the normal of the transmission surface of the crystal is also proposed. The experimental results show that the PV and rms of the Z-axis deviation angle are less than 0.05 mrad and 0.02 mrad, respectively. The repeatability accuracy is less than 0.01 mrad.
Vertical deviations of the midplane of the Galaxy.
Malhotra, S.; Rhoads, J. E.
Besides the integral sign warp in the outer Galaxy, the gas in the Milky Way shows small, but systematic deviations from a flat z = 0 plane both in the inner and the outer Galaxy. In the inner Galaxy, the tangent points have no distance ambiguity, so their distances, and hence midplane deviations, can be measured. From the tangent point analysis the authors find that the molecular and atomic gas layers deviate from the z = 0 plane with an amplitude of ≅50 pc. Whether these deviations are due to a small, smooth inner warp or are similar to the m = 10 mode corrugations found in the outer Galaxy (Kulkarni, Blitz & Heiles, 1982) can be checked by looking at the two-dimensional (in Galactic radius and azimuthal angle) structure of the z deviations. For the inner Galaxy, distance ambiguity at points other than the tangent points makes the interpretation difficult, but these hypotheses can be checked in a limited way. Magnetic instabilities can cause vertical deviations of the gas, but if stars share the same deviations the origin has to be gravitational.
Structure of deviations from optimality in biological systems.
Pérez-Escudero, Alfonso; Rivera-Alba, Marta; de Polavieja, Gonzalo G
2009-12-01
Optimization theory has been used to analyze evolutionary adaptation. This theory has explained many features of biological systems, from the genetic code to animal behavior. However, these systems show important deviations from optimality. Typically, these deviations are large in some particular components of the system, whereas others seem to be almost optimal. Deviations from optimality may be due to many factors in evolution, including stochastic effects and finite time, that may not allow the system to reach the ideal optimum. However, we still expect the system to have a higher probability of reaching a state with a higher value of the proposed indirect measure of fitness. In systems of many components, this implies that the largest deviations are expected in those components with less impact on the indirect measure of fitness. Here, we show that this simple probabilistic rule explains deviations from optimality in two very different biological systems. In Caenorhabditis elegans, this rule successfully explains the experimental deviations of the position of neurons from the configuration of minimal wiring cost. In Escherichia coli, the probabilistic rule correctly obtains the structure of the experimental deviations of metabolic fluxes from the configuration that maximizes biomass production. This approach is proposed to explain or predict more data than optimization theory while using no extra parameters. Thus, it can also be used to find and refine hypotheses about which constraints have shaped biological structures in evolution.
Fluctuations and large deviations in non-equilibrium systems
B Derrida
2005-05-01
For systems in contact with two reservoirs at different densities or with two thermostats at different temperatures, the large deviation function of the density gives a possible way of extending the notion of free energy to non-equilibrium systems. This large deviation function of the density can be calculated explicitly for exclusion models in one dimension with open boundary conditions. For these models, one can also obtain the distribution of the current of particles flowing through the system and the results lead to a simple conjecture for the large deviation function of the current of more general diffusive systems.
Lin, Jin; Sun, Yuanzhang; Song, Yonghua
2013-01-01
Wind power fluctuation raises the security concern of grid frequency deviation, especially for an isolated power system. Thus, better control methodology needs to be developed to smooth the fluctuation without excessive spillage. Based on an actual industrial power system, this paper proposes a smoothing controller to suppress the power fluctuation from a doubly-fed induction generator (DFIG)-based wind farm. This controller consists of three main functionality components: a risk assessment model, a wind turbine rotor speed optimizer, and a rotor speed upper limiter. In order to avoid unnecessary energy loss, this paper designs a risk assessment model of grid frequency deviation, which is capable of locally estimating the maximum grid frequency deviation risk of the next dispatch cycle. A wind turbine speed optimizer then uses the estimated frequency deviation risk to search for the optimal power...
Velocity Structure Determination Through Seismic Waveform Modeling and Time Deviations
Savage, B.; Zhu, L.; Tan, Y.; Helmberger, D. V.
2001-12-01
Through the use of seismic waveforms recorded by TriNet, a dataset of earthquake focal mechanisms and deviations (time shifts) relative to a standard model facilitates the investigation of the crust and uppermost mantle of southern California. The CAP method of focal mechanism determination, in use by TriNet on a routine basis, provides time shifts for surface waves and Pnl arrivals independently relative to the reference model. These shifts serve as initial data for calibration of local and regional seismic paths. Time shifts from the CAP method are derived by splitting the Pnl section of the waveform, from the first arriving Pn to just before the arrival of the S wave, from the much slower surface waves, and then cross-correlating the data with synthetic waveforms computed from a standard model. Surface waves interact with the entire crust, but the upper crust causes the greatest effect, whereas Pnl arrivals sample the deeper crust, upper mantle, and source region. This natural division separates the upper from the lower crust for regional calibration and structural modeling, and allows 3-D velocity maps to be created using the resulting time shifts. Further examination of Pnl and other arrivals that interact with the Moho illuminates the complex nature of this boundary. Initial attempts at using the first 10 seconds of the Pnl section to determine uppermost mantle structure have proven insightful. Two large earthquakes north of southern California, in Nevada and at Mammoth Lakes, CA, allow the creation of record sections from 200 to 600 km. As the paths swing from east to west across southern California, simple 1-D models turn into complex structure, dramatically changing the waveform character. Using finite difference models to explain the structure, we determine that a low velocity zone is present at the base of the crust and extends to 100 km in depth. Velocity variations of 5 percent of the mantle in combination with steeply sloping edges produce complex waveform variations.
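The time shifts that drive this calibration come from cross-correlating observed waveforms against synthetics. A minimal sketch of recovering such a shift, using synthetic Gaussian pulses rather than TriNet records:

```python
import numpy as np

def best_time_shift(data, synth, dt):
    """Return the lag (seconds) that maximizes the cross-correlation
    of `data` against `synth`; positive means the data arrive late
    relative to the synthetic (i.e. the model is too fast)."""
    corr = np.correlate(data, synth, mode="full")
    lag = np.argmax(corr) - (len(synth) - 1)
    return lag * dt

dt = 0.05                                  # sample interval, s
t = np.arange(0, 20, dt)
synth = np.exp(-((t - 5.0) ** 2))          # synthetic pulse at 5 s
data = np.exp(-((t - 6.5) ** 2))           # observed pulse, 1.5 s late

print(best_time_shift(data, synth, dt))    # prints 1.5
```

In practice the Pnl window and the surface-wave window would be cross-correlated separately, as the abstract describes, yielding independent shifts for the deep and shallow paths.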
A Hybrid Method with Deviational Particles for Spatial Inhomogeneous Plasma
Yan, Bokai
2015-01-01
In this work we propose a Hybrid method with Deviational Particles (HDP) for a plasma modeled by the inhomogeneous Vlasov-Poisson-Landau system. We split the distribution into a Maxwellian part evolved by a grid-based fluid solver and a deviation part simulated by numerical particles. These particles, named deviational particles, can be either positive or negative. We combine the Monte Carlo method proposed in \cite{YC15}, a Particle-in-Cell method, and a Macro-Micro decomposition method \cite{BLM08} to design an efficient hybrid method. Furthermore, coarse particles are employed to accelerate the simulation. A particle resampling technique on both deviational particles and coarse particles is also investigated and improved. The efficiency is significantly improved compared to a PIC-MCC method, especially near the fluid regime.
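The Maxwellian/deviation split at the heart of the HDP idea can be illustrated in one velocity dimension. The sketch below is not the authors' scheme (the bimodal test distribution, grid, and particle count are all assumptions); it only shows why the decomposition pays off: the deviation carries far less mass than the full distribution, and its particles carry signs:

```python
import numpy as np

rng = np.random.default_rng(0)

# velocity grid and a non-equilibrium distribution (two-Maxwellian mixture)
v = np.linspace(-8.0, 8.0, 401)
dv = v[1] - v[0]
f = 0.6 * np.exp(-(v - 1.5) ** 2 / 2) + 0.4 * np.exp(-(v + 2.0) ** 2 / 2)
f /= f.sum() * dv                          # normalize to unit density

# Maxwellian sharing the density, bulk velocity and temperature of f
rho = f.sum() * dv
u = (v * f).sum() * dv / rho
T = ((v - u) ** 2 * f).sum() * dv / rho
M = rho / np.sqrt(2 * np.pi * T) * np.exp(-(v - u) ** 2 / (2 * T))

# the deviation g = f - M holds much less mass than f, so representing
# only g with particles needs far fewer samples than representing f
g = f - M
mass_ratio = (np.abs(g).sum() * dv) / (np.abs(f).sum() * dv)
print(mass_ratio)

# sample signed ("deviational") particles from |g|; the sign is the weight
p = np.abs(g) / np.abs(g).sum()
idx = rng.choice(v.size, size=1000, p=p)
velocities = v[idx]
weights = np.sign(g[idx])                  # +1 or -1 per particle
```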
Spin-geodesic deviations in the Schwarzschild spacetime
Bini, Donato; Geralico, Andrea; Jantzen, Robert T.
2011-04-01
The deviation of the path of a spinning particle from a circular geodesic in the Schwarzschild spacetime is studied by an extension of the idea of geodesic deviation. Within the Mathisson-Papapetrou-Dixon model and assuming the spin parameter to be sufficiently small so that it makes sense to linearize the equations of motion in the spin variables as well as in the geodesic deviation, the spin-curvature force adds an additional driving term to the second order system of linear ordinary differential equations satisfied by nearby geodesics. Choosing initial conditions for geodesic motion leads to solutions for which the deviations are entirely due to the spin-curvature force, and one finds that the spinning particle position for a given fixed total spin oscillates roughly within an ellipse in the plane perpendicular to the motion, while the azimuthal motion undergoes similar oscillations plus an additional secular drift which varies with spin orientation.
Large Deviations for Multi-valued Stochastic Differential Equations
Ren, Jiagang; Zhang, Xicheng
2009-01-01
We prove a large deviation principle of Freidlin-Wentzell type for multivalued stochastic differential equations with monotone drifts, which in particular contain a class of SDEs with reflection in a convex domain.
Static large deviations of boundary driven exclusion processes
Farfan, Jonathan
2009-01-01
We prove that the stationary measure associated to a boundary driven exclusion process in any dimension satisfies a large deviation principle with rate function given by the quasi-potential of Freidlin-Wentzell theory.
ALTERNATING HYPERPHORIA - DISSOCIATED VERTICAL DEVIATION (DVD) OCCLUSION HYPERPHORIA
Houtman, W. A.; Roze, J. H.; de Vries, B.; Letsch, M. C.
1991-01-01
Alternating hyperphoria (synonyms: dissociated vertical deviation (DVD) or occlusion hyperphoria) and variants like 'unilateral patching hyperphoria' ('periodic vertical squint') and monocular vertical nystagmus, which may arise after strabismus operations or loss of the function of one of the eyes,
Large Deviations: An Introduction to 2007 Abel Prize
S Ramasubramanian
2008-05-01
The 2007 Abel Prize was awarded to S. R. S. Varadhan for creating a unified theory of large deviations. We attempt to give a flavour of this branch of probability theory, highlighting the role of Varadhan.
Quenched moderate deviations principle for random walk in random environment
Anonymous
2010-01-01
We derive a quenched moderate deviations principle for the one-dimensional nearest-neighbor random walk in random environment, where the environment is assumed to be stationary and ergodic. The approach is based on a hitting time decomposition.
General Freidlin-Wentzell large deviations and positive diffusions
P. Baldi; Caramellino, L.
2011-01-01
We prove Freidlin-Wentzell large deviation estimates under rather minimal assumptions. This allows one to derive Freidlin-Wentzell large deviation estimates for diffusions on the positive half line with coefficients that are neither bounded nor Lipschitz continuous. This applies to models of interest in finance, i.e., the CIR and CEV models, which are positive diffusion processes whose diffusion coefficient is only Hölder continuous.
The Analysis of a Deviation of Investment and Corporate Governance.
Hisa, Shoichi
2008-01-01
The investment of firms is affected not only by fundamental factors but also by liquidity constraints, ownership, and corporate structure. The information structure between manager and owner is a significant factor in deciding the level of investment and the deviation of investment from the optimal condition. The reputation model between manager and owner suggests that the separation of ownership and management may induce deviations of investment, and indicates that governance structure is important in reducing them. In th...
Perception of midline deviations in smile esthetics by laypersons
Ferreira, Jamille Barros; da Silva, Licínio Esmeraldo; Caetano, Márcia Tereza de Oliveira; da Motta, Andrea Fonseca Jardim; Cury-Saramago, Adriana de Alcantara; Mucha, José Nelson
2016-01-01
ABSTRACT Objective: To evaluate the esthetic perception of upper dental midline deviation by laypersons and whether adjacent structures influence their judgment. Methods: An album with 12 randomly distributed frontal view photographs of the smile of a woman with the midline digitally deviated was evaluated by 95 laypersons. The frontal view smiling photograph was modified to create from 1 mm to 5 mm deviations in the upper midline to the left side. The photographs were cropped in two different manners and divided into two groups of six photographs each: group LCN included the lips, chin, and two-thirds of the nose, and group L included the lips only. The laypersons rated each smile using a visual analog scale (VAS). Wilcoxon test, Student’s t-test and Mann-Whitney test were applied, adopting a 5% level of significance. Results: Laypersons were able to perceive midline deviations starting at 1 mm. Statistically significant results (p< 0.05) were found for all multiple comparisons of the values in photographs of group LCN and for almost all comparisons in photographs of group L. Comparisons between the photographs of groups LCN and L showed statistically significant values (p< 0.05) when the deviation was 1 mm. Conclusions: Laypersons were able to perceive upper dental midline deviations of 1 mm and above when the adjacent structures of the smile were included, and deviations of 2 mm and above when only the lips were included. The visualization of structures adjacent to the smile demonstrated an influence on the perception of midline deviation. PMID:28125140
A phase deviation based split-spectrum processing algorithm for ultrasonic flaw detection
LIU Zhenqing
2002-01-01
The Split Spectrum Processing (SSP) technique has proved effective in reducing interference noise in ultrasonic nondestructive testing of coarse-grained materials. However, the results of SSP algorithms are not sufficiently stable, since they are sensitive to the filter bank and filter parameters, and the mechanism by which the technique fully exploits the signals is not clear. The statistical phase response characteristic of filter outputs for ultrasonic testing is discussed. On this basis, a new SSP algorithm based on the phase standard deviation is proposed. Its performance is examined for both computer-simulated and experimental data, and compared to the commonly used minimum algorithm. The phase standard deviation algorithm proves superior and is less sensitive to the number of filters.
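The phase-standard-deviation idea can be sketched as follows: pass the A-scan through a bank of band-pass filters and, at each sample, measure the spread of instantaneous phase across the bank — a coherent flaw echo keeps the bank's phases aligned, while grain noise does not. This is an illustrative reconstruction, not the paper's implementation; the sampling rate, filter bank, and circular-statistics choice are all assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def ssp_phase_std(x, fs, bands):
    """Per-sample circular standard deviation (rad) of instantaneous
    phase across a band-pass filter bank; small where the signal is
    phase-coherent across bands (a flaw echo), large for grain noise."""
    phases = []
    for f_lo, f_hi in bands:
        b, a = butter(4, [f_lo / (fs / 2), f_hi / (fs / 2)], btype="band")
        y = filtfilt(b, a, x)                 # zero-phase band-pass
        phases.append(np.angle(hilbert(y)))   # instantaneous phase
    phases = np.array(phases)
    R = np.abs(np.exp(1j * phases).mean(axis=0))   # phase alignment
    return np.sqrt(-2.0 * np.log(np.clip(R, 1e-12, 1.0)))

fs = 100e6                                    # 100 MHz sampling (assumed)
t = np.arange(0, 10e-6, 1 / fs)
rng = np.random.default_rng(1)
echo = np.sin(2 * np.pi * 5e6 * t) * np.exp(-((t - 5e-6) / 0.4e-6) ** 2)
x = echo + 0.3 * rng.standard_normal(t.size)  # flaw echo buried in noise

bands = [(3e6, 5e6), (4e6, 6e6), (5e6, 7e6)]  # overlapping sub-bands (assumed)
sigma = ssp_phase_std(x, fs, bands)
i_echo = np.argmin(np.abs(t - 5e-6))
print(sigma[i_echo] < np.median(sigma))       # phase spread dips at the flaw
```

Thresholding or weighting the signal by this phase spread then suppresses the incoherent grain noise, which is the effect the abstract attributes to the proposed algorithm.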
Nasal Septal Deviations: A Systematic Review of Classification Systems
Jeffrey Teixeira
2016-01-01
Objective. To systematically review the international literature for internal nasal septal deviation classification systems and summarize them for clinical and research purposes. Data Sources. Four databases (including PubMed/MEDLINE) were systematically searched through December 16, 2015. Methods. Systematic review, adhering to PRISMA. Results. After removal of duplicates, this study screened 952 articles for relevance. A final comprehensive review of 50 articles identified that 15 of these articles met the eligibility criteria. The classification systems defined in these articles included C-shaped, S-shaped, reverse C-shaped, and reverse S-shaped descriptions of the septal deviation in both the cephalocaudal and anteroposterior dimensions. Additional studies reported use of computed tomography and categorized deviation based on predefined locations. Three studies graded the severity of septal deviations based on the amount of deflection. The systems defined in the literature also included an evaluation of nasal septal spurs and perforations. Conclusion. This systematic review ascertained that the majority of the currently published classification systems for internal nasal septal deviations can be summarized by C-shaped or reverse C-shaped, as well as S-shaped or reverse S-shaped deviations in the anteroposterior and cephalocaudal dimensions. For imaging studies, predefined points have been defined along the septum. Common terminology can facilitate future research.
Nasal Septal Deviations: A Systematic Review of Classification Systems
Teixeira, Jeffrey; Certal, Victor; Chang, Edward T.; Camacho, Macario
2016-01-01
Objective. To systematically review the international literature for internal nasal septal deviation classification systems and summarize them for clinical and research purposes. Data Sources. Four databases (including PubMed/MEDLINE) were systematically searched through December 16, 2015. Methods. Systematic review, adhering to PRISMA. Results. After removal of duplicates, this study screened 952 articles for relevance. A final comprehensive review of 50 articles identified that 15 of these articles met the eligibility criteria. The classification systems defined in these articles included C-shaped, S-shaped, reverse C-shaped, and reverse S-shaped descriptions of the septal deviation in both the cephalocaudal and anteroposterior dimensions. Additional studies reported use of computed tomography and categorized deviation based on predefined locations. Three studies graded the severity of septal deviations based on the amount of deflection. The systems defined in the literature also included an evaluation of nasal septal spurs and perforations. Conclusion. This systematic review ascertained that the majority of the currently published classification systems for internal nasal septal deviations can be summarized by C-shaped or reverse C-shaped, as well as S-shaped or reverse S-shaped deviations in the anteroposterior and cephalocaudal dimensions. For imaging studies, predefined points have been defined along the septum. Common terminology can facilitate future research. PMID:26933510
Probability of ventricular fibrillation: allometric model based on the ST deviation
Arini Pedro D
2011-01-01
Full Text Available Abstract Background Allometry, in general biology, measures the relative growth of a part in relation to the whole living organism. Using reported clinical data, we apply this concept for evaluating the probability of ventricular fibrillation based on the electrocardiographic ST-segment deviation values. Methods Data collected by previous reports were used to fit an allometric model in order to estimate ventricular fibrillation probability. Patients presenting either with death, myocardial infarction or unstable angina were included to calculate such probability as VFp = δ·(ST)^β, which is linear in log-log coordinates, for three different ST deviations. The coefficients δ and β were obtained as the best fit to the clinical data extended over observational periods of 1, 6, 12 and 48 months from occurrence of the first reported chest pain accompanied by ST deviation. Results By application of the above equation in log-log representation, the fitting procedure produced the following overall coefficients: Average β = 0.46, with a maximum = 0.62 and a minimum = 0.42; Average δ = 1.28, with a maximum = 1.79 and a minimum = 0.92. For a 2 mm ST-deviation, the full range of predicted ventricular fibrillation probability extended from about 13% at 1 month up to 86% at 4 years after the original cardiac event. Conclusions These results, at least preliminarily, appear acceptable and still call for a full clinical test. The model seems promising, especially if other parameters were taken into account, such as blood cardiac enzyme concentrations, ischemic or infarcted epicardial areas or ejection fraction. It is concluded, considering these results and a few references found in the literature, that the allometric model shows good predictive practical value to aid medical decisions.
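An allometric fit of this kind amounts to a linear least-squares fit in log-log coordinates. A minimal sketch (the original study's fitting procedure is not specified in the abstract, so ordinary least squares on the logarithms is an assumption; variable names are illustrative):

```python
import math

def fit_allometric(x, y):
    """Least-squares fit of the power law y = delta * x**beta via the
    log-log line log y = log delta + beta * log x.
    Returns (delta, beta)."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx = sum(lx) / n
    my = sum(ly) / n
    # slope and intercept of the ordinary least-squares line in log-log space
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    beta = num / den
    delta = math.exp(my - beta * mx)
    return delta, beta
```

Applied to data that follow an exact power law, the fit recovers the generating coefficients.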
Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. A standard input is a set of triplets, rooted binary trees on three leaves, or quartets, unrooted binary trees on four leaves. We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D and distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L such that all trees in T, when restricted to X, are consistent with it.
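Checking whether a candidate supertree is consistent with a rooted triplet ab|c reduces to a subtree-membership test: the tree displays ab|c exactly when c lies outside the smallest subtree containing both a and b. A minimal sketch (nested tuples are an illustrative encoding, not the paper's data structure):

```python
def _leaves(t):
    """Leaf set of a tree given as nested tuples with string leaves."""
    return {t} if not isinstance(t, tuple) else set().union(*map(_leaves, t))

def displays_triplet(tree, a, b, c):
    """True if the rooted tree displays the triplet ab|c, i.e. c is not
    contained in the smallest subtree holding both a and b."""
    def lca_leafset(t, x, y):
        # descend into the unique child containing both x and y, if any
        if isinstance(t, tuple):
            for child in t:
                s = _leaves(child)
                if x in s and y in s:
                    return lca_leafset(child, x, y)
        return _leaves(t)
    return c not in lca_leafset(tree, a, b)

def consistency_score(tree, triplets):
    """Number of input triplets (a, b, c), read as ab|c, that the
    candidate supertree displays -- the quantity the exact algorithms
    above maximize over all trees."""
    return sum(displays_triplet(tree, *t) for t in triplets)
```

For the tree ((a,b),c), exactly one of the three possible triplets on {a, b, c} is displayed.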
Intra-op measurement of the mechanical axis deviation: an evaluation study on 19 human cadaver legs.
Wang, Lejing; Fallavollita, Pascal; Brand, Alexander; Erat, Okan; Weidert, Simon; Thaller, Peter-Helmut; Euler, Ekkehard; Navab, Nassir
2012-01-01
The alignment of the lower limb in high tibial osteotomy (HTO) or total knee arthroplasty (TKA) must be determined intraoperatively. One way to do so is to determine the mechanical axis deviation (MAD), for which a tolerance of 10 mm is widely accepted. Many techniques are used in clinical practice, such as visual inspection, the cable method, a grid with lead-impregnated reference lines, or, more recently, navigation systems. Each has its disadvantages, including limited reliability of the MAD measurement, excess radiation, prolonged operation time, complicated setup and high cost. To alleviate these shortcomings, we propose a novel clinical protocol that allows quick and accurate intraoperative calculation of the MAD. This is achieved by an X-ray stitching method requiring only three X-ray images placed into a panoramic image frame during the entire procedure. The method has been systematically analyzed in a simulation framework in order to investigate its accuracy and robustness. Furthermore, we validated our protocol in a preclinical study comprising 19 human cadaver legs. Four surgeons determined MAD measurements using our X-ray panorama and compared these values to a gold-standard CT-based technique. The maximum average MAD error was 3.5 mm, which shows great potential for the technique.
Knowledge of food and drug administration reportable deviations.
Lam, Rebecca; Bryant, Barbara J
2011-07-01
As early as 2001, the Food and Drug Administration (FDA) required blood centers and hospital transfusion services to report events associated with testing, storage, or distribution of blood products that deviated from current good manufacturing practices or affected the safety, purity, or potency of the product. Between 2004 and 2009, an average of only 8.6% of hospitals reported blood product deviations. Case scenarios designed to evaluate knowledge of FDA reportable deviations were developed and sent for evaluation to the Center for Biologics Evaluation and Research (CBER) and FDA division directors for FDA reportable deviations. A final survey containing eight cases was launched in a web-based online survey tool and sent to blood bank medical technologists. Additional information was queried regarding job title/responsibilities and the size of the blood center and/or transfusion service. There were 176 respondents to the survey. Only 5.7% (10/176) answered all questions correctly. Analysis by job title and place of employment revealed no correlation to the number of correct responses. More importance was attached to deviations involving quality control, blood bank identification, unit specifications, and antibody identification. Less importance was attached to deviations involving phlebotomist's initials, failure to issue units in the computer, and using a recent sample from a previous hospitalization. This study revealed that blood bankers did not have clear understanding of what constituted an FDA reportable occurrence. Size or type of blood establishment or individual job title was not associated with more knowledge of FDA reportable deviations. © 2011 American Association of Blood Banks.
Cosolvency and deviations from log-linear solubilization.
Rubino, J T; Yalkowsky, S H
1987-06-01
The solubilities of three nonpolar drugs, phenytoin, diazepam, and benzocaine, have been measured in 14 cosolvent-water binary mixtures. The observed solubilities were examined for deviations from solubilities calculated by the equation log Sm = f log Sc + (1 - f) log Sw, where Sm is the solubility of the drug in the cosolvent-water mixture, Sc is the solubility of the drug in neat cosolvent, f is the volume fraction of cosolvent, and Sw is the solubility of the drug in water. When presented graphically, the patterns of the deviations were similar for all three drugs in mixtures of amphiprotic cosolvents (glycols, polyols, and alcohols) and water as well as nonpolar, aprotic cosolvents (dioxane, triglyme, dimethyl isosorbide) and water. The deviations were positive for phenytoin and benzocaine but negative for diazepam in mixtures of dipolar, aprotic cosolvents (dimethylsulfoxide, dimethylformamide, and dimethylacetamide) and water. The source of the deviations could not consistently be attributed to physical properties of the cosolvent-water mixtures or to alterations in the solute crystal. Similarities between the results of this study and those of previous investigations suggest that changes in the structure of the solvent play a role in the deviations from the expected solubilities.
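The log-linear model quoted above is directly computable, and the deviations the study examines are simply the log-scale residuals from it. A minimal sketch (function names are illustrative):

```python
import math

def log_linear_solubility(f, s_cosolvent, s_water):
    """Predicted solubility S_m in a cosolvent-water mixture under the
    log-linear model: log S_m = f*log S_c + (1 - f)*log S_w,
    where f is the volume fraction of cosolvent, S_c the solubility in
    neat cosolvent, and S_w the solubility in water."""
    log_sm = f * math.log10(s_cosolvent) + (1.0 - f) * math.log10(s_water)
    return 10.0 ** log_sm

def deviation_from_log_linear(observed, f, s_cosolvent, s_water):
    """Log-scale deviation of an observed solubility from the model;
    positive means the mixture dissolves more drug than predicted."""
    predicted = log_linear_solubility(f, s_cosolvent, s_water)
    return math.log10(observed) - math.log10(predicted)
```

For example, with a drug 100 times more soluble in neat cosolvent than in water, the model predicts a 10-fold enhancement at 50% cosolvent; an observed 20-fold enhancement would be a positive deviation of log10(2).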
DYSFUNCTION OF THE MODERN RUSSIAN FAMILY AND PROBLEM OF THE DEVIATING SOCIALIZATION OF TEENAGERS
Tatyana I. BARSUKOVA
2015-01-01
Full Text Available The article analyzes the influence of family dysfunction, interpreted by the authors as a negative consequence of family transformation, on deviations in the socialization of teenagers, on their socialization trajectories, and on the habitualization of "trouble". It is argued that the disruption of a family's functioning (its dysfunction) in relation to the teenager can be counted among the determinants of deviant socialization. The negative influence of family dysfunction on the formation of the modern teenager's identity is manifested today in deviant acts of a negative orientation; the article describes the forms of deviant behavior in which these deviations appear, as well as the consequences of some families' refusal to perform their economic and protective functions. The authors offer their own definitions of normal and deviant socialization and attempt to draw the boundary between them, between a normal socialization process and an anomic, deformed one. They argue that deviant socialization rests on a deviation from the socialization norm, understood as a multidimensional standard fixing the socialization of the person. The socialization norm is defined as an interval, as what is admissible in people's behavior, and as its regulator. Emphasis is placed on the role of the trajectory model of socialization, which changes depending on the family's influence on the socialization of teenagers. The choice of teenagers as the object of study is justified by the observation that at this age a certain "binarity" of the personality appears, a combination of the traits of both an adult and a child, which, according to the authors, complicates the teenager's choice of ethical standards and life values. Moreover, the variety of alternatives in choosing norms and behavior models is complicated by the polynormativity of the teenager's social space.
Deviations of the distributions of seismic energies from the Gutenberg-Richter law
Pisarenko, V; Rodkin, M
2003-01-01
A new non-parametric statistic is introduced for the characterization of deviations from power laws. It is tested on the distribution of seismic energies given by the Gutenberg-Richter law. Based on the first two statistical log-moments, it evaluates quantitatively the deviations of the distribution of scalar seismic moments from a power-like (Pareto) law. This statistic is close to zero for the Pareto law with arbitrary power index, and deviates from zero for any non-Pareto distribution. A version of this statistic for discrete distributions of quantized magnitudes is also given. A methodology based on this statistic, consisting in scanning the lower threshold for earthquake energies, provides an explicit visualization of deviations from the Pareto law, surpassing in sensitivity the standard Hill estimator or other known techniques. This new statistical technique has been applied to shallow earthquakes (h < 70 km) both in subduction zones and in mid-ocean ridge zones (using the Harvard catalog of seismic m...
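The threshold-scanning methodology described above can be illustrated with the standard Hill estimator, which the authors use as a point of comparison. A minimal sketch (this implements the Hill estimator, not the authors' log-moment statistic, whose exact form is not given in the abstract): for an exact Pareto tail the estimates stay flat as the threshold is scanned, while systematic drift signals a deviation from the Pareto law.

```python
import math

def hill_estimator(sample, k):
    """Hill estimator of the tail index alpha of a Pareto-like sample,
    using the k largest order statistics above the (k+1)-th largest
    value as the lower threshold."""
    xs = sorted(sample, reverse=True)
    if k < 1 or k >= len(xs):
        raise ValueError("need 1 <= k < sample size")
    x_k = xs[k]  # threshold: the (k+1)-th largest observation
    return k / sum(math.log(xs[i] / x_k) for i in range(k))

def hill_scan(sample, ks):
    """Scan the lower threshold (through k): a flat sequence of
    estimates suggests Pareto behaviour; drift suggests deviation."""
    return [(k, hill_estimator(sample, k)) for k in ks]
```

On deterministic quantiles of an exact Pareto law with tail index 2, the scan stays close to 2 at every threshold.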
Jacobson, Gloria; Rella, Chris; Farinas, Alejandro
2014-05-01
Technological advancement of instrumentation in atmospheric and other geoscience disciplines over the past decade has led to a shift from discrete sample analysis to continuous, in-situ monitoring. Standard error analysis used for discrete measurements is not sufficient to assess and compare the error contribution of noise and drift from continuous-measurement instruments, and a different statistical analysis approach should be applied. The Allan standard deviation analysis technique developed for atomic clock stability assessment by David W. Allan [1] can be effectively and gainfully applied to continuous measurement instruments. As an example, P. Werle et al. have applied these techniques to signal averaging for atmospheric monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS) [2]. This presentation will build on, and translate, prior foundational publications to provide contextual definitions and guidelines for the practical application of this analysis technique to continuous scientific measurements. The specific example of a Picarro G2401 Cavity Ringdown Spectroscopy (CRDS) analyzer used for continuous atmospheric monitoring of CO2, CH4 and CO will be used to define the basic features of the Allan deviation, assess factors affecting the analysis, and explore the translation from time series to Allan deviation plot for different types of instrument noise (white noise, linear drift, and interpolated data). In addition, the application of the Allan deviation to optimize and predict the performance of different calibration schemes will be presented. Even though this presentation uses the specific example of the Picarro G2401 CRDS analyzer for atmospheric monitoring, the objective is to present the information such that it can be successfully applied to other instrument sets and disciplines. [1] D.W. Allan, "Statistics of Atomic Frequency Standards," Proc. IEEE, vol. 54, pp. 221-230, Feb 1966. [2] P. Werle, R. Mücke, F. Slemr, "The Limits
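The non-overlapping Allan deviation itself is straightforward to compute from a regularly sampled series. A minimal sketch (assuming equally spaced samples and non-overlapping bins, i.e. Allan's original estimator rather than the overlapping variant):

```python
import numpy as np

def allan_deviation(y, taus, dt=1.0):
    """Non-overlapping Allan deviation of a regularly sampled series.

    y    : 1-D array of measurements taken every dt seconds
    taus : iterable of averaging times (seconds), each a multiple of dt
    Returns (taus_used, adev) as arrays; taus yielding fewer than two
    bins are skipped.
    """
    y = np.asarray(y, dtype=float)
    taus_used, adev = [], []
    for tau in taus:
        m = int(round(tau / dt))          # samples per averaging bin
        n_bins = len(y) // m
        if m < 1 or n_bins < 2:
            continue
        bins = y[: n_bins * m].reshape(n_bins, m).mean(axis=1)
        # Allan variance: half the mean squared difference of
        # successive bin averages
        avar = 0.5 * np.mean(np.diff(bins) ** 2)
        taus_used.append(m * dt)
        adev.append(np.sqrt(avar))
    return np.array(taus_used), np.array(adev)
```

For pure white noise the Allan deviation falls as 1/sqrt(tau), the signature used to distinguish noise from drift in log-log Allan plots; a drifting instrument instead shows the curve turning back up at long averaging times.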
Scaling Deviations for Neutrino Reactions in Asymptotically Free Field Theories
Wilczek, F. A.; Zee, A.; Treiman, S. B.
1974-11-01
Several aspects of deep inelastic neutrino scattering are discussed in the framework of asymptotically free field theories. We first consider the growth behavior of the total cross sections at large energies. Because of the deviations from strict scaling which are characteristic of such theories the growth need not be linear. However, upper and lower bounds are established which rather closely bracket a linear growth. We next consider in more detail the expected pattern of scaling deviation for the structure functions and, correspondingly, for the differential cross sections. The analysis here is based on certain speculative assumptions. The focus is on qualitative effects of scaling breakdown as they may show up in the X and y distributions. The last section of the paper deals with deviations from the Callan-Gross relation.
Minimizing Hexapod Robot Foot Deviations Using Multilayer Perceptron
Vytautas Valaitis
2015-12-01
Full Text Available Rough-terrain traversability is one of the most valuable characteristics of walking robots. Even despite their slower speeds and more complex control algorithms, walking robots have far wider usability than wheeled or tracked robots. However, efficient movement over irregular surfaces can only be achieved by eliminating all possible difficulties, which in many cases are caused by a high number of degrees of freedom, feet slippage, frictions and inertias between different robot parts or even badly developed inverse kinematics (IK). In this paper we address the hexapod robot-foot deviation problem. We compare the foot-positioning accuracy of unconfigured inverse kinematics and Multilayer Perceptron-based (MLP-based) methods via theory, computer modelling and experiments on a physical robot. Using MLP-based methods, we were able to significantly decrease deviations while reaching desired positions with the hexapod's foot. Furthermore, this method is able to compensate for deviations of the robot arising from any possible reason.
Exact Large Deviation Function in the Asymmetric Exclusion Process
Derrida, Bernard; Lebowitz, Joel L.
1998-01-01
By an extension of the Bethe ansatz method used by Gwa and Spohn, we obtain an exact expression for the large deviation function of the time-averaged current for the fully asymmetric exclusion process in a ring containing N sites and p particles. Using this expression we easily recover the exact diffusion constant obtained earlier and calculate as well some higher cumulants. The distribution of the deviation y of the average current is, in the limit N → ∞, skew and decays like exp(−A y^{5/2}) for y → +∞ and exp(−A′ |y|^{3/2}) for y → −∞. Surprisingly, the large deviation function has an expression very similar to the pressure (as a function of the density) of an ideal Bose or Fermi gas in 3D.
Mean-deviation analysis in the theory of choice.
Grechuk, Bogdan; Molyboha, Anton; Zabarankin, Michael
2012-08-01
Mean-deviation analysis, along with the existing theories of coherent risk measures and dual utility, is examined in the context of the theory of choice under uncertainty, which studies rational preference relations for random outcomes based on different sets of axioms such as transitivity, monotonicity, continuity, etc. An axiomatic foundation of the theory of coherent risk measures is obtained as a relaxation of the axioms of the dual utility theory, and a further relaxation of the axioms is shown to lead to the mean-deviation analysis. Paradoxes arising from the sets of axioms corresponding to these theories and their possible resolutions are discussed, and application of the mean-deviation analysis to optimal risk sharing and portfolio selection in the context of rational choice is considered.
Spin-geodesic deviations in the Kerr spacetime
Bini, D.; Geralico, A.
2011-11-01
The dynamics of extended spinning bodies in the Kerr spacetime is investigated in the pole-dipole particle approximation and under the assumption that the spin-curvature force only slightly deviates the particle from a geodesic path. The spin parameter is thus assumed to be very small and the back reaction on the spacetime geometry neglected. This approach naturally leads to solving the Mathisson-Papapetrou-Dixon equations linearized in the spin variables as well as in the deviation vector, with the same initial conditions as for geodesic motion. General deviations from generic geodesic motion are studied, generalizing previous results limited to the very special case of an equatorial circular geodesic as the reference path.
Spin-geodesic deviations in the Kerr spacetime
Bini, Donato
2014-01-01
The dynamics of extended spinning bodies in the Kerr spacetime is investigated in the pole-dipole particle approximation and under the assumption that the spin-curvature force only slightly deviates the particle from a geodesic path. The spin parameter is thus assumed to be very small and the back reaction on the spacetime geometry neglected. This approach naturally leads to solving the Mathisson-Papapetrou-Dixon equations linearized in the spin variables as well as in the deviation vector, with the same initial conditions as for geodesic motion. General deviations from generic geodesic motion are studied, generalizing previous results limited to the very special case of an equatorial circular geodesic as the reference path.
Large deviation theory for coin tossing and turbulence.
Chakraborty, Sagar; Saha, Arnab; Bhattacharjee, Jayanta K
2009-11-01
Large deviations play a significant role in many branches of nonequilibrium statistical physics. They are difficult to handle because their effects, though small, are not amenable to perturbation theory. Even the Gaussian model, which is the usual initial step for most perturbation theories, fails to be a starting point while discussing intermittency in fluid turbulence, where large deviations dominate. Our contention is: in the large deviation theory, the central role is played by the distribution associated with the tossing of a coin and the simple coin toss is the "Gaussian model" of problems where rare events play significant role. We illustrate this by applying it to calculate the multifractal exponents of the order structure factors in fully developed turbulence.
Deviations From Newton's Law in Supersymmetric Large Extra Dimensions
Callin, P
2006-01-01
Deviations from Newton's inverse-square law at the micron length scale are smoking-gun signals for models containing Supersymmetric Large Extra Dimensions (SLEDs), which have been proposed as approaches for resolving the Cosmological Constant Problem. Just like their non-supersymmetric counterparts, SLED models predict gravity to deviate from the inverse-square law because of the advent of new dimensions at sub-millimeter scales. However SLED models differ from their non-supersymmetric counterparts in three important ways: (i) the size of the extra dimensions is fixed by the observed value of the Dark Energy density, making it impossible to shorten the range over which new deviations from Newton's law must be seen; (ii) supersymmetry predicts there to be more fields in the extra dimensions than just gravity, implying different types of couplings to matter and the possibility of repulsive as well as attractive interactions; and (iii) the same mechanism which is purported to keep the cosmological constant natu...
Heterodyne Angle Deviation Interferometry in Vibration and Bubble Measurements
Ming-Hung Chiu
2016-07-01
Full Text Available We proposed heterodyne angle deviation interferometry (HADI) for angle deviation measurements. The phase shift of an angular sensor (which can be a metal film or a surface plasmon resonance (SPR) prism) is proportional to the deviation angle of the test beam. The method is demonstrated in bubble and speaker-vibration measurements in this paper. In the speaker-vibration measurement, the voltage from the phase channel of a lock-in amplifier includes the vibration level and frequency. In the bubble measurement, we can count the number of bubbles passing through the cross section of the laser beam and measure the bubble size from the phase pulse signal.
Deviation of eyes and head in acute cerebral stroke
Ilg UJ
2006-06-01
Full Text Available Abstract Background It is a well-known phenomenon that some patients with acute left or right hemisphere stroke show a deviation of the eyes (Prévost's sign) and head to one side. Here we investigated whether both right- and left-sided brain lesions may cause this deviation. Moreover, we studied the relationship between this phenomenon and spatial neglect. In contrast to previous studies, we determined not only the discrete presence or absence of eye deviation with the naked eye through clinical inspection, but actually measured the extent of horizontal eye-in-head and head-on-trunk deviation. In further contrast, measurements were performed early after stroke onset (1.5 days on average). Methods Eye-in-head and head-on-trunk positions were measured at the bedside in 33 patients with acute unilateral left or right cerebral stroke consecutively admitted to our stroke unit. Results Each single patient with spatial neglect and right hemisphere lesion showed a marked deviation of the eyes and the head to the ipsilesional, right side. The average spontaneous gaze position in this group was 46° right, while it was close to the sagittal body midline (0°) in the groups with acute left- or right-sided stroke but no spatial neglect as well as in healthy subjects. Conclusion A marked horizontal eye and head deviation observed ~1.5 days post-stroke is not a symptom associated with acute cerebral lesions per se, nor a general symptom of right hemisphere lesions, but rather is specific for stroke patients with spatial neglect. The evaluation of the patient's horizontal eye and head position thus could serve as a brief and easy way to help diagnose spatial neglect, in addition to the traditional paper-and-pencil tests.
Moderate Deviation Principles for Stochastic Differential Equations with Jumps
2014-01-15
random measure and an infinite-dimensional Brownian motion) was derived. As in the Brownian motion case, the representation is motivated in part by moderate deviations, deviations of a smaller order than in large deviation theory. Consider for example an independent and identically distributed (iid) sequence {Y_i}, i ≥ 1, of ...
Influence of Deviation on Optical Transmission through Aperiodic Superlattices
YIN Hai-Long; YANG Xiang-Bo; LAN Sheng; HU Wei
2007-01-01
We propose a deviation model and study the influences of the relative error and sensitivity of a machine on the transmission coefficients (TCs) of Fibonacci superlattices. It is found that for a system with fewer layers, the influence of deviation can be ignored. When superlattices become more complicated, they may be fabricated by a machine with suitable relative error and possess the designed value of TC. However, when the number of system layers exceeds some critical value, superlattices should be manufactured only by precise machines. The influence of the sensitivity is also discussed.
Small shape deviations causes complex dynamics in large electric generators
Lundström, Niklas L. P.; Grafström, Anton; Aidanpää, Jan-Olov
2014-05-01
We prove that combinations of small eccentricity, ovality and/or triangularity in the rotor and stator can produce complex whirling motions of an unbalanced rotor in large synchronous generators. We conclude which structures of shape deviations are more harmful than others, in the sense of producing complex whirling motions. For each such structure, we derive simplified equations of motion from which we conclude analytically the relation between shape deviations and mass unbalance that yields non-smooth whirling motions. Finally, we discuss the validity of our results with respect to the modeling of the unbalanced magnetic pull force.
Sample-path Large Deviations in Credit Risk
Leijdekker, Vincent; Spreij, Peter
2009-01-01
The event of large losses plays an important role in credit risk. As these large losses are typically rare, and portfolios usually consist of a large number of positions, large deviation theory is the natural tool to analyze the tail asymptotics of the probabilities involved. We first derive a sample-path large deviation principle (LDP) for the portfolio's loss process, which enables the computation of the logarithmic decay rate of the probabilities of interest. In addition, we derive exact asymptotic results for a number of specific rare-event probabilities, such as the probability of the loss process exceeding some given function.
UNUSUAL SEXUAL DEVIATIONS IN A YOUNG MAN: A CASE REPORT
John Dinesh
2014-08-01
Full Text Available Sexual deviance in humans refers to abnormal sexual expression. Though it is very difficult to say exactly what is normal or abnormal in sexual relationships, some sexual behaviors are clearly documented as abnormal in our society. Paraphilias, or perversions, are sexual stimuli or acts that deviate from normal sexual behaviors but are necessary for some individuals to experience arousal and orgasm. Here we discuss abnormal sexual deviations in a young married male who presented with feelings of guilt alone, and without any psychosocial dysfunction arising from his uncommon sexual perversions.
童瑶; 高琴; 谢和宾; 曹霞; 李晓翠; 王乐三
2012-01-01
Purpose: To investigate the suitable cutoff value of the maximum standardized uptake value (SUVmax) for diagnosing non-small cell lung cancer (NSCLC) using 18F-FDG PET/CT. Materials and Methods: 102 patients with malignant or benign pulmonary lesions, proven by bronchoscopic pathology, needle-aspiration cytology, or postoperative pathology, underwent chest or whole-body PET/CT. The suitable cutoff value of SUVmax for differentiating NSCLC from benign pulmonary lesions was determined according to three criteria: maximum Youden's index, equal weighting of the false-positive and false-negative rates, and maximum accuracy. Results: The optimal cutoff values of SUVmax were 2.8, 5.45 and 2.8, respectively, according to the maximum Youden's index, the equal false-positive and false-negative rates, and the maximum accuracy criteria. Conclusion: The optimal cutoff value of SUVmax to differentiate NSCLC from benign pulmonary lesions is 2.8.
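The maximum-Youden's-index rule used above can be computed directly from labeled measurements by scanning all candidate thresholds. A minimal sketch (the data and function name are illustrative, not the study's data):

```python
def youden_optimal_cutoff(values, labels):
    """Cutoff maximizing Youden's index J = sensitivity + specificity - 1.

    values : one measurement per case (e.g. SUVmax); higher suggests disease
    labels : 1 for diseased (e.g. NSCLC), 0 for benign
    Classification rule: predict diseased when value >= cutoff.
    Returns (best_cutoff, best_J).
    """
    pos = [v for v, l in zip(values, labels) if l == 1]
    neg = [v for v, l in zip(values, labels) if l == 0]
    best_j, best_cut = -1.0, None
    for cut in sorted(set(values)):
        sens = sum(v >= cut for v in pos) / len(pos)   # true positive rate
        spec = sum(v < cut for v in neg) / len(neg)    # true negative rate
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j
```

On perfectly separated toy data the scan finds the threshold with sensitivity and specificity both equal to 1 (J = 1); on real data the maximum J is below 1 and the chosen cutoff trades the two rates off.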
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
30 CFR 57.5039 - Maximum permissible concentration.
2010-07-01
30 Mineral Resources, 2010-07-01. § 57.5039 Maximum permissible concentration. Except as provided by standard § 57.5005, persons shall not be exposed to air containing concentrations of radon daughters exceeding 1.0 WL in active workings.
78 FR 2273 - Canned Tuna Deviating From Identity Standard; Temporary Permit for Market Testing
2013-01-10
... INFORMATION CONTACT: Loretta A. Carey, Center for Food Safety and Applied Nutrition (HFS-820), Food and Drug..., and Cosmetic Act (21 U.S.C. 341). The permit covers limited interstate marketing tests of products... panels on the labels of the test products must bear nutrition labeling in accordance with 21 CFR...
Violation of a temporal Bell inequality for single spins in solid by over 50 standard deviations
Waldherr, G; Huelga, S F; Jelezko, F; Wrachtrup, J
2011-01-01
Quantum non-locality has been experimentally investigated by testing different forms of Bell's inequality, yet a loophole-free realization has not been achieved up to now. Much less explored are temporal Bell inequalities, which are not subject to the locality assumption, but impose a constraint on the system's time correlations. In this paper, we report on the experimental violation of a temporal Bell inequality using a nitrogen-vacancy (NV) defect in diamond and provide a novel quantitative test of quantum coherence. We also present a new technique to initialize the electronic state of the NV center with high fidelity, a necessary requirement for reliable quantum information processing and/or the implementation of protocols for quantum metrology.
SNR and Standard Deviation of cGNSS-R and iGNSS-R Scatterometric Measurements.
Alonso-Arroyo, Alberto; Querol, Jorge; Lopez-Martinez, Carlos; Zavorotny, Valery U; Park, Hyuk; Pascual, Daniel; Onrubia, Raul; Camps, Adriano
2017-01-19
This work addresses the accuracy of Global Navigation Satellite Systems (GNSS)-Reflectometry (GNSS-R) scatterometric measurements considering the presence of both coherent and incoherent scattered components, for both conventional GNSS-R (cGNSS-R) and interferometric GNSS-R (iGNSS-R) techniques. The coherent component is present for some types of surfaces, and it has been neglected until now because it vanishes for the sea surface scattering case. Taking into account the presence of both scattering components, the estimated Signal-to-Noise Ratio (SNR) for both techniques is computed based on the detectability criterion, as is done in conventional GNSS applications. The non-coherent averaging operation is considered from a general point of view, taking into account that thermal noise contributions can be reduced by an extra factor of 0.88 dB when using partially overlapped or partially correlated samples. After the SNRs are derived, the received waveform's peak variability is computed, which determines the system's capability to measure geophysical parameters. These theoretical derivations are applied to the United Kingdom (UK) TechDemoSat-1 (UK TDS-1) and to the future GNSS REflectometry, Radio Occultation and Scatterometry on board the International Space Station (ISS) (GEROS-ISS) scenarios, in order to estimate the expected scatterometric performance of both missions.
7 CFR 1724.52 - Permitted deviations from RUS construction standards.
2010-01-01
... may not have the extra measure of protection needed in areas frequented by eagles and other large...) and 1 CFR part 51. Copies of this publication may be obtained from the Raptor Research Foundation, Inc... Division, 1400 Independence Avenue, SW., Washington, DC, Room 1246-S, and at the National Archives...
Rating Slam Dunks to Visualize the Mean, Median, Mode, Range, and Standard Deviation
Robinson, Nick W.; Castle Bell, Gina
2014-01-01
Among the many difficulties beleaguering the communication research methods instructor is the problem of contextualizing abstract ideas. Comprehension of variable operationalization, the utility of the measures of central tendency, measures of dispersion, and the visual distribution of data sets are difficult, since students have not handled data.…
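The descriptive statistics this exercise targets can be computed directly with Python's standard library; the ratings below are hypothetical stand-ins for the slam-dunk scores, not data from the study.

```python
import statistics

# Hypothetical slam-dunk ratings from a class exercise (scale 1-10)
ratings = [4, 7, 7, 8, 5, 9, 6, 7, 3, 8]

mean = statistics.mean(ratings)            # central tendency: arithmetic mean
median = statistics.median(ratings)        # central tendency: middle value
mode = statistics.mode(ratings)            # central tendency: most frequent value
data_range = max(ratings) - min(ratings)   # dispersion: range
stdev = statistics.stdev(ratings)          # dispersion: sample standard deviation

print(mean, median, mode, data_range, round(stdev, 2))  # → 6.4 7.0 7 6 1.9
```

Having students compute these side by side makes the contrast concrete: the three central-tendency measures summarize where the ratings cluster, while range and standard deviation summarize how spread out they are.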
Endong, Floribert Patrick Calvain
2015-01-01
This paper presents a content analysis of randomly selected print advertising copies partially written in Nigerian Pidgin English (NPE) and used for the promotion of services and products made in Nigeria. It is equally based on a focus group discussion with 15 literate and semi-literate users (readers) of these copies. It attempts to show how the writing of advertising copy is complicated by the prevalence of different and personalized spelling systems in the representation of NPE words. I...
Doppler standard deviation imaging for clinical monitoring of in vivo human skin blood flow
Zhao, Yonghua; Chen, Zhongping; Saxer, Christopher; Shen, Qimin; Xiang, Shaohua; Boer, Johannes F. de; Nelson, J. Stuart
2000-09-15
We used a novel phase-resolved optical Doppler tomographic (ODT) technique with very high flow-velocity sensitivity (10 µm/s) and high spatial resolution (10 µm) to image blood flow in port-wine stain (PWS) birthmarks in human skin. In addition to the regular ODT velocity and structural images, we use the variance of blood flow velocity to map the PWS vessels. Our device combines ODT and therapeutic systems such that PWS blood flow can be monitored in situ before and after laser treatment. To the authors' knowledge this is the first clinical application of ODT to provide a fast semiquantitative evaluation of the efficacy of PWS laser therapy in situ and in real time. (c) 2000 Optical Society of America.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S³ universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle in general, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ∼ T_BBN² / (M_pl y_e⁵), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
Dynamic deviation Volterra predistorter designed for linearizing power amplifiers
2011-01-01
Polynomial models of predistorters, combined on the "black box" principle, are considered. A Volterra model using a one-dimensional dynamic deviation is proposed. An adaptive predistorter is synthesized for linearizing the Wiener–Hammerstein model of power amplifiers. Estimates of the linearization accuracy and a comparative analysis of predistorter models are also presented.
A Positional Deviation Sensor for Training of Robots
Fredrik Dessen
1988-04-01
A device for physically guiding a robot manipulator through its task is described. It consists of inductive, contact-free positional deviation sensors. The sensor will be used in high-performance sensory control systems. The paper describes problems concerning multi-dimensional, non-linear measurement functions and the design of the servo control system.
Large Deviation for Supercritical Branching Processes with Immigration
Jing Ning LIU; Mei ZHANG
2016-01-01
In this paper, we study large deviations for a supercritical branching process with immigration controlled by a sequence of non-negative integer-valued independent identically distributed random variables, improving previous results for processes without immigration. We rely heavily on a detailed description and the limit properties of the generating function of the immigration process.
9 CFR 381.308 - Deviations in processing.
2010-01-01
... AGENCY ORGANIZATION AND TERMINOLOGY; MANDATORY MEAT AND POULTRY PRODUCTS INSPECTION AND VOLUNTARY...) must be handled according to: (1)(i) A HACCP plan for canned product that addresses hazards associated... (d) of this section. (c) (d) Procedures for handling process deviations where the HACCP plan...
Current Large Deviations for Asymmetric Exclusion Processes with Open Boundaries
Bodineau, T.; Derrida, B.
2006-04-01
We study the large deviation functional of the current for the Weakly Asymmetric Simple Exclusion Process in contact with two reservoirs. We compare this functional in the large drift limit to the one of the Totally Asymmetric Simple Exclusion Process, in particular to the Jensen-Varadhan functional. Conjectures for generalizing the Jensen-Varadhan functional to open systems are also stated.
48 CFR 2901.403 - Individual deviations from the FAR.
2010-10-01
... the FAR. 2901.403 Section 2901.403 Federal Acquisition Regulations System DEPARTMENT OF LABOR GENERAL DEPARTMENT OF LABOR ACQUISITION REGULATION SYSTEM Deviations From the FAR and DOLAR 2901.403 Individual... provisions (see FAR 1.403) or DOLAR provisions, which affect only one contracting action, unless FAR...
Freidlin-Wentzell's Large Deviations for Stochastic Evolution Equations
Ren, Jiagang; Zhang, Xicheng
2008-01-01
We prove a Freidlin-Wentzell large deviation principle for general stochastic evolution equations with small multiplicative perturbation noises. In particular, our general result can be used to deal with a large class of quasilinear stochastic partial differential equations, such as stochastic porous medium equations and stochastic reaction-diffusion equations with a zero-order term of polynomial growth and a $p$-Laplacian second-order term.
Process Measurement Deviation Analysis for Flow Rate due to Miscalibration
Oh, Eunsuk; Kim, Byung Rae; Jeong, Seog Hwan; Choi, Ji Hye; Shin, Yong Chul; Yun, Jae Hee [KEPCO Engineering and Construction Co., Deajeon (Korea, Republic of)
2016-10-15
An analysis was initiated to identify the root cause; the exemption of the high static line pressure correction for differential pressure (DP) transmitters was identified as one major deviation factor, and the miscalibrated DP transmitter range as another. This paper presents considerations to be incorporated in the calibration of process flow measurement instrumentation. The analysis identified that the DP flow transmitter electrical output decreased by 3%, after which the flow rate indication decreased by 1.9% as a result of the high static line pressure correction exemption and the measurement range miscalibration. After re-calibration, the flow rate indication increased by 1.9%, consistent with the analysis result. The paper outlines the calibration procedure for the Rosemount DP flow transmitter and analyzes three possible cases of measurement deviation, including the error and its cause. In general, a DP transmitter must be calibrated over the precise process input range according to the calibration procedure provided for that specific transmitter. In particular, for a DP transmitter installed in a high static line pressure service, it is important to correct for the high static line pressure effect to avoid the inherent systematic error of the Rosemount DP transmitter; failure to apply the correction may lead to an indication that deviates from the actual value.
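As a rough illustration of why a DP error does not translate one-to-one into a flow-rate error, the sketch below applies the idealized square-root law Q ∝ √DP used in orifice-type flow measurement. It deliberately ignores calibration range and static-pressure correction effects, so it does not reproduce the study's exact 1.9% figure; the function name is our own.

```python
import math

def flow_deviation(dp_relative_error):
    """Relative flow-rate error implied by a relative DP error, using the
    simplified square-root law Q ∝ sqrt(DP) of orifice-type DP flow
    measurement (no range or static-pressure effects modeled)."""
    return math.sqrt(1.0 + dp_relative_error) - 1.0

# A DP reading 3% low gives roughly a 1.5% low flow indication
# under this idealized model.
print(round(flow_deviation(-0.03) * 100, 2))  # → -1.51
```

The halving of the relative error follows from the first-order expansion √(1+x) ≈ 1 + x/2; the larger deviation reported in the study reflects the additional range-miscalibration and static-pressure terms this sketch omits.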
Vertex deviation maps to bracket the Milky Way resonant radius
Roca-Fàbrega, S.; Antoja, T.; Figueras, F.; Valenzuela, O.; Romero-Gómez, M.; Pichardo, B.
2015-05-01
We map the kinematics of stars in simulated galaxy disks with spiral arms using the vertex deviation of the velocity ellipsoid (l_v). We use test particle simulations and, for the first time, fully self-consistent high-resolution N-body models. We compare our maps with the analytical predictions of the Tight Winding Approximation model. For all barred models, the spiral arms rotate nearly as a rigid body, and the vertex deviation values correlate with the positions of the density peaks, bounded by overdense and underdense regions. In such cases, the sign of the vertex deviation changes from negative to positive when crossing the spiral arms in the direction of disk rotation, in regions where the arms lie between corotation (CR) and the Outer Lindblad Resonance (OLR); by contrast, when the arm sections are inside CR or outside the OLR, l_v changes from positive to negative. We propose that measurements of the vertex deviation pattern can be used to trace the positions of the main resonances of the spiral arms, and that this technique might exploit future data from the Gaia and APOGEE surveys. For unbarred N-body simulations with spiral arms corotating with the disk material at all radii, our analysis suggests no clear correlation between l_v and density structures.
Optical vibration and deviation measurement of rotating machine parts
Anonymous
2008-01-01
It is of interest to obtain appropriate information about the dynamic behaviour of rotating machinery parts in service. This paper presents an approach for optical vibration and deviation measurement of such parts. The essence of the method is an image derotator combined with a high-speed camera or a laser Doppler vibrometer (LDV).
The one-shot deviation principle for sequential rationality
Hendon, Ebbe; Whitta-Jacobsen, Hans Jørgen; Sloth, Birgitte
1996-01-01
We present a decentralization result which is useful for practical and theoretical work with sequential equilibrium, perfect Bayesian equilibrium, and related equilibrium concepts for extensive form games. A weak consistency condition is sufficient to obtain an analogy to the well-known One-Stage-Deviation Principle for subgame perfect equilibrium.
An experimental study of credible deviations and ACDC
de Groot Ruiz, A.; Offerman, T.; Onderstal, S.
2011-01-01
We test the Average Credible Deviation Criterion (ACDC), a stability measure and refinement for cheap talk equilibria introduced in De Groot Ruiz, Offerman & Onderstal (2011b). ACDC has been shown to be predictive under general conditions and to organize data well in previous experiments meant to te
International asset pricing under segmentation and PPP deviations
Chaieb, I.; Errunza, V.
2007-01-01
We analyze the impact of both purchasing power parity (PPP) deviations and market segmentation on asset pricing and investor's portfolio holdings. The freely traded securities command a world market risk premium and an inflation risk premium. The securities that can be held by only a subset of
Dispersion in Rectangular Networks: Effective Diffusivity and Large-Deviation Rate Function
Tzella, Alexandra; Vanneste, Jacques
2016-09-01
The dispersion of a diffusive scalar in a fluid flowing through a network has many applications, including to biological flows, porous media, water supply, and urban pollution. Motivated by this, we develop a large-deviation theory that predicts the evolution of the concentration of a scalar released in a rectangular network in the limit of large time t ≫ 1. This theory provides an approximation for the concentration that remains valid for large distances from the center of mass, specifically for distances up to O(t) and thus much beyond the O(t^{1/2}) range where a standard Gaussian approximation holds. A byproduct of the approach is a closed-form expression for the effective diffusivity tensor that governs this Gaussian approximation. Monte Carlo simulations of Brownian particles confirm the large-deviation results and demonstrate their effectiveness in describing the scalar distribution when t is only moderately large.
Confusing Sterile Neutrinos with Deviation from Tribimaximal Mixing at Neutrino Telescopes
Awasthi, Ram Lal
2007-01-01
We expound the impact of extra sterile species on ultra-high-energy neutrino fluxes in neutrino telescopes. We use three types of well-known flux ratios and compare their values in the presence of sterile neutrinos with those predicted by deviation from the tribimaximal mixing scheme. We show that in neutrino telescopes it is easy to confuse the signature of sterile neutrinos with that of a deviation from tribimaximal mixing. We also show that if the measured flux ratios acquire a value well outside the range predicted by the standard scenario with three active neutrinos only, it might be possible to infer the presence of extra sterile neutrinos by observing ultra-high-energy neutrinos in the upcoming neutrino telescopes.
MUSiC - A Generic Search for Deviations from Monte Carlo Predictions in CMS
Hof, Carsten
2009-05-01
We present a model independent analysis approach, systematically scanning the data for deviations from the Standard Model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of the event generators. Furthermore, due to the minimal theoretical bias this approach is sensitive to a variety of models of new physics, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. We outline the importance of systematic uncertainties, which are taken into account rigorously within the algorithm. Possible detector effects and generator issues, as well as models involving supersymmetry and new heavy gauge bosons have been used as an input to the search algorithm.
Comparison of setup deviations for two thermoplastic immobilization masks in glottis cancer
Jung, Jae Hong [Dept. of Biomedical Engineering, College of Medicine, The Catholic University, Seoul (Korea, Republic of)
2017-03-15
The purpose of this study was to compare the patient setup deviations of two different types of thermoplastic immobilization mask for glottis cancer in intensity-modulated radiation therapy (IMRT). A total of 16 glottis cancer cases were divided into two groups based on the applied mask type: a standard and an alternative group. The mean error (M), three-dimensional setup displacement error (3D-error), systematic error (Σ), and random error (σ) were calculated for each group, and the setup margin (mm) was also analyzed. The 3D-errors were 5.2 ± 1.3 mm and 5.9 ± 0.7 mm for the standard and alternative groups, respectively; the alternative group was 13.6% higher than the standard group. The systematic errors in the roll angle and the x, y, and z directions were 0.8°, 1.7 mm, 1.0 mm, and 1.5 mm in the standard group and 0.8°, 1.1 mm, 1.8 mm, and 2.0 mm in the alternative group. The random errors in the x, y, and z directions were 10.9%, 1.7%, and 23.1% lower in the alternative group than in the standard group, although the absolute rotational angle (roll) was 12.4% higher in the alternative group. The calculated setup margin in the x direction was 31.8% lower in the alternative group than in the standard group; in contrast, the margins in the y and z directions were 52.6% and 21.6% higher. Although a modified thermoplastic immobilization mask can affect patient setup deviation in terms of these numerical results, immobilization masks need further study from various clinical points of view.
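The margin calculation mentioned above can be sketched as follows, assuming the van Herk recipe (margin = 2.5Σ + 0.7σ), which is widely used in radiotherapy but not explicitly named by the study; the numerical inputs and function names are illustrative, not the study's data.

```python
import math

def setup_margin(systematic_mm, random_mm):
    """Per-axis CTV-to-PTV setup margin via the van Herk recipe
    (margin = 2.5*Sigma + 0.7*sigma); an assumed, commonly used formula."""
    return 2.5 * systematic_mm + 0.7 * random_mm

def three_d_error(dx, dy, dz):
    """Vector magnitude of one session's translational setup error (mm)."""
    return math.sqrt(dx**2 + dy**2 + dz**2)

# Illustrative values only (not taken from the study above)
print(round(setup_margin(1.7, 1.2), 2))       # Sigma = 1.7 mm, sigma = 1.2 mm
print(round(three_d_error(1.7, 1.0, 1.5), 2))
```

The recipe weights systematic error far more heavily than random error, which is why the groups' margins can diverge per axis even when their random errors are similar.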
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, either experimentally, computationally, or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
Lin, Jonathan K; Wheatley, Francis C; Handwerker, Jason; Harris, Norman J; Wong, Brian J F
2014-01-01
IMPORTANCE: Accurately characterizing nasal septal deviations is valuable for surgical planning, classifying nasal septal deviations, providing a means to accurately perform outcomes research, and understanding the causes of chronic conditions. OBJECTIVE: To determine and quantify regions of septal deformity that can be used to develop a comprehensive classification system. DESIGN, SETTING, AND PARTICIPANTS: A retrospective case series study was conducted at an academic tertiary care hospital. Sixty-four participants were selected based on a convenience sample of computed tomography (CT) scans of the paranasal sinuses and midface available between June 29, 2011, and August 16, 2012. Exclusion criteria consisted of incomplete or inadequate CT series. The most recent CT scans were chosen for analysis regardless of the indication for imaging. Digital Imaging and Communications in Medicine (DICOM) format data were obtained and analyzed using MATLAB and OsiriX. The line-to-curve ratio, deviation area, and root mean square (RMS) values of the septal contour versus an ideal straight-septum fit were calculated. Analysis was performed to detect significant differences (P < .05) using the 3 measures. MAIN OUTCOMES AND MEASURES: Quantitative analysis of nasal septal deviation. RESULTS: The population consisted of 50 male and 14 female patients aged 3 to 83 years (mean, 42 years). Mean line-to-curve ratios, areas, and RMS values were highest in contours that intersected the perpendicular plate-vomer junction, with a mean line-to-curve ratio of 1.04 and a mean deviated area of 627.16 arbitrary units (P = .02). Maximal deviation areas were also seen midway from the perpendicular plate-vomer junction to the nasal spine, with a mean area of 577.31 arbitrary units (P = .01). The RMS values were significantly elevated along the crista galli and perpendicular plate-vomer junction (P < .05). CONCLUSIONS AND RELEVANCE: Maximum septal deviation is seen at the perpendicular plate
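The RMS measure, the deviation of a sampled septal contour from an ideal straight-line fit, can be sketched as below; the contour samples and the helper function are hypothetical constructions, not the study's pipeline.

```python
def rms_deviation(xs, ys):
    """RMS of a sampled contour's deviation from its least-squares
    straight-line fit, analogous to the septal RMS measure above
    (contour data here are hypothetical)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx                      # least-squares slope
    intercept = my - slope * mx            # least-squares intercept
    residuals = [y - (slope * x + intercept) for x, y in zip(xs, ys)]
    return (sum(r * r for r in residuals) / n) ** 0.5

# A perfectly straight septum has zero RMS deviation:
print(rms_deviation([0, 1, 2, 3], [0, 1, 2, 3]))  # → 0.0
```

A bowed contour, e.g. `rms_deviation([0, 1, 2], [0, 1, 0])`, returns a positive value, which is the sense in which larger RMS flags a more deformed septal region.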
Anonymous
2011-01-01
Dairy quality standards trigger further controversy: China's dairy industry is once again being scrutinized as suspicions abound that major dairy enterprises played a hand in manipulating and lowering quality standards to save costs. The new standards released in March 2010 set the maximum safety limit for bacteria in raw milk at 2 million cells per milliliter, four times
Penalized maximum likelihood estimation and variable selection in geostatistics
Chu, Tingjin; Wang, Haonan; 10.1214/11-AOS919
2012-01-01
We consider the problem of selecting covariates in spatial linear models with Gaussian process errors. Penalized maximum likelihood estimation (PMLE), which enables simultaneous variable selection and parameter estimation, is developed and, for ease of computation, PMLE is approximated by one-step sparse estimation (OSE). To further improve computational efficiency, particularly with large sample sizes, we propose penalized maximum covariance-tapered likelihood estimation (PMLE$_{\mathrm{T}}$) and its one-step sparse estimation (OSE$_{\mathrm{T}}$). General forms of penalty functions, with an emphasis on smoothly clipped absolute deviation, are used for penalized maximum likelihood. Theoretical properties of PMLE and OSE, as well as their approximations PMLE$_{\mathrm{T}}$ and OSE$_{\mathrm{T}}$ using covariance tapering, are derived, including consistency, sparsity, asymptotic normality and the oracle properties. For covariance tapering, a by-product of our theoretical results is consistency and asymptotic normal...
Geodesics and Geodesic Deviation for Impulsive Gravitational Waves
Steinbauer, R
1998-01-01
The geometry of impulsive pp-waves is explored via the analysis of the geodesic and geodesic deviation equations using the distributional form of the metric. The geodesic equation involves formally ill-defined products of distributions due to the nonlinearity of the equations and the presence of the Dirac delta distribution. Thus, strictly speaking, it cannot be treated within Schwartz's linear theory of distributions. To cope with this problem we proceed by first regularizing the delta singularity, then solving the regularized equation within classical smooth functions and, finally, obtaining a distributional, regularization-independent limit as solution to the original problem. We also treat the Jacobi equation which, despite being linear in the deviation vector field, involves even more delicate singular expressions, like the "square" of the delta distribution. Again the same regularization procedure provides us with a perfectly well behaved smooth regularization and a regularization-independent distributi...
Large Deviations for the Macroscopic Motion of an Interface
Birmpa, P.; Dirr, N.; Tsagkarogiannis, D.
2017-03-01
We study the most probable way an interface moves on a macroscopic scale from an initial to a final position within a fixed time in the context of large deviations for a stochastic microscopic lattice system of Ising spins with Kac interaction evolving in time according to Glauber (non-conservative) dynamics. Such interfaces separate two stable phases of a ferromagnetic system and in the macroscopic scale are represented by sharp transitions. We derive quantitative estimates for the upper and the lower bound of the cost functional that penalizes all possible deviations and obtain explicit error terms which are valid also in the macroscopic scale. Furthermore, using the result of a companion paper about the minimizers of this cost functional for the macroscopic motion of the interface in a fixed time, we prove that the probability of such events can concentrate on nucleations should the transition happen fast enough.
Magnetic Elements at Finite Temperature and Large Deviation Theory
Kohn, R. V.; Reznikoff, M. G.; vanden-Eijnden, E.
2005-08-01
We investigate thermally activated phenomena in micromagnetics using large deviation theory and concepts from stochastic resonance. We give a natural mathematical definition of finite-temperature astroids, finite-temperature hysteresis loops, etc. Generically, these objects emerge when the (generalized) Arrhenius timescale governing the thermally activated barrier crossing event of magnetic switching matches the timescale at which the magnetic element is pulsed or ramped by an external field; in the special and physically relevant case of multiple-pulse experiments, on the other hand, short-time switching can lead to non-Arrhenius behavior. We show how large deviation theory can be used to explain some properties of the astroids, like their shrinking and sharpening as the number of applied pulses is increased. We also investigate the influence of the dynamics, in particular the relative importance of the gyromagnetic and the damping terms. Finally, we discuss some issues and open questions regarding spatially nonuniform magnetization.
Distributed Detection over Time Varying Networks: Large Deviations Analysis
Bajovic, Dragana; Xavier, Joao; Sinopoli, Bruno; Moura, Jose M F
2010-01-01
We apply large deviations theory to study the asymptotic performance of running consensus distributed detection in sensor networks. Running consensus is a recently proposed stochastic-approximation-type algorithm. At each time step k, the state at each sensor is updated by a local averaging of the sensor's own state and the states of its neighbors (consensus) and by accounting for the new observations (innovation). We assume Gaussian, spatially correlated observations. We allow the underlying network to be time varying, provided that the graph that collects the union of links that are online at least once over a finite time window is connected. This paper shows through large deviations that, under the stated assumptions on the network connectivity and sensors' observations, running consensus detection asymptotically approaches the optimal centralized detection in performance. That is, the Bayes probability of detection error (with the running consensus detector) decays exponentially to zero as k goes to infinity at...
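The consensus-plus-innovation update described above can be sketched in a few lines. The ring network, uniform averaging weights, and 1/(k+1) innovation gain below are illustrative assumptions, not the paper's exact algorithm, and the observations are i.i.d. rather than spatially correlated.

```python
import random

def running_consensus_step(states, neighbors, observations, step):
    """One consensus+innovation update: each sensor averages its own state
    with its neighbors' states (consensus), then moves toward its newest
    observation with a decaying gain (innovation). Weights and gain
    schedule are illustrative assumptions."""
    new_states = []
    for i, x in enumerate(states):
        group = [x] + [states[j] for j in neighbors[i]]
        consensus = sum(group) / len(group)   # local averaging
        gain = 1.0 / (step + 1)               # decaying innovation gain
        new_states.append(consensus + gain * (observations[i] - consensus))
    return new_states

# Ring of 4 sensors observing a constant signal of 1.0 in Gaussian noise
random.seed(0)
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
states = [0.0] * 4
for k in range(200):
    observations = [1.0 + random.gauss(0, 0.5) for _ in range(4)]
    states = running_consensus_step(states, neighbors, observations, k)
print([round(s, 2) for s in states])  # all four states cluster near 1.0
```

The decaying gain makes each state behave like a running average of the network's observations, which is the mechanism behind the asymptotic approach to centralized performance.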
Contiguous Uniform Deviation for Multiple Linear Regression in Pattern Recognition
Andriana, A. S.; Prihatmanto, D.; Hidaya, E. M. I.; Supriana, I.; Machbub, C.
2017-01-01
Understanding images by recognizing their objects is still a challenging task. Face-element detection has been developed by researchers but does not yet provide enough information (it has low information resolution) for recognizing objects. Available face recognition methods still make classification errors and need a huge number of examples, which may still be incomplete. Another approach, still rare in image understanding, uses pattern structures or syntactic grammars describing detailed shape features. Image pixel values are also processed as signal patterns, which are approximated by mathematical curve fitting. This paper adds a contiguous uniform deviation method to a curve-fitting algorithm to increase applicability in image recognition systems related to object movement. The combination of multiple linear regression and the contiguous uniform deviation method is applied to the function of image pixel values, and the results show higher resolution (more information) in the description of visual object detail during object movement.
Observable Deviations from Homogeneity in an Inhomogeneous Universe
Giblin, John T; Starkman, Glenn D
2016-01-01
How does inhomogeneity affect our interpretation of cosmological observations? It has long been wondered to what extent the observable properties of an inhomogeneous universe differ from those of a corresponding Friedmann-Lemaître-Robertson-Walker (FLRW) model, and how the inhomogeneities affect that correspondence. Here, we use numerical relativity to study the behavior of light beams traversing an inhomogeneous universe and construct the resulting Hubble diagrams. The universe that emerges exhibits an average FLRW behavior, but inhomogeneous structures contribute to deviations in observables across the observer's sky. We also investigate the relationship between angular diameter distance and the angular extent of a source, finding deviations that grow with source redshift. These departures from FLRW are important path-dependent effects with implications for using real observables in an inhomogeneous universe such as our own.
Large Deviation Principle for Benedicks-Carleson Quadratic Maps
Chung, Yong Moo; Takahasi, Hiroki
2012-11-01
Since the pioneering works of Jakobson and Benedicks & Carleson and others, it has been known that a positive measure set of quadratic maps admit invariant probability measures absolutely continuous with respect to Lebesgue. These measures allow one to statistically predict the asymptotic fate of Lebesgue almost every initial condition. Estimating fluctuations of empirical distributions before they settle to equilibrium requires a fairly good control over large parts of the phase space. We use the sub-exponential slow recurrence condition of Benedicks & Carleson to build induced Markov maps of arbitrarily small scale and associated towers, to which the absolutely continuous measures can be lifted. These various lifts together enable us to obtain a control of recurrence that is sufficient to establish a level 2 large deviation principle, for the absolutely continuous measures. This result encompasses dynamics far from equilibrium, and thus significantly extends presently known local large deviations results for quadratic maps.
Large deviations for stochastic flows and their applications
Gao, Fuqing
2001-01-01
[1] Yoshida, N., A large deviation principle for (r,p)-capacities on the Wiener space, Probab. Theory Relat. Fields, 1993, 94: 473-488.
[2] Gao, F. Q., Large deviations of (r,p)-capacities for diffusion processes, Advances in Math. (in Chinese), 1996, 25: 500-509.
[3] Millet, A., Nualart, D., Sanz, M., Large deviations for a class of anticipating stochastic differential equations, Ann. Probab., 1993, 20: 1902-1931.
[4] Millet, A., Nualart, D., Sanz, M., Composition of large deviation principles and applications, in Stochastic Analysis (ed. Mayer, E.), San Diego: Academic Press, 1991, 383-395.
[5] Ocone, D., Pardoux, E., A generalized Itô-Ventzell formula. Application to a class of anticipating stochastic differential equations, Ann. Inst. H. Poincaré, Sect. B, 1989, 25: 39-71.
[6] Malliavin, P., Nualart, D., Quasi sure analysis of stochastic flows and Banach space valued smooth functionals on the Wiener space, J. Funct. Anal., 1993, 112: 287-317.
[7] Huang, Z., Ren, J., Quasi sure stochastic flows, Stoch. Stoch. Rep., 1990, 33: 149-157.
[8] Gao, F. Q., Large deviations for diffusion processes in Hölder norm, Advances in Math. (in Chinese), 1997, 26: 147-158.
[9] Ben Arous, G., Ledoux, M., Grandes déviations de Freidlin-Wentzell en norme hölderienne, Lect. Notes in Math., 1994, 1583.
[10] Baldi, P., Sanz, M., Une remarque sur la théorie des grandes déviations, Lect. Notes Math., 1991, 1485: 345-348.
[11] Airault, H., Malliavin, P., Intégration géométrique sur l'espace de Wiener, Bull. Sci. Math., 1988, 112: 3-52.
[12] Ikeda, N., Watanabe, S., Stochastic Differential Equations and Diffusion Processes, 2nd ed., Amsterdam-Tokyo: North-Holland/Kodansha, 1988.
[13] Malliavin, P., Stochastic Analysis, Grundlehren der Mathematischen Wissenschaften 313, Berlin: Springer-Verlag, 1997.
[14] Brzezniak, Z., Elworthy, K. D., Stochastic flows of diffeomorphisms, in Stochastic Analysis and Applications (eds. Davies, I. M., Truman
Anderson, Carryn M., E-mail: carryn-anderson@uiowa.edu [Department of Radiation Oncology, University of Iowa, Iowa City, Iowa (United States); Chang, Tangel [Department of Radiation Oncology, University of Iowa, Iowa City, Iowa (United States); Graham, Michael M. [Department of Nuclear Medicine, University of Iowa, Iowa City, Iowa (United States); Marquardt, Michael D. [Department of Radiation Oncology, University of Iowa, Iowa City, Iowa (United States); Button, Anna; Smith, Brian J. [Department of Biostatistics, University of Iowa, Iowa City, Iowa (United States); Menda, Yusuf [Department of Nuclear Medicine, University of Iowa, Iowa City, Iowa (United States); Sun, Wenqing [Department of Radiation Oncology, University of Iowa, Iowa City, Iowa (United States); Pagedar, Nitin A. [Department of Otolaryngology—Head and Neck Surgery, University of Iowa, Iowa City, Iowa (United States); Buatti, John M. [Department of Radiation Oncology, University of Iowa, Iowa City, Iowa (United States)
2015-03-01
Purpose: To evaluate dynamic [{sup 18}F]-fluorodeoxyglucose (FDG) uptake methodology as a post–radiation therapy (RT) response assessment tool, potentially enabling accurate tumor and therapy-related inflammation differentiation, improving the posttherapy value of FDG–positron emission tomography/computed tomography (FDG-PET/CT). Methods and Materials: We prospectively enrolled head-and-neck squamous cell carcinoma patients who completed RT, with scheduled 3-month post-RT FDG-PET/CT. Patients underwent our standard whole-body PET/CT scan at 90 minutes, with the addition of head-and-neck PET/CT scans at 60 and 120 minutes. Maximum standardized uptake values (SUV{sub max}) of regions of interest were measured at 60, 90, and 120 minutes. The SUV{sub max} slope between 60 and 120 minutes and change of SUV{sub max} slope before and after 90 minutes were calculated. Data were analyzed by primary site and nodal site disease status using the Cox regression model and Wilcoxon rank sum test. Outcomes were based on pathologic and clinical follow-up. Results: A total of 84 patients were enrolled, with 79 primary and 43 nodal evaluable sites. Twenty-eight sites were interpreted as positive or equivocal (18 primary, 8 nodal, 2 distant) on 3-month 90-minute FDG-PET/CT. Median follow-up was 13.3 months. All measured SUV endpoints predicted recurrence. Change of SUV{sub max} slope after 90 minutes more accurately identified nonrecurrence in positive or equivocal sites than our current standard of SUV{sub max} ≥2.5 (P=.02). Conclusions: The positive predictive value of post-RT FDG-PET/CT may significantly improve using novel second derivative analysis of dynamic triphasic FDG-PET/CT SUV{sub max} slope, accurately distinguishing tumor from inflammation on positive and equivocal scans.
Moderate deviations for the eigenvalue counting function of Wigner matrices
Doering, Hanna
2011-01-01
We establish a moderate deviation principle (MDP) for the number of eigenvalues of a Wigner matrix in an interval. The proof relies on fine asymptotics of the variance of the eigenvalue counting function of GUE matrices due to Gustavsson. The extension to large families of Wigner matrices is based on the Tao and Vu Four Moment Theorem and applies localization results by Erdős, Yau and Yin. Moreover, we investigate families of covariance matrices as well.
Maritime Group Motion Analysis: Representation, Learning, Recognition, and Deviation Detection
2017-02-01
Allen Waxman, MultiSensor Scientific, LLC. (This work was performed while the authors were employed by, or were sub-contractors of, Intelligent Software Solutions, Inc., of Colorado Springs, CO, USA, funded under contract…) Abstract: This paper introduces new concepts and methods in the analysis of group motions over extended
Osmosis: A Cause of Apparent Deviations from Darcy's Law.
Olsen, Harold W.
1985-01-01
This review of the existing evidence shows that osmosis causes intercepts in flow rate versus hydraulic gradient relationships that are consistent with the observed deviations from Darcy's law at very low gradients. Moreover, it is suggested that a natural cause of osmosis in laboratory samples could be chemical reactions such as those involved in aging effects. This hypothesis is analogous to the previously proposed occurrence of electroosmosis in nature generated by geochemical weathering reactions.
Large Deviation Functional of the Weakly Asymmetric Exclusion Process
Enaud, C.; Derrida, B.
2004-02-01
We obtain the large deviation functional of a density profile for the asymmetric exclusion process of L sites with open boundary conditions when the asymmetry scales like 1/L. We recover as limiting cases the expressions derived recently for the symmetric (SSEP) and the asymmetric (ASEP) cases. In the ASEP limit, the nonlinear differential equation one needs to solve can be analysed by a method resembling the WKB method.
Probing the deviation from maximal mixing of atmospheric neutrinos
Choubey, S; Choubey, Sandhya; Roy, Probir
2006-01-01
Pioneering atmospheric muon neutrino experiments have demonstrated the near-maximal magnitude of the flavor mixing angle $\theta_{23}$. But the precise value of the deviation $D \equiv 1/2 - \sin^2 \theta_{23}$ from maximality (if nonzero) needs to be known, being of great interest -- especially to builders of neutrino mass and mixing models. We quantitatively investigate in a three generation framework the feasibility of determining $D$ in a statistically significant manner from studies of the atmospheric…
Predictive visual tracking based on least absolute deviation estimation
Rongtai Cai; Yanjie Wang
2008-01-01
To cope with the occlusion and intersection between targets and the environment, location prediction is employed in the visual tracking system. Target trace is fitted by sliding subsection polynomials based on least absolute deviation (LAD) estimation, and the future location of target is predicted with the fitted trace. Experiment results show that the proposed location prediction algorithm based on LAD estimation has significant robustness advantages over least square (LS) estimation, and it is more effective than LS-based methods in visual tracking.
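As a minimal sketch of the robustness the abstract describes, the following fits a polynomial trace by least absolute deviations and extrapolates the next location. The solver here is iteratively reweighted least squares (a standard LAD technique, not necessarily the paper's weighted-median method), and all data are synthetic:

```python
import numpy as np

def lad_polyfit(t, y, deg=2, iters=50, eps=1e-8):
    """Fit a polynomial by least absolute deviations (LAD) via
    iteratively reweighted least squares (IRLS)."""
    V = np.vander(t, deg + 1)              # design matrix, highest power first
    coef = np.linalg.lstsq(V, y, rcond=None)[0]
    for _ in range(iters):
        r = np.abs(y - V @ coef)
        w = 1.0 / np.maximum(r, eps)       # L1 reweighting
        sw = np.sqrt(w)
        coef = np.linalg.lstsq(V * sw[:, None], y * sw, rcond=None)[0]
    return coef

# synthetic target trace with one gross outlier (e.g. an occlusion glitch)
t = np.arange(10, dtype=float)
y = 0.5 * t**2 + 3.0 * t + 1.0
y[4] += 50.0                               # the outlier

c_lad = lad_polyfit(t, y, deg=2)           # robust fit
c_ls = np.polyfit(t, y, 2)                 # ordinary least squares, for contrast

t_next = 10.0
pred_lad = np.polyval(c_lad, t_next)       # predicted next location
true_next = 0.5 * t_next**2 + 3.0 * t_next + 1.0
```

The LAD fit essentially ignores the single outlier and extrapolates close to the true trace, while the least-squares fit is pulled noticeably off, which is the robustness advantage the abstract reports.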
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$ represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both for cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Abolishing the maximum tension principle
Mariusz P. Da̧browski
2015-09-01
We find a series of example theories for which the relativistic limit of maximum tension F_max = c^4/4G represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both for cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Detecting deviations from pure EOF during CE separations.
O'Grady, John F; Noonan, Kathryn Y; McDonnell, Patrick; Mancuso, Aaron J; Frederick, Kimberley A
2007-07-01
CE separations are known for their high separation efficiencies. In systems with EOF, the high efficiencies benefit from the flat, plug profile that is characteristic of EOF. When a velocity gradient is present, such as in separations with nonuniform buffer ionic strength, surface adsorption, or differences in the height of the ends of the capillary, a parabolic flow component is introduced. This deviation from pure EOF yields increased peak dispersion and a subsequent decrease in separation performance. This work details a rapid method for detecting deviations from ideal plug flow during the course of a separation using the radially averaged flow profile of a photobleached fluorophore added to the BGE. By comparing the ratio of two different data analysis procedures, deviations from ideal plug flow can be detected. This method allows rapid measurement of flow character and does not interfere with the concurrent separation. We demonstrate easy detection of the onset of hydrodynamic flow induced by both gravity siphoning and an ionic strength buffer discontinuity. A brief analysis of the radially averaged peak shapes is also presented.
Geometric deviation modeling by kinematic matrix based on Lagrangian coordinate
Liu, Weidong; Hu, Yueming; Liu, Yu; Dai, Wanyi
2015-09-01
Typical representations of dimensional and geometric accuracy are limited to describing each dimension or geometric deviation in isolation, based on geometry-variation thinking; the interaction between the geometric variation and the posture variation of a multi-rigid-body system is not included. In this paper, a kinematic matrix model based on Lagrangian coordinates is introduced, with the purpose of providing a unified model for geometric variation and posture variation and their interactive, integrated analysis. A kinematic model with a joint, a local base and a movable base is built. The ideal feature of the functional geometry is treated as the base body and the fitting feature as the adjacent movable body; the local base of the kinematic model is fixed onto the ideal geometry, and the movable base is fixed onto the fitting geometry. The geometric deviation is then treated as the relative translation or rotation between the movable base and the local base, and is expressed in Lagrangian coordinates. Moreover, kinematic matrices based on Lagrangian coordinates are constructed for the different types of geometric tolerance zones, and the total freedom of each kinematic model is discussed. Finally, the Lagrangian coordinate library and the kinematic matrix library for geometric deviation modeling are illustrated, and an example of a block-and-piston fit is introduced. Dimension and geometric tolerances of the shaft and hole fitting features are constructed by the kinematic matrix and Lagrangian coordinates, and the results indicate that the proposed kinematic matrix is capable and robust in dimension and geometric tolerance modeling.
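The deviation-as-transform idea can be sketched with a small-angle homogeneous matrix between the local base (on the ideal geometry) and the movable base (on the fitting geometry). The parameter names (du, dv, dw for translations; da, db, dc for rotations) and all numbers below are hypothetical, not values from the paper:

```python
import numpy as np

def deviation_matrix(du, dv, dw, da, db, dc):
    """4x4 homogeneous transform for a small geometric deviation,
    using the small-angle approximation for the rotations."""
    return np.array([
        [1.0, -dc,  db,  du],
        [ dc, 1.0, -da,  dv],
        [-db,  da, 1.0,  dw],
        [0.0, 0.0, 0.0, 1.0],
    ])

# a point on the ideal feature (homogeneous coordinates), mapped through
# an assumed deviation: 0.01 translation in x, 0.02 in z, 0.001 rad about y
p_ideal = np.array([10.0, 0.0, 0.0, 1.0])
p_actual = deviation_matrix(0.01, 0.0, 0.02, 0.0, 0.001, 0.0) @ p_ideal
```

A tolerance zone then becomes a bound on the admissible ranges of these six Lagrangian coordinates, and fits are analyzed by composing such matrices along the kinematic chain.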
PROBABILISTIC MEASURES FOR INTERESTINGNESS OF DEVIATIONS – A SURVEY
Adnan Masood
2013-03-01
Association rule mining has long been plagued by the problem of finding meaningful, actionable knowledge in the large set of rules. In this age of data deluge with modern computing capabilities, we gather, distribute, and store information in vast amounts from diverse data sources. With such data profusion, the core knowledge discovery problem becomes efficient data retrieval rather than simply finding heaps of information. The most common approach is to employ measures of rule interestingness to filter the results of the association rule generation process. However, a study of the literature suggests that interestingness is difficult to define quantitatively and can best be summarized as: a record or pattern is interesting if it suggests a change in an established model. Almost twenty years ago, Gregory Piatetsky-Shapiro and Christopher J. Matheus, in their paper "The Interestingness of Deviations," argued that deviations should be grouped together in a finding and that the interestingness of a finding is the estimated benefit from a possible action connected to it. Since then, this field has progressed and new data mining techniques have been introduced to address subjective, objective, and semantic interestingness measures. In this brief survey, we review the current state of the literature on the interestingness of deviations, i.e. outliers, with specific interest in probabilistic measures using Bayesian belief networks.
Quantifying prosthetic gait deviation using simple outcome measures
Kark, Lauren; Odell, Ross; McIntosh, Andrew S; Simmons, Anne
2016-01-01
AIM: To develop a subset of simple outcome measures to quantify prosthetic gait deviation without needing three-dimensional gait analysis (3DGA). METHODS: Eight unilateral, transfemoral amputees and 12 unilateral, transtibial amputees were recruited. Twenty-eight able-bodied controls were recruited. All participants underwent 3DGA, the timed-up-and-go test and the six-minute walk test (6MWT). The lower-limb amputees also completed the Prosthesis Evaluation Questionnaire. Results from 3DGA were summarised using the gait deviation index (GDI), which was subsequently regressed, using stepwise regression, against the other measures. RESULTS: Step length (SL), self-selected walking speed (SSWS) and the distance walked during the 6MWT (6MWD) were significantly correlated with GDI. The 6MWD was the strongest single predictor of the GDI, followed by SL and SSWS. The predictive ability of the regression equations was improved following inclusion of self-report data related to mobility and prosthetic utility. CONCLUSION: This study offers a practicable alternative for quantifying kinematic deviation without the need to conduct complete 3DGA. PMID:27335814
Tiago André Fontoura de Melo
2011-10-01
Introduction and objective: This study aimed to analyze the influence of the root curvature's initial position on the occurrence of apical deviation after oscillatory system preparation. Material and methods: Twenty simulated root canals with 21 mm length and a 30-degree curvature angle were divided into two experimental groups according to the curvature's initial position: 8 mm (group A) and 12 mm (group B) short of the canal orifice. The canals were prepared using the crown-down technique, and the memory instrument was size #30. For apical deviation analysis, before and after preparation, the canals were filled with India ink and photographed in a standardized manner with the aid of a platform. The images were then manipulated with Adobe Photoshop® software by superimposing pre- and postoperative images. Deviation was measured 1 mm short of the working length and at the middle of the curvature using the ruler tool. Data were subjected to analysis of variance (ANOVA) with the significance level set at 5%. Results: Although group B showed a significantly greater mean deviation than group A, no significant interaction was verified between the analysis site and the experimental group. Conclusion: The data indicate that the smaller the curvature radius, the greater the deviation. Concerning the analysis site, the area 1 mm short of the working length presented a higher deviation than the point at the middle of the curvature.
The retest distribution of the visual field summary index mean deviation is close to normal.
Anderson, Andrew J; Cheng, Allan C Y; Lau, Samantha; Le-Pham, Anne; Liu, Victor; Rahman, Farahnaz
2016-09-01
When modelling optimum strategies for how best to determine visual field progression in glaucoma, it is commonly assumed that the summary index mean deviation (MD) is normally distributed on repeated testing. Here we tested whether this assumption is correct. We obtained 42 reliable 24-2 Humphrey Field Analyzer SITA standard visual fields from one eye of each of five healthy young observers, with the first two fields excluded from analysis. Previous work has shown that although MD variability is higher in glaucoma, the shape of the MD distribution is similar to that found in normal visual fields. A Shapiro-Wilk test determined any deviation from normality. Kurtosis values for the distributions were also calculated. Data from each observer passed the Shapiro-Wilk normality test. Bootstrapped 95% confidence intervals for kurtosis encompassed the value for a normal distribution in four of five observers. When examined with quantile-quantile plots, distributions were close to normal and showed no consistent deviations across observers. The retest distribution of MD is not significantly different from normal in healthy observers, and so is likely also normally distributed - or nearly so - in those with glaucoma. Our results increase our confidence in the results of influential modelling studies where a normal distribution for MD was assumed.
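The testing pipeline (a Shapiro-Wilk test plus a bootstrapped confidence interval for kurtosis) can be sketched on synthetic retest data; the MD mean and spread below are assumptions for illustration, not the study's measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# hypothetical retest series: 40 MD values (dB) for one healthy observer,
# simulated as Gaussian around an assumed stable true MD of -0.5 dB
md = rng.normal(loc=-0.5, scale=0.6, size=40)

w, p = stats.shapiro(md)       # Shapiro-Wilk test of normality
k = stats.kurtosis(md)         # excess kurtosis (0 for a normal distribution)

# bootstrap a 95% confidence interval for kurtosis
boots = [stats.kurtosis(rng.choice(md, size=md.size, replace=True))
         for _ in range(2000)]
ci_lo, ci_hi = np.percentile(boots, [2.5, 97.5])
```

A p-value above the chosen significance level, and a kurtosis interval containing 0, would be consistent with the normality conclusion the abstract reports.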
[Cephalometric standards of adult Greeks (Ricketts' ten factor analysis)].
Kavadia, S; Topouzelis, N; Sidiropoulou, S; Markovitsi, H; Kolokythas, G
1989-09-01
In this study the ten factors which compose Ricketts' summary analysis were measured on 81 lateral skull radiographs of adult Greeks (41 males and 40 females) with normal occlusion and a harmonious face, to establish cephalometric standards. The mean value, standard deviation, standard error of the mean, minimum and maximum values, as well as the range of each variable were found and discussed for each sex separately as well as for the whole sample. The main conclusion of the study is that adult Greeks with normal occlusion and a harmonious face present a tendency toward the brachyfacial vertical type, a slight retroposition of the maxilla and of the lower lip, and prominent, labially proclined lower incisors.
Search for Standard Model Higgs boson in the two-photon final state in ATLAS
Davignon Olivier
2012-06-01
We report on the search for the Standard Model Higgs boson decaying into two photons, based on proton-proton collision data with a center-of-mass energy of 7 TeV recorded by the ATLAS experiment at the LHC. The dataset has an integrated luminosity of about 1.08 fb−1. The expected cross-section exclusion at 95% confidence level varies between 2.0 and 5.8 times the Standard Model cross section over the diphoton mass range 110-150 GeV. The maximum deviations from the background-only expectation are consistent with statistical fluctuations.
Experimental and numerical study on casing wear in highly deviated drilling for oil and gas
Hao Yu
2016-06-01
To study casing wear in highly deviated well drilling, an experimental study of casing wear was first carried out. From the test data and the linear wear model based on energy dissipation proposed by White and Dawson, the tool joint-casing wear coefficient was obtained. A finite element model for casing wear mechanism research was established in ABAQUS. The nodal movement of the contact surface was employed to simulate the evolution of the wear depth, using the UMESHMOTION user subroutine, and the time-dependent geometry of the contact surfaces between the tool joint and the casing was updated continuously. Consequently, the contact area and contact pressure changed continuously during the wear process, giving a more realistic simulation. Based on the shapes of the worn casing, a numerical simulation study was carried out to determine the remaining collapse strength. The change of the maximum casing wear depth with time was then obtained. In addition, the relationship between the maximum wear depth and the remaining collapse strength was established to predict the maximum wear depth and the remaining strength of the casing after a period of accumulative wear, providing a theoretical basis for the safety assessment of worn casing.
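The linear (energy-dissipation) wear relation referred to above takes worn volume as proportional to the friction work done by the rotating tool joint. A minimal sketch follows; every parameter value is an illustrative assumption, not test data from the paper:

```python
import math

def worn_volume(wear_factor, friction_coeff, side_force, sliding_distance):
    """Linear energy-dissipation wear model (White & Dawson form):
    worn volume = wear factor * friction work."""
    return wear_factor * friction_coeff * side_force * sliding_distance

mu = 0.25           # friction coefficient (assumed)
F_n = 5.0e3         # lateral tool joint-casing contact force, N (assumed)
radius = 0.084      # tool-joint radius, m (assumed)
rpm, hours = 120, 100

# sliding distance swept by the tool-joint surface against the casing
s = 2 * math.pi * radius * rpm * 60 * hours   # metres

k_w = 5.0e-13       # wear factor, m^3 of casing removed per joule (assumed)
V = worn_volume(k_w, mu, F_n, s)              # worn casing volume, m^3
```

Dividing such a volume by the evolving crescent-shaped contact geometry is what yields the wear-depth evolution that the finite element model tracks node by node.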
C Van Der Horst
2003-08-01
PURPOSE: The Schroeder-Essed plication procedure is a standard technique for the correction of penile curvature. In a retrospective analysis we compared the functional results and quality of life (LQ) of the original technique with inverted sutures, as described by Schroeder-Essed, and our slight modification consisting of horizontal incisions into the tunica albuginea. MATERIALS AND METHODS: Twenty-six patients with congenital penile deviation were treated by the original Schroeder-Essed plication with inverted sutures (11 patients) or by the described modification (15 patients). In the modified technique, horizontal, parallel incisions 4 mm to 6 mm apart and about 8 mm - 10 mm long were made through the tunica albuginea. The outer edges of the incisions were then approximated with permanent inverted sutures (Gore-Tex® 3-0). Mean age was 21.6 years in the first group and 23.2 years in the second group. Average follow-up was 28 months and 13 months, respectively. The preoperative penile deviation angle was > 25º in all patients, with no difference between the 2 groups. RESULTS: All patients in both groups reported an improvement in their quality of life and full ability to engage in sexual intercourse. Nine patients (88%) in the first group and 14 patients (93%) in the second group were satisfied with the cosmetic result. In contrast, 10 patients (91%) of the first and 13 patients (87%) of the second group complained of penile shortening. Recurrence of deviation was noticed in only 2 males in the first group (18%). CONCLUSIONS: Our results indicate that this simple modification of the Schroeder-Essed plication offers good functional and cosmetic results. Most patients were satisfied with the penile angle correction results.
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true, but for 3-regular graphs the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Remizov, Ivan D
2009-01-01
In this note, we represent the subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future, we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or find a way to decrease their influence on the estimated hazard.
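The extreme-value idea can be illustrated with a truncated Gutenberg-Richter model: for Poisson arrivals at rate lam above a threshold magnitude, the largest magnitude in T years has CDF exp(-lam * T * (1 - F(m))). All parameter values below are assumptions for illustration only, not the study's estimates:

```python
import math

def gr_cdf(m, m0=4.0, m_max=8.5, b=1.0):
    """Truncated Gutenberg-Richter CDF on [m0, m_max] with b-value b."""
    beta = b * math.log(10.0)
    num = 1.0 - math.exp(-beta * (m - m0))
    den = 1.0 - math.exp(-beta * (m_max - m0))
    return min(max(num / den, 0.0), 1.0)

def p_max_below(m, T, lam=10.0):
    """P(no event in T years exceeds magnitude m), Poisson rate lam/yr
    for events above the threshold m0."""
    return math.exp(-lam * T * (1.0 - gr_cdf(m)))

# probability that the maximum magnitude over 50 years stays below 7.0
p = p_max_below(7.0, 50.0)
```

The sensitivity analysis the abstract describes amounts to asking how strongly such a distribution reacts when the assumed upper truncation point m_max is varied, over observation windows short enough to be testable.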
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new MVMED framework alternative MVMED (AMVMED), which enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between them. We give the detailed solving procedure, which can be divided into two steps: the first step solves the optimization problem without considering the equal margin posteriors from the two views, and the second step then enforces the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
Elimination of Nonlinear Deviations in Thermal Lattice BGK Models
Chen, Y.; Hongo, T.; Chen, Yu; Ohashi, Hirotada; Akiyama, Mamoru
1993-01-01
Abstract: We present a new thermal lattice BGK model in D-dimensional space for the numerical calculation of fluid dynamics. This model uses a higher-order expansion of the Maxwellian-type equilibrium distribution. At the same time, the lattice symmetry is upgraded to ensure the isotropy of the 6th-order tensor. These manipulations lead to macroscopic equations free from nonlinear deviations. We demonstrate the improvements by conducting a classical Chapman-Enskog analysis and by numerical simulation of shear wave flow. The transport coefficients are measured numerically, too.
Brief Analysis of the Semantic Deviation in Oliver Twist
黄二靓
2016-01-01
As one of the foremost critical realist writers of the Victorian era, Charles Dickens is adept at using language to create all kinds of characters in a humorous or ironic tone, and his unique style of storytelling won him massive popularity. Oliver Twist is one of Charles Dickens's masterpieces and also a most appropriate choice for a stylistic study of Dickens. This thesis endeavors to explore the aesthetic effect of the semantic deviation appearing in Oliver Twist, so as to give a better understanding of Dickens's excellent writing skill.
Quenched Large Deviations for Interacting Diffusions in Random Media
Luçon, Eric
2017-03-01
The aim of the paper is to establish a large deviation principle (LDP) for the empirical measure of mean-field interacting diffusions in a random environment. The point is to derive such a result once the environment has been frozen (the quenched model). The main theorem states that an LDP holds for every sequence of environments satisfying an appropriate convergence condition, with a rate function that does not depend on the disorder and differs from the rate function of the averaged model. Similar results concerning the empirical flow and local empirical measures are provided.
M. Mihelich
2014-11-01
We derive rigorous results on the link between the principle of maximum entropy production and the principle of maximum Kolmogorov-Sinai entropy using a Markov model of passive scalar diffusion called the Zero Range Process. We show analytically that both the entropy production and the Kolmogorov-Sinai entropy, seen as functions of f, admit a unique maximum, denoted fmaxEP and fmaxKS. The behavior of these two maxima is explored as a function of the system disequilibrium and the system resolution N. The main result of this article is that fmaxEP and fmaxKS have the same Taylor expansion at first order in the deviation from equilibrium. We find that fmaxEP hardly depends on N, whereas fmaxKS depends strongly on N. In particular, for a fixed difference of potential between the reservoirs, fmaxEP(N) tends towards a non-zero value, while fmaxKS(N) tends to 0 when N goes to infinity. For values of N typical of those adopted by Paltridge and climatologists (N ≈ 10-100), we show that fmaxEP and fmaxKS coincide even far from equilibrium. Finally, we show that one can find an optimal resolution N* such that fmaxEP and fmaxKS coincide, at least up to a second-order term proportional to the non-equilibrium fluxes imposed at the boundaries. We find that the optimal resolution N* depends on the non-equilibrium fluxes, so that deeper convection should be represented on finer grids. This result points to the inadequacy of using a single grid for representing convection in climate and weather models. Moreover, the application of this principle to passive scalar transport parametrization is therefore expected to provide both the value of the optimal flux and the optimal number of degrees of freedom (resolution) to describe the system.
Convex Hulls of Multiple Random Walks: A Large-Deviation Study
Dewenter, Timo; Hartmann, Alexander K; Majumdar, Satya N
2016-01-01
We study the polygons governing the convex hull of a point set created by the steps of $n$ independent two-dimensional random walkers. Each such walk consists of $T$ discrete time steps, where $x$ and $y$ increments are i.i.d. Gaussian. We analyze area $A$ and perimeter $L$ of the convex hulls. We obtain probability densities for these two quantities over a large range of the support by using a large-deviation approach allowing us to study densities below $10^{-900}$. We find that the densities exhibit a universal scaling behavior as a function of $A/T$ and $L/\\sqrt{T}$, respectively. As in the case of one walker ($n=1$), the densities follow Gaussian distributions for $L$ and $\\sqrt{A}$, respectively. We also obtained the rate functions for the area and perimeter, rescaled with the scaling behavior of their maximum possible values, and found limiting functions for $T \\rightarrow \\infty$, revealing that the densities follow the large-deviation principle. These rate functions can be described by a power law fo...
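A quick numerical sketch of the setup (without the large-deviation sampling itself, which requires specialized Markov-chain techniques to reach densities like 10^-900) simply builds the convex hull of several Gaussian random walks and forms the rescaled quantities A/T and L/sqrt(T):

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)
n, T = 3, 1000                       # n independent walkers, T steps each

# i.i.d. Gaussian x/y increments; cumulative sums give the walk positions
steps = rng.standard_normal((n, T, 2))
points = np.cumsum(steps, axis=1).reshape(-1, 2)

hull = ConvexHull(points)
A = hull.volume                      # in 2D, ConvexHull.volume is the area
L = hull.area                        # in 2D, ConvexHull.area is the perimeter

# rescaled quantities entering the universal scaling form of the densities
a, l = A / T, L / np.sqrt(T)
```

Repeating this for many realizations gives the bulk of the distributions of A and L; the large-deviation approach in the paper is what extends those histograms into the far tails.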
Casas-Castillo, M. Carmen; Rodríguez-Solà, Raúl; Navarro, Xavier; Russo, Beniamino; Lastra, Antonio; González, Paula; Redaño, Angel
2016-11-01
The fractal behavior of extreme rainfall intensities registered between 1940 and 2012 by the Retiro Observatory of Madrid (Spain) has been examined, and a simple scaling regime ranging from 25 min to 3 days of duration has been identified. Thus, an intensity-duration-frequency (IDF) master equation of the location has been constructed in terms of the simple scaling formulation. The scaling behavior of probable maximum precipitation (PMP) for durations between 5 min and 24 h has also been verified. For the statistical estimation of the PMP, an envelope curve of the frequency factor (k_m) based on a total of 10,194 station-years of annual maximum rainfall from 258 stations in Spain has been developed. This curve could be useful to estimate suitable values of PMP at any point of the Iberian Peninsula from basic statistical parameters (mean and standard deviation) of its rainfall series.
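The statistical PMP estimation mentioned here follows Hershfield's classical frequency-factor form: PMP is the mean of the annual-maximum series plus k_m times its standard deviation. A minimal sketch (the numbers are illustrative assumptions, not values from the Madrid series):

```python
def pmp_estimate(mean_annual_max, std_annual_max, km):
    """Hershfield-type statistical PMP estimate for one duration:
    PMP = mean + km * std of the annual-maximum rainfall series."""
    return mean_annual_max + km * std_annual_max

# Illustrative values only (not the Retiro Observatory statistics):
# annual maxima with mean 35 mm, std 12 mm, envelope km = 15.
print(pmp_estimate(35.0, 12.0, 15.0))  # → 215.0
```

The envelope curve developed in the paper supplies the k_m value as a function of duration and local statistics; the formula itself is the standard one.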
Pitch deviation analysis of pathological voice in connected speech.
Laflen, J Brandon; Lazarus, Cathy L; Amin, Milan R
2008-02-01
This study compares normal and pathologic voices using a novel voice analysis algorithm that examines pitch deviation during connected speech. The study evaluates the clinical potential of the algorithm as a mechanism to distinguish between normal and pathologic voices using connected speech. Adult vocalizations from normal subjects and patients with known benign free-edge vocal fold lesions were analyzed. Recordings had been previously obtained in quiet under controlled conditions. Two phrases and sustained /a/ were recorded per subject. The subject populations consisted of 10 normal and 31 abnormal subjects. The voice analysis algorithm generated 2-dimensional patterns that represent pitch deviation in time and under variable window widths. Measures were collected from these patterns for window widths between 10 and 250 ms. For comparison, jitter and shimmer measures were collected from sustained /a/ by means of the Computerized Speech Lab (CSL). A t-test and tests of sensitivity and specificity assessed discrimination between normal and abnormal populations. More than 58% of the measures collected from connected speech outperformed the CSL jitter and shimmer measures in population discrimination. Twenty-five percent of the experimental measures (including /a/) indicated significantly different populations (p connected speech.
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randi\'c. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the maximum Kirchhoff index of cacti is characterized, as well...
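The Kirchhoff index admits a compact spectral form: for a connected graph on $n$ vertices, $Kf(G) = n \sum_i 1/\mu_i$, summed over the nonzero Laplacian eigenvalues. A short numerical sketch (NumPy assumed available; the triangle is the smallest cactus with one cycle):

```python
import numpy as np

def kirchhoff_index(adj):
    """Kirchhoff index Kf(G) = n * sum of reciprocals of the nonzero
    Laplacian eigenvalues; adj is the adjacency matrix of a connected graph."""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
    eig = np.linalg.eigvalsh(L)
    nonzero = eig[eig > 1e-9]               # drop the zero eigenvalue
    return len(A) * float(np.sum(1.0 / nonzero))

# Triangle C3: every pairwise resistance distance is 2/3, so Kf = 3 * 2/3 = 2.
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(round(kirchhoff_index(triangle), 6))  # → 2.0
```

The same routine on the path P3 gives Kf = 4 (resistances 1, 1 and 2), which matches the spectral formula with eigenvalues 0, 1, 3.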
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus...... on second order moments of multiple measurements outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders....
Variable Step Size Maximum Correntropy Criteria Based Adaptive Filtering Algorithm
S. Radhika
2016-04-01
Full Text Available Maximum correntropy criterion (MCC) based adaptive filters are found to be robust against impulsive interference. This paper proposes a novel MCC-based adaptive filter with variable step size in order to obtain improved performance in terms of both convergence rate and steady-state error, with robustness against impulsive interference. The optimal variable step size is obtained by minimizing the Mean Square Deviation (MSD) error from one iteration to the next. Simulation results in the context of a highly impulsive system identification scenario show that the proposed algorithm has faster convergence and a lower steady-state error than conventional MCC-based adaptive filters.
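The fixed-step MCC filter that serves as the paper's baseline can be sketched as follows. Everything below is an illustrative assumption (system taps, kernel width sigma, step size mu, impulse model); the paper's variable-step-size rule is not reproduced:

```python
import math
import random

def mcc_lms_identify(h_true, n_samples=4000, mu=0.05, sigma=2.0, seed=0):
    """Identify an FIR system with a fixed-step MCC adaptive filter.
    Weight update: w += mu * exp(-e^2 / (2*sigma^2)) * e * x.
    The Gaussian kernel shrinks updates caused by impulsive outliers."""
    rng = random.Random(seed)
    m = len(h_true)
    w = [0.0] * m
    buf = [0.0] * m
    for _ in range(n_samples):
        buf = [rng.gauss(0.0, 1.0)] + buf[:-1]          # input tap-delay line
        d = sum(h * x for h, x in zip(h_true, buf))     # desired response
        if rng.random() < 0.01:                         # rare impulsive interference
            d += rng.gauss(0.0, 50.0)
        e = d - sum(wi * x for wi, x in zip(w, buf))    # a priori error
        g = math.exp(-e * e / (2.0 * sigma * sigma))    # correntropy kernel weight
        w = [wi + mu * g * e * x for wi, x in zip(w, buf)]
    return w

w = mcc_lms_identify([0.5, -0.3, 0.2])
```

Because g collapses to nearly zero for large errors, the impulses barely perturb the weights, which is the robustness property the abstract refers to; the paper's contribution is making mu itself adapt per iteration.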
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Cohen, Mervyn D. [Indiana University School of Medicine, Department of Radiology, Riley Children's Hospital, Indianapolis, IN (United States); Riley Hospital for Children, Department of Radiology, Indianapolis, IN (United States); Cooper, Matt L.; Piersall, Kelly [Indiana University School of Medicine, Department of Radiology, Riley Children's Hospital, Indianapolis, IN (United States); Apgar, Bruce K. [Agfa HealthCare Corporation, Greenville, SC (United States)
2011-05-15
Many methods are used to track patient exposure during acquisition of plain film radiographs. A uniform international standard would aid this process. To evaluate and describe a new, simple quality-assurance method for monitoring patient exposure. This method uses the "exposure index" and the "deviation index," recently developed by the International Electrotechnical Commission (IEC) and American Association of Physicists in Medicine (AAPM). The deviation index measures variation from an ideal target exposure index value. Our objective was to determine whether the exposure index and the deviation index can be used to monitor and control exposure drift over time. Our Agfa workstation automatically keeps a record of the exposure index for every patient. The exposure index and deviation index were calculated on 1,884 consecutive neonatal chest images. Exposure of a neonatal chest phantom was performed as a control. Acquisition of the exposure index and calculation of the deviation index was easily achieved. The weekly mean exposure index of the phantom and the patients was stable and showed <10% change during the study, indicating no exposure drift during the study period. The exposure index is an excellent tool to monitor the consistency of patient exposures. It does not indicate the exposure value used, but is an index to track compliance with a pre-determined target exposure. (orig.)
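The deviation index is a logarithmic distance from the target exposure index; a minimal sketch, assuming the standard IEC 62494-1 form DI = 10·log10(EI/EI_T):

```python
import math

def deviation_index(ei, ei_target):
    """Deviation index per IEC 62494-1: DI = 10 * log10(EI / EI_T).
    DI = 0 at the target exposure; positive values mean overexposure."""
    return 10.0 * math.log10(ei / ei_target)

print(deviation_index(400.0, 400.0))            # → 0.0
print(round(deviation_index(800.0, 400.0), 2))  # doubling the exposure gives DI ≈ +3
```

A stable weekly mean DI near zero is exactly the "no exposure drift" condition the study monitored.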
Kalman Filtering with Intermittent Observations: Weak Convergence and Moderate Deviations
Kar, Soummya
2009-01-01
The paper considers the problem of Kalman filtering with intermittent observations, where the observation packet arrival process is modeled as a Bernoulli process. We start by extending the results of \\cite{Riccati-weakconv} to show that the sequence of random conditional error covariance matrices converges in distribution to a unique invariant distribution $\\mathbb{\\mu}^{\\bar{\\gamma}}$, as long as the packet arrival probability $\\bar{\\gamma}>0$. We completely characterize the sequence ${\\mathbb{\\mu}^{\\bar{\\gamma}}}$ of invariant distributions as $\\bar{\\gamma}\\uparrow 1$, by showing that the sequence ${\\mathbb{\\mu}^{\\bar{\\gamma}}}$ satisfies a moderate deviations principle (MDP) with a good rate function $I$, which is explicitly characterized. We then study the sequence of invariant distributions ${\\mathbb{\\mu}^{\\bar{\\gamma}}}$ as $\\bar{\\gamma}\\uparrow 1$. We show that, as $\\bar{\\gamma}\\uparrow 1$, ...
Mod-ϕ convergence: normality zones and precise deviations
Féray, Valentin; Nikeghbali, Ashkan
2016-01-01
The canonical way to establish the central limit theorem for i.i.d. random variables is to use characteristic functions and Lévy’s continuity theorem. This monograph focuses on this characteristic function approach and presents a renormalization theory called mod-ϕ convergence. This type of convergence is a relatively new concept with many deep ramifications, and has not previously been published in a single accessible volume. The authors construct an extremely flexible framework using this concept in order to study limit theorems and large deviations for a number of probabilistic models related to classical probability, combinatorics, non-commutative random variables, as well as geometric and number-theoretical objects. Intended for researchers in probability theory, the text is carefully written and well structured, containing a great amount of detail and interesting examples.
Prevalence of voice quality deviations in the normal adult population.
Brindle, B R; Morris, H L
1979-11-01
The purpose of this study was to determine the prevalence of voice quality deviations in a normal adult population. One hundred twelve subjects, aged 17 to 80, read a short paragraph aloud into a high-fidelity tape recorder and completed a case history questionnaire. A group of 11 pretrained judges rated overall performance of each taped sample on a seven-point equal-appearing intervals scale, then designated those quality components which contributed toward deviant ratings. Eighty-two percent of the group received a mean severity rating lower than 1.99; 16% had a rating between 2.00 and 2.99; and 2% were assigned a mean rating higher than 3.00.
Lyapunov exponents of linear cocycles: continuity via large deviations
Duarte, Pedro
2016-01-01
The aim of this monograph is to present a general method of proving continuity of Lyapunov exponents of linear cocycles. The method uses an inductive procedure based on a general, geometric version of the Avalanche Principle. The main assumption required by this method is the availability of appropriate large deviation type estimates for quantities related to the iterates of the base and fiber dynamics associated with the linear cocycle. We establish such estimates for various models of random and quasi-periodic cocycles. Our method has its origins in a paper of M. Goldstein and W. Schlag. Our present work expands upon their approach in both depth and breadth. We conclude this monograph with a list of related open problems, some of which may be treated using a similar approach.
Interpreting spacetimes of any dimension using geodesic deviation
Podolsky, Jiri
2012-01-01
We present a general method which can be used for geometrical and physical interpretation of an arbitrary spacetime in four or any higher number of dimensions. It is based on the systematic analysis of relative motion of free test particles. We demonstrate that the local effect of the gravitational field on particles, as described by the equation of geodesic deviation with respect to a natural orthonormal frame, can always be decomposed into a canonical set of transverse, longitudinal and Newton-Coulomb-type components, isotropic influence of a cosmological constant, and contributions arising from specific matter content of the universe. In particular, exact gravitational waves in Einstein's theory always exhibit themselves via purely transverse effects with D(D-3)/2 independent polarization states. To illustrate the utility of this approach we study the family of pp-wave spacetimes in higher dimensions and discuss specific measurable effects on a detector located in four spacetime dimensions. For example, the corres...
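For reference, the equation of geodesic deviation invoked in this abstract and the next one takes the standard textbook form (this is the general equation, not the authors' specific frame decomposition):

```latex
\frac{\mathrm{D}^2 \xi^a}{\mathrm{d}\tau^2} = -R^{a}{}_{bcd}\, u^b \xi^c u^d
```

Here $u^a$ is the four-velocity of the reference geodesic, $\xi^a$ the separation vector to a neighboring geodesic, and $R^{a}{}_{bcd}$ the Riemann tensor; projecting the right-hand side onto a natural orthonormal frame is what yields the transverse, longitudinal and Newton-Coulomb components discussed above.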
Geodesic deviation in Kundt spacetimes of any dimension
Svarc, Robert
2012-01-01
Using the invariant form of the equation of geodesic deviation, which describes relative motion of free test particles, we investigate a general family of D-dimensional Kundt spacetimes. We demonstrate that local influence of the gravitational field can be naturally decomposed into Newton-type tidal effects typical for type II spacetimes, longitudinal deformations mainly present in spacetimes of algebraic type III, and type N purely transverse effects corresponding to gravitational waves with D(D-3)/2 independent polarization states. We explicitly study the most important examples, namely exact pp-waves, gyratons, and VSI spacetimes. This analysis helps us to clarify the geometrical and physical interpretation of the Kundt class of nonexpanding, nontwisting and shearfree geometries.
Deviations from Wick's theorem in the canonical ensemble
Schönhammer, K.
2017-07-01
Wick's theorem for the expectation values of products of field operators for a system of noninteracting fermions or bosons plays an important role in the perturbative approach to the quantum many-body problem. A finite-temperature version holds in the framework of the grand canonical ensemble, but not for the canonical ensemble appropriate for systems with fixed particle number such as ultracold quantum gases in optical lattices. Here we present formulas for expectation values of products of field operators in the canonical ensemble using a method in the spirit of Gaudin's proof of Wick's theorem for the grand canonical case. The deviations from Wick's theorem are examined quantitatively for two simple models of noninteracting fermions.
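The size of such deviations can be checked directly for a small system: with a handful of modes one can enumerate all fixed-N fermion configurations and compare the canonical ⟨n₀n₁⟩ with the Wick (grand canonical) factorization ⟨n₀⟩⟨n₁⟩. A minimal sketch (the energies, N and β below are arbitrary illustrative choices):

```python
import math
from itertools import combinations

def canonical_occupation_correlator(energies, N, beta):
    """Exact canonical-ensemble averages for noninteracting fermions with
    fixed particle number N, by enumerating all N-particle configurations.
    Returns (<n_0>, <n_1>, <n_0 n_1>)."""
    modes = range(len(energies))
    Z = n0 = n1 = n0n1 = 0.0
    for occ in combinations(modes, N):           # each N-subset of modes
        w = math.exp(-beta * sum(energies[i] for i in occ))
        Z += w
        if 0 in occ:
            n0 += w
        if 1 in occ:
            n1 += w
        if 0 in occ and 1 in occ:
            n0n1 += w
    return n0 / Z, n1 / Z, n0n1 / Z

e = [0.0, 0.5, 1.0, 1.5]
a, b, ab = canonical_occupation_correlator(e, N=2, beta=1.0)
# In the grand canonical ensemble Wick's theorem gives <n0 n1> = <n0><n1>;
# at fixed N the exact correlator deviates from this product.
print(abs(ab - a * b) > 1e-6)  # → True
```

Fixing the particle number induces anticorrelation between occupations (here ⟨n₀n₁⟩ < ⟨n₀⟩⟨n₁⟩), which is precisely the kind of deviation the paper quantifies.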
A sella turcica bridge in subjects with severe craniofacial deviations.
Becktor, J P; Einersen, S; Kjaer, I
2000-02-01
In earlier studies, a sella turcica bridge was stated to occur in 1.75 to 6 per cent of the population. The occurrence of a sella turcica bridge has not previously been studied in a group of patients with craniofacial deviations treated by surgery. Profile radiographs from 177 individuals who had undergone combined orthodontic and surgical treatment at the Copenhagen School of Dentistry were studied. A sella turcica bridge was registered in those subjects where the radiograph revealed a continuous band of bony tissue from the anterior cranial fossa to the posterior cranial fossa across the sella turcica. Two types of sella turcica bridge were identified. A sella turcica bridge occurred in 18.6 per cent of the subjects.
Large Deviation Results for Generalized Compound Negative Binomial Risk Models
Fan-chao Kong; Chen Shen
2009-01-01
In this paper we extend and improve some results of the large deviation for random sums of random variables. Let {Xn; n≥1} be a sequence of non-negative, independent and identically distributed random variables with common heavy-tailed distribution function F and finite mean μ∈R+, let {N(n); n≥0} be a sequence of negative binomial distributed random variables with a parameter p∈(0,1), and let {M(n); n≥0} be a Poisson process with intensity λ > 0. Suppose {N(n); n≥0}, {Xn; n≥1} and {M(n); n≥0} are mutually independent. We obtain large deviation results for the generalized compound negative binomial risk model; these results can be applied to certain problems in insurance and finance.
Maximum process problems in optimal control theory
Goran Peskir
2005-01-01
Full Text Available Given a standard Brownian motion (Bt)t≥0 and the equation of motion dXt = vt dt + 2dBt, we set St = max0≤s≤t Xs and consider the optimal control problem supv E(Sτ − cτ), where c > 0 and the supremum is taken over all admissible controls v satisfying vt ∈ [μ0, μ1] for all t up to τ = inf{t > 0 | Xt ∉ (ℓ0, ℓ1)} with μ0 < 0 < μ1. The optimal control is of bang-bang type, switching between μ0 and μ1 across g∗(St), where s ↦ g∗(s) is a switching curve that is determined explicitly (as the unique solution to a nonlinear differential equation). The solution found demonstrates that problem formulations based on a maximum functional can be successfully included in optimal control theory (calculus of variations), in addition to the classic problem formulations due to Lagrange, Mayer, and Bolza.
Novel TPPO Based Maximum Power Point Method for Photovoltaic System
ABBASI, M. A.
2017-08-01
Full Text Available The photovoltaic (PV) system has great potential and is now installed more widely than other renewable energy sources. However, the PV system cannot perform optimally due to its strong dependence on climate conditions, and because of this dependence it does not always operate at its maximum power point (MPP). Many MPP tracking methods have been proposed for this purpose. One of these is the Perturb and Observe (P&O) method, which is the most popular due to its simplicity, low cost and fast tracking, but it deviates from the MPP in continuously changing weather conditions, especially under rapidly changing irradiance. A new Maximum Power Point Tracking (MPPT) method, Tetra Point Perturb and Observe (TPPO), has been proposed to improve PV system performance in changing irradiance conditions, and the effects of varying irradiance on the characteristic curves of a PV array module are delineated. The proposed MPPT method has shown better results in increasing the efficiency of a PV system.
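The baseline P&O hill-climbing loop that TPPO improves upon can be sketched in a few lines; the P-V curve, voltages and step size below are illustrative assumptions, not the paper's TPPO algorithm:

```python
def perturb_and_observe(measure_pv, v0=30.0, dv=0.5, steps=50):
    """Classic P&O MPPT: perturb the operating voltage and keep moving in
    the direction that increased power; reverse when power drops.
    measure_pv(v) -> power at operating voltage v (assumed interface)."""
    v, p = v0, measure_pv(v0)
    direction = 1.0
    for _ in range(steps):
        v_new = v + direction * dv
        p_new = measure_pv(v_new)
        if p_new < p:              # power dropped: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v, p

# Toy single-peak P-V curve with its maximum at 34 V (illustrative only).
curve = lambda v: max(0.0, 200.0 - (v - 34.0) ** 2)
v_mpp, p_mpp = perturb_and_observe(curve)
```

Note the characteristic weakness the abstract points out: once near the peak the tracker oscillates by ±dv around the MPP, and if the curve itself shifts (changing irradiance) the simple comparison can drive the perturbation in the wrong direction.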
Regulation on radial position deviation for vertical AMB systems
Tsai, Nan-Chyuan; Kuo, Chien-Hsien; Lee, Rong-Mao
2007-10-01
As a source of model uncertainty, the gyroscopic effect, which depends on rotor speed, is studied for vertical active magnetic bearing (VAMB) systems, which are increasingly used in various industries such as clean rooms, compressors and satellites. This research applies an H∞ controller to regulate the rotor position deviations of the VAMB system in four degrees of freedom. The performance of the H∞ controller is examined by simulations to inspect its closed-loop stiffness, rise time and capability to suppress high-frequency disturbances. Although the H∞ controller is inferior to the LQR in position deviation regulation, the required control current in the electromagnetic bearings is much less than that for LQR or PID, and performance robustness is well retained. In order to ensure the stability robustness of the H∞ controller, two approaches, Kharitonov polynomials and the TITO (two-input, two-output) Nyquist stability criterion, are employed to synthesize the control feedback loop. A test rig is built to further verify the efficacy of the proposed H∞ controller experimentally. Two eddy-current gap sensors, perpendicular to each other, are included in the realistic rotor-bearing system. A four-pole magnetic bearing is used as the actuator for generation of the control force. A commercial I/O module unit with A/D and D/A converters, dSPACE DS1104, is integrated with the VAMB, gap sensors, power amplifiers and signal processing circuits. The H∞ controller is designed on the basis of a rotor speed of 10 krpm, but is in fact significantly robust with respect to rotor speed varying from 6.5 to 13.5 krpm.
Spine deviations and orthodontic treatment of asymmetric malocclusions in children
Lippold Carsten
2012-08-01
Full Text Available Abstract Background The aim of this randomized clinical trial was to assess the effect of early orthodontic treatment for unilateral posterior cross bite in the late deciduous and early mixed dentition using orthopedic parameters. Methods Early orthodontic treatment was performed by initial maxillary expansion and subsequent activator therapy (Münster treatment concept). The patient sample initially comprised 80 patients with unilateral posterior cross bite (mean age 7.3 years, SD 2.1 years). After randomization, 77 children attended the initial examination appointment (therapy = 37, control = 40); 31 children in the therapy group and 35 children in the control group were monitored at the follow-up examination (T2). The mean interval between T1 and T2 was 1.1 years (SD 0.2 years). Rasterstereography was used for back shape analysis at T1 and T2. Using the profile, the kyphotic and lordotic angles, the surface rotation, the lateral deviation, pelvic tilt and pelvic torsion, statistical differences at T1 and T2 between the therapy and control groups were calculated (t-test). Our working hypothesis was that early orthodontic treatment can induce negative therapeutic changes in body posture through thoracic and lumbar position changes in preadolescents with unilateral cross bite. Results No clinically relevant differences between the control and therapy groups at T1 and T2 were found for the parameters of kyphotic and lordotic angle, surface rotation, lateral deviation, pelvic tilt, and pelvic torsion. Conclusions Our working hypothesis was found to be incorrect (within the limitations of this study). This randomized clinical trial demonstrates that in a juvenile population with unilateral posterior cross bite the selected early orthodontic treatment protocol does not negatively affect postural parameters. Trial registration DRKS00003497 on DRKS
Deviations in Ukrainian gender culture: social and systemologic aspect
I. O. Svyatnenko
2017-08-01
Full Text Available The article is devoted to the problem of understanding traditional (internal) deviations in Ukrainian gender culture, which are developing rapidly under the influence of the spread of foreign cultures' (mainly European and American) gender deviations. As a result of the study, the author concludes that matriarchy, cultivated through men's mistrust of each other and their mutual demonization due to the idealization of mothers and the devaluation of fathers, contributes to the growth of misandry and homophobia of non-sexual content. We are talking about fears associated with non-sexual (socio-cultural and socio-economic) spheres, which, however, can become sexualized and take the form of sexual homophobia. These fears relate mostly to various manifestations of lies and fraud, which become normal for men's relationships precisely because of the inferiority of communications between them. It is stated in the article that misandry and homophobia in Ukrainian gender culture express the fears of men before the masculine manifestations of themselves in any sphere of activity (including the sexual sphere), which are projected onto other men as external objects and cause social and behavioral reactions or other technologies of gender castration (cultural, social, mental or bodily). Society reacts to masculine men with gender repression, as their behavior is interpreted by other participants in social interactions as a threat to the developed scenarios of suppression of social aggression. The damage of these scenarios is that any constructive activity causes social feelings of suspicion and envy, which entails centrifugal social reactions in the form of isolation of the initiator of this activity, or sabotage and social escapism. This defectiveness manifests itself in the predominant concealment of the motives and intentions of participants and the disparity between verbal behavior and real behavioral characteristics.
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
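The regularizer's central quantity, the mutual information between (discretized) classification responses and labels, can be estimated empirically; a minimal sketch using the plug-in estimator in nats (not the entropy-estimation scheme of the paper, which works with continuous responses):

```python
import math
from collections import Counter

def mutual_information(responses, labels):
    """Empirical mutual information I(R; Y), in nats, between discrete
    classifier responses and true class labels."""
    n = len(labels)
    pr = Counter(responses)
    py = Counter(labels)
    pry = Counter(zip(responses, labels))
    mi = 0.0
    for (r, y), c in pry.items():
        p_joint = c / n
        # p_joint * log( p_joint / (p_r * p_y) ), with counts cancelled into n
        mi += p_joint * math.log(p_joint * n * n / (pr[r] * py[y]))
    return mi

# Perfectly informative responses: I(R; Y) = H(Y) = ln 2 for balanced labels.
labels = [0, 1] * 50
print(round(mutual_information(labels, labels), 4))  # → 0.6931
```

Maximizing this quantity over the classifier's parameters, alongside the usual loss and complexity terms, is the regularization strategy the abstract describes.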
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Azam Zaka
2014-10-01
Full Text Available This paper is concerned with modifications of the maximum likelihood, moments and percentile estimators of the two-parameter power function distribution. Sampling behavior of the estimators is indicated by Monte Carlo simulation. For some combinations of parameter values, some of the modified estimators appear better than the traditional maximum likelihood, moments and percentile estimators with respect to bias, mean square error and total deviation.
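For the two-parameter power function distribution f(x) = γx^(γ−1)/β^γ on (0, β), the unmodified maximum likelihood estimators have the closed form β̂ = max(xᵢ) and γ̂ = n / Σ ln(β̂/xᵢ); the sketch below checks them on simulated data (the parameter values and sample size are arbitrary illustrative choices, and the paper's modified estimators are not reproduced):

```python
import math
import random

def power_function_mle(sample):
    """MLE for the two-parameter power function distribution
    f(x) = g * x**(g - 1) / b**g on (0, b):
    b_hat = max(x_i), g_hat = n / sum(log(b_hat / x_i))."""
    b_hat = max(sample)
    g_hat = len(sample) / sum(math.log(b_hat / x) for x in sample)
    return g_hat, b_hat

rng = random.Random(42)
g_true, b_true = 2.0, 5.0
# Inverse-CDF sampling: X = b * U**(1/g); 1 - random() keeps U in (0, 1].
sample = [b_true * (1.0 - rng.random()) ** (1.0 / g_true) for _ in range(5000)]
g_hat, b_hat = power_function_mle(sample)
```

The well-known bias pattern the paper targets is visible here: β̂ is always ≤ β (it is the sample maximum), which in turn biases γ̂; the modified estimators in the paper attempt to reduce exactly this kind of bias.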
闫圆圆; 黄勇; 李文武; 白人驹; 付政; 穆殿斌; 郭洪波
2011-01-01
Objective To reveal the relationship among the maximum FDG PET standardized uptake value (SUVmax), Ki-67 expression and pathological grading of esophageal carcinomas. Methods Forty-seven patients with surgically resected esophageal carcinoma were enrolled in this study. 18F-FDG PET/CT examination was performed one week before operation and SUVmax was calculated. Specimens were obtained by surgical procedure, immunohistochemical staining of Ki-67 was carried out, and pathological grading was determined by HE staining. Relations among SUVmax, Ki-67 and pathological grading were analysed. Results (1) For all 47 cases, the average SUVmax was 12.504 ± 6.805 (range 1.9 to 24.0) and the average Ki-67 index was (67.837 ± 29.798)%; the two were positively correlated (r = 0.581, P < 0.05). (2) The 47 specimens included 13 well-differentiated squamous cell tumors, 16 moderately differentiated tumors and 18 poorly differentiated tumors. The mean SUVmax of well-differentiated, moderately differentiated and poorly differentiated tumors was 9.787 ± 1.477, 12.313 ± 0.479 and 15.053 ± 2.147, respectively, and a significant difference could be determined between them by statistical analysis (P = 0.000). Conclusions SUVmax may be used to indirectly evaluate the proliferative capacity of esophageal cancer. To some extent, SUVmax could reflect the pathological grading of the tumor.
Nair, Vimoj J.; MacRae, Robert [Division of Radiation Oncology, University of Ottawa, Ottawa, Ontario (Canada); Ottawa Hospital Research Institute, Ottawa, Ontario (Canada); Sirisegaram, Abby [Ottawa Hospital Research Institute, Ottawa, Ontario (Canada); Pantarotto, Jason R., E-mail: jpantarotto@toh.on.ca [Division of Radiation Oncology, University of Ottawa, Ottawa, Ontario (Canada); Ottawa Hospital Research Institute, Ottawa, Ontario (Canada)
2014-02-01
Purpose: The aim of this study was to determine whether the preradiation maximum standardized uptake value (SUVmax) of the primary tumor on [18F]-fluoro-2-deoxy-glucose positron emission tomography (FDG-PET) has prognostic significance in patients with Stage T1 or T2N0 non-small cell lung cancer (NSCLC) treated with curative radiation therapy, whether conventional or stereotactic body radiation therapy (SBRT). Methods and Materials: Between January 2007 and December 2011, a total of 163 patients (180 tumors) with medically inoperable, histologically proven Stage T1 or T2N0 NSCLC treated with radiation therapy (both conventional and SBRT) were entered in a research ethics board approved database. All patients received pretreatment FDG-PET/computed tomography (CT) at 1 institution with consistent acquisition technique. The medical records and radiologic images of these patients were analyzed. Results: The overall survival at 2 years and 3 years for the whole group was 76% and 67%, respectively. The mean and median SUVmax were 8.1 and 7, respectively. Progression-free survival at 2 years with SUVmax <7 was better than that of the patients with tumor SUVmax ≥7 (67% vs 51%; P=.0096). Tumors with SUVmax ≥7 were associated with worse regional recurrence-free survival and distant metastasis-free survival. In the multivariate analysis, SUVmax ≥7 was an independent prognostic factor for distant metastasis-free survival. Conclusion: In early-stage NSCLC managed with radiation alone, patients with SUVmax ≥7 on FDG-PET/CT scan have poorer outcomes and high risk of progression, possibly because of aggressive biology. There is a potential role for adjuvant therapies for these high-risk patients with intent to improve outcomes.
Pudji Andayani
2006-10-01
Full Text Available This report aimed to assess mothers' perceptions of normal and deviant development in their children. The study was done in under-five children and their mothers who visited the Nutrition, Growth & Development Clinic of the Child Health Department, Sanglah Hospital, Denpasar, from May 1st 1999 to June 30th 1999. A total of 76 children between 2 and 59 months of age and their mothers were enrolled. Data were collected by interviewing mothers on the following items: perception of their child's development, age of child, sex, mother's education, mother's job, number of siblings, and mother's ability in making referral decisions. The Denver II screening test was administered to each child to identify developmental status as a gold standard. Sixteen (21%) children were identified as having developmental deviation by mothers' perception and 21 (28%) by the authors using the Denver II screening test. The sensitivity of mothers' perception was 67% and the specificity was 97%. There were no significant differences in perception of developmental status according to child's age, mother's education, mother's job, or number of siblings. Most mothers perceived development as normal if body weight increased and the child had no disability. The most common source of information about development was relatives. Thirteen of the 21 children who had developmental deviation were referred by their mothers. We conclude that mothers' perception can be used for early detection of developmental problems. Mothers' concerns about their children's growth and development focused on body weight gain, physical development and gross motor skills.
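Sensitivity and specificity as used here reduce to simple ratios of the screening counts; a minimal sketch (the individual cell counts below are illustrative assumptions consistent with the reported 67% and 97%, since the abstract does not give the full 2×2 table):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Screening-test accuracy: sensitivity = TP/(TP+FN),
    specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical 2x2 table for 76 children: 21 deviant on Denver II, 55 normal.
sens, spec = sensitivity_specificity(tp=14, fn=7, tn=53, fp=2)
print(round(sens, 2), round(spec, 2))  # → 0.67 0.96
```

With 21 true positives on Denver II, a 67% sensitivity corresponds to roughly 14 of them being flagged by mothers; small changes in the assumed cells move specificity between 96% and 97%.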
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
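The albedo-temperature feedback described above can be sketched in a few lines. This follows the classic Watson-Lovelock parameter values; the Euler time step, run length, and initial coverages are assumptions for illustration, and entropy production is not computed here:

```python
# A minimal Watson-Lovelock-style daisyworld sketch (Euler integration).
# Parameter values follow the classic 1983 formulation; the time step and
# initial coverages are illustrative assumptions.
S = 917.0          # mean solar flux, W/m^2
SIGMA = 5.67e-8    # Stefan-Boltzmann constant
Q = 2.06e9         # local-temperature redistribution parameter, K^4
GAMMA = 0.3        # daisy death rate
A_BARE, A_BLACK, A_WHITE = 0.5, 0.25, 0.75  # albedos

def growth(T):
    """Parabolic growth rate, optimal at 295.5 K, zero outside ~278-313 K."""
    g = 1.0 - 0.003265 * (295.5 - T) ** 2
    return max(g, 0.0)

def run(L=1.0, steps=20000, dt=0.01, ab=0.1, aw=0.1):
    for _ in range(steps):
        x = 1.0 - ab - aw                       # bare ground fraction
        A = x * A_BARE + ab * A_BLACK + aw * A_WHITE
        Te4 = S * L * (1.0 - A) / SIGMA         # planetary emission temp^4
        Tb = (Q * (A - A_BLACK) + Te4) ** 0.25  # local temp over black daisies
        Tw = (Q * (A - A_WHITE) + Te4) ** 0.25  # local temp over white daisies
        ab += dt * ab * (x * growth(Tb) - GAMMA)
        aw += dt * aw * (x * growth(Tw) - GAMMA)
    return ab, aw, Te4 ** 0.25

ab, aw, Tp = run()
print(ab > 0.05 and aw > 0.05)  # both daisy types persist at L = 1
print(280 < Tp < 310)           # planetary temperature self-regulates
```

At L = 1 the model settles into coexistence with the planetary temperature held near the growth optimum, which is the self-regulation the abstract refers to.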
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
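The solar-mass equivalence quoted in the abstract is a direct unit conversion:

```python
# Converting the quoted core mass to solar masses -- a one-line check.
M_CORE = 2.69e30   # kg, maximum iron core mass from the abstract
M_SUN = 1.989e30   # kg, nominal solar mass
ratio = M_CORE / M_SUN
print(round(ratio, 2))  # 1.35, matching the value quoted in the abstract
```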
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \\sqrt{n})$ time, due to Micali and Vazirani \\cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Karp and Hopcroft \\cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \\log n)$ by Goel, Kapralov and Khanna (STOC 2010) \\cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \\log^2 n)$ time, thereby obtaining a significant improvement over \\cite{MV80}. We use a Markov chain similar to the \\emph{hard-core model} for Glauber Dynamics with \\emph{fugacity} parameter $\\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \\cite{V99}, to design a faster algorithm...
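For orientation, here is the textbook baseline the fast algorithms improve on: augmenting-path matching. This is the simple $O(VE)$ method for the bipartite case only, not the MCMC algorithm of the paper or the Micali-Vazirani general-graph algorithm:

```python
# Textbook augmenting-path maximum matching for BIPARTITE graphs -- an
# O(V*E) baseline for orientation, not the paper's MCMC algorithm.
def max_bipartite_matching(adj, n_left, n_right):
    """adj[u] lists right-side neighbors of left vertex u."""
    match_r = [-1] * n_right  # match_r[v] = left vertex matched to v

    def try_augment(u, seen):
        # DFS for an augmenting path starting at unmatched left vertex u.
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if match_r[v] == -1 or try_augment(match_r[v], seen):
                match_r[v] = u
                return True
        return False

    size = 0
    for u in range(n_left):
        if try_augment(u, set()):
            size += 1
    return size

# 3 left vertices, 3 right vertices; a perfect matching exists.
print(max_bipartite_matching([[0, 1], [0], [1, 2]], 3, 3))  # 3
```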
Deviation from Power Law Behavior in Landslide Phenomenon
Li, L.; Lan, H.; Wu, Y.
2013-12-01
Power law distribution of magnitude is widely observed in many natural hazards (e.g., earthquakes, floods, tornadoes, and forest fires). Landslides are unique in that the size distribution of landslides is characterized by a power law decrease with a rollover at the small-size end. Yet the emergence of the rollover, i.e., the deviation from power law behavior for small landslides, remains a mystery. In this contribution, we grouped the forces applied on landslide bodies into two categories: 1) forces proportional to the volume of the failure mass (gravity and friction), and 2) forces proportional to the area of the failure surface (cohesion). Failure occurs when the forces proportional to volume exceed the forces proportional to surface area. As such, given a certain mechanical configuration, the failure volume to failure surface area ratio must exceed a corresponding threshold to guarantee a failure. Assuming all landslides share a uniform shape, which means the volume to surface area ratio increases regularly with landslide volume, a cutoff of the landslide volume distribution at the small-size end can be defined. However, in realistic landslide phenomena, where heterogeneities of landslide shape and mechanical configuration are present, a simple cutoff of the landslide volume distribution does not exist. The stochasticity of landslide shape introduces a probability distribution of the volume to surface area ratio with regard to landslide volume, from which the probability that the volume to surface area ratio exceeds the threshold can be estimated for given values of landslide volume. An experiment based on empirical data showed that this probability can make the power law distribution of landslide volume roll down at the small-size end. We therefore propose that the constraints on the failure volume to failure surface area ratio, together with the heterogeneity of landslide geometry and mechanical configuration, account for the deviation from power law behavior.
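The rollover argument above can be illustrated numerically: multiply a pure power law in volume by a failure probability that rises with volume, and the resulting frequency density peaks at an interior volume instead of diverging at small sizes. The exponent, the logistic form, and all parameter values here are illustrative assumptions, not values from the paper:

```python
# Sketch of the rollover argument: a pure power law in landslide volume,
# modulated by the probability that the volume/surface-area ratio exceeds
# the failure threshold. Exponents and the logistic form are illustrative
# assumptions, not values from the paper.
def frequency_density(v, a=2.4, v0=1e3, k=3.0):
    power_law = v ** (-a)                  # pure power-law decay
    p_fail = 1.0 / (1.0 + (v0 / v) ** k)   # rises toward 1 for large v
    return power_law * p_fail

volumes = [10 ** (1 + 5 * i / 199) for i in range(200)]  # 10^1 .. 10^6
dens = [frequency_density(v) for v in volumes]
peak = dens.index(max(dens))
print(0 < peak < len(dens) - 1)  # True: the density peaks at an interior
                                 # volume, i.e. it rolls over below the peak
```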
Amplification biases: possible differences among deviating gene expressions
Piumi Francois
2008-01-01
Full Text Available Abstract Background Gene expression profiling has become a tool of choice to study pathological or developmental questions, but in most cases the material is scarce and requires sample amplification. Two main procedures have been used: in vitro transcription (IVT) and polymerase chain reaction (PCR), the former known as linear and the latter as exponential. Previous reports identified enzymatic pitfalls in PCR and IVT protocols; however, the possible differences between the sequences affected by these amplification defects were only rarely explored. Results Screening a bovine cDNA array dedicated to embryonic stages with embryonic (n = 3) and somatic tissues (n = 2), we proceeded to moderate amplifications starting from 1 μg of total RNA (global PCR or one-round IVT). Whatever the tissue, 16% of the probes were involved in deviating gene expressions due to amplification defects. These distortions were likely due to the molecular features of the affected sequences (position within a gene, GC content, hairpin number) but also to the relative abundance of these transcripts within the tissues. These deviating genes mainly encoded housekeeping genes from physiological or cellular processes (70%) and constituted 2 subsets which did not overlap (molecular features, signal intensities, gene ID). However, the differential expressions identified between embryonic stages were both reliable (minor intersect with biased expressions) and relevant (biologically validated). In addition, the relative expression levels of those genes were biologically similar between amplified and unamplified samples. Conclusion Conversely to the most recent reports, which challenged the use of intense amplification procedures on minute amounts of RNA, we chose moderate PCR and IVT amplifications for our gene profiling study. Conclusively, it appeared that systematic biases arose even with moderate amplification procedures, independently of (i) the sample used: brain, ovary or embryos, (ii)
Maximum entropy, word-frequency, Chinese characters, and multiple meanings.
Yan, Xiaoyong; Minnhagen, Petter
2015-01-01
The word-frequency distribution of a text written by an author is well accounted for by a maximum entropy distribution, the RGF (random group formation)-prediction. The RGF-distribution is completely determined by the a priori values of the total number of words in the text (M), the number of distinct words (N) and the number of repetitions of the most common word (k(max)). It is here shown that this maximum entropy prediction also describes a text written in Chinese characters. In particular it is shown that although the same Chinese text written in words and Chinese characters have quite differently shaped distributions, they are nevertheless both well predicted by their respective three a priori characteristic values. It is pointed out that this is analogous to the change in the shape of the distribution when translating a given text to another language. Another consequence of the RGF-prediction is that taking a part of a long text will change the input parameters (M, N, k(max)) and consequently also the shape of the frequency distribution. This is explicitly confirmed for texts written in Chinese characters. Since the RGF-prediction has no system-specific information beyond the three a priori values (M, N, k(max)), any specific language characteristic has to be sought in systematic deviations from the RGF-prediction and the measured frequencies. One such systematic deviation is identified and, through a statistical information theoretical argument and an extended RGF-model, it is proposed that this deviation is caused by multiple meanings of Chinese characters. The effect is stronger for Chinese characters than for Chinese words. The relation between Zipf's law, the Simon-model for texts and the present results are discussed.
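The three a priori values that fully determine the RGF prediction (M, N, k_max) are directly computable from any tokenized text. A minimal sketch using a toy sentence in place of a real corpus:

```python
# Extracting the three a priori RGF inputs (M, N, k_max) from a text --
# a minimal sketch using a toy sentence in place of a real corpus.
from collections import Counter

def rgf_inputs(text):
    words = text.split()
    counts = Counter(words)
    M = len(words)                # total number of words
    N = len(counts)               # number of distinct words
    k_max = max(counts.values())  # repetitions of the most common word
    return M, N, k_max

print(rgf_inputs("the cat sat on the mat the end"))  # (8, 6, 3)
```

Taking a part of a longer text changes all three inputs, which is why (as the abstract notes) the shape of the predicted distribution changes with text length.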
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
JIANG Tao
2008-01-01
We establish an asymptotic relation for the large-deviation probabilities of the maxima of sums of subexponential random variables centered by multiples of order statistics of i.i.d. standard uniform random variables. This extends a corresponding result of Korshunov. As an application, we generalize a result of Tang, the uniform asymptotic estimate for the finite-time ruin probability, to the whole strongly subexponential class.
Salor, Özgül [TÜBİTAK - Uzay, Power Electronics Group, METU Campus, TR 06531, Ankara (Turkey)]
2009-07-15
In this paper, a spectral correction-based algorithm for interharmonic computation is proposed, especially for cases of a highly fluctuating fundamental frequency in the power system. It has been observed and reported that fluctuating demands of some loads, such as arc furnaces, or disturbances and subsequent system transients, make the fundamental frequency of the power system deviate, and this causes non-existing interharmonics to appear in the spectrum due to the grid-effect when a standard window length is used for the entire FFT process. The proposed method uses a synthetic waveform produced at the fundamental frequency and amplitude to determine the amount of the leakage due to the grid-effect at each frequency. Then the leakage is subtracted from the original FFT of the signal to correct the frequency spectrum. It has been shown that the leakage effect caused by the fundamental frequency variation is avoided with a correction algorithm applied after the FFT, and the error in the first interharmonic computation due to frequency deviation is reduced exactly to zero if the fundamental frequency can be determined accurately. Both simulative and field data tests have been performed. The method does not require online sampling frequency or FFT window length adjustment, and it is simple to implement. (author)
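The subtraction idea can be sketched with NumPy. This toy version assumes the fundamental frequency and amplitude are estimated perfectly (the paper's method estimates them), and the frequencies and amplitudes below are illustrative, not values from the paper:

```python
# Sketch of the spectral-correction idea: synthesize the (assumed perfectly
# estimated) off-nominal fundamental, subtract its FFT from the signal's FFT,
# and read the interharmonic cleanly. Frequencies/amplitudes are illustrative.
import numpy as np

fs, n = 3200, 3200                   # 1 s rectangular window -> 1 Hz bins
t = np.arange(n) / fs
f0, a0 = 50.3, 1.0                   # drifted fundamental (off-bin -> leakage)
fi, ai = 130.0, 0.05                 # interharmonic (exactly on a bin)
signal = a0 * np.cos(2 * np.pi * f0 * t) + ai * np.cos(2 * np.pi * fi * t)

raw = np.fft.rfft(signal)
synthetic = a0 * np.cos(2 * np.pi * f0 * t)  # reconstructed fundamental
corrected = raw - np.fft.rfft(synthetic)     # leakage removed bin by bin

mag = lambda spec, k: 2 * abs(spec[k]) / n   # single-sided amplitude
print(round(mag(corrected, 130), 4))  # 0.05: interharmonic recovered
print(mag(raw, 49) > 0.01)            # True: strong leakage near 50 Hz...
print(mag(corrected, 49) < 1e-6)      # True: ...gone after correction
```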
WKB theory of large deviations in stochastic populations
Assaf, Michael; Meerson, Baruch
2017-06-01
Stochasticity can play an important role in the dynamics of biologically relevant populations. These span a broad range of scales: from intra-cellular populations of molecules to populations of cells and then to groups of plants, animals and people. Large deviations in stochastic population dynamics—such as those determining population extinction, fixation or switching between different states—are presently a focus of attention of statistical physicists. We review recent progress in applying different variants of dissipative WKB approximation (after Wentzel, Kramers and Brillouin) to this class of problems. The WKB approximation allows one to evaluate the mean time and/or probability of population extinction, fixation and switches resulting from either intrinsic (demographic) noise, or a combination of the demographic noise and environmental variations, deterministic or random. We mostly cover well-mixed populations, single and multiple, but also briefly consider populations on heterogeneous networks and spatial populations. The spatial setting also allows one to study large fluctuations of the speed of biological invasions. Finally, we briefly discuss possible directions of future work.
Optimal aggregation of noisy observations: A large deviations approach
Murayama, Tatsuto; Davis, Peter, E-mail: murayama@cslab.kecl.ntt.co.j, E-mail: davis@cslab.kecl.ntt.co.j [NTT Communication Science Laboratories, NTT Corporation, 2-4, Hikaridai, Seika-cho, Keihanna, Kyoto 619-0237 (Japan)
2010-06-01
Sensing and data aggregation tasks in distributed systems should not be considered as separate issues. The quality of collective estimation involves a fundamental tradeoff between sensing quality, which can be increased by increasing the number of sensors, and aggregation quality under a given capacity of the network, which decreases if the number of sensors is too large. In this paper, we examine a system level strategy for optimal aggregation of data from an ensemble of independent sensors. In particular, we consider large scale aggregation from very many sensors, in which case the network capacity diverges to infinity. Then, by applying the large deviations techniques, we conclude the following significant result: larger scale aggregation always outperforms smaller scale aggregation at higher noise levels, while below a critical value of noise, there exist moderate scale aggregation levels at which optimal estimation is realized. At a critical value of noise, there is an abrupt change in the behavior of a parameter characterizing the aggregation strategy, similar to a phase transition in statistical physics.
Gait Deviations in Children with Autism Spectrum Disorders: A Review
Deirdre Kindregan
2015-01-01
Full Text Available In recent years, it has become clear that children with autism spectrum disorders (ASDs) have difficulty with gross motor function and coordination, factors which influence gait. Knowledge of gait abnormalities may be useful for assessment and treatment planning. This paper reviews the literature assessing gait deviations in children with ASD. Five online databases were searched using the keywords “gait” and “autism,” and 11 studies were found which examined gait in childhood ASD. Children with ASD tend to augment their walking stability with a reduced stride length, increased step width and therefore wider base of support, and increased time in the stance phase. Children with ASD have reduced range of motion at the ankle and knee during gait, with increased hip flexion. Decreased peak hip flexor and ankle plantar flexor moments in children with ASD may imply weakness around these joints, which is further exhibited by a reduction in ground reaction forces at toe-off in children with ASD. Children with ASD have altered gait patterns compared with healthy controls, a widened base of support, and reduced range of motion. Several studies refer to cerebellar and basal ganglia involvement, as the patterns described suggest alterations in those areas of the brain. Further research should compare children with ASD to other clinical groups to improve assessment and treatment planning.
Geometry of River Networks; 1, Scaling, Fluctuations, and Deviations
Dodds, P S; Dodds, Peter Sheridan; Rothman, Daniel H.
2000-01-01
This article is the first in a series of three papers investigating the detailed geometry of river networks. Large-scale river networks mark an important class of two-dimensional branching networks, being not only of intrinsic interest but also a pervasive natural phenomenon. In the description of river network structure, scaling laws are uniformly observed. Reported values of scaling exponents vary, suggesting that no unique set of scaling exponents exists. To improve this current understanding of scaling in river networks and to provide a fuller description of branching network structure, we report here a theoretical and empirical study of fluctuations about and deviations from scaling. We examine data for continent-scale river networks such as the Mississippi and the Amazon and draw inspiration from a simple model of directed, random networks. We center our investigations on the scaling of the length of a sub-basin's dominant stream with its area, a characterization of basin shape known as Hack's law. We gene...
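Hack's law states that dominant-stream length scales with basin area as l ~ a^h, so the exponent h is the slope of a log-log regression. A minimal sketch on noise-free synthetic basins, with h = 0.57 chosen as an illustrative value in the range of commonly reported Hack exponents (not a value from this paper):

```python
# Fitting the Hack exponent h (stream length ~ area^h) by least squares in
# log-log space -- a sketch on noise-free synthetic basins. h = 0.57 is an
# illustrative choice, not a value from the paper.
import math

areas = [10 ** (2 + 0.5 * i) for i in range(10)]  # basin areas
lengths = [1.4 * a ** 0.57 for a in areas]        # Hack's law, exact

xs = [math.log(a) for a in areas]
ys = [math.log(l) for l in lengths]
m = len(xs)
mx, my = sum(xs) / m, sum(ys) / m
h = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
print(round(h, 6))  # 0.57: the exponent is recovered exactly
```

With real basin data the scatter about this fit is exactly the kind of fluctuation the paper studies.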
Large deviations of ergodic counting processes: a statistical mechanics approach.
Budini, Adrián A
2011-07-01
The large-deviation method allows one to characterize an ergodic counting process in terms of a thermodynamic frame where a free energy function determines the asymptotic nonstationary statistical properties of its fluctuations. Here we study this formalism through a statistical mechanics approach, that is, with an auxiliary counting process that maximizes an entropy function associated with the thermodynamic potential. We show that the realizations of this auxiliary process can be obtained after applying a conditional measurement scheme to the original ones, providing in this way an alternative measurement interpretation of the thermodynamic approach. General results are obtained for renewal counting processes, that is, those where the time intervals between consecutive events are independent and defined by a unique waiting time distribution. The underlying statistical mechanics is controlled by the same waiting time distribution, rescaled by an exponential decay measured by the free energy function. Scale invariance, shift closure, and intermittency phenomena are obtained and interpreted in this context. Similar conclusions apply for nonrenewal processes when the memory between successive events is induced by a stochastic waiting time distribution.
Testing large-angle deviation from Gaussianity in CMB maps
Bernui, A; Teixeira, A F F
2010-01-01
A detection of the level of non-Gaussianity in the CMB data is essential to discriminate among inflationary models and also to test alternative primordial scenarios. However, the extraction of primordial non-Gaussianity is a difficult endeavor since several effects of non-primordial nature can produce non-Gaussianity. On the other hand, different statistical tools can in principle provide information about distinct forms of non-Gaussianity. Thus, any single statistical estimator cannot be sensitive to all possible forms of non-Gaussianity. In this context, to shed some light in the potential sources of deviation from Gaussianity in CMB data it is important to use different statistical indicators. In a recent paper we proposed two new large-angle non-Gaussianity indicators which provide measures of the departure from Gaussianity on large angular scales. We used these indicators to carry out analyses of non-Gaussianity of the bands and of the foreground-reduced WMAP maps with and without the KQ75 mask. Here we ...
Characterizing pathological deviations from normality using constrained manifold-learning.
Duchateau, Nicolas; De Craene, Mathieu; Piella, Gemma; Frangi, Alejandro F
2011-01-01
We propose a technique to represent a pathological pattern as a deviation from normality along a manifold structure. Each subject is represented by a map of local motion abnormalities, obtained from a statistical atlas of motion built from a healthy population. The algorithm learns a manifold from a set of patients with varying degrees of the same pathology. The approach extends recent manifold-learning techniques by constraining the manifold to pass by a physiologically meaningful origin representing a normal motion pattern. Individuals are compared to the manifold population through a distance that combines a mapping to the manifold and the path along the manifold to reach its origin. The method is applied in the context of cardiac resynchronization therapy (CRT), focusing on a specific motion pattern of intra-ventricular dyssynchrony called septal flash (SF). We estimate the manifold from 50 CRT candidates with SF and test it on 38 CRT candidates and 21 healthy volunteers. Experiments highlight the need of nonlinear techniques to learn the studied data, and the relevance of the computed distance for comparing individuals to a specific pathological pattern.
Inertial Manifold and Large Deviations Approach to Reduced PDE Dynamics
Cardin, Franco; Favretti, Marco; Lovison, Alberto
2017-09-01
In this paper a certain type of reaction-diffusion equation—similar to the Allen-Cahn equation—is the starting point for setting up a genuine thermodynamic reduction, i.e. one involving a finite number of parameters or collective variables of the initial system. We first operate a finite Lyapunov-Schmidt reduction of the cited reaction-diffusion equation when reformulated as a variational problem. In this way we gain a finite-dimensional ODE description of the initial system which preserves the gradient structure of the original one and that is exact for the static case and only approximate for the dynamic case. Our main concern is how to deal with this approximate reduced description of the initial PDE. To start with, we note that our approximate reduced ODE is similar to the approximate inertial manifold introduced by Temam and coworkers for Navier-Stokes equations. As a second approach, we take into account the uncertainty (loss of information) introduced with the above mentioned approximate reduction by considering the stochastic version of the ODE. We study this reduced stochastic system using classical tools from large deviations, viscosity solutions and weak KAM Hamilton-Jacobi theory. In the last part we suggest a possible use of a result of our approach in the comprehensive treatment of non-equilibrium thermodynamics given by Macroscopic Fluctuation Theory.
Spatio-temporal observations of tertiary ozone maximum
V. F. Sofieva
2009-03-01
Full Text Available We present spatio-temporal distributions of the tertiary ozone maximum (TOM), based on GOMOS (Global Ozone Monitoring by Occultation of Stars) ozone measurements in 2002–2006. The tertiary ozone maximum is typically observed in the high-latitude winter mesosphere at an altitude of ~72 km. Although the explanation for this phenomenon has been found recently – low concentrations of odd-hydrogen cause the subsequent decrease in odd-oxygen losses – models have had significant deviations from existing observations until recently. Good coverage of polar night regions by GOMOS data has allowed, for the first time, obtaining spatial and temporal observational distributions of night-time ozone mixing ratio in the mesosphere.
The distributions obtained from GOMOS data have specific features, which are variable from year to year. In particular, due to the long lifetime of ozone in polar night conditions, the downward transport of polar air by the meridional circulation is clearly observed in the tertiary ozone maximum time series. Although the maximum tertiary ozone mixing ratio is achieved close to the polar night terminator (as predicted by the theory), TOM can also be observed at very high latitudes, not only at the beginning and end, but also in the middle of winter. We have compared the observational spatio-temporal distributions of the tertiary ozone maximum with those obtained using WACCM (Whole Atmosphere Community Climate Model) and found that the specific features are reproduced satisfactorily by the model.
Since ozone in the mesosphere is very sensitive to HO_{x} concentrations, energetic particle precipitation can significantly modify the shape of the ozone profiles. In particular, GOMOS observations have shown that the tertiary ozone maximum was temporarily destroyed during the January 2005 and December 2006 solar proton events as a result of the HO_{x} enhancement from the increased ionization.
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
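The background-only versus background-plus-source comparison described above reduces to a Poisson likelihood-ratio computation. The sketch below is generic arithmetic on toy counts, not the Sherpa API, and it assumes a known background rate and a single free source amplitude:

```python
# Generic sketch of the two-hypothesis Poisson comparison (background-only
# vs background-plus-source). Not the Sherpa API -- just the likelihood
# arithmetic on toy counts, with an assumed-known background rate.
import math

def poisson_loglike(counts, model):
    # log L = sum_i [ d_i*ln(m_i) - m_i - ln(d_i!) ]
    return sum(d * math.log(m) - m - math.lgamma(d + 1)
               for d, m in zip(counts, model))

counts = [2, 3, 12, 1, 2]        # toy pixel counts, source in the middle
b = 2.0                          # assumed-known background rate per pixel

bkg_only = [b] * len(counts)
s_hat = max(counts[2] - b, 0.0)  # crude MLE of the source amplitude
bkg_plus_src = [b, b, b + s_hat, b, b]

ts = 2 * (poisson_loglike(counts, bkg_plus_src)
          - poisson_loglike(counts, bkg_only))
print(ts > 20)  # True: a large test statistic favors the source hypothesis
```

In the real tool the source model is a PSF-convolved Gaussian fit simultaneously across stacked observations, but the decision statistic has this same likelihood-ratio form.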
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Full Text Available Abstract Background Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
Closed form maximum likelihood estimator of conditional random fields
Zhu, Zhemin; Hiemstra, Djoerd; Apers, Peter M.G.; Wombacher, Andreas
2013-01-01
Training Conditional Random Fields (CRFs) can be very slow for big data. In this paper, we present a new training method for CRFs called {\\em Empirical Training}, which is motivated by the concept of co-occurrence rate. We show that the standard training (unregularized) can have many maximum likelihood...
Maximum likelihood estimation of the attenuated ultrasound pulse
Rasmussen, Klaus Bolding
1994-01-01
The attenuated ultrasound pulse is divided into two parts: a stationary basic pulse and a nonstationary attenuation pulse. A standard ARMA model is used for the basic pulse, and a nonstandard ARMA model is derived for the attenuation pulse. The maximum likelihood estimator of the attenuated...
Back in the saddle: Large-deviation statistics of the cosmic log-density field
Uhlemann, Cora; Pichon, Christophe; Bernardeau, Francis; Reimberg, Paulo
2015-01-01
We present a first principle approach to obtain analytical predictions for spherically-averaged cosmic densities in the mildly non-linear regime that go well beyond what is usually achieved by standard perturbation theory. A large deviation principle allows us to compute the leading-order cumulants of average densities in concentric cells. In this symmetry, the spherical collapse model leads to cumulant generating functions that are robust for finite variances and free of critical points when logarithmic density transformations are implemented. They yield in turn accurate density probability distribution functions (PDFs) from a straightforward saddle-point approximation valid for all density values. Based on this easy-to-implement modification, explicit analytic formulas for the evaluation of the one- and two-cell PDF are provided. The theoretical predictions obtained for the PDFs are accurate to a few percent compared to the numerical integration, regardless of the density under consideration and in excellent...
Classical diffusion and quantum level velocities: systematic deviations from random matrix theory.
Lakshminarayan, A; Cerruti, N R; Tomsovic, S
1999-10-01
We study the response of the quasienergy levels in the context of quantized chaotic systems through the level velocity variance and relate them to classical diffusion coefficients using detailed semiclassical analysis. The systematic deviations from random matrix theory, assuming independence of eigenvectors from eigenvalues, are shown to be connected to classical higher-order time correlations of the chaotic system. We study the standard map as a specific example, and thus the well-known oscillatory behavior of the diffusion coefficient with respect to the parameter is reflected exactly in the oscillations of the variance of the level velocities. We study the case of mixed phase-space dynamics as well and note a transition in the scaling properties of the variance that occurs along with the classical transition to chaos.
Search for the Standard Model Scalar Decaying to Fermions at CMS
Dutta, Valentina
2013-01-01
The latest results of the search for the standard model scalar boson in fermionic decay channels at the CMS experiment are presented. The dataset used corresponds to an integrated luminosity of 5 $fb^{-1}$ of proton-proton collision data collected at $\\sqrt{s}$ = 7 TeV and up to 19.4 $fb^{-1}$ collected at $\\sqrt{s}$ = 8 TeV. The analyses described include the searches for the standard model scalar decaying to tau pairs and to a pair of b-quarks. In the tau-pair final state, an excess of events is observed over a broad range of SM scalar mass hypotheses, with a maximum local significance of 2.93 standard deviations at $m_H$ = 120 GeV. The excess is compatible with the presence of a standard model scalar boson of mass 125 GeV.
B. N. Patel, S. S. Jaiwar, N. A. Patel, V. R. Akbari and P. B. Dave
2014-12-01
A line x tester analysis was undertaken to estimate the magnitude of heterosis and dominance deviation in Gossypium hirsutum L. for yield, its components, and other characters in 60 test entries (44 F1s along with 15 parents and 1 standard check hybrid). Analysis of variance indicated significant differences among the parents and hybrids for all 12 characters studied, which revealed the existence of variability among the genotypes. Out of 44 cross combinations, only 3 hybrids, viz., BC-68-2 x MCU 11, BC-68-2 x AC 738 and BN 1 x Reba-B-50, depicted significant and positive heterosis over the standard check hybrid G. Cot. Hy. 12. The hybrid BC-68-2 x MCU 11 exhibited significant positive standard heterosis for seed cotton yield per plant and other attributing characters, i.e. total number of bolls per plant, average boll weight, lint yield per plant, and lint index. The mean values of potence ratio in all twelve characters suggested that the degree of dominance was governed by over-dominance genes for the expression of all the characters under study.
Outcomes of minimally invasive strabismus surgery for horizontal deviation.
Merino, P; Blanco Domínguez, I; Gómez de Liaño, P
2016-02-01
To study the outcomes of minimally invasive strabismus surgery (MISS) for treating horizontal deviation. A case series of the first 26 consecutive patients operated on using the MISS technique in our hospital from February 2010 to March 2014. A total of 40 eyes were included: 26 patients (mean age: 7.7 ± 4.9 years; 34.61% male). A total of 43 muscles were operated on: 20 medial and 23 lateral recti; 28 recessions (range: 3-7.5 mm), 6 resections (6-7 mm), and 9 plications (6.5-7.5 mm) were performed. No significant difference was found (P>0.05) for visual acuity at postoperative day 1 and 6 months after surgery. Hyperaemia was mild in 29.27%, moderate in 48.78%, and severe in 21.95% at postoperative day 1, and in 63.41%, 31.70%, and 4.87%, respectively, at 4 days after surgery. The complications observed were 4 intraoperative conjunctival haemorrhages, 1 scleral perforation, and 2 Tenon's prolapses. A conversion from MISS to a fornix approach was necessary in 1 patient because of poor visualization. The operating time decreased from 30 to 15 minutes. The MISS technique has obtained good results in horizontal strabismus surgery. The conjunctival inflammation was mild in most of the cases at postoperative day 4. The visual acuity was stable during follow-up, and operating time decreased over a 4-year learning curve. Copyright © 2015 Sociedad Española de Oftalmología. Published by Elsevier España, S.L.U. All rights reserved.
Large-deviation statistics of vorticity stretching in isotropic turbulence.
Johnson, Perry L; Meneveau, Charles
2016-03-01
A key feature of three-dimensional fluid turbulence is the stretching and realignment of vorticity by the action of the strain rate. It is shown in this paper, using the cumulant-generating function, that the cumulative vorticity stretching along a Lagrangian path in isotropic turbulence obeys a large deviation principle. As a result, the relevant statistics can be described by the vorticity stretching Cramér function. This function is computed from a direct numerical simulation data set at a Taylor-scale Reynolds number of Re(λ)=433 and compared to those of the finite-time Lyapunov exponents (FTLE) for material deformation. As expected, the mean cumulative vorticity stretching is slightly less than that of the most-stretched material line (largest FTLE), due to the vorticity's preferential alignment with the second-largest eigenvalue of strain rate and the material line's preferential alignment with the largest eigenvalue. However, the vorticity stretching tends to be significantly larger than the second-largest FTLE, and the Cramér functions reveal that the statistics of vorticity stretching fluctuations are more similar to those of the largest FTLE. In an attempt to relate the vorticity stretching statistics to the vorticity magnitude probability density function in statistically stationary conditions, a model Kramers-Moyal equation is constructed using the statistics encoded in the Cramér function. The model predicts a stretched-exponential tail for the vorticity magnitude probability density function, with good agreement for the exponent but significant difference (35%) in the prefactor.
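The Cramér function mentioned here is the Legendre transform of the scaled cumulant generating function (SCGF). A toy sketch of the construction on synthetic Gaussian "stretching rate" samples (all parameter values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10.0                                         # averaging time
samples = rng.normal(0.12, 0.05, 20000)          # synthetic stretching rates

def scgf(q):
    """Empirical scaled cumulant generating function lambda(q),
    computed with log-mean-exp for numerical stability."""
    a = q * T * samples
    m = a.max()
    return (m + np.log(np.mean(np.exp(a - m)))) / T

qs = np.linspace(-20.0, 20.0, 401)
lam = np.array([scgf(q) for q in qs])

def cramer(x):
    """Legendre transform I(x) = sup_q [q x - lambda(q)] over the q grid."""
    return np.max(qs * x - lam)

print(cramer(samples.mean()))   # the rate function vanishes at the mean
print(cramer(0.3))              # and grows for atypical stretching rates
```

In the paper the analogous object is estimated from Lagrangian stretching histories in direct numerical simulation rather than from a Gaussian surrogate.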
Symphysis-fundal height curve in the diagnosis of fetal growth deviations
Djacyr Magna Cabral Freire
2010-12-01
OBJECTIVE: To validate a new symphysis-fundal curve for screening fetal growth deviations and to compare its performance with the standard curve adopted by the Brazilian Ministry of Health. METHODS: Observational study including a total of 753 low-risk pregnant women with gestational age above 27 weeks, from March to October 2006, in the city of João Pessoa, Northeastern Brazil. Symphysis-fundal height was measured using the standard technique recommended by the Brazilian Ministry of Health. Estimated fetal weight assessed through ultrasound using the Brazilian fetal weight chart for gestational age was the gold standard. In a subsample of 122 women, neonatal weight measured up to seven days after the estimated fetal weight measurement was used as the gold standard, and the symphysis-fundal classification was compared with the Lubchenco growth reference curve. Sensitivity, specificity, and positive and negative predictive values were calculated. The McNemar χ2 test was used for comparing the sensitivity of the two symphysis-fundal curves studied. RESULTS: The sensitivity of the new curve for detecting small-for-gestational-age fetuses was 51.6%, while that of the Brazilian Ministry of Health reference curve was significantly lower (12.5%). In the subsample using neonatal weight as the gold standard, the sensitivity of the new reference curve was 85.7%, while that of the Brazilian Ministry of Health curve was 42.9% for detecting small for gestational age. CONCLUSIONS: The diagnostic performance of the new curve for detecting small-for-gestational-age fetuses was significantly higher than that of the Brazilian Ministry of Health reference curve.
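The McNemar χ2 comparison of two paired sensitivities uses only the discordant pairs. A small sketch with hypothetical counts (not the study's data):

```python
def sensitivity(tp, fn):
    """True-positive rate."""
    return tp / (tp + fn)

def mcnemar_chi2(b, c):
    """McNemar chi-square with continuity correction, where b and c are the
    discordant pair counts (cases flagged by one test but not the other)."""
    return (abs(b - c) - 1) ** 2 / (b + c)

# hypothetical paired screening counts on the same SGA fetuses:
# b = flagged only by the new curve, c = flagged only by the reference curve
b, c = 14, 2
print(mcnemar_chi2(b, c))   # compare with the chi-square(1 df) cutoff 3.84
```

Because both curves classify the same fetuses, the paired McNemar test is the appropriate comparison rather than an unpaired test of two proportions.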
[Study on the correlation between chronic sinusitis with nasal septum deviation].
Ji, Xiaoqing; Fu, Hongjuan; Song, Aiqin
2015-06-01
To study the correlation between chronic sinusitis and nasal septum deviation, 722 patients with coronal sinus CT were randomly selected, and the numbers of cases of nasal septum deviation, of nasal septum deviation with chronic sinusitis, and of sinusitis on the wide and narrow sides of the deviation were counted, together with the number of sinusitis cases without deviation; paired tests were applied. The incidence of sinusitis with and without nasal septum deviation was 54.13% and 44.66%, respectively, a statistically significant difference (P<0.05). The incidence of sinusitis on the wide and narrow sides of the deviation was 31.65% and 32.12%, with no significant difference between the two groups (P>0.01). The incidence of sinusitis with high deviation and without high deviation was 59.54% and 46.97%, a statistically significant difference (P<0.05). The number of sinusitis cases on the wide side was 54 and on the narrow side 66, with no significant difference between the two groups (P>0.05). Deviation of the nasal septum is associated with the formation of chronic sinusitis; high deviation is more prone to sinusitis, while the incidence of sinusitis did not differ between the two sides of the deviation.
丁小霞; 李培武; 周海燕; 白艺珍; 印南日
2011-01-01
Based on a 2009-2010 nationwide survey of aflatoxin contamination in post-harvest peanuts in China, the cancer risk to Chinese populations from direct consumption of peanuts was estimated under the currently existing maximum levels (MLs) in China, CAC (Codex Alimentarius Commission), the EU, Japan, and Australia and New Zealand, and different scenarios were simulated by the Monte Carlo method. The results implied that there were no significant differences in the yearly incidence of primary hepatocellular carcinoma of Chinese populations under the different MLs for peanuts and peanut products, whereas significant differences were found in the influence of the different MLs on economic interests and the development of the peanut industry. This study provides a basis for setting ML standards for aflatoxins in China and for the classification of peanut products, and offers a technical reference for promoting peanut production and trade.
Drinking Water Maximum Contaminant Levels (MCLs)
U.S. Environmental Protection Agency — National Primary Drinking Water Regulations (NPDWRs or primary standards) are legally enforceable standards that apply to public water systems. Primary standards...
Li, Yongqiang; Hsi, Wen C.
2017-04-01
To analyze measurement deviations of patient-specific quality assurance (QA) using intensity-modulated spot-scanning particle beams, a commercial radiation dosimeter using 24 pinpoint ionization chambers was utilized. Before the clinical trial, validations of the radiation dosimeter and treatment planning system were conducted. During the clinical trial 165 measurements were performed on 36 enrolled patients. Two or three fields of particle beam were used for each patient. Measurements were typically performed with the dosimeter placed at special regions of dose distribution along depth and lateral profiles. In order to investigate the dosimeter accuracy, repeated measurements with uniform dose irradiations were also carried out. A two-step approach was proposed to analyze 24 sampling points over a 3D treatment volume. The mean value and the standard deviation of each measurement did not exceed 5% for all measurements performed on patients with various diseases. According to the defined intervention thresholds of mean deviation and the distance-to-agreement concept with a Gamma index analysis using criteria of 3.0% and 2 mm, a decision could be made regarding whether the dose distribution was acceptable for the patient. Based on the measurement results, a deviation analysis was carried out. In this study, the dosimeter was used for dose verification and provided a safety guard to assure precise dose delivery of highly modulated particle therapy. Patient-specific QA will be investigated in future clinical operations.
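The Gamma-index criterion of 3.0% and 2 mm combines a dose tolerance with a distance-to-agreement tolerance. A one-dimensional sketch of the idea (toy profiles, not clinical data):

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_tol=0.03, dist_tol=2.0):
    """1D global gamma index: for each evaluated point, minimise the combined
    dose-difference / distance-to-agreement metric over all reference points.
    dose_tol is a fraction of the reference maximum; dist_tol is in mm."""
    dmax = d_ref.max()
    gammas = []
    for xe, de in zip(x_eval, d_eval):
        dd = (d_ref - de) / (dose_tol * dmax)
        dx = (x_ref - xe) / dist_tol
        gammas.append(np.sqrt(dx ** 2 + dd ** 2).min())
    return np.array(gammas)

x = np.linspace(0.0, 100.0, 201)                  # positions in mm
ref = np.exp(-((x - 50.0) / 15.0) ** 2)           # toy reference dose profile
meas = 1.01 * np.exp(-((x - 50.5) / 15.0) ** 2)   # 1% high, shifted by 0.5 mm
g = gamma_1d(x, ref, x, meas)
print((g <= 1.0).mean())                          # pass rate under 3%/2 mm
```

A point passes when gamma is at most 1, i.e. when it agrees with some reference point within the combined dose and distance budget.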
Moderate deviations for the quenched mean of the super-Brownian motion with random immigration
2008-01-01
Moderate deviations for the quenched mean of the super-Brownian motion with random immigration are proved for 3≤d≤6, which fills in the gap between the central limit theorem (CLT) and the large deviation principle (LDP).
MODERATE DEVIATIONS FROM HYDRODYNAMIC LIMIT OF A GINZBURG-LANDAU MODEL
[Anonymous]
2006-01-01
The authors consider the moderate deviations of hydrodynamic limit for Ginzburg-Landau models. The moderate deviation principle of hydrodynamic limit for a specific Ginzburg-Landau model is obtained and an explicit formula of the rate function is derived.
Outcomes of Surgical Treatment in Cases of Dissociated Vertical Deviation
Serpil Akar
2014-03-01
Objectives: To investigate the results of different surgical techniques for treating cases of dissociated vertical deviation (DVD). Materials and Methods: A retrospective review of medical records was performed, including 94 eyes of 47 patients who had undergone bilateral superior rectus (SR) recessions (Group 1), bilateral SR recession with posterior fixation sutures (Group 2), or bilateral inferior oblique (IO) anterior transposition surgery (Group 3) for treatment of DVD. Nineteen patients underwent secondary procedures (SR weakening or IO anterior transposition) because of unsatisfactory results. The amount of DVD in primary position before and after surgery, postoperative success ratios, and probable complications were evaluated. The Wilcoxon signed ranks test and chi-squared test were used for statistical evaluations. Results: In 69% of the 32 eyes in Group 1, 65% of the 20 eyes in Group 2, and 79% of the 42 eyes in Group 3, satisfactory control of the DVD in primary position was achieved. All eyes undergoing both SR weakening and IO anterior transposition had a residual DVD of less than 5 prism diopters (pd). Of the total of 94 eyes, in 26 (89.6%) of the 29 eyes that had a preoperative DVD angle of more than 15 pd (ten eyes from Group 1, seven from Group 2, and nine from Group 3), the residual DVD angle after surgery was more than 5 pd. However, in the 65 eyes with preoperative DVD of 15 pd or less (21 from Group 1, 12 from Group 2, and 32 from Group 3), the residual DVD angle after the operation was less than 5 pd. Two eyes of 2 patients had -1 limitation of elevation after surgery. Conclusion: IO anterior transposition or SR weakening surgery alone appears to be a successful surgical approach in the management of patients with mild- and moderate-angle (≤15 pd) DVD. Weakening both the SR and IO muscles yields greater success in the management of patients with large-angle (>15 pd) DVD. (Turk J Ophthalmol 2014; 44: 132-7)
[Anonymous]
2010-01-01
This paper studies the moderate deviations of real-valued extended negatively dependent (END) random variables with consistently varying tails. The moderate deviations of partial sums are first given. The results are then used to establish the necessary and sufficient conditions for the moderate deviations of random sums under certain circumstances.
Parameterization for Neutrino Mixing Matrix with Deviated Unitarity
LU Lei; WANG Wen-Yu; XIONG Zhao-Hua
2009-01-01
Neutrino oscillation experiments provide the first evidence of non-zero neutrino masses and indicate new physics beyond the standard model. With Majorana neutrinos introduced to acquire tiny neutrino masses, this leads to the existence of more than three neutrino species, implying that the ordinary neutrino mixing matrix is only a part of the whole extended unitary mixing matrix and thus is no longer unitary. We give a parameterization for a non-unitary neutrino mixing matrix under the seesaw framework and further present a method to test the unitarity of the ordinary neutrino mixing matrix.
Influence of Pareto optimality on the maximum entropy methods
Peddavarapu, Sreehari; Sunil, Gujjalapudi Venkata Sai; Raghuraman, S.
2017-07-01
Galerkin meshfree schemes are emerging as a viable substitute for the finite element method in solving partial differential equations for large-deformation as well as crack-propagation problems. The introduction of the Shannon-Jaynes entropy principle into scattered-data approximation has changed how the approximation functions are defined, resulting in maximum entropy approximants. In addition, an objective functional that controls the degree of locality results in local maximum entropy approximants. These are based on an information-theoretical Pareto optimality between entropy and degree of locality that defines the basis functions on the scattered nodes. The degree of locality in turn relies on the choice of the locality parameter and the prior (weight) function, and the proper choice of both plays a vital role in attaining the desired accuracy. The present work focuses on the effect of the locality parameter, which defines the degree of locality, and of the priors (Gaussian, cubic spline, and quartic spline functions) on the behavior of local maximum entropy approximants.
Abraha, Iosief; Cozzolino, Francesco; Orso, Massimiliano; Marchesi, Mauro; Germani, Antonella; Lombardo, Guido; Eusebi, Paolo; De Florio, Rita; Luchetta, Maria Laura; Iorio, Alfonso; Montedori, Alessandro
2017-04-01
To describe the characteristics, and estimate the incidence, of trials included in systematic reviews deviating from the intention-to-treat (ITT) principle. A 5% random sample of reviews was selected (Medline 2006-2010). Trials from the reviews were classified based on ITT handling: (1) ITT trials (reporting standard ITT analyses); (2) modified ITT (mITT) trials (deviating from standard ITT); or (3) no ITT trials. Of 222 reviews, 81 (36%) included at least one mITT trial. Reviews with mITT trials were more likely to contain trials that used placebo, that investigated drugs, and that reported favorable results. The incidence of reviews with mITT trials ranged from 29% (17/58) to 48% (23/48). Of the 2,349 trials, 597 (25.4%) were classified as ITT trials, 323 (13.8%) as mITT trials, and 1,429 (60.8%) as no ITT trials. The mITT trials were more likely to have reported exclusions compared to studies classified as ITT trials, and to have received funding. The reporting of the type of ITT may differ according to the clinical area and the type of intervention. Deviation from ITT in randomized controlled trials is a widespread phenomenon that significantly affects systematic reviews. Copyright © 2017 Elsevier Inc. All rights reserved.
CORRECTION OF THE SPINAL DEVIATIONS USING THE CYRIAX METHOD
Gabriela OCHIANĂ
2011-07-01
Spinal deviations affect musculoskeletal and other functions (respiration, circulation, digestion, metabolic exchanges, etc.). In this study, two aspects were checked: to what extent the specific techniques of the Cyriax method are an effective means of treating problems caused by lesions of the intervertebral disc (correcting such deficiencies of the column), and whether the prophylactic compliance measures within this method can prevent disc lesions. Research was conducted on 6 subjects with deviations of the spine in the sagittal plane (kyphosis and lordosis) over a period of 4 months. To assess and compare the results, the Cyriax evaluation form, mobility tests, and a pain scale were used. From the Cyriax method, deep transverse massage and manipulation techniques were applied at the level of the spine and the extremities. The results, materialized in the disappearance of pain and the correction of the deviation, confirm that the techniques of the Cyriax method are an appropriate strategy for treatment and prophylaxis of spinal disorders. Even if some people have incorrect positions of the spinal column (after the age of 25), the sign that forces the patient to correct his or her position is pain. The manipulation techniques within this method must be known and applied only by people who have attended specialized courses, in order to prevent subsequent complications. For the amelioration of disc protrusions, the disappearance of pain, and the correction of spinal position, at most 10 sessions (1 session every 2-3 days) are used, and the person treated must follow the prophylaxis measures mentioned above for life.
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure the receiver function in the time domain.
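The Toeplitz system mentioned here is exactly what the Levinson(-Durbin) recursion solves to obtain the error-predicting filter. A minimal sketch on a synthetic AR(1) autocorrelation (illustrative, not the seismogram pipeline):

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations by the Levinson-Durbin recursion,
    returning the prediction-error filter a (with a[0] = 1) and the error
    power e. r[0..order] are autocorrelation lags."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    e = r[0]
    for m in range(1, order + 1):
        k = -(r[m] + np.dot(a[1:m], r[m - 1:0:-1])) / e   # reflection coefficient
        a[1:m] = a[1:m] + k * a[m - 1:0:-1]
        a[m] = k
        e *= 1.0 - k * k
    return a, e

# AR(1) process x_t = 0.8 x_{t-1} + noise has autocorrelation r_k = 0.8**k
r = 0.8 ** np.arange(4)
a, e = levinson_durbin(r, 2)
print(a, e)   # the recursion recovers the AR coefficient: a ~ [1, -0.8, 0]
```

The reflection coefficients computed at each step are the quantities whose magnitude staying below 1 keeps the deconvolution stable, as the abstract notes.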
Large deviations for Markov chains in the positive quadrant
Borovkov, A. A.; Mogul'skii, A. A.
2001-10-01
The paper deals with so-called N-partially space-homogeneous time-homogeneous Markov chains X(y,n), n=0,1,2,\dots, X(y,0)=y, in the positive quadrant \mathbb R^{2+}=\{x=(x_1,x_2):x_1\geqslant0,\ x_2\geqslant0\}. These Markov chains are characterized by the following property of the transition probabilities P(y,A)=\mathsf P(X(y,1)\in A): for some N\geqslant 0 the measure P(y,dx) depends only on x_2, y_2, and x_1-y_1 in the domain x_1>N, y_1>N, and only on x_1, y_1, and x_2-y_2 in the domain x_2>N, y_2>N. For such chains the asymptotic behaviour of \displaystyle \ln\mathsf P\Bigl(\frac 1sX(y,n)\in B\Bigr), \qquad \ln\mathsf P\bigl(X(y,n)\in x+B\bigr) is found for a fixed set B as s\to\infty, \vert x\vert\to\infty, and n\to\infty. Some other conditions on the growth of parameters are also considered, for example, \vert x-y\vert\to\infty, \vert y\vert\to\infty. A study is made of the structure of the most probable trajectories, which give the main contribution to this asymptotics, and a number of other results pertaining to the topic are established. Similar results are obtained for the narrower class of 0-partially homogeneous ergodic chains under less restrictive moment conditions on the transition probabilities P(y,dx). Moreover, exact asymptotic expressions for the probabilities \mathsf P(X(0,n)\in x+B) are found for 0-partially homogeneous ergodic chains under some additional conditions. The interest in partially homogeneous Markov chains in positive octants is due to the mathematical aspects (new and interesting problems arise in the framework of general large deviation theory) as well as applied issues, for such chains prove to be quite accurate mathematical models for numerous basic types of queueing and communication networks such as the widely known Jackson networks, polling systems, or communication networks associated with the ALOHA algorithm. There is a vast literature dealing with the analysis of these objects. The
Roy Choudhury, Kingshuk; O'Sullivan, Finbarr; Kasman, Ian; Plowman, Greg D
2012-12-20
Measurements in tumor growth experiments are stopped once the tumor volume exceeds a preset threshold: a mechanism we term volume endpoint censoring. We argue that this type of censoring is informative. Further, least squares (LS) parameter estimates are shown to suffer a bias in a general parametric model for tumor growth with an independent and identically distributed measurement error, both theoretically and in simulation experiments. In a linear growth model, the magnitude of bias in the LS growth rate estimate increases with the growth rate and the standard deviation of measurement error. We propose a conditional maximum likelihood estimation procedure, which is shown both theoretically and in simulation experiments to yield approximately unbiased parameter estimates in linear and quadratic growth models. Both LS and maximum likelihood estimators have similar variance characteristics. In simulation studies, these properties appear to extend to the case of moderately dependent measurement error. The methodology is illustrated by application to a tumor growth study for an ovarian cancer cell line.
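The bias of least-squares growth-rate estimates under volume endpoint censoring can be seen in a toy simulation of a linear growth model (all numbers are synthetic; measurement stops at the first observation exceeding the threshold). Here the bias comes out upward, consistent with the stopping observation tending to overshoot the threshold:

```python
import numpy as np

rng = np.random.default_rng(1)
true_rate, sigma, threshold = 1.0, 0.8, 8.0
t = np.arange(0.0, 15.0)                 # measurement times

rates = []
for _ in range(2000):
    # linear growth with i.i.d. Gaussian measurement error
    y = true_rate * t + rng.normal(0.0, sigma, t.size)
    over = np.nonzero(y > threshold)[0]
    stop = over[0] + 1 if over.size else t.size   # censor after first exceedance
    if stop < 3:
        continue
    slope = np.polyfit(t[:stop], y[:stop], 1)[0]  # least-squares growth rate
    rates.append(slope)

print(np.mean(rates))   # above the true rate of 1.0: the censoring is informative
```

The conditional maximum likelihood procedure proposed in the paper corrects for exactly this selection effect; the simulation above only illustrates why ordinary least squares needs the correction.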
F. M. R. Mesquita
2012-09-01
Viscosities of four binary mixtures [soybean biodiesel + diesel oil (or n-hexadecane) and coconut biodiesel + diesel oil (or n-hexadecane)] have been determined at T = (293.15, 313.15, 333.15, 353.15, 373.15) K and atmospheric pressure over the entire composition range. Experimental data were fitted to the Andrade equation, and the adjustable parameters and the standard deviations between experimental and calculated values were estimated. From the experimental data, the viscosity deviations were calculated by using the Redlich-Kister polynomial equation. The comparison between the experimental data determined in this work and four predictive methods used for the estimation of the viscosities of biodiesel fuels (based on their fatty acid composition) is discussed.
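Fitting viscosity deviations to the Redlich-Kister polynomial is a linear least-squares problem once the basis x1*x2*(x1-x2)^k is formed. A sketch with synthetic data (the component viscosities and the deviation coefficient are made-up values):

```python
import numpy as np

def redlich_kister_fit(x1, d_eta, n_terms=3):
    """Least-squares fit of deviations to the Redlich-Kister polynomial
    d_eta = x1*x2 * sum_k A_k * (x1 - x2)**k."""
    x2 = 1.0 - x1
    basis = np.column_stack([x1 * x2 * (x1 - x2) ** k for k in range(n_terms)])
    coeffs, *_ = np.linalg.lstsq(basis, d_eta, rcond=None)
    return coeffs

# synthetic example; pure-component viscosities are hypothetical, in mPa*s
x1 = np.linspace(0.05, 0.95, 10)
eta1, eta2 = 4.2, 2.9
ideal = x1 * eta1 + (1.0 - x1) * eta2        # mole-fraction-weighted baseline
eta_mix = ideal - 0.6 * x1 * (1.0 - x1)      # toy mixture viscosities
d_eta = eta_mix - ideal                      # viscosity deviation
A = redlich_kister_fit(x1, d_eta)
print(A)   # leading coefficient recovers about -0.6
```

With real data the deviation is computed from measured mixture and pure-component viscosities at each temperature before fitting.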
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current at maximum power. These quantities are found by maximizing the equation for power using differentiation. After the maximum values are found for each time of day, each quantity (voltage at maximum power, current at maximum power, and maximum power) is plotted as a function of the time of day.
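Instead of differentiating the power equation analytically, the maximum power point can also be located numerically on a model I-V curve. A sketch using a simple single-diode panel model (the parameter values are illustrative, not from the article):

```python
import numpy as np

# single-diode panel model; parameter values are illustrative only
I_L = 5.0      # light-generated current [A]
I_0 = 1e-9     # diode saturation current [A]
V_T = 1.5      # thermal voltage x ideality x series cells [V]

V = np.linspace(0.0, 35.0, 20000)
I = I_L - I_0 * (np.exp(V / V_T) - 1.0)   # panel I-V curve
P = V * I                                  # power curve

k = np.argmax(P)                           # maximum power point
print(V[k], I[k], P[k])                    # voltage, current, power at MPP
```

The same grid search, repeated per time of day as irradiance (and hence I_L) changes, reproduces the plots the article describes.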
崔永刚; 廖栩鹤; 王荣福; 范岩; 邸丽娟; 刘红洁; 赵媛
2012-01-01
[Purpose] To evaluate the correlations of the maximum standardized uptake value (SUVmax) of 18F-FDG PET/CT and the short diameters of pulmonary lesions with the pathological types of lung cancer, and to assess the feasibility of using SUVmax as an important evaluation parameter for lung cancer diagnosis. [Methods] One hundred twenty-seven cases with clinically suspected lung cancer undergoing 18F-FDG PET/CT from July 2010 to February 2012 were retrospectively reviewed. All PET/CT images were analyzed visually and semiquantitatively by 2 physicians. In each case, the SUVmax and the short diameter of the lesions were calculated from the PET/CT images. All data were analyzed by statistical software. [Results] A positive correlation between the SUVmax and the short diameter of the lesions was found in both the malignant and the benign groups. A significant difference in SUVmax between the malignant and benign groups was observed (P=0.0002), but not in the short diameters of the lesions (P=0.0938). The short diameter of the squamous cell carcinoma group was significantly different from that of the adenocarcinoma group (P=0.0059). However, there were no significant differences in SUVmax or short diameters between the non-small cell lung cancer (NSCLC) group and the small cell lung cancer group (P=0.8932 and P=0.6355, respectively). [Conclusion] The 18F-FDG PET/CT SUVmax might be used as an important parameter to differentiate malignant tumors from benign ones, contributing to the diagnosis and differential diagnosis of pulmonary lesions.
RELIGIOUS ANOMIE AS THE DEVIATION CATALYST IN THE MODERN SOCIETY
Alexander Vladislavovich Pletnev
2015-11-01
The article considers features of the influence of religion on the individual in modern society. In the twenty-first century, religion shows a weak ability to perform its functions of social control. At the same time, religion remains a major psychological factor that in many respects defines the content of the life-world of individuals. The strengthening of religion's influence as a psychological factor makes it possible to study the social consequences of this influence. While classical sociological theory considers religion a factor that unconditionally constrains anomie, in modern conditions religion has rather the opposite effect. The Christian religion makes the highest, unrealizable demands on the individual. As a result, individuals strongly subject to the influence of Christian values feel unable to live up to the Christian model of ideal human behavior. The variety of religions in modern Western-type society and the conduct of interreligious dialogue are another reason for the development of religious anomie: a consequence of this dialogue is the mutual erosion of the value and normative bases of each religion. In addition, modern society is characterized by essential differences in how individuals understand the norms and principles of the religion they adhere to. As for the change in the functioning of religion as a social institution, an increasing reorientation of the religious institutions of Western society toward market goals and values is observed. These processes will, in general, lead to an increase in deviant behavior due to the development of religious anomie.
Propane spectral resolution enhancement by the maximum entropy method
Bonavito, N. L.; Stewart, K. P.; Hurley, E. J.; Yeh, K. C.; Inguva, R.
1990-01-01
The Burg algorithm for maximum entropy power spectral density estimation is applied to a time series of data obtained from a Michelson interferometer and compared with a standard FFT estimate for resolution capability. The propane transmittance spectrum was estimated by use of the FFT with a 2^18 data sample interferogram, giving a maximum unapodized resolution of 0.06/cm. This estimate was then interpolated by zero filling an additional 2^18 points, and the final resolution was taken to be 0.06/cm. Comparison of the maximum entropy method (MEM) estimate with the FFT was made over a 45/cm region of the spectrum for several increasing record lengths of interferogram data beginning at 2^10. It is found that over this region the MEM estimate with 2^16 data samples is in close agreement with the FFT estimate using 2^18 samples.
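The Burg recursion estimates AR coefficients directly from the data by minimizing forward and backward prediction errors; the maximum entropy PSD then follows from the fitted AR model. A compact sketch on a synthetic tone (not interferometer data):

```python
import numpy as np

def burg(x, order):
    """Burg's method: AR coefficients a (a[0] = 1) and prediction error power E."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    f = x.copy()          # forward prediction errors
    b = x.copy()          # backward prediction errors
    a = np.array([1.0])
    E = np.dot(x, x) / N
    for m in range(order):
        num = -2.0 * np.dot(f[m + 1:], b[m:N - 1])
        den = np.dot(f[m + 1:], f[m + 1:]) + np.dot(b[m:N - 1], b[m:N - 1])
        k = num / den                       # reflection coefficient, |k| < 1
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
        f_old = f.copy()
        f[m + 1:] = f_old[m + 1:] + k * b[m:N - 1]
        b[m + 1:N] = b[m:N - 1] + k * f_old[m + 1:]
        E *= 1.0 - k * k
    return a, E

def ar_psd(a, E, freqs):
    """Power spectral density of the fitted AR model at normalized frequencies."""
    m = np.arange(len(a))
    return np.array([E / abs(np.sum(a * np.exp(-2j * np.pi * f * m))) ** 2
                     for f in freqs])

rng = np.random.default_rng(2)
n = np.arange(512)
sig = np.sin(2 * np.pi * 0.2 * n) + 0.1 * rng.normal(size=n.size)
a, E = burg(sig, 8)
freqs = np.linspace(0.0, 0.5, 501)
psd = ar_psd(a, E, freqs)
print(freqs[np.argmax(psd)])   # sharp peak near the true frequency 0.2
```

The sharp AR peak from a short record is the resolution advantage over the FFT that the abstract reports.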
Quality, precision and accuracy of the maximum No. 40 anemometer
Obermeir, J. [Otech Engineering, Davis, CA (United States); Blittersdorf, D. [NRG Systems Inc., Hinesburg, VT (United States)
1996-12-31
This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.
Payoff-monotonic game dynamics and the maximum clique problem.
Pelillo, Marcello; Torsello, Andrea
2006-05-01
Evolutionary game-theoretic models and, in particular, the so-called replicator equations have recently proven to be remarkably effective at approximately solving the maximum clique and related problems. The approach is centered around a classic result from graph theory that formulates the maximum clique problem as a standard (continuous) quadratic program and exploits the dynamical properties of these models, which, under a certain symmetry assumption, possess a Lyapunov function. In this letter, we generalize previous work along these lines in several respects. We introduce a wide family of game-dynamic equations known as payoff-monotonic dynamics, of which replicator dynamics are a special instance, and show that they enjoy precisely the same dynamical properties as standard replicator equations. These properties make any member of this family a potential heuristic for solving standard quadratic programs and, in particular, the maximum clique problem. Extensive simulations, performed on random as well as DIMACS benchmark graphs, show that this class contains dynamics that are considerably faster than and at least as accurate as replicator equations. One problem associated with these models, however, relates to their inability to escape from poor local solutions. To overcome this drawback, we focus on a particular subclass of payoff-monotonic dynamics used to model the evolution of behavior via imitation processes and study the stability of their equilibria when a regularization parameter is allowed to take on negative values. A detailed analysis of these properties suggests a whole class of annealed imitation heuristics for the maximum clique problem, which are based on the idea of varying the parameter during the imitation optimization process in a principled way, so as to avoid unwanted inefficient solutions. Experiments show that the proposed annealing procedure does help to avoid poor local optima by initially driving the dynamics toward promising regions in
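The replicator dynamics referred to here iterate x_i <- x_i (Ax)_i / (x^T A x) on the probability simplex, with A the graph's adjacency matrix; by the Motzkin-Straus theorem the maximizers of x^T A x encode maximum cliques. A small sketch on a 5-vertex graph:

```python
import numpy as np

# adjacency matrix of a 5-vertex graph whose unique maximum clique is {0, 1, 2}
A = np.zeros((5, 5))
for i, j in [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1.0

x = np.full(5, 0.2)                 # start at the simplex barycenter
for _ in range(2000):
    payoff = A @ x
    x = x * payoff / (x @ payoff)   # discrete replicator dynamics

clique = np.nonzero(x > 1e-4)[0]    # support of the limit ~ a maximal clique
omega = 1.0 / (1.0 - x @ A @ x)     # Motzkin-Straus clique-number estimate
print(clique, round(omega))
```

The payoff-monotonic and annealed-imitation dynamics studied in the letter generalize this update rule while preserving the same Lyapunov structure.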
The inverse maximum dynamic flow problem
BAGHERIAN; Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as follows: how should the capacity vector of a dynamic network be changed as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow? After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. An efficient algorithm that uses two maximum dynamic flow algorithms is then proposed to solve the problem.
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage, resistance and temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important tasks in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the total length of CC used in the design of an SFCL can be determined.
Aab, A.; Abreu, P.; Aglietta, M.; Ahn, E. J.; Al Samarai, I.; Albuquerque, I. F. M.; Allekotte, I.; Allen, J.; Allison, P.; Almela, A.; Alvarez Castillo, J.; Alvarez-Muñiz, J.; Alves Batista, R.; Ambrosio, M.; Aminaei, A.; Anchordoqui, L.; Andringa, S.; Aramo, C.; Aranda, V. M.; Arqueros, F.; Asorey, H.; Assis, P.; Aublin, J.; Ave, M.; Avenier, M.; Avila, G.; Awal, N.; Badescu, A. M.; Barber, K. B.; Bäuml, J.; Baus, C.; Beatty, J. J.; Becker, K. H.; Bellido, J. A.; Berat, C.; Bertaina, M. E.; Bertou, X.; Biermann, P. L.; Billoir, P.; Blaess, S.; Blanco, M.; Bleve, C.; Blümer, H.; Boháčová, M.; Boncioli, D.; Bonifazi, C.; Bonino, R.; Borodai, N.; Brack, J.; Brancus, I.; Bridgeman, A.; Brogueira, P.; Brown, W. C.; Buchholz, P.; Bueno, A.; Buitink, S.; Buscemi, M.; Caballero-Mora, K. S.; Caccianiga, B.; Caccianiga, L.; Candusso, M.; Caramete, L.; Caruso, R.; Castellina, A.; Cataldi, G.; Cazon, L.; Cester, R.; Chavez, A. G.; Chiavassa, A.; Chinellato, J. A.; Chudoba, J.; Cilmo, M.; Clay, R. W.; Cocciolo, G.; Colalillo, R.; Coleman, A.; Collica, L.; Coluccia, M. R.; Conceição, R.; Contreras, F.; Cooper, M. J.; Cordier, A.; Coutu, S.; Covault, C. E.; Cronin, J.; Curutiu, A.; Dallier, R.; Daniel, B.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; De Domenico, M.; de Jong, S. J.; de Mello Neto, J. R. T.; De Mitri, I.; de Oliveira, J.; de Souza, V.; del Peral, L.; Deligny, O.; Dembinski, H.; Dhital, N.; Di Giulio, C.; Di Matteo, A.; Diaz, J. C.; Díaz Castro, M. L.; Diogo, F.; Dobrigkeit, C.; Docters, W.; D'Olivo, J. C.; Dorofeev, A.; Dorosti Hasankiadeh, Q.; Dova, M. T.; Ebr, J.; Engel, R.; Erdmann, M.; Erfani, M.; Escobar, C. O.; Espadanal, J.; Etchegoyen, A.; Facal San Luis, P.; Falcke, H.; Fang, K.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Ferguson, A. P.; Fernandes, M.; Fick, B.; Figueira, J. M.; Filevich, A.; Filipčič, A.; Fox, B. D.; Fratu, O.; Fröhlich, U.; Fuchs, B.; Fujii, T.; Gaior, R.; García, B.; Garcia Roca, S. 
T.; Garcia-Gamez, D.; Garcia-Pinto, D.; Garilli, G.; Gascon Bravo, A.; Gate, F.; Gemmeke, H.; Ghia, P. L.; Giaccari, U.; Giammarchi, M.; Giller, M.; Glaser, C.; Glass, H.; Gómez Berisso, M.; Gómez Vitale, P. F.; Gonçalves, P.; Gonzalez, J. G.; González, N.; Gookin, B.; Gordon, J.; Gorgi, A.; Gorham, P.; Gouffon, P.; Grebe, S.; Griffith, N.; Grillo, A. F.; Grubb, T. D.; Guarino, F.; Guedes, G. P.; Hampel, M. R.; Hansen, P.; Harari, D.; Harrison, T. A.; Hartmann, S.; Harton, J. L.; Haungs, A.; Hebbeker, T.; Heck, D.; Heimann, P.; Herve, A. E.; Hill, G. C.; Hojvat, C.; Hollon, N.; Holt, E.; Homola, P.; Hörandel, J. R.; Horvath, P.; Hrabovský, M.; Huber, D.; Huege, T.; Insolia, A.; Isar, P. G.; Jandt, I.; Jansen, S.; Jarne, C.; Josebachuili, M.; Kääpä, A.; Kambeitz, O.; Kampert, K. H.; Kasper, P.; Katkov, I.; Kégl, B.; Keilhauer, B.; Keivani, A.; Kemp, E.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Krause, R.; Krohm, N.; Krömer, O.; Kruppke-Hansen, D.; Kuempel, D.; Kunka, N.; LaHurd, D.; Latronico, L.; Lauer, R.; Lauscher, M.; Lautridou, P.; Le Coz, S.; Leão, M. S. A. B.; Lebrun, D.; Lebrun, P.; Leigui de Oliveira, M. A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; López, R.; Lopez Agüera, A.; Louedec, K.; Lozano Bahilo, J.; Lu, L.; Lucero, A.; Ludwig, M.; Malacari, M.; Maldera, S.; Mallamaci, M.; Maller, J.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Marin, V.; Mariş, I. C.; Marsella, G.; Martello, D.; Martin, L.; Martinez, H.; Martínez Bravo, O.; Martraire, D.; Masías Meza, J. J.; Mathes, H. J.; Mathys, S.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Maurel, D.; Maurizio, D.; Mayotte, E.; Mazur, P. O.; Medina, C.; Medina-Tanco, G.; Meissner, R.; Melissas, M.; Melo, D.; Menshikov, A.; Messina, S.; Meyhandan, R.; Mićanović, S.; Micheletti, M. I.; Middendorf, L.; Minaya, I. A.; Miramonti, L.; Mitrica, B.; Molina-Bueno, L.; Mollerach, S.; Monasor, M.; Monnier Ragaigne, D.; Montanet, F.; Morello, C.; Mostafá, M.; Moura, C. A.; Muller, M. 
A.; Müller, G.; Müller, S.; Münchmeyer, M.; Mussa, R.; Navarra, G.; Navas, S.; Necesal, P.; Nellen, L.; Nelles, A.; Neuser, J.; Nguyen, P.; Niechciol, M.; Niemietz, L.; Niggemann, T.; Nitz, D.; Nosek, D.; Novotny, V.; Nožka, L.; Ochilo, L.; Olinto, A.; Oliveira, M.; Pacheco, N.; Pakk Selmi-Dei, D.; Palatka, M.; Pallotta, J.; Palmieri, N.; Papenbreer, P.; Parente, G.; Parra, A.; Paul, T.; Pech, M.; PÈ©kala, J.; Pelayo, R.; Pepe, I. M.; Perrone, L.; Petermann, E.; Peters, C.; Petrera, S.; Petrov, Y.; Phuntsok, J.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Plum, M.; Porcelli, A.; Porowski, C.; Prado, R. R.; Privitera, P.; Prouza, M.; Purrello, V.; Quel, E. J.; Querchfeld, S.; Quinn, S.; Rautenberg, J.; Ravel, O.; Ravignani, D.; Revenu, B.; Ridky, J.; Riggi, S.; Risse, M.; Ristori, P.; Rizi, V.; Rodrigues de Carvalho, W.; Rodriguez Cabo, I.; Rodriguez Fernandez, G.; Rodriguez Rojo, J.; Rodríguez-Frías, M. D.; Rogozin, D.; Ros, G.; Rosado, J.; Rossler, T.; Roth, M.; Roulet, E.; Rovero, A. C.; Saffi, S. J.; Saftoiu, A.; Salamida, F.; Salazar, H.; Saleh, A.; Salesa Greus, F.; Salina, G.; Sánchez, F.; Sanchez-Lucas, P.; Santo, C. E.; Santos, E.; Santos, E. M.; Sarazin, F.; Sarkar, B.; Sarmento, R.; Sato, R.; Scharf, N.; Scherini, V.; Schieler, H.; Schiffer, P.; Schmidt, D.; Scholten, O.; Schoorlemmer, H.; Schovánek, P.; Schulz, A.; Schulz, J.; Schumacher, J.; Sciutto, S. J.; Segreto, A.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sidelnik, I.; Sigl, G.; Sima, O.; Śmiałkowski, A.; Šmída, R.; Snow, G. R.; Sommers, P.; Sorokin, J.; Squartini, R.; Srivastava, Y. N.; Stanič, S.; Stapleton, J.; Stasielak, J.; Stephan, M.; Stutz, A.; Suarez, F.; Suomijärvi, T.; Supanitsky, A. D.; Sutherland, M. S.; Swain, J.; Szadkowski, Z.; Szuba, M.; Taborda, O. A.; Tapia, A.; Tartare, M.; Tepe, A.; Theodoro, V. M.; Timmermans, C.; Todero Peixoto, C. 
J.; Toma, G.; Tomankova, L.; Tomé, B.; Tonachini, A.; Torralba Elipe, G.; Torres Machado, D.; Travnicek, P.; Trovato, E.; Tueros, M.; Ulrich, R.; Unger, M.; Urban, M.; Valdés Galicia, J. F.; Valiño, I.; Valore, L.; van Aar, G.; van Bodegom, P.; van den Berg, A. M.; van Velzen, S.; van Vliet, A.; Varela, E.; Vargas Cárdenas, B.; Varner, G.; Vázquez, J. R.; Vázquez, R. A.; Veberič, D.; Verzi, V.; Vicha, J.; Videla, M.; Villaseñor, L.; Vlcek, B.; Vorobiov, S.; Wahlberg, H.; Wainberg, O.; Walz, D.; Watson, A. A.; Weber, M.; Weidenhaupt, K.; Weindl, A.; Werner, F.; Widom, A.; Wiencke, L.; Wilczyńska, B.; Wilczyński, H.; Will, M.; Williams, C.; Winchen, T.; Wittkowski, D.; Wundheiler, B.; Wykes, S.; Yamamoto, T.; Yapici, T.; Yuan, G.; Yushkov, A.; Zamorano, B.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zaw, I.; Zepeda, A.; Zhou, J.; Zhu, Y.; Zimbres Silva, M.; Ziolkowski, M.; Zuccarello, F.; Pierre Auger Collaboration
2014-12-01
We report a study of the distributions of the depth of maximum, Xmax, of extensive air-shower profiles with energies above 10^17.8 eV as observed with the fluorescence telescopes of the Pierre Auger Observatory. The analysis method for selecting a data sample with minimal sampling bias is described in detail, as are the experimental cross-checks and systematic uncertainties. Furthermore, we discuss the detector acceptance and the resolution of the Xmax measurement and provide parametrizations thereof as a function of energy. The energy dependence of the mean and standard deviation of the Xmax distributions is compared to air-shower simulations for different nuclear primaries and interpreted in terms of the mean and variance of the logarithmic mass distribution at the top of the atmosphere.
An application of Hamiltonian neurodynamics using Pontryagin's Maximum (Minimum) Principle.
Koshizen, T; Fulcher, J
1995-12-01
Classical optimal control methods, notably Pontryagin's Maximum (Minimum) Principle (PMP), can be employed, together with Hamiltonians, to determine optimal system weights in artificial neural dynamical systems. A new learning rule based on weight equations derived using PMP is shown to be suitable for both discrete- and continuous-time systems and, moreover, can also be applied to feedback networks. Preliminary testing shows that this PMP learning rule compares favorably with standard backpropagation (SBP) on the XOR problem.
P. I. Azubuike
2012-01-01
Problem statement: In the literature, the Shewhart and S control charts are combined to evaluate the stability of a process. These charts are based on the fundamental assumption that the quality characteristic under investigation is normally distributed. Approach: In practice, the normality assumption is often violated by real-life data; therefore, use of the Shewhart and S control charts on such data might lead to misplacement of the control limits. There are many alternatives in the literature for handling non-normality of quality characteristics. The Median Absolute Deviation (MAD) is claimed in the literature to be the best estimate when the data under consideration are non-normal. Thus, in this study, we derived the control limits for the control chart using the median absolute deviation for monitoring process stability when the quality characteristic under investigation is non-normal. Results: The derived control limits were compared with the control limits obtained when the sample standard deviation is used as the measure of process variability, using manufacturing-process (real-life) data. Furthermore, a simulation study was carried out to evaluate the performance of the proposed MAD-based control charts on both normal and non-normal processes. Conclusion: The obtained results show that the derived control limits are an improvement on the Shewhart limits, and that the MAD control charts performed better for non-normal processes than for normal processes.
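A minimal sketch of how MAD-based limits might replace S-based ones (the 1.4826 consistency factor, subgroup handling and 3-sigma width are standard textbook choices, not the constants derived in the paper):

```python
import numpy as np

def mad_control_limits(samples, k=3.0):
    """Shewhart-style control limits with the process spread estimated
    by the median absolute deviation (MAD) instead of S.  `samples` is
    an array of shape (num_subgroups, n).  The factor 1.4826 rescales
    MAD to be consistent with sigma under normality."""
    samples = np.asarray(samples, float)
    centers = np.median(samples, axis=1)
    center_line = np.median(centers)
    mads = np.median(np.abs(samples - np.median(samples, axis=1, keepdims=True)),
                     axis=1)
    sigma_hat = 1.4826 * np.median(mads)      # robust sigma estimate
    n = samples.shape[1]
    half_width = k * sigma_hat / np.sqrt(n)
    return center_line - half_width, center_line, center_line + half_width
```

Because medians and MAD ignore extreme observations, a few heavy-tailed outliers inflate these limits far less than they inflate S-based limits.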
A correlational study of scoliosis and trunk balance in adult patients with mandibular deviation.
Zhou, Shuncheng; Yan, Juanjuan; Da, Hu; Yang, Yang; Wang, Na; Wang, Wenyong; Ding, Yin; Sun, Shiyao
2013-01-01
Previous studies have confirmed that patients with mandibular deviation often have abnormal morphology of their cervical vertebrae. However, the relationship between mandibular deviation, scoliosis, and trunk balance has not been studied. Currently, mandibular deviation is usually treated as a single pathology, which leads to poor clinical efficiency. We investigated the relationship of spine coronal morphology and trunk balance in adult patients with mandibular deviation, and compared the findings to those in healthy volunteers. 35 adult patients with skeletal mandibular deviation and 10 healthy volunteers underwent anterior X-ray films of the head and posteroanterior X-ray films of the spine. Landmarks and lines were drawn and measured on these films. The axis distance method was used to measure the degree of scoliosis and the balance angle method was used to measure trunk balance. The relationship of mandibular deviation, spine coronal morphology and trunk balance was evaluated with the Pearson correlation method. The spine coronal morphology of patients with mandibular deviation demonstrated an "S" type curve, while a straight line parallel with the gravity line was found in the control group (significant difference, p<0.01). The trunk balance of patients with mandibular deviation was disturbed (imbalance angle >1°), while the control group had a normal trunk balance (imbalance angle <1°). There was a significant difference between the two groups (p<0.01). The degree of scoliosis and shoulder imbalance correlated with the degree of mandibular deviation, and presented a linear trend. The direction of mandibular deviation was the same as that of the lateral bending of the thoracolumbar vertebrae, which was opposite to the direction of lateral bending of the cervical vertebrae. Our study shows that the degree of mandibular deviation has a high correlation with the degree of scoliosis and trunk imbalance; all three deformities should be clinically evaluated in the management of mandibular deviation.
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sample.
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
We discuss a special class of generalized divergence measures defined by the use of generator functions. Any divergence measure in the class separates into the difference between cross entropy and diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory for the maximum likelihood method under the maximum entropy model, in terms of the Boltzmann-Gibbs-Shannon entropy, is given. We discuss the duality in detail for Tsallis entropy as a typical example.
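In the classical Boltzmann-Gibbs-Shannon case, the cross/diagonal-entropy decomposition mentioned above reduces to the familiar identity for the Kullback-Leibler divergence (a standard fact quoted for orientation, not a formula from this paper):

```latex
D_{\mathrm{KL}}(p,q)
  \;=\; \underbrace{-\int p(x)\,\log q(x)\,dx}_{\text{cross entropy } C(p,q)}
  \;-\; \underbrace{\left(-\int p(x)\,\log p(x)\,dx\right)}_{\text{diagonal entropy } H(p)} .
```

Minimizing $D_{\mathrm{KL}}(p,q)$ over $q$ for fixed empirical $p$ recovers maximum likelihood estimation, which is the duality the abstract generalizes to other generator functions.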
ZHANG Hui; ZHANG Shu-Yi; FAN Li
2009-01-01
A model of high-overtone bulk acoustic resonators is used to study the effects of thickness deviations of elastic plates on resonance frequency spectra in planar multi-layered systems. The resonance frequency shifts induced by the thickness deviations of the elastic plates vary periodically with the resonance order, which depends on the acoustic impedance ratios of the elastic plates to the piezoelectric patches. Additionally, the center lines of the frequency-shift oscillations change linearly with the orders of the resonance modes, and their slopes are sensitive to the thickness deviations of the plates, which can be used to quantitatively evaluate the thickness deviations.
An Analysis of the Linguistic Deviation in Chapter X of Oliver Twist
刘聪
2013-01-01
Charles Dickens is one of the greatest critical realist writers of the Victorian Age. In language, he is often compared with William Shakespeare for his adeptness with the vernacular and his large vocabulary. Charles Dickens achieved a recognizable place among English writers through the use of stylistic features in his fictional language. Oliver Twist is the best representative of Dickens' style, which makes it the most appropriate choice for the present stylistic study of Charles Dickens. No one who has ever read the dehumanizing workhouse scenes of Oliver Twist and its dark, criminal underworld life can forget them. This thesis attempts to investigate Oliver Twist through the approach of modern stylistics, particularly the theory of linguistic deviation. The thesis consists of an introduction, the main body and a conclusion. The introduction offers a brief summary of the comments on Charles Dickens and Chapter X of Oliver Twist, introduces the newly arisen theories of linguistic deviation, and states the theories on which this thesis settles. The main body explores the deviation effects produced from four aspects: lexical deviation, grammatical deviation, graphological deviation, and semantic deviation. It endeavors to show Dickens' manipulation of language and the effects achieved through this manipulation. The conclusion sums up the previous analysis and reveals the theme of the novel, the positive effect of linguistic deviation and the significance of applying deviation.
Asaranti Kar
2015-01-01
Introduction: Neural tube defects (NTD) are a group of serious birth defects occurring due to defective closure of the neural tube during embryonic development. The group comprises anencephaly, encephalocele and spina bifida. We conducted this prospective fetal autopsy series to study the rate and distribution of NTD, analyze the reproductive and risk factors, note any associated anomalies and evaluate the organ weights and their deviation from normal. Materials and Methods: This was a prospective study done over a period of 6 years, from August 2007 to July 2013. All cases of NTD delivered as abortions, stillbirths or live births were included. Reproductive and risk factors such as age, parity, multiple births, previous miscarriage, obesity, diabetes mellitus, socioeconomic status and use of folic acid during pregnancy were collected. Autopsy was performed according to Virchow's technique. Detailed external and internal examinations were carried out to detect any associated anomalies. Gross and microscopic examination of the organs was done. Results: Out of 210 fetal and perinatal autopsies, 72 (34.28%) had NTD, comprising 49 cases of anencephaly, 16 of spina bifida and 7 of encephalocele. The mothers in these cases were predominantly aged 25-29 years (P = 0.02) and primiparous (P = 0.01). Females were more commonly affected than males (M:F = 25:47, P = 0.0005). There was no history of folate use in the majority of cases. Organ weights were more than 2 standard deviations below normal in most of the cases. The most common associated anomalies were adrenal hypoplasia and thymic hyperplasia. Conclusion: The authors have attempted to study NTD cases with respect to maternal reproductive and risk factors and their association with NTD, along with organ weight deviations and associated anomalies. To the best of our knowledge, such a study has not previously been reported in the literature, even after extensive search.
Griffin, James M.; Diaz, Fernanda; Geerling, Edgar; Clasing, Matias; Ponce, Vicente; Taylor, Chris; Turner, Sam; Michael, Ernest A.; Patricio Mena, F.; Bronfman, Leonardo
2017-02-01
By using acoustic emission (AE) it is possible to control deviations and surface quality during micro milling operations. The method of micro milling is used to manufacture a submillimetre waveguide, where micro machining is employed to achieve the required superior finish and geometrical tolerances. Submillimetre waveguide technology is used in deep-space signal retrieval, where the highest detection efficiencies are needed; therefore every possible signal loss in the receiver has to be avoided and stringent tolerances achieved. With a sub-standard surface finish, the signals travelling along the waveguides dissipate faster than with perfect surfaces, where the residual roughness becomes comparable with the electromagnetic skin depth. Therefore, the higher the radio frequency, the more critical this becomes. The method of time-frequency analysis (STFT) is used to transform raw AE into more meaningful salient signal features (SF). This information was then correlated against the measured geometrical deviations and the onset of catastrophic tool wear. Such deviations can be offset from different AE signals (different deviations from subsequent tests) and fed back for a final spring cut, ensuring the geometrical accuracies are met. Geometrical differences can impact the required transfer of AE signals (change in cut-off frequencies and diminished SNR at the interface), and therefore errors have to be minimised to within 1 μm. Rules based on both Classification and Regression Trees (CART) and Neural Networks (NN) were used to implement a simulation displaying how such a control regime could be used as a real-time controller, be it with corrective measures (via spring cuts) over several initial machining passes or with a micron cut introducing a level-plane measure allowing setup corrections (similar to a spirit level).
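The STFT feature-extraction step described above can be sketched as follows (frame length, hop size and the AE frequency band are illustrative assumptions; the paper's actual salient-feature definitions may differ):

```python
import numpy as np

def stft_band_energy(signal, fs, frame=256, hop=128, band=(50e3, 200e3)):
    """Slide a Hann window over the raw AE signal and return per-frame
    spectral energy inside a chosen frequency band, as one possible
    'salient signal feature' for correlating against tool wear or
    geometrical deviation (band limits here are illustrative)."""
    win = np.hanning(frame)
    freqs = np.fft.rfftfreq(frame, d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    feats = []
    for start in range(0, len(signal) - frame + 1, hop):
        spec = np.abs(np.fft.rfft(signal[start:start + frame] * win)) ** 2
        feats.append(spec[mask].sum())          # energy in the AE band
    return np.array(feats)
```

A rise of this band energy over successive machining passes would be the kind of signal feature the CART/NN rules could act on.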
Jahangir Vajed Samiei
2015-06-01
With ongoing climate change, coral susceptibility to thermal stress constitutes a central concern in reef conservation. In the Persian Gulf, coral reefs are confronted with high seasonal variability in water temperature, and both hot and cold extremes have been associated with episodes of coral bleaching and mortality. Using physiological performance as a measure of coral health, we investigated the thermal susceptibility of the common acroporid, Acropora downingi, near Hengam Island, where the temperature oscillates seasonally in the range 20.2–34.2 °C. In a series of two short-term experiments comparing coral response in summer versus winter conditions, we exposed corals during each season (1) to the corresponding seasonal average and extreme temperature levels in a static thermal environment, and (2) to a progressive temperature deviation from the annual mean toward the corresponding extreme seasonal value and beyond in a dynamic thermal environment. We monitored four indicators of coral physiological performance: net photosynthesis (Pn), dark respiration (R), autotrophic capability (Pn/R), and survival. Corals exposed to warming during summer showed a decrease in net photosynthesis and ultimately died, while corals exposed to cooling during winter were not affected in their photosynthetic performance and survival. Coral autotrophic capability Pn/R was lower at the warmer thermal level within each season, and during summer compared to winter. Corals exposed to the maximum temperature of summer displayed Pn/R < 1, implying that photosynthetic performance could not support basal metabolic needs under this environment. Our results suggest that the autotrophic performance of the Persian Gulf A. downingi is sensitive to the extreme temperatures endured in summer, and therefore its populations may be impacted by future increases in water temperature.
Performance Evaluation of Five Turbidity Sensors in Three Primary Standards
Snazelle, Teri T.
2015-10-28
Open-File Report 2015-1172 is temporarily unavailable. Five commercially available turbidity sensors were evaluated by the U.S. Geological Survey Hydrologic Instrumentation Facility (HIF) for accuracy and precision in three types of turbidity standards: formazin, StablCal, and AMCO Clear (AMCO-AEPA). The U.S. Environmental Protection Agency (EPA) recognizes all three turbidity standards as primary standards, meaning they are acceptable for reporting purposes. The Forrest Technology Systems (FTS) DTS-12, the Hach SOLITAX sc, the Xylem EXO turbidity sensor, the Yellow Springs Instrument (YSI) 6136 turbidity sensor, and the Hydrolab Series 5 self-cleaning turbidity sensor were evaluated to determine whether turbidity measurements in the three primary standards are comparable to each other, and to ascertain whether the primary standards are truly interchangeable. A formazin 4000 nephelometric turbidity unit (NTU) stock was purchased, and dilutions of 40, 100, 400, 800, and 1000 NTU were made fresh on the day of testing. StablCal and AMCO Clear (for the Hach 2100N) standards with corresponding concentrations were also purchased for the evaluation. Sensor performance was not evaluated at turbidity levels less than 40 NTU due to the unavailability of polymer-bead turbidity standards rated for general use. The percent error was calculated as the true (not absolute) difference between the measured turbidity and the standard value, divided by the standard value. The sensors that demonstrated the best overall performance in the evaluation were the Hach SOLITAX and the Hydrolab Series 5 turbidity sensor when the operating range (0.001–4000 NTU for the SOLITAX and 0.1–3000 NTU for the Hydrolab) was considered in addition to sensor accuracy and precision. The average percent error in the three standards was 3.80 percent for the SOLITAX and -4.46 percent for the Hydrolab. The DTS-12 also demonstrated good accuracy with an average percent error of 2.02 percent and a maximum relative standard
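The percent-error definition quoted above translates directly into code (the function name is ours; the formula is the one stated in the report):

```python
def percent_error(measured, standard):
    """Signed ('true', not absolute) percent error: the difference
    between the measured turbidity and the standard value, divided by
    the standard value, expressed in percent."""
    return 100.0 * (measured - standard) / standard
```

A sensor reading 104 NTU in a 100 NTU standard thus scores +4.0 percent, and one reading 96 NTU scores -4.0 percent; keeping the sign (rather than the absolute value) lets over- and under-reading average out or reveal a systematic bias.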
Maximum entropy reconstruction of spin densities involving non uniform prior
Schweizer, J.; Ressouche, E. [DRFMC/SPSMS/MDN CEA-Grenoble (France); Papoular, R.J. [CEA-Saclay, Gif sur Yvette (France). Lab. Leon Brillouin; Tasset, F. [Inst. Laue Langevin, Grenoble (France); Zheludev, A.I. [Brookhaven National Lab., Upton, NY (United States). Physics Dept.
1997-09-01
Diffraction experiments give microscopic information on structures in crystals. A method which uses the concept of maximum entropy (MaxEnt) appears to be a formidable improvement in the treatment of diffraction data. This method is based on a Bayesian approach: among all the maps compatible with the experimental data, it selects the one which has the highest prior (intrinsic) probability. Considering all the points of the map to be equally probable, this probability (flat prior) is expressed via the Boltzmann entropy of the distribution. This method has been used for the reconstruction of charge densities from X-ray data, for maps of nuclear densities from unpolarized neutron data, as well as for distributions of spin density. The density maps obtained by this method, as compared to those resulting from the usual inverse Fourier transformation, are tremendously improved. In particular, any substantial deviation from the background is really contained in the data, as it costs entropy compared to a map that would ignore such features. However, in most cases, some knowledge exists about the distribution under investigation before the measurements are performed. It can range from simple information on the type of scattering electrons to an elaborate theoretical model. In these cases, the uniform prior, which considers all the different pixels as equally likely, is too weak a requirement and has to be replaced. In a rigorous Bayesian analysis, Skilling has shown that prior knowledge can be encoded into the maximum entropy formalism through a model m(r), via a new definition for the entropy given in this paper. In the absence of any data, the maximum of the entropy functional is reached for ρ(r) = m(r). Any substantial departure from the model, observed in the final map, is really contained in the data as, with the new definition, it costs entropy. This paper presents illustrations of model testing.
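The abstract does not reproduce the modified entropy; the form commonly attributed to Skilling for a positive model m(r) is (quoted from the general MaxEnt literature, and consistent with, but not copied from, this paper):

```latex
S[\rho;\,m] \;=\; \int \!\left[\, \rho(\vec r) \;-\; m(\vec r)
  \;-\; \rho(\vec r)\,\ln\frac{\rho(\vec r)}{m(\vec r)} \,\right] d^{3}r ,
```

which is maximized, with $S = 0$, at $\rho(\vec r) = m(\vec r)$, matching the statement above that in the absence of data the reconstruction reverts to the model; any departure from $m$ makes $S$ strictly negative, i.e. "costs entropy".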
Large Deviations for Parameter Estimators of Some Time Inhomogeneous Diffusion Process
Shou Jiang ZHAO; Fu Qing GAO
2011-01-01
The goal of this paper is to study large deviations for the estimator and score function of some time-inhomogeneous diffusion processes. A large deviation principle in the non-steepness case, with explicit rate functions, is obtained by using a parameter-dependent change of measure.
Moderate Deviations for M-estimators in Linear Models with φ-mixing Errors
Jun FAN
2012-01-01
In this paper, the moderate deviations for the M-estimators of the regression parameter in a linear model are obtained when the errors form a strictly stationary φ-mixing sequence. The results are applied to many different types of M-estimators, such as Huber's estimator, the Lp-regression estimator, the least squares estimator and the least absolute deviation estimator.
Yi Wen JIANG; Li Ming WU
2005-01-01
All known results on large deviations of occupation measures of Markov processes are based on the assumption of (essential) irreducibility. In this paper we establish the weak* large deviation principle of occupation measures for any countable Markov chain with arbitrary initial measures. The new rate function that we obtain is not convex and depends on the initial measure, contrary to the (essentially) irreducible case.
Dynamical Gibbs-non-Gibbs transitions : a study via coupling and large deviations
Wang, Feijia
2012-01-01
In this thesis we use both the two-layer and the large-deviation approach to study the conservation and loss of the Gibbs property for both lattice and mean-field spin systems. Chapter 1 gives general background on Gibbs and non-Gibbs measures and outlines the two-layer and the large-deviation
Litvin, Faydor L.; Kuan, Chihping; Zhang, YI
1991-01-01
A numerical method is developed for the minimization of deviations of real tooth surfaces from the theoretical ones. The deviations are caused by manufacturing errors, errors in the installation of machine-tool settings, and distortion of surfaces by heat treatment. The deviations are determined by coordinate measurements of gear tooth surfaces, and their minimization is based on the proper correction of the initially applied machine-tool settings. The accomplished research project covers the following topics: (1) description of the principle of coordinate measurements of gear tooth surfaces; (2) derivation of theoretical tooth surfaces (with examples of surfaces of hypoid gears and references for spiral bevel gears); (3) determination of the reference point and the grid; (4) determination of the deviations of real tooth surfaces at the points of the grid; and (5) determination of the corrections of machine-tool settings required to minimize the deviations. The procedure for minimization of deviations is based on the numerical solution of an overdetermined system of n linear equations in m unknowns (m much less than n), where n is the number of points of measurement and m is the number of parameters of the applied machine-tool settings to be corrected. The developed approach is illustrated with numerical examples.
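The correction step described above, solving an overdetermined system of n linear equations in m unknowns (m much less than n), amounts to a linear least-squares problem. A minimal sketch with a hypothetical sensitivity matrix and noise-free measured deviations (all numbers invented for illustration, not taken from the report):

```python
import numpy as np

# Hypothetical example: n = 6 measured surface deviations, m = 2
# machine-tool settings.  A[i, j] is an assumed linear sensitivity of
# deviation i to a unit correction of setting j.
A = np.array([[1.0, 0.5],
              [0.8, 0.3],
              [1.2, 0.7],
              [0.9, 0.4],
              [1.1, 0.6],
              [0.7, 0.2]])
true_corrections = np.array([0.02, -0.01])   # invented "ground truth"
deviations = A @ true_corrections            # noise-free measured deviations

# Least-squares solution of the overdetermined system A x ~= deviations.
corrections, _, _, _ = np.linalg.lstsq(A, deviations, rcond=None)
```

In practice the measured deviations carry noise, so the recovered corrections minimize the residual deviations rather than cancel them exactly.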
Zhong Hao XU; Dong HAN
2011-01-01
We model an epidemic with a class of nonhomogeneous Markov chains on the supercritical percolation network on Z^d. The large deviations law for the Markov chain is given, and an explicit expression for the large-deviation rate function is obtained.
On Translation of Language Deviation from the Perspective of Peter Newmark’s Translation Dichotomy
胡娜; 延宏
2016-01-01
Language deviation can bring fresh vitality to literary creation. In the translation of literary works, the effects created by language deviation in the original text can be efficiently reproduced through the adoption of proper translation strategies. Here the writer attempts to describe and analyze this phenomenon.
Large Deviations Methods and the Join-the-Shortest-Queue Model
Ridder, Ad; Shwartz, Adam
2005-01-01
We develop a methodology for studying ''large deviations type'' questions. Our approach does not require that the large deviations principle holds, and is thus applicable to a large class of systems. We study a system of queues with exponential servers, which share an arrival stream. Arrivals are rou
2013-09-24
... Program; Single-Case Deviation From Competition Requirements AGENCY: Health Resources and Services Administration (HRSA), Department of Health and Human Services (HHS). ACTION: Notice of Single-Case Deviation... purpose to improve the health of all mothers and children, a key objective of the Title V MCH Block Grant...
Martya Rahmaniati; Tris Eryando; Dewi Susanna; Dian Pratiwi; Fajar Nugraha; Andri Ruliansah; Muhammad Umar Riandi
2014-01-01
Dengue fever is still regarded as an endemic disease in Banjar City. Information is still required to map dengue fever case distribution, the mean center of case distribution, and the direction of dengue fever case dispersion, in order to support the surveillance program in relation to the vast area covered by the dengue fever control program. The objective of the research is to obtain information regarding the area of dengue fever distribution in Banjar City by utilizing the S...
Rodbard, David
2012-10-01
We describe a new approach to estimate the risks of hypo- and hyperglycemia based on the mean and SD of the glucose distribution, using optional transformations of the glucose scale to achieve a more nearly symmetrical and Gaussian distribution, if necessary. We examine the correlation of risks of hypo- and hyperglycemia calculated using different glucose thresholds and the relationships of these risks to the mean glucose, SD, and percentage coefficient of variation (%CV). Using representative continuous glucose monitoring datasets, one can predict the risk of glucose values above or below any arbitrary threshold if the glucose distribution is Gaussian or can be transformed to be Gaussian. Symmetry and Gaussianity can be tested objectively and used to optimize the transformation. The method performs well, with excellent correlation of predicted and observed risks of hypo- or hyperglycemia for individual subjects by time of day or for a specified range of dates. One can compare observed and calculated risks of hypo- and hyperglycemia for a series of thresholds considering their uncertainties. Thresholds such as 80 mg/dL can be used as surrogates for thresholds such as 50 mg/dL. We observe a high correlation of risk of hypoglycemia with %CV and illustrate the theoretical basis for that relationship. One can estimate the historical risks of hypo- and hyperglycemia by time of day, date, day of the week, or range of dates, using any specified thresholds. Risks of hypoglycemia with one threshold (e.g., 80 mg/dL) can be used as an effective surrogate marker for hypoglycemia at other thresholds (e.g., 50 mg/dL). These estimates of risk can be useful in research studies and in the clinical care of patients with diabetes.
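Under the Gaussian assumption described above, the risk calculation reduces to evaluating the normal CDF at the chosen thresholds. A minimal sketch with invented summary statistics; the thresholds (80 and 250 mg/dL) and the mean/SD values are illustrative, not from the paper:

```python
from math import erf, sqrt

def normal_cdf(x, mean, sd):
    """P(X <= x) for a Gaussian with the given mean and SD."""
    return 0.5 * (1.0 + erf((x - mean) / (sd * sqrt(2.0))))

# Hypothetical CGM summary statistics (mg/dL).
mean_glucose, sd_glucose = 150.0, 45.0

risk_hypo = normal_cdf(80.0, mean_glucose, sd_glucose)          # P(glucose < 80)
risk_hyper = 1.0 - normal_cdf(250.0, mean_glucose, sd_glucose)  # P(glucose > 250)
cv_percent = 100.0 * sd_glucose / mean_glucose                  # %CV, tied to hypo risk
```

For a fixed mean, raising the SD (and hence the %CV) pushes more probability mass below the hypoglycemia threshold, which is the theoretical basis for the %CV correlation noted in the abstract.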
Salomons, E.M.; Janssen, S.A.; Verhagen, H.L.M.; Wessels, P.W.
2014-01-01
Annoyance and sleep disturbance by road and rail traffic noise in an urban area are investigated. Noise levels Lden and Lnight are determined with an engineering noise model that is optimized for the local situation, based on local noise measurements. The noise levels are combined with responses of
OPINIONS CONCERNING THE ORGANIZATION OF STANDARD COSTS ACCOUNTANCY
Ion Ionescu
2015-10-01
The main purpose of this research is to present a way of organizing accountancy under the standard-cost method that allows both the registration of standard and effective costs and the separate registration of deviations from standard costs. Making pertinent, high-performance decisions depends mainly on the quality of the information provided to managers and the promptness with which it reaches them. This desideratum is not attainable with classical cost-calculation methods, which is why it is necessary to organize and implement a managerial accountancy based on a modern method, namely the standard-cost method. The main implication of this method is the establishment of a pertinent cost, oriented towards the entity's management, regardless of the activity domain in which it is implemented. The study carried out concerns only one of the phases of applying the standard-cost method, namely the organization of standard costs accountancy.
Sanfilippo, Paul G; Hammond, Christopher J; Staffieri, Sandra E; Kearns, Lisa S; Melissa Liew, S H; Barbour, Julie M; Hewitt, Alex W; Ge, Dongliang; Snieder, Harold; Mackinnon, Jane R; Brown, Shayne A; Lorenz, Birgit; Spector, Tim D; Martin, Nicholas G; Wilmer, Jeremy B; Mackey, David A
2012-10-01
Strabismus represents a complex oculomotor disorder characterized by the deviation of one or both eyes and poor vision. A more sophisticated understanding of the genetic liability of strabismus is required to guide searches for associated molecular variants. In this classical twin study of 1,462 twin pairs, we examined the relative influence of genes and environment in comitant strabismus, and the degree to which these influences can be explained by factors in common with refractive error. Participants were examined for the presence of latent ('phoria') and manifest ('tropia') strabismus using cover-uncover and alternate cover tests. Two phenotypes were distinguished: eso-deviation (esophoria and esotropia) and exo-deviation (exophoria and exotropia). Structural equation modeling was subsequently employed to partition the observed phenotypic variation in the twin data into specific variance components. The prevalence of eso-deviation and exo-deviation was 8.6% and 20.7%, respectively. For eso-deviation, the polychoric correlation was significantly greater in monozygotic (MZ) (r = 0.65) compared to dizygotic (DZ) twin pairs (r = 0.33), suggesting a genetic role (p = .003). There was no significant difference in polychoric correlation between MZ (r = 0.55) and DZ twin pairs (r = 0.53) for exo-deviation (p = .86), implying that genetic factors do not play a significant role in the etiology of exo-deviation. The heritability of an eso-deviation was 0.64 (95% CI 0.50-0.75). The additive genetic correlation for eso-deviation and refractive error was 0.13 and the bivariate heritability (i.e., shared variance) was less than 1%, suggesting negligible shared genetic effect. This study documents a substantial heritability of 64% for eso-deviation, yet no corresponding heritability for exo-deviation, suggesting that the genetic contribution to strabismus may be specific to eso-deviation. Future studies are now needed to identify the genes associated with eso-deviation and
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in an uncorrelated block-fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), whereas the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed-form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
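The equal-power MISO strategy above can be illustrated numerically. A Monte-Carlo sketch (not from the paper) of the ergodic rate of an open-loop Rayleigh MISO channel with equal power split across antennas; the antenna counts and SNR are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)

def miso_ergodic_rate(n_tx, snr, trials=100000):
    """Monte-Carlo ergodic rate (bits/s/Hz) of a MISO Rayleigh channel
    with no CSI at the transmitter: equal power on all n_tx antennas,
    uncorrelated signals."""
    h = (rng.normal(size=(trials, n_tx)) +
         1j * rng.normal(size=(trials, n_tx))) / np.sqrt(2.0)
    gain = np.sum(np.abs(h) ** 2, axis=1) / n_tx   # effective channel gain
    return float(np.mean(np.log2(1.0 + snr * gain)))
```

More antennas harden the effective channel, so the ergodic rate increases toward log2(1 + SNR) but, by Jensen's inequality, never exceeds it.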
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning.
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of the density matrix of spin and radiation, as well as to the determination of several parameters of interest in quantum optics.
The role of septal surgery in management of the deviated nose.
Foda, Hossam M T
2005-02-01
The deviated nose represents a complex cosmetic and functional problem. Septal surgery plays a central role in the successful management of the externally deviated nose. This study included 260 patients seeking rhinoplasty to correct external nasal deviations; 75 percent of them had various degrees of nasal obstruction. Septal surgery was necessary in 232 patients (89 percent), not only to improve breathing but also to achieve a straight, symmetrical, external nose as well. A graduated surgical approach was adopted to allow correction of the dorsal and caudal deviations of the nasal septum without weakening its structural support to the dorsum or nasal tip. The approach depended on full mobilization of deviated cartilage, followed by straightening of the cartilage and its fixation in the corrected position by using bony splinting grafts through an external rhinoplasty approach.
Bongiorno, C; Lillo, F; Mantegna, R N; Miccichè, S
2016-01-01
Understanding the relation between planned and realized flight trajectories and the determinants of flight deviations is of great importance in air traffic management. In this paper we perform an in-depth investigation of the statistical properties of planned and realized air traffic in the German airspace during a 28-day period corresponding to an AIRAC cycle. We find that realized trajectories are on average shorter than planned ones, and this effect is stronger during night-time than daytime. Flights are more frequently deviated close to the departure airport and at a relatively large angle to destination. Moreover, the probability of a deviation is higher in low-traffic phases. All this evidence indicates that deviations are mostly used by controllers to give directs to flights when traffic conditions allow it. Finally we introduce a new metric, termed difork, which is able to characterize navigation points according to the likelihood that a deviation occurs there. Difork allows one to identify in a statist...
Benkler, Erik; Sterr, Uwe
2015-01-01
The power spectral density in the Fourier frequency domain and the different variants of the Allan deviation (ADEV) as a function of averaging time are well-established tools for analysing the fluctuation properties and frequency instability of an oscillatory signal. It is often supposed that the statistical uncertainty of a measured average frequency is given by the ADEV at a well-considered averaging time. However, this approach requires further mathematical justification and refinement, which has already been done for the original ADEV for certain noise types. Here we provide the necessary background for using the modified Allan deviation (modADEV) and other two-sample deviations to determine the uncertainty of weighted frequency averages. The type of two-sample deviation used to determine the uncertainty depends on the method used for determining the average. We find that the modADEV, which is connected with $\Lambda$-weighted averaging, and the two-sample deviation associated with a linear phase regr...
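As a concrete reference point for the ADEV discussed above, a minimal sketch of the plain (non-overlapping) Allan deviation of fractional-frequency data; the white-frequency-noise input is synthetic, and the modADEV refinements of the paper are not implemented here:

```python
import numpy as np

def allan_deviation(y, m):
    """Plain (non-overlapping) Allan deviation of fractional-frequency
    samples y at averaging factor m, i.e. averaging time m * tau0."""
    n = len(y) // m
    ybar = y[: n * m].reshape(n, m).mean(axis=1)   # block averages
    return float(np.sqrt(0.5 * np.mean(np.diff(ybar) ** 2)))

# Synthetic white frequency noise: the ADEV should fall as 1/sqrt(tau),
# so averaging 100x longer reduces it roughly tenfold.
rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, 100000)
```

For other noise types (flicker, random walk) the ADEV follows different power laws in the averaging time, which is what makes it a noise-type diagnostic.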
['Gold standard', not 'golden standard']
Claassen, J.A.H.R.
2005-01-01
In medical literature, both 'gold standard' and 'golden standard' are employed to describe a reference test used for comparison with a novel method. The term 'gold standard' in its current sense in medical research was coined by Rudd in 1979, in reference to the monetary gold standard. In the same w
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to finding the distribution functions of physical quantities. MENT naturally takes into account the demand of maximum entropy, the characteristics of the system, and the connection conditions. MENT can be applied to the statistical description of both closed and open systems. Examples are considered in which MENT has been used to describe equilibrium states, nonequilibrium states, and states far from thermodynamic equilibrium.
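A standard toy instance of the technique (not from this paper): among all distributions on a die's faces constrained to a prescribed mean of 4.5, maximum entropy selects an exponential-family distribution p_i proportional to exp(lam*i); the multiplier lam is found here by bisection, since the mean is monotone in lam:

```python
import numpy as np

# Die faces and the moment constraint: mean must equal 4.5.
faces = np.arange(1, 7)

def mean_for(lam):
    """Mean of the maximum entropy distribution p_i ~ exp(lam * i)."""
    w = np.exp(lam * faces)
    return float((w * faces).sum() / w.sum())

# Bisection: mean_for(0) = 3.5 and mean_for(2) is close to 6,
# so the bracket [0, 2] contains the solution.
lo, hi = 0.0, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if mean_for(mid) < 4.5:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)
p = np.exp(lam * faces)
p /= p.sum()                 # maximum entropy probabilities
```

The resulting distribution tilts mass toward the high faces just enough to meet the constraint while staying as uniform (high-entropy) as possible.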
19 CFR 114.23 - Maximum period.
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Maximum period. 114.23 Section 114.23 Customs... CARNETS Processing of Carnets § 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors are proposed for use in maximum-likelihood sequence detection of symbols in an alphabet of size M transmitted by uncoded, full-response continuous phase modulation over a radio channel with additive white Gaussian noise. The structures of the receivers are derived from a particular interpretation of the maximum-likelihood metrics. The receivers include front ends, whose structure depends only on M, analogous to those in receivers of coherent CPM. The parts of the receivers following the front ends have structures whose complexity depends on N.
Sacanna, S.; Rossi, L.; Wouterse, A.; Philipse, A.P.
2007-01-01
We have measured the random packing density of monodisperse colloidal silica ellipsoids with a well-defined shape, gradually deviating from a sphere up to prolates with aspect ratios of about 5, obtaining for a colloidal system the first experimental observation of the density maximum (at an as
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Sexual identification from skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult, human femora (136 male and 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. Mean values obtained were 451.81 and 417.48 for right male and female femora, and 453.35 and 420.44 for left male and female femora, respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 were definitely male and those less than 379.99 were definitely female, while for left bones, femora with maximum length more than 484.49 were definitely male and those less than 385.73 were definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
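The right-femur demarking points quoted above translate directly into a three-way decision rule; a minimal sketch (units as in the study, presumably millimetres; left femora would use the other pair of thresholds):

```python
def classify_right_femur(max_length):
    """Sex assignment for a right femur from its maximum length alone,
    using the demarking points reported in the abstract above.  Lengths
    in the band between the two demarking points cannot be assigned by
    this single measurement."""
    if max_length > 476.70:
        return "male"
    if max_length < 379.99:
        return "female"
    return "indeterminate"
```

The low identification percentages reported in the abstract reflect how wide this indeterminate band is: most femora fall between the two demarking points.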
USL/DBMS NASA/PC R and D project C programming standards
Dominick, Wayne D. (Editor); Moreau, Dennis R.
1984-01-01
A set of programming standards intended to promote reliability, readability, and portability of C programs written for PC research and development projects is established. These standards must be adhered to except where reasons for deviation are clearly identified and approved by the PC team. Any approved deviation from these standards must also be clearly documented in the pertinent source code.
Yu, Hwa-Lung; Wang, Chih-Hsin
2013-02-05
Understanding the daily changes in ambient air quality concentrations is important to assessing human exposure and environmental health. However, the fine temporal scales (e.g., hourly) involved in this assessment often lead to high variability in air quality concentrations. This is because of the complex short-term physical and chemical mechanisms among the pollutants. Consequently, high heterogeneity is usually present not only in the averaged pollution levels, but also in the intraday variance levels of the daily observations of ambient concentration across space and time. This characteristic decreases the estimation performance of common techniques. This study proposes a novel quantile-based Bayesian maximum entropy (QBME) method to account for the nonstationary and nonhomogeneous characteristics of ambient air pollution dynamics. The QBME method characterizes the spatiotemporal dependence among the ambient air quality levels based on their location-specific quantiles and accounts for spatiotemporal variations using a local weighted smoothing technique. The epistemic framework of the QBME method allows researchers to further consider the uncertainty of space-time observations. This study presents the spatiotemporal modeling of daily CO and PM10 concentrations across Taiwan from 1998 to 2009 using the QBME method. Results show that the QBME method can effectively improve estimation accuracy in terms of lower mean absolute errors and standard deviations over space and time, especially for pollutants with strong nonhomogeneous variances across space. In addition, the epistemic framework allows researchers to assimilate site-specific secondary information where observations are absent because of the common preferential sampling issues of environmental data. The proposed QBME method provides a practical and powerful framework for the spatiotemporal modeling of ambient pollutants.
Climate change uncertainty for daily minimum and maximum temperatures: a model inter-comparison
Lobell, D; Bonfils, C; Duffy, P
2006-11-09
Several impacts of climate change may depend more on changes in mean daily minimum (T{sub min}) or maximum (T{sub max}) temperatures than daily averages. To evaluate uncertainties in these variables, we compared projections of T{sub min} and T{sub max} changes by 2046-2065 for 12 climate models under an A2 emission scenario. Average modeled changes in T{sub max} were slightly lower in most locations than T{sub min}, consistent with historical trends exhibiting a reduction in diurnal temperature ranges. However, while average changes in T{sub min} and T{sub max} were similar, the inter-model variability of T{sub min} and T{sub max} projections exhibited substantial differences. For example, inter-model standard deviations of June-August T{sub max} changes were more than 50% greater than for T{sub min} throughout much of North America, Europe, and Asia. Model differences in cloud changes, which exert relatively greater influence on T{sub max} during summer and T{sub min} during winter, were identified as the main source of uncertainty disparities. These results highlight the importance of considering separately projections for T{sub max} and T{sub min} when assessing climate change impacts, even in cases where average projected changes are similar. In addition, impacts that are most sensitive to summertime T{sub min} or wintertime T{sub max} may be more predictable than suggested by analyses using only projections of daily average temperatures.
Brodsky, Stanley J; Wu, Xing-Gang
2012-07-27
It is conventional to choose a typical momentum transfer of the process as the renormalization scale and take an arbitrary range to estimate the uncertainty in the QCD prediction. However, predictions using this procedure depend on the renormalization scheme, leave a nonconvergent renormalon perturbative series, and moreover, one obtains incorrect results when applied to QED processes. In contrast, if one fixes the renormalization scale using the principle of maximum conformality (PMC), all nonconformal {β(i)} terms in the perturbative expansion series are summed into the running coupling, and one obtains a unique, scale-fixed, scheme-independent prediction at any finite order. The PMC scale μ(R)(PMC) and the resulting finite-order PMC prediction are both to high accuracy independent of the choice of initial renormalization scale μ(R)(init), consistent with renormalization group invariance. As an application, we apply the PMC procedure to obtain next-to-next-to-leading-order (NNLO) predictions for the tt-pair production at the Tevatron and LHC colliders. The PMC prediction for the total cross section σ(tt) agrees well with the present Tevatron and LHC data. We also verify that the initial scale independence of the PMC prediction is satisfied to high accuracy at the NNLO level: the total cross section remains almost unchanged even when taking very disparate initial scales μ(R)(init) equal to m(t), 20m(t), and √s. Moreover, after PMC scale setting, we obtain A(FB)(tt)≃12.5%, A(FB)(pp)≃8.28% and A(FB)(tt)(M(tt)>450 GeV)≃35.0%. These predictions have a 1σ deviation from the present CDF and D0 measurements; the large discrepancy of the top quark forward-backward asymmetry between the standard model estimate and the data are, thus, greatly reduced.
Maximum-likelihood fits to histograms for improved parameter estimation
Fowler, Joseph W
2013-01-01
Straightforward methods for adapting the familiar chi^2 statistic to histograms of discrete events and other Poisson distributed data generally yield biased estimates of the parameters of a model. The bias can be important even when the total number of events is large. For the case of estimating a microcalorimeter's energy resolution at 6 keV from the observed shape of the Mn K-alpha fluorescence spectrum, a poor choice of chi^2 can lead to biases of at least 10% in the estimated resolution when up to thousands of photons are observed. The best remedy is a Poisson maximum-likelihood fit, through a simple modification of the standard Levenberg-Marquardt algorithm for chi^2 minimization. Where the modification is not possible, another approach allows iterative approximation of the maximum-likelihood fit.
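The remedy described above, a Poisson maximum-likelihood fit, can be sketched without the Levenberg-Marquardt modification by directly minimizing the negative Poisson log-likelihood of a synthetic Gaussian-peak histogram. This is a brute-force grid illustration, not the authors' algorithm; the amplitude has a closed-form ML solution given the peak shape, so only the mean and width are gridded:

```python
import numpy as np

rng = np.random.default_rng(1)
centers = np.linspace(-5.0, 5.0, 50)          # histogram bin centers

def shape(mu, sigma):
    """Unit-amplitude Gaussian peak evaluated at the bin centers."""
    return np.exp(-0.5 * ((centers - mu) / sigma) ** 2)

# Synthetic histogram: Poisson counts around a Gaussian peak of
# amplitude 100, mean 0, width 1.
counts = rng.poisson(100.0 * shape(0.0, 1.0))

best = None
for mu in np.linspace(-0.5, 0.5, 101):
    for sigma in np.linspace(0.5, 1.5, 101):
        g = shape(mu, sigma)
        amp = counts.sum() / g.sum()          # ML amplitude, closed form
        lam = np.clip(amp * g, 1e-12, None)   # guard against log(0)
        # Negative Poisson log-likelihood up to a data-dependent constant.
        nll = float(np.sum(lam - counts * np.log(lam)))
        if best is None or nll < best[0]:
            best = (nll, amp, mu, sigma)
_, amp_hat, mu_hat, sigma_hat = best
```

Unlike a naive chi^2 fit with counts used as their own variance estimates, this objective handles near-empty bins without biasing the width estimate.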
A New Detection Approach Based on the Maximum Entropy Model
DONG Xiaomei; XIANG Guang; YU Ge; LI Xiaohua
2006-01-01
The maximum entropy model was introduced and a new intrusion detection approach based on the maximum entropy model was proposed. The vector space model was adopted for data presentation. The minimal entropy partitioning method was utilized for attribute discretization. Experiments on the KDD CUP 1999 standard data set were designed and the experimental results were shown. The receiver operating characteristic (ROC) curve analysis approach was utilized to analyze the experimental results. The analysis results show that the proposed approach is comparable to those based on support vector machine (SVM) and outperforms those based on C4.5 and Naive Bayes classifiers. According to the overall evaluation result, the proposed approach is a little better than those based on SVM.
Time series analysis by the Maximum Entropy method
Kirk, B.L.; Rust, B.W.; Van Winkle, W.
1979-01-01
The principal subject of this report is the use of the Maximum Entropy method for spectral analysis of time series. The classical Fourier method is also discussed, mainly as a standard for comparison with the Maximum Entropy method. Examples are given which clearly demonstrate the superiority of the latter method over the former when the time series is short. The report also includes a chapter outlining the theory of the method, a discussion of the effects of noise in the data, a chapter on significance tests, a discussion of the problem of choosing the prediction filter length, and, most importantly, a description of a package of FORTRAN subroutines for making the various calculations. Cross-referenced program listings are given in the appendices. The report also includes a chapter demonstrating the use of the programs by means of an example. Real time series like the lynx data and sunspot numbers are also analyzed. 22 figures, 21 tables, 53 references.
Roubik, G.J.
1990-09-12
The purpose of this work is to develop a set of Titanium areal density standards for calibration and maintenance of the Fischer X-ray fluorescence measurement system characterization curve program. The electron microprobe was calibrated for Titanium films on ceramic substrates using an existing set of laboratory standards (quantity: 6; range: 0.310 to 1.605). Fourteen source assemblies were measured and assigned values. These values are based on the mean of five separate readings from best-fit curve equations developed from the plot of the laboratory standards' areal density (source measure) versus electron microprobe measurement (reading). The best-fit equations were determined using the SAS General Linear Modeling (GLM) procedure. Four separate best-fit equations were evaluated (linear, quadratic, cubic and exponential). Areal density values for the Fischer standards appear here ordered by best-fit equation based on maximum R{sup 2}.
Stellinga, B.; Mügge, D.
2014-01-01
The European and global regulation of accounting standards have witnessed remarkable changes over the past twenty years. In the early 1990s, EU accounting practices were fragmented along national lines and US accounting standards were the de facto global standards. Since 2005, all EU listed companie
Direct maximum parsimony phylogeny reconstruction from genotype data
Ravi R
2007-12-01
Background: Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics, with many practical applications in population genetics, whole-genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data; phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. Results: In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Conclusion: Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree-size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.
A contribution to large deviations for heavy-tailed random sums
SU; Chun
2001-01-01
[1] Nagaev, A. V., Integral limit theorems for large deviations when Cramer's condition is not fulfilled I, II, Theory Prob. Appl., 1969, 14: 51-64, 193-208. [2] Nagaev, A. V., Limit theorems for large deviations where Cramer's conditions are violated (in Russian), Izv. Akad. Nauk USSR Ser. Fiz.-Mat. Nauk., 1969, 7: 17. [3] Heyde, C. C., A contribution to the theory of large deviations for sums of independent random variables, Z. Wahrscheinlichkeitsth., 1967, 7: 303. [4] Heyde, C. C., On large deviation probabilities for sums of random variables which are not attracted to the normal law, Ann. Math. Statist., 1967, 38: 1575. [5] Heyde, C. C., On large deviation probabilities in the case of attraction to a non-normal stable law, Sankhya, 1968, 30: 253. [6] Nagaev, S. V., Large deviations for sums of independent random variables, in Sixth Prague Conf. on Information Theory, Random Processes and Statistical Decision Functions, Prague: Academic, 1973, 657-674. [7] Nagaev, S. V., Large deviations of sums of independent random variables, Ann. Prob., 1979, 7: 745. [8] Embrechts, P., Klüppelberg, C., Mikosch, T., Modelling Extremal Events for Insurance and Finance, Berlin-Heidelberg: Springer-Verlag, 1997. [9] Cline, D. B. H., Hsing, T., Large deviation probabilities for sums and maxima of random variables with heavy or subexponential tails, preprint, Texas A&M University, 1991. [10] Klüppelberg, C., Mikosch, T., Large deviations of heavy-tailed random sums with applications to insurance and finance, J. Appl. Prob., 1997, 34: 293.
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies, because such systems show, in general, a continuously rising rotation curve out to the outermost measured radial position. For that reason a general relation has been derived, giving the maximum rotation of a disc as a function of the luminosity, surface brightness, and colour of the disc. The physical basis of this relation is an adopted fixed mass-to-light ratio as a function of colour; that functionality is consistent with results from population synthesis models, and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region, even more so for LSB galaxies. Matters h...
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.
2014-06-01
A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC, and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm, and 1.2 V/cm, respectively. Based on the results for these samples, the total length of CC needed in the design of an SFCL can be determined.
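The length-sizing step mentioned at the end of the abstract is a simple division: the limiter's design voltage over the permissible voltage per unit length gives the minimum conductor length. A minimal sketch, using the 100 ms figures from the abstract; the 13.8 kV design voltage is a hypothetical example value, not from the paper:

```python
# Sketch: sizing the conductor length of a resistive SFCL from the
# maximum permissible voltage per unit length. The 13.8 kV design
# voltage is an assumed example figure.

def min_conductor_length_m(design_voltage_v, permissible_v_per_cm):
    """Minimum YBCO coated-conductor length (metres) so that the voltage
    per unit length stays at or below the permissible limit."""
    length_cm = design_voltage_v / permissible_v_per_cm
    return length_cm / 100.0

# Permissible voltages at 100 ms quench duration (V/cm), from the abstract.
limits = {"SJTU": 0.72, "AMSC 12 mm": 0.52, "AMSC 4 mm": 1.2}

for name, v_per_cm in limits.items():
    print(f"{name}: {min_conductor_length_m(13800, v_per_cm):.1f} m")
```

Note the trade-off this makes visible: the 4 mm AMSC conductor tolerates the highest V/cm, so it needs the least total length for the same design voltage.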
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated and brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore-pressure increase of the injection operation, and to have a Gutenberg-Richter magnitude distribution with a b-value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
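The bound stated in the abstract is directly computable: the maximum seismic moment is at most the injected volume times the modulus of rigidity. A minimal sketch, assuming a typical crustal shear modulus of 3×10^10 Pa and the standard Hanks-Kanamori moment-magnitude relation, neither of which is quoted in the abstract:

```python
import math

# McGarr's upper bound: M0 <= G * dV, where dV is the injected volume.
# The shear modulus (3e10 Pa, a typical crustal value) and the
# Hanks-Kanamori relation Mw = (2/3)(log10 M0 - 9.1) are standard
# assumptions, not figures given in the abstract.

SHEAR_MODULUS_PA = 3.0e10  # assumed crustal rigidity

def max_seismic_moment(injected_volume_m3):
    """Upper bound on seismic moment in N*m: M0 <= G * dV."""
    return SHEAR_MODULUS_PA * injected_volume_m3

def moment_magnitude(m0_nm):
    """Hanks-Kanamori moment magnitude from seismic moment (N*m)."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

# Example: 1e5 m^3 of injected wastewater.
m0 = max_seismic_moment(1.0e5)
print(f"M0 <= {m0:.2e} N*m, Mw <= {moment_magnitude(m0):.2f}")
```

Under these assumptions, reaching the magnitude-5-plus events mentioned for wastewater disposal requires injected volumes on the order of 10^7 m^3, consistent with the logarithmic dependence of magnitude on moment.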
A large deviations approach to limit theory for heavy-tailed time series
Mikosch, Thomas Valentin; Wintenberger, Olivier
2016-01-01
In this paper we propagate a large deviations approach for proving limit theory for (generally) multivariate time series with heavy tails. We make this notion precise by introducing regularly varying time series. We provide general large deviation results for functionals acting on a sample path...... and vanishing in some neighborhood of the origin. We study a variety of such functionals, including large deviations of random walks, their suprema, the ruin functional, and further derive weak limit theory for maxima, point processes, cluster functionals and the tail empirical process. One of the main results...
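The intuition behind large deviations of heavy-tailed random walks is the "single big jump" principle: for subexponential summands, a rare large sum is overwhelmingly likely to be caused by one extreme summand, so P(S_n > x) ≈ n·P(X₁ > x) for large x. A Monte Carlo sketch with Pareto tails; the distribution and all numerical choices are illustrative, not the paper's setting:

```python
import random

# Single-big-jump heuristic for subexponential tails:
#   P(X1 + ... + Xn > x)  ~  n * P(X1 > x)   as x -> infinity.
# Pareto tail index, n, and threshold below are illustrative choices.

random.seed(7)

ALPHA = 1.5     # Pareto tail: P(X > x) = x**(-ALPHA) for x >= 1
N = 5           # number of summands
X_THRESH = 100.0
TRIALS = 200_000

def pareto():
    # Inverse-transform sampling: U**(-1/alpha) has the Pareto tail above.
    return random.random() ** (-1.0 / ALPHA)

hits = sum(1 for _ in range(TRIALS)
           if sum(pareto() for _ in range(N)) > X_THRESH)

mc_tail = hits / TRIALS
approx = N * X_THRESH ** (-ALPHA)   # n * P(X1 > x)
print(f"Monte Carlo P(S_n > x) = {mc_tail:.4f}, n*P(X1 > x) = {approx:.4f}")
```

The two numbers agree to within a modest factor at this threshold and converge as the threshold grows, which is exactly the regime the paper's functional large deviation results make precise.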
Method for solving fully fuzzy linear programming problems using deviation degree measure
Haifang Cheng; Weilai Huang; Jianhu Cai
2013-01-01
A new fully fuzzy linear programming (FFLP) problem with fuzzy equality constraints is discussed. Using deviation degree measures, the FFLP problem is transformed into a crisp δ-parametric linear programming (LP) problem. Given the value of the deviation degree in each constraint, the δ-fuzzy optimal solution of the FFLP problem can be obtained by solving this LP problem. An algorithm is also proposed to find a balanced fuzzy optimal solution between two goals in conflict: improving the values of the objective function and decreasing the values of the deviation degrees. A numerical example is solved to illustrate the proposed method.
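The abstract does not spell out the deviation-degree transformation itself, so as a simpler stand-in, the sketch below shows the general shape of such a reduction: each triangular fuzzy coefficient (l, m, u) is collapsed to a crisp number (here by its centroid) so that an ordinary LP remains. All numbers are made up for illustration; the paper's δ-parametric construction is more refined than this.

```python
# Minimal sketch of defuzzifying a fully fuzzy LP. Centroid defuzzification
# is used here as a simple stand-in for the paper's deviation-degree
# transformation (which the abstract does not fully specify).
# All coefficients are hypothetical.

def centroid(tfn):
    """Centroid of a triangular fuzzy number (l, m, u)."""
    l, m, u = tfn
    return (l + m + u) / 3.0

# Fuzzy objective coefficients and one fuzzy constraint a1*x1 + a2*x2 <= b.
c = [(2.0, 3.0, 4.0), (1.0, 2.0, 3.0)]
a = [(0.5, 1.0, 1.5), (1.0, 1.0, 1.0)]
b = (8.0, 10.0, 12.0)

crisp_c = [centroid(t) for t in c]
crisp_a = [centroid(t) for t in a]
crisp_b = centroid(b)

# The remaining crisp problem: maximize 3*x1 + 2*x2 s.t. x1 + x2 <= 10,
# which any standard LP solver can handle.
print(crisp_c, crisp_a, crisp_b)
```

The paper's contribution is precisely in doing this reduction without discarding the fuzziness: the deviation degree δ parameterizes how far each fuzzy equality is allowed to be violated, and the proposed algorithm trades that off against the objective value.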
Non-equilibrium steady states: fluctuations and large deviations of the density and of the current
Derrida, Bernard
2007-07-01
These lecture notes give a short review of methods such as the matrix ansatz, the additivity principle or the macroscopic fluctuation theory, developed recently in the theory of non-equilibrium phenomena. They show how these methods allow us to calculate the fluctuations and large deviations of the density and the current in non-equilibrium steady states of systems like exclusion processes. The properties of these fluctuations and large deviation functions in non-equilibrium steady states (for example, non-Gaussian fluctuations of density or non-convexity of the large deviation function which generalizes the notion of free energy) are compared with those of systems at equilibrium.
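To make the "generalized free energy" analogy concrete, the simplest large deviation function is the Cramér rate function for the empirical mean of coin flips, I(a) = a·ln(a/p) + (1−a)·ln((1−a)/(1−p)), which is convex with its minimum (zero) at the typical value. The Bernoulli example is an illustrative choice, not taken from the lecture notes; the point of the non-equilibrium results reviewed there is that such convexity can fail.

```python
import math

# Cramér rate function for the mean of Bernoulli(p) variables: the
# equilibrium-style, convex prototype of a large deviation function.
# The lecture notes show that non-equilibrium analogues need not be convex.

def rate(a, p=0.5):
    """I(a) = a*ln(a/p) + (1-a)*ln((1-a)/(1-p)), for 0 < a < 1."""
    return a * math.log(a / p) + (1 - a) * math.log((1 - a) / (1 - p))

xs = [i / 100 for i in range(1, 100)]
vals = [rate(a) for a in xs]

# Discrete convexity check: each midpoint value lies below the chord.
convex = all(vals[i] <= (vals[i - 1] + vals[i + 1]) / 2 + 1e-12
             for i in range(1, len(vals) - 1))
print("minimum at a = 0.5:", min(vals) == rate(0.5), "| convex:", convex)
```

Zero rate at the law-of-large-numbers value means typical fluctuations carry no exponential cost; the positive, growing values away from 0.5 quantify how exponentially unlikely atypical densities or currents are, which is exactly the role the density and current large deviation functions play in the exclusion-process results.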
Saito, Takuya
2017-09-01
We discuss a deviation from the fluctuation-dissipation relation (FDR) in a driven superdiffusive system, exemplified by polymer stretching. The superdiffusion is found by monitoring the momentum transfer to a tracer, which is the observable conjugate to the position. Molecular-dynamics simulation demonstrates that the FDR deviates during the nonequilibrium transient process. We then propose a nonequilibrium mode analysis for superdiffusion, as a counterpart to that for driven subdiffusion. The mode analysis yields results in qualitative agreement with the simulation results, suggesting that fluctuations of the stiffness in the system, from initial equilibrium to stretching, account for the FDR deviation.