A two-point diagnostic for the H II galaxy Hubble diagram
Leaf, Kyle; Melia, Fulvio
2018-03-01
A previous analysis of starburst-dominated H II galaxies and H II regions has demonstrated a statistically significant preference for the Friedmann-Robertson-Walker cosmology with zero active mass, known as the Rh = ct universe, over Λ cold dark matter (ΛCDM) and its related dark-matter parametrizations. In this paper, we employ a two-point diagnostic with these data to present a complementary statistical comparison of Rh = ct with Planck ΛCDM. Our two-point diagnostic compares, in a pairwise fashion, the difference between the distance modulus measured at two redshifts with that predicted by each cosmology. Our results support the conclusion drawn by a previous comparative analysis demonstrating that Rh = ct is statistically preferred over Planck ΛCDM. But we also find that the reported errors in the H II measurements may not be purely Gaussian, perhaps due to partial contamination by non-Gaussian systematic effects. The use of H II galaxies and H II regions as standard candles may be improved even further with better handling of the systematics in these sources.
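The two-point diagnostic described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's pipeline; the Hubble constant and matter density values are placeholders, and the flat-ΛCDM integrand is the standard textbook form.

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light, km/s

def dl_rh_ct(z, h0=70.0):
    """Luminosity distance (Mpc) in the Rh = ct universe."""
    return (C_KM_S / h0) * (1.0 + z) * np.log(1.0 + z)

def dl_lcdm(z, h0=70.0, om=0.3):
    """Luminosity distance (Mpc) in flat LambdaCDM with matter density om."""
    integral, _ = quad(lambda zp: (om * (1.0 + zp) ** 3 + 1.0 - om) ** -0.5, 0.0, z)
    return (C_KM_S / h0) * (1.0 + z) * integral

def mu(dl_mpc):
    """Distance modulus for a luminosity distance given in Mpc."""
    return 5.0 * np.log10(dl_mpc) + 25.0

def two_point_diff(z1, z2, dl_model):
    """Model prediction for mu(z2) - mu(z1), to be compared pairwise with data."""
    return mu(dl_model(z2)) - mu(dl_model(z1))
```

Because the difference subtracts the common 5 log10(c/H0) term, the predicted Δμ between any redshift pair is independent of H0, which is part of what makes pairwise two-point comparisons attractive.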
Pilot points method for conditioning multiple-point statistical facies simulation on flow data
Ma, Wei; Jafarpour, Behnam
2018-05-01
We propose a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and then are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) is adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at selected locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.
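The score-map construction for pilot point placement can be sketched roughly as below. The equal weighting and min-max normalization are illustrative assumptions, not the paper's exact scoring scheme.

```python
import numpy as np

def normalize(field):
    """Scale a 2-D field to [0, 1]; a constant field maps to zeros."""
    rng = field.max() - field.min()
    return np.zeros_like(field, dtype=float) if rng == 0 else (field - field.min()) / rng

def score_map(uncertainty, sensitivity, data_mismatch, weights=(1.0, 1.0, 1.0)):
    """Weighted combination of the three information sources on a grid."""
    w1, w2, w3 = weights
    return (w1 * normalize(uncertainty)
            + w2 * normalize(sensitivity)
            + w3 * normalize(data_mismatch))

def place_pilot_points(score, k):
    """Return (row, col) indices of the k highest-scoring grid cells."""
    flat = np.argsort(score, axis=None)[::-1][:k]
    return [tuple(idx) for idx in np.array(np.unravel_index(flat, score.shape)).T]
```

In practice one would also enforce a minimum spacing between selected points and exclude cells holding hard data; those refinements are omitted here.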
Milewski, Emil G
2012-01-01
REA's Essentials provide quick and easy access to critical information in a variety of different fields, ranging from the most basic to the most advanced. As its name implies, these concise, comprehensive study guides summarize the essentials of the field covered. Essentials are helpful when preparing for exams or doing homework, and they remain a lasting reference source for students, teachers, and professionals. Statistics II discusses sampling theory, statistical inference, independent and dependent variables, correlation theory, experimental design, count data, the chi-square test, and time series.
THE GROWTH POINTS OF STATISTICAL METHODS
Orlov A. I.
2014-01-01
On the basis of a new paradigm of applied mathematical statistics, data analysis and economic-mathematical methods are identified; we also discuss five topical areas in which modern applied statistics is developing, i.e. five "growth points": nonparametric statistics, robustness, computer-statistical methods, statistics of interval data, and statistics of non-numeric data.
Statistical aspects of determinantal point processes
DEFF Research Database (Denmark)
Lavancier, Frédéric; Møller, Jesper; Rubak, Ege
The statistical aspects of determinantal point processes (DPPs) seem largely unexplored. We review the appealing properties of DPPs, demonstrate that they are useful models for repulsiveness, detail a simulation procedure, and provide freely available software for simulation and statistical inference…
Pseudo-dynamic source modelling with 1-point and 2-point statistics of earthquake source parameters
Song, S. G.
2013-12-24
Ground motion prediction is an essential element in seismic hazard and risk analysis. Empirical ground motion prediction approaches have been widely used in the community, but efficient simulation-based ground motion prediction methods are needed to complement empirical approaches, especially in regions with limited data constraints. Recently, dynamic rupture modelling has been successfully adopted in physics-based source and ground motion modelling, but it is still computationally demanding and many input parameters are not well constrained by observational data. Pseudo-dynamic source modelling keeps the form of kinematic modelling with its computational efficiency, but also tries to emulate the physics of the source process. In this paper, we develop a statistical framework that governs the finite-fault rupture process with 1-point and 2-point statistics of source parameters in order to quantify the variability of finite source models for future scenario events. We test this method by extracting 1-point and 2-point statistics from dynamically derived source models and simulating a number of rupture scenarios, given target 1-point and 2-point statistics. We propose a new rupture model generator for stochastic source modelling with the covariance matrix constructed from target 2-point statistics, that is, auto- and cross-correlations. Our sensitivity analysis of near-source ground motions to 1-point and 2-point statistics of source parameters provides insights into relations between statistical rupture properties and ground motions. We observe that larger standard deviation and stronger correlation produce stronger peak ground motions in general. The proposed new source modelling approach will contribute to understanding the effect of the earthquake source on near-source ground motion characteristics in a more quantitative and systematic way.
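The core of the rupture generator described above, building a covariance matrix from a target 2-point statistic and drawing correlated realizations with a prescribed 1-point mean, can be sketched in one dimension. The exponential autocorrelation model used here is an assumed placeholder, not the paper's calibrated correlation structure.

```python
import numpy as np

def exp_corr_cov(n, dx, corr_len, sigma):
    """Covariance matrix for an exponential autocorrelation model on a 1-D grid."""
    x = np.arange(n) * dx
    lags = np.abs(x[:, None] - x[None, :])
    return sigma ** 2 * np.exp(-lags / corr_len)

def sample_field(mean, cov, rng):
    """One realization with the target 1-point mean and 2-point covariance."""
    # Small jitter keeps the Cholesky factorization numerically stable
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(cov)))
    return mean + L @ rng.standard_normal(len(cov))
```

A single source parameter (e.g. slip along strike) would be drawn as `sample_field(mean_slip, exp_corr_cov(...), rng)`; cross-correlated parameters require a block covariance assembled from the auto- and cross-correlations.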
Statistical aspects of determinantal point processes
DEFF Research Database (Denmark)
Lavancier, Frédéric; Møller, Jesper; Rubak, Ege Holger
The statistical aspects of determinantal point processes (DPPs) seem largely unexplored. We review the appealing properties of DPPs, demonstrate that they are useful models for repulsiveness, detail a simulation procedure, and provide freely available software for simulation and statistical inference. We pay special attention to stationary DPPs, where we give a simple condition ensuring their existence, construct parametric models, describe how they can be well approximated so that the likelihood can be evaluated and realizations can be simulated, and discuss how statistical inference…
Parametric statistical change point analysis
Chen, Jie
2000-01-01
This work is an in-depth study of the change point problem from a general point of view and a further examination of change point analysis of the most commonly used statistical models. Change point problems are encountered in such disciplines as economics, finance, medicine, psychology, signal processing, and geology, to mention only several. The exposition is clear and systematic, with a great deal of introductory material included. Different models are presented in each chapter, including gamma and exponential models, rarely examined thus far in the literature. Other models covered in detail are the multivariate normal, univariate normal, regression, and discrete models. Extensive examples throughout the text emphasize key concepts, and different methodologies are used, namely the likelihood ratio criterion and the Bayesian and information criterion approaches. A comprehensive bibliography and two indices complete the study.
International Nuclear Information System (INIS)
Yamaoka, Naoto; Watanabe, Wataru; Hontani, Hidekata
2010-01-01
When constructing a statistical point cloud model, we usually need to calculate corresponding points, and the resulting statistical model differs depending on the method used to calculate them. This article examines how the choice of method for calculating corresponding points affects statistical models of human organs. We validated the performance of each statistical model by registering it to an organ surface in a 3D medical image. We compare two methods for calculating corresponding points. The first, Generalized Multi-Dimensional Scaling (GMDS), determines corresponding points from the shapes of two curved surfaces. The second, the entropy-based particle system, chooses corresponding points by statistically analyzing a number of curved surfaces. With each method we construct a statistical model, and using these models we perform registration with the medical image. For estimation we use non-parametric belief propagation, which estimates not only the position of the organ but also the probability density of the organ position. We evaluate how the two methods for calculating corresponding points affect the statistical model through the change in probability density of each point. (author)
Capturing rogue waves by multi-point statistics
International Nuclear Information System (INIS)
Hadjihosseini, A; Wächter, Matthias; Peinke, J; Hoffmann, N P
2016-01-01
As an example of a complex system with extreme events, we investigate ocean wave states exhibiting rogue waves. We present a statistical method of data analysis based on multi-point statistics which, for the first time, allows extreme rogue wave events to be captured in a statistically satisfactory manner. The key to the success of the approach is mapping the complexity of multi-point data onto the statistics of hierarchically ordered height increments for different time scales, for which we can show that a stochastic cascade process with Markov properties is governed by a Fokker–Planck equation. Conditional probabilities as well as the Fokker–Planck equation itself can be estimated directly from the available observational data. With this stochastic description, surrogate data sets can in turn be generated, which makes it possible to work out arbitrary statistical features of the complex sea state in general, and extreme rogue wave events in particular. The results also open up new perspectives for forecasting the occurrence probability of extreme rogue wave events, and even for forecasting the occurrence of individual rogue waves based on precursory dynamics. (paper)
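The first step of the analysis described above, forming height increments at different time scales and estimating conditional probabilities from them, can be sketched as follows. Bin counts and the alignment of the two increment series are illustrative choices.

```python
import numpy as np

def increments(h, tau):
    """Height increments h(t + tau) - h(t) for a lag of tau samples."""
    return h[tau:] - h[:-tau]

def conditional_hist(x_small, x_large, bins=20):
    """Joint density and conditional density p(x_small | x_large) from data.

    Rows of the returned arrays index x_small bins, columns index x_large bins.
    """
    joint, xe, ye = np.histogram2d(x_small, x_large, bins=bins, density=True)
    marginal = joint.sum(axis=0) * np.diff(xe)[0]  # density of x_large
    with np.errstate(divide="ignore", invalid="ignore"):
        cond = np.where(marginal > 0, joint / marginal, 0.0)
    return joint, cond, (xe, ye)
```

Estimating the Fokker–Planck drift and diffusion coefficients would proceed from such conditional densities via conditional moments over decreasing scale separations; that step is omitted here.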
History Matching Through a Smooth Formulation of Multiple-Point Statistics
DEFF Research Database (Denmark)
Melnikova, Yulia; Zunino, Andrea; Lange, Katrine
2014-01-01
We propose a smooth formulation of multiple-point statistics that enables us to solve inverse problems using gradient-based optimization techniques. We introduce a differentiable function that quantifies the mismatch between multiple-point statistics of a training image and of a given model. We show that, by minimizing this function, any continuous image can be gradually transformed into an image that honors the multiple-point statistics of the discrete training image. The solution to an inverse problem is then found by minimizing the sum of two mismatches: the mismatch with data and the mismatch with multiple-point statistics. As a result, in the framework of the Bayesian approach, such a solution belongs to a high posterior region. The methodology, while applicable to any inverse problem with a training-image-based prior, is especially beneficial for problems which require expensive…
Discussion of "Modern statistics for spatial point processes"
DEFF Research Database (Denmark)
Jensen, Eva Bjørn Vedel; Prokesová, Michaela; Hellmund, Gunnar
2007-01-01
The paper ‘Modern statistics for spatial point processes’ by Jesper Møller and Rasmus P. Waagepetersen is based on a special invited lecture given by the authors at the 21st Nordic Conference on Mathematical Statistics, held at Rebild, Denmark, in June 2006. At the conference, Antti…
Some properties of point processes in statistical optics
International Nuclear Information System (INIS)
Picinbono, B.; Bendjaballah, C.
2010-01-01
The analysis of the statistical properties of the point process (PP) of photon detection times can be used to determine whether or not an optical field is classical, in the sense that its statistical description does not require the methods of quantum optics. This determination is, however, more difficult than ordinarily admitted and the first aim of this paper is to illustrate this point by using some results of the PP theory. For example, it is well known that the analysis of the photodetection of classical fields exhibits the so-called bunching effect. But this property alone cannot be used to decide the nature of a given optical field. Indeed, we have presented examples of point processes for which a bunching effect appears and yet they cannot be obtained from a classical field. These examples are illustrated by computer simulations. Similarly, it is often admitted that for fields with very low light intensity the bunching or antibunching can be described by using the statistical properties of the distance between successive events of the point process, which simplifies the experimental procedure. We have shown that, while this property is valid for classical PPs, it has no reason to be true for nonclassical PPs, and we have presented some examples of this situation also illustrated by computer simulations.
Statistical representation of a spray as a point process
International Nuclear Information System (INIS)
Subramaniam, S.
2000-01-01
The statistical representation of a spray as a finite point process is investigated. One objective is to develop a better understanding of how single-point statistical information contained in descriptions such as the droplet distribution function (ddf), relates to the probability density functions (pdfs) associated with the droplets themselves. Single-point statistical information contained in the droplet distribution function (ddf) is shown to be related to a sequence of single surrogate-droplet pdfs, which are in general different from the physical single-droplet pdfs. It is shown that the ddf contains less information than the fundamental single-point statistical representation of the spray, which is also described. The analysis shows which events associated with the ensemble of spray droplets can be characterized by the ddf, and which cannot. The implications of these findings for the ddf approach to spray modeling are discussed. The results of this study also have important consequences for the initialization and evolution of direct numerical simulations (DNS) of multiphase flows, which are usually initialized on the basis of single-point statistics such as the droplet number density in physical space. If multiphase DNS are initialized in this way, this implies that even the initial representation contains certain implicit assumptions concerning the complete ensemble of realizations, which are invalid for general multiphase flows. Also the evolution of a DNS initialized in this manner is shown to be valid only if an as yet unproven commutation hypothesis holds true. Therefore, it is questionable to what extent DNS that are initialized in this manner constitute a direct simulation of the physical droplets. Implications of these findings for large eddy simulations of multiphase flows are also discussed. (c) 2000 American Institute of Physics
Statistical properties of several models of fractional random point processes
Bendjaballah, C.
2011-08-01
Statistical properties of several models of fractional random point processes have been analyzed from the counting and time interval statistics points of view. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.
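The reduced-variance criterion mentioned in this abstract is commonly quantified by the Fano factor of the counting statistics; a minimal sketch follows (window length and rate below are arbitrary illustrative values).

```python
import numpy as np

def counts_in_windows(event_times, t_max, window):
    """Number of events in consecutive windows of fixed length up to t_max."""
    edges = np.arange(0.0, t_max + window, window)
    counts, _ = np.histogram(event_times, bins=edges)
    return counts

def fano_factor(counts):
    """Reduced variance Var(N)/E[N]: 1 for Poisson, below 1 flags nonclassical counting."""
    return counts.var(ddof=1) / counts.mean()
```

For a homogeneous Poisson process the estimate fluctuates around 1; systematic deviations across window sizes are what distinguish the fractional and conditional-Poisson models discussed above.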
International Nuclear Information System (INIS)
Hufnagel, Heike; Pennec, Xavier; Ayache, Nicholas; Ehrhardt, Jan; Handels, Heinz
2008-01-01
Identification of point correspondences between shapes is required for statistical analysis of organ shape differences. Since manual identification of landmarks is not a feasible option in 3D, several methods were developed to automatically find one-to-one correspondences on shape surfaces. For unstructured point sets, however, one-to-one correspondences do not exist, but correspondence probabilities can be determined. A method was developed to compute a statistical shape model based on shapes which are represented by unstructured point sets with arbitrary point numbers. A fundamental problem when computing statistical shape models is the determination of correspondences between the points of the shape observations of the training data set. In the absence of landmarks, exact correspondences can only be determined between continuous surfaces, not between unstructured point sets. To overcome this problem, we introduce correspondence probabilities instead of exact correspondences. The correspondence probabilities are found by aligning the observation shapes with the affine expectation maximization-iterative closest points (EM-ICP) registration algorithm. In a second step, the correspondence probabilities are used as input to compute a mean shape (represented once again by an unstructured point set). Both steps are unified in a single optimization criterion which depends on the two parameters 'registration transformation' and 'mean shape'. In a last step, a variability model which best represents the variability in the training data set is computed. Experiments on synthetic data sets and in vivo brain structure data sets (MRI) are then designed to evaluate the performance of our algorithm. The new method was applied to brain MRI data sets, and the estimated point correspondences were compared to a statistical shape model built on exact correspondences. Based on established measures of 'generalization ability' and 'specificity', the estimates were very satisfactory.
Sears Point Tidal Marsh Restoration Project: Phase II
Information about the SFBWQP Sears Point Tidal Marsh Restoration Project: Phase II, part of an EPA competitive grant program to improve SF Bay water quality focused on restoring impaired waters and enhancing aquatic resources.
Exploring Foundation Concepts in Introductory Statistics Using Dynamic Data Points
Ekol, George
2015-01-01
This paper analyses introductory statistics students' verbal and gestural expressions as they interacted with a dynamic sketch (DS) designed using "Sketchpad" software. The DS involved numeric data points built on the number line whose values changed as the points were dragged along the number line. The study is framed on aggregate…
Summary statistics for end-point conditioned continuous-time Markov chains
DEFF Research Database (Denmark)
Hobolth, Asger; Jensen, Jens Ledet
Continuous-time Markov chains are a widely used modelling tool. Applications include DNA sequence evolution, ion channel gating behavior and mathematical finance. We consider the problem of calculating properties of summary statistics (e.g. mean time spent in a state, mean number of jumps between two states and the distribution of the total number of jumps) for discretely observed continuous time Markov chains. Three alternative methods for calculating properties of summary statistics are described and the pros and cons of the methods are discussed. The methods are based on (i) an eigenvalue decomposition of the rate matrix, (ii) the uniformization method, and (iii) integrals of matrix exponentials. In particular we develop a framework that allows for analyses of rather general summary statistics using the uniformization method.
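Method (ii), uniformization, can be sketched for the basic task of computing CTMC transition probabilities P(t) = exp(Qt). This is the generic textbook construction, not the authors' summary-statistics framework built on top of it.

```python
import numpy as np

def transition_probabilities(Q, t, tol=1e-12, max_terms=10000):
    """P(t) = exp(Q*t) for a CTMC rate matrix Q, via uniformization.

    Writes exp(Qt) as a Poisson(lam*t) mixture of powers of the discrete
    jump matrix R = I + Q/lam, truncated once the Poisson weights have
    accumulated to 1 - tol.
    """
    n = Q.shape[0]
    lam = -Q.diagonal().min()          # uniformization rate
    if lam == 0.0:                     # Q == 0: nothing ever moves
        return np.eye(n)
    R = np.eye(n) + Q / lam            # stochastic jump matrix
    weight = np.exp(-lam * t)          # Poisson weight for k = 0
    P = weight * np.eye(n)
    Rk = np.eye(n)
    acc = weight
    for k in range(1, max_terms):
        weight *= lam * t / k
        Rk = Rk @ R
        P += weight * Rk
        acc += weight
        if acc >= 1.0 - tol:
            break
    return P
```

Unlike an eigenvalue decomposition, every term here is nonnegative, which is why uniformization is numerically robust for stiff rate matrices.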
Visualizing Type-II Weyl Points in Tungsten Ditelluride by Quasiparticle Interference.
Lin, Chun-Liang; Arafune, Ryuichi; Liu, Ro-Ya; Yoshimura, Masato; Feng, Baojie; Kawahara, Kazuaki; Ni, Zeyuan; Minamitani, Emi; Watanabe, Satoshi; Shi, Youguo; Kawai, Maki; Chiang, Tai-Chang; Matsuda, Iwao; Takagi, Noriaki
2017-11-28
Weyl semimetals (WSMs) are classified into two types, type I and II, according to the topology of the Weyl point, where the electron and hole pockets touch each other. Tungsten ditelluride (WTe2) has garnered a great deal of attention as a strong candidate to be a type-II WSM. However, the Weyl points for WTe2 are located above the Fermi level, which has prevented us from identifying the locations and the connection to the Fermi arc surface states by using angle-resolved photoemission spectroscopy. Here, we present experimental proof that WTe2 is a type-II WSM. We measured energy-dependent quasiparticle interference patterns with a cryogenic scanning tunneling microscope, revealing the position of the Weyl point and its connection with the Fermi arc surface states, in agreement with prior theoretical predictions. Our results provide an answer to this crucial question and stimulate further exploration of the characteristics of WSMs.
The statistics of the points where nodal lines intersect a reference curve
International Nuclear Information System (INIS)
Aronovitch, Amit; Smilansky, Uzy
2007-01-01
We study the intersection points of a fixed planar curve Γ with the nodal set of a translationally invariant and isotropic Gaussian random field Ψ(r) and the zeros of its normal derivative across the curve. The intersection points form a discrete random process which is the object of this study. The field probability distribution function is completely specified by the correlation G(|r - r'|) = ⟨Ψ(r)Ψ(r')⟩. Given an arbitrary G(|r - r'|), we compute the two-point correlation function of the point process on the line, and derive other statistical measures (repulsion, rigidity) which characterize the short- and long-range correlations of the intersection points. We use these statistical measures to quantitatively characterize the complex patterns displayed by various kinds of nodal networks. We apply these statistics in particular to nodal patterns of random waves and of eigenfunctions of chaotic billiards. Of special interest is the observation that for monochromatic random waves, the number variance of the intersections with long straight segments grows like L ln L, as opposed to the linear growth predicted by the percolation model, which was successfully used to predict other long-range nodal properties of that field
Statistical methods for change-point detection in surface temperature records
Pintar, A. L.; Possolo, A.; Zhang, N. F.
2013-09-01
We describe several statistical methods to detect possible change-points in a time series of values of surface temperature measured at a meteorological station, and to assess the statistical significance of such changes, taking into account the natural variability of the measured values, and the autocorrelations between them. These methods serve to determine whether the record may suffer from biases unrelated to the climate signal, hence whether there may be a need for adjustments as considered by M. J. Menne and C. N. Williams (2009) "Homogenization of Temperature Series via Pairwise Comparisons", Journal of Climate 22 (7), 1700-1717. We also review methods to characterize patterns of seasonality (seasonal decomposition using monthly medians or robust local regression), and explain the role they play in the imputation of missing values, and in enabling robust decompositions of the measured values into a seasonal component, a possible climate signal, and a station-specific remainder. The methods for change-point detection that we describe include statistical process control, wavelet multi-resolution analysis, adaptive weights smoothing, and a Bayesian procedure, all of which are applicable to single station records.
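A minimal version of one change-point statistic, a CUSUM-style scan for a single mean shift, can be sketched as follows. It deliberately ignores the autocorrelation and seasonality issues the abstract emphasizes, which a serious homogenization analysis must handle.

```python
import numpy as np

def cusum_change_point(x):
    """Locate the most likely single mean shift in a series.

    Scans every split point k and returns the (k, statistic) pair maximizing
    the scaled difference between the means of x[:k] and x[k:].
    """
    n = len(x)
    best_k, best_stat = None, -np.inf
    for k in range(2, n - 1):
        # two-sample statistic for a shift in mean at split point k
        stat = abs(x[:k].mean() - x[k:].mean()) * np.sqrt(k * (n - k) / n)
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat
```

Significance would then be assessed against the null distribution of the maximum statistic, e.g. by permutation or block bootstrap to respect autocorrelation.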
Nakamura, Tatsuya; Matsumoto, Masakazu; Yagasaki, Takuma; Tanaka, Hideki
2016-03-03
We investigate why no hydrogen-disordered form of ice II has been found in nature despite the fact that most of hydrogen-ordered ices have hydrogen-disordered counterparts. The thermodynamic stability of a set of hydrogen-ordered ice II variants relative to ice II is evaluated theoretically. It is found that ice II is more stable than the disordered variants so generated as to satisfy the simple ice rule due to the lower zero-point energy as well as the pair interaction energy. The residual entropy of the disordered ice II phase gradually compensates the unfavorable free energy with increasing temperature. The crossover, however, occurs at a high temperature well above the melting point of ice III. Consequently, the hydrogen-disordered phase does not exist in nature. The thermodynamic stability of partially hydrogen-disordered ices is also scrutinized by examining the free-energy components of several variants obtained by systematic inversion of OH directions in ice II. The potential energy of one variant is lower than that of the ice II structure, but its Gibbs free energy is slightly higher than that of ice II due to the zero-point energy. The slight difference in the thermodynamic stability leaves the possibility of the partial hydrogen-disorder in real ice II.
International Nuclear Information System (INIS)
Saito, Toki; Nakajima, Yoshikazu; Sugita, Naohiko; Mitsuishi, Mamoru; Hashizume, Hiroyuki; Kuramoto, Kouichi; Nakashima, Yosio
2011-01-01
Statistical deformable model based two-dimensional/three-dimensional (2-D/3-D) registration is a promising method for estimating the position and shape of patient bone in the surgical space. Since its accuracy depends on the statistical model capacity, we propose a method for accurately generating a statistical bone model from a CT volume. Our method employs the Sphere-Attribute-Image (SAI) and improves the accuracy of corresponding point search in statistical model generation. First, target bone surfaces are extracted as SAIs from the CT volume. Then the textures of the SAIs are classified into regions using the Maximally Stable Extremal Regions (MSER) method. Next, corresponding regions are determined using normalized cross-correlation (NCC). Finally, corresponding points in each corresponding region are determined using NCC. We applied our method to femur bone models, and it worked well in the experiments. (author)
The validity of multiphase DNS initialized on the basis of single-point statistics
Subramaniam, Shankar
1999-11-01
A study of the point-process statistical representation of a spray reveals that single-point statistical information contained in the droplet distribution function (ddf) is related to a sequence of single surrogate-droplet pdfs, which are in general different from the physical single-droplet pdfs. The results of this study have important consequences for the initialization and evolution of direct numerical simulations (DNS) of multiphase flows, which are usually initialized on the basis of single-point statistics such as the average number density in physical space. If multiphase DNS are initialized in this way, this implies that even the initial representation contains certain implicit assumptions concerning the complete ensemble of realizations, which are invalid for general multiphase flows. Also the evolution of a DNS initialized in this manner is shown to be valid only if an as yet unproven commutation hypothesis holds true. Therefore, it is questionable to what extent DNS that are initialized in this manner constitute a direct simulation of the physical droplets.
DEFF Research Database (Denmark)
Herrmann, Ivan Tengbjerg; Henningsen, Geraldine; Wood, Christian D.
2013-01-01
Quantitative methods exist for evaluating uncertainty—for example, Monte Carlo simulation—and such methods work very well when the AN is in full control of the data collection and model-building processes. In many cases, however, the AN is not in control of these processes. In this article we develop a simple method that a DM can employ in order to evaluate the process of decision support from a statistical point of view. We call this approach the "Statistical Value Chain" (SVC): a consecutive benchmarking checklist with eight steps that can be used to evaluate decision support seen from a statistical point of view.
Gaussian point count statistics for families of curves over a fixed finite field
Kurlberg, Par; Wigman, Igor
2010-01-01
We produce a collection of families of curves whose point count statistics over F_p become Gaussian for p fixed. In particular, the average number of F_p points on curves in these families tends to infinity.
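Counting F_p points on a single curve, the quantity whose statistics over a family is studied above, can be sketched by brute force for Weierstrass-form curves y^2 = x^3 + ax + b (suitable only for small p; the families in the paper are more general).

```python
def affine_point_count(a, b, p):
    """Count affine points on y^2 = x^3 + a*x + b over F_p (p an odd prime)."""
    # num_roots[r] = number of y in F_p solving y^2 = r
    num_roots = [0] * p
    for y in range(p):
        num_roots[(y * y) % p] += 1
    return sum(num_roots[(x * x * x + a * x + b) % p] for x in range(p))
```

For a nonsingular curve the Hasse bound guarantees the affine count stays within 2·sqrt(p) of p, so the fluctuation normalized by sqrt(p) is the natural object whose distribution over a family one examines.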
Determining decoupling points in a supply chain networks using NSGA II algorithm
Energy Technology Data Exchange (ETDEWEB)
Ebrahimiarjestan, M.; Wang, G.
2017-07-01
Purpose: Building on the concepts of Lee and Amaral (2002) and Tang and Zhou (2009), we offer a multi-criteria decision-making model that identifies decoupling points so as to minimize production costs, minimize product delivery time to customers, and maximize customer satisfaction. Design/methodology/approach: The result is a triple-objective model; a meta-heuristic method (NSGA II) is used to solve it and to identify the Pareto optimal points. The max (min) method was used. Findings: Our results from using NSGA II to find Pareto optimal solutions demonstrate its good performance in extracting Pareto solutions for the proposed model, which determines decoupling points in a supply network. Originality/value: Several approaches to this problem have been proposed so far, each modelling only part of the concept; the model defined here treats the concept more generally. In this model, we face a multi-criteria decision problem that includes minimization of production costs and product delivery time to customers as well as maximization of customer satisfaction.
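The Pareto-dominance test at the heart of NSGA II can be sketched as follows. Minimization of all objectives is assumed; this is the generic textbook definition, not the paper's full algorithm with non-dominated sorting and crowding distance.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]
```

NSGA II repeatedly applies this test to rank a population into fronts, then selects and recombines solutions front by front.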
A Statistical Study of Interplanetary Type II Bursts: STEREO Observations
Krupar, V.; Eastwood, J. P.; Magdalenic, J.; Gopalswamy, N.; Kruparova, O.; Szabo, A.
2017-12-01
Coronal mass ejections (CMEs) are the primary cause of the most severe and disruptive space weather events, such as solar energetic particle (SEP) events and geomagnetic storms at Earth. Interplanetary type II bursts are generated via the plasma emission mechanism by energetic electrons accelerated at CME-driven shock waves, and hence identify CMEs that can potentially cause space weather impacts. As CMEs propagate outward from the Sun, radio emissions are generated at progressively lower frequencies, corresponding to a decreasing ambient solar wind plasma density. We have performed a statistical study of 153 interplanetary type II bursts observed by the two STEREO spacecraft between March 2008 and August 2014. These events have been correlated with manually identified CMEs contained in the Heliospheric Cataloguing, Analysis and Techniques Service (HELCATS) catalogue. Our results confirm that faster CMEs are more likely to produce interplanetary type II radio bursts. We have compared observed frequency drifts with white-light observations to estimate the angular deviations of type II burst propagation directions from the radial direction, and have found that interplanetary type II bursts preferentially arise from CME flanks. Finally, we discuss the visibility of the radio emissions in relation to the CME propagation direction.
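The stated frequency-density link can be made concrete: the fundamental emission tracks the local electron plasma frequency, f_pe ≈ 8980·√(n_e [cm⁻³]) Hz, so with a simple r⁻² solar-wind density model (the 7 cm⁻³ normalization at 1 au is an assumption for illustration) the drift to lower frequencies falls out directly:

```python
import math

def plasma_frequency_hz(n_e_cm3):
    # electron plasma frequency: f_pe ~ 8980 * sqrt(n_e [cm^-3]) Hz
    return 8980.0 * math.sqrt(n_e_cm3)

def density_cm3(r_au, n0=7.0):
    # simple r^-2 solar-wind density falloff, normalized to ~7 cm^-3 at 1 au
    return n0 / r_au**2

for r in (0.1, 0.3, 1.0):
    f = plasma_frequency_hz(density_cm3(r))
    print(f"r = {r:4.1f} au -> n_e ~ {density_cm3(r):7.1f} cm^-3, "
          f"f_pe ~ {f / 1e3:6.1f} kHz")
```

The drop from hundreds of kHz near the Sun to tens of kHz at 1 au is exactly the band in which the STEREO/WAVES receivers observe interplanetary type II bursts.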
Pharyngeal airway dimensions in skeletal class II: A cephalometric growth study
International Nuclear Information System (INIS)
Uslu-Akcam, Ozge
2017-01-01
This retrospective study aimed to evaluate the nasopharyngeal and oropharyngeal dimensions of individuals with skeletal class II, division 1 and division 2 patterns during the pre-peak, peak, and post-peak growth periods for comparison with a skeletal class I control group. In total, 124 lateral cephalograms (47 skeletal class I; 45 skeletal class II, division 1; and 32 skeletal class II, division 2) covering the pre-peak, peak, and post-peak growth periods were selected from the department archives. Thirteen landmarks, 4 angular and 4 linear measurements, and 4 proportional calculations were obtained. ANOVA and the Duncan test were applied to compare the differences among the study groups during the growth periods. Statistically significant differences were found between the skeletal class II, division 2 group and the other groups for the gonion-gnathion/sella-nasion angle. The sella-nasion-B-point angle differed among the groups, while the A-point-nasion-B-point angle was significantly different for all 3 groups. The nasopharyngeal airway space showed a statistically significant difference among the groups throughout the growth periods. The interaction between the growth periods and study groups was statistically significant for the upper oropharyngeal airway space measurement. The lower oropharyngeal airway space measurement showed a statistically significant difference among the groups, with the smallest dimension observed in the skeletal class II, division 2 group. The naso-oropharyngeal airway dimensions showed a statistically significant difference among the class II, division 1; class II, division 2; and class I groups during the different growth periods.
Pharyngeal airway dimensions in skeletal class II: A cephalometric growth study
Energy Technology Data Exchange (ETDEWEB)
Uslu-Akcam, Ozge [Clinic of Orthodontics, Ministry of Health, Tepebasi Oral and Dental Health Hospital, Ankara (Turkey)
2017-03-15
This retrospective study aimed to evaluate the nasopharyngeal and oropharyngeal dimensions of individuals with skeletal class II, division 1 and division 2 patterns during the pre-peak, peak, and post-peak growth periods for comparison with a skeletal class I control group. In total, 124 lateral cephalograms (47 skeletal class I; 45 skeletal class II, division 1; and 32 skeletal class II, division 2) covering the pre-peak, peak, and post-peak growth periods were selected from the department archives. Thirteen landmarks, 4 angular and 4 linear measurements, and 4 proportional calculations were obtained. ANOVA and the Duncan test were applied to compare the differences among the study groups during the growth periods. Statistically significant differences were found between the skeletal class II, division 2 group and the other groups for the gonion-gnathion/sella-nasion angle. The sella-nasion-B-point angle differed among the groups, while the A-point-nasion-B-point angle was significantly different for all 3 groups. The nasopharyngeal airway space showed a statistically significant difference among the groups throughout the growth periods. The interaction between the growth periods and study groups was statistically significant for the upper oropharyngeal airway space measurement. The lower oropharyngeal airway space measurement showed a statistically significant difference among the groups, with the smallest dimension observed in the skeletal class II, division 2 group. The naso-oropharyngeal airway dimensions showed a statistically significant difference among the class II, division 1; class II, division 2; and class I groups during the different growth periods.
Pairwise contact energy statistical potentials can help to find probability of point mutations.
Saravanan, K M; Suvaithenamudhan, S; Parthasarathy, S; Selvaraj, S
2017-01-01
To adopt a particular fold, a protein requires several interactions between its amino acid residues. The energetic contribution of these residue-residue interactions can be approximated by extracting statistical potentials from known high-resolution structures. Several methods based on statistical potentials extracted from unrelated proteins are found to make better predictions of the probability of point mutations. We postulate that statistical potentials extracted from known structures of similar folds with varying sequence identity can be a powerful tool to examine the probability of point mutation. With this in mind, we have derived pairwise residue and atomic contact energy potentials for the different functional families that adopt the (α/β)8 TIM-barrel fold. We carried out computational point mutations at various conserved residue positions in the yeast triosephosphate isomerase enzyme, for which experimental results have already been reported. We have also performed molecular dynamics simulations on a subset of point mutants to make a comparative study. The difference in pairwise residue and atomic contact energy between the wild type and various point mutants reveals the probability of mutation at a particular position. Interestingly, we found that our computational predictions agree with the experimental studies of Silverman et al. (Proc Natl Acad Sci 2001;98:3092-3097) and outperform I-Mutant and the Cologne University Protein Stability Analysis Tool. The present work thus suggests that deriving pairwise contact energy potentials, together with molecular dynamics simulations of functionally important folds, could help predict the probability of point mutations, which may ultimately reduce the time and cost of mutation experiments. Proteins 2016; 85:54-64. © 2016 Wiley Periodicals, Inc.
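The inverse-Boltzmann device behind such knowledge-based contact potentials can be sketched as follows (a toy illustration, not the authors' parameterization; the residue names, contact counts and RT value are made up): a pair observed more often than chance pairing predicts gets a favourable (negative) energy.

```python
import math
from collections import Counter

def contact_potentials(contacts, RT=0.6):
    """Knowledge-based pairwise contact energies by inverse-Boltzmann
    statistics: E(a,b) = -RT * ln(f_obs(a,b) / f_exp(a,b)), where the
    expected frequency assumes independent pairing of residue types."""
    pair_counts = Counter(tuple(sorted(p)) for p in contacts)
    res_counts = Counter(r for p in contacts for r in p)
    n_pairs = sum(pair_counts.values())
    n_res = sum(res_counts.values())
    energies = {}
    for (a, b), n_ab in pair_counts.items():
        f_obs = n_ab / n_pairs
        f_a, f_b = res_counts[a] / n_res, res_counts[b] / n_res
        f_exp = (2 - (a == b)) * f_a * f_b  # factor 2 for unordered a != b
        energies[(a, b)] = -RT * math.log(f_obs / f_exp)
    return energies

# Hypothetical contact list extracted from structures of one fold family:
contacts = [("LEU", "ILE")] * 30 + [("LEU", "ASP")] * 5 + [("ASP", "LYS")] * 15
E = contact_potentials(contacts)
print(E)  # over-represented pairs get negative (favourable) energies
```

Scoring a point mutation then amounts to re-summing these energies over the contacts of the wild-type and mutant residue and comparing the totals.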
ERROR DISTRIBUTION EVALUATION OF THE THIRD VANISHING POINT BASED ON RANDOM STATISTICAL SIMULATION
Directory of Open Access Journals (Sweden)
C. Li
2012-07-01
POS, integrating GPS/INS (Inertial Navigation Systems), has allowed rapid and accurate determination of the position and attitude of remote sensing equipment for MMS (Mobile Mapping Systems). However, INS not only has systematic error, it is also very expensive. Therefore, in this paper the error distributions of vanishing points are studied and tested in order to substitute for INS in MMS in some special land-based scenes, such as ground façades where usually only two vanishing points can be detected; the traditional calibration approach based on three orthogonal vanishing points is thus challenged. Firstly, the line clusters, which are parallel to each other in object space and correspond to the vanishing points, are detected based on RANSAC (Random Sample Consensus) and a parallelism geometric constraint. Secondly, condition adjustment with parameters is utilized to estimate the nonlinear error equations of two vanishing points (VX, VY), and the setting of initial weights for the adjustment solution of single-image vanishing points is presented; the vanishing points are solved and their error distributions estimated using an iteration method with variable weights, the co-factor matrix and error ellipse theory. Thirdly, given the error ellipses of the two vanishing points (VX, VY) and the triangle geometric relationship of the three vanishing points, the error distribution of the third vanishing point (VZ) is calculated and evaluated by random statistical simulation, ignoring camera distortion; the Monte Carlo methods utilized for this random statistical estimation are presented. Finally, experimental results for the vanishing point coordinates and their error distributions are shown and analyzed.
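One way to realize the third step can be sketched as follows. Assuming the three vanishing points are mutually orthogonal and the principal point P is known, the calibration constraint (v_i - P)·(v_j - P) = -f² yields the focal length from VX and VY, and a linear 2×2 system for VZ; sampling VX and VY from their error ellipses then gives the error distribution of VZ by Monte Carlo. All pixel coordinates and sigmas below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def third_vanishing_point(vx, vy, pp):
    """Given two orthogonal vanishing points and the principal point,
    recover f^2 from the orthogonality of VX and VY, then solve the
    2x2 linear system (VX-P).(VZ-P) = (VY-P).(VZ-P) = -f^2 for VZ."""
    a, b = vx - pp, vy - pp
    f2 = -(a @ b)                    # f^2 = -(VX-P).(VY-P)
    A = np.vstack([a, b])
    rhs = np.array([a @ pp - f2, b @ pp - f2])
    return np.linalg.solve(A, rhs)

# Hypothetical pixel coordinates and 1-sigma error ellipses (diagonal):
pp = np.array([320.0, 240.0])
vx_mean, vx_sigma = np.array([1200.0, 260.0]), np.array([15.0, 5.0])
vy_mean, vy_sigma = np.array([-500.0, 250.0]), np.array([10.0, 8.0])

samples = [third_vanishing_point(vx_mean + rng.normal(0, 1, 2) * vx_sigma,
                                 vy_mean + rng.normal(0, 1, 2) * vy_sigma,
                                 pp)
           for _ in range(5000)]
vz = np.array(samples)
print("VZ mean:", vz.mean(axis=0))
print("VZ covariance:\n", np.cov(vz.T))  # error ellipse of the third point
```

The sample covariance of the VZ draws is exactly the kind of error-ellipse estimate the paper evaluates by random statistical simulation.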
Exit points, on plasma, of lost fast ions during NBI in TJ-II
International Nuclear Information System (INIS)
Guasp, J.
1995-09-01
The distribution of the exit points, on the plasma border, of fast ions lost during tangential balanced NBI in the TJ-II helical-axis stellarator is analysed theoretically, both for direct and for delayed losses. The link between the position of those exit points and the corresponding birth positions, orbits and drifts is also analysed; it is shown that this relation is rather independent of beam energy and plasma density and is mainly determined by the characteristics of the magnetic configuration. This study is a necessary intermediate step towards the analysis of the impacts of those ions on the vacuum vessel of TJ-II.
Gooya, Ali; Lekadir, Karim; Alba, Xenia; Swift, Andrew J; Wild, Jim M; Frangi, Alejandro F
2015-01-01
Construction of Statistical Shape Models (SSMs) from arbitrary point sets is a challenging problem due to significant shape variation and the lack of explicit point correspondence across the training data set. In medical imaging, point sets can generally represent different shape classes that span healthy and pathological exemplars. In such cases, the constructed SSM may not generalize well, largely because the probability density function (pdf) of the point sets deviates from the underlying assumption of Gaussian statistics. To this end, we propose a generative model for unsupervised learning of the pdf of point sets as a mixture of distinctive classes. A Variational Bayesian (VB) method is proposed for making joint inferences on the labels of point sets and the principal modes of variation in each cluster. The method provides a flexible framework to handle point sets with no explicit point-to-point correspondences. We also show that by maximizing the marginalized likelihood of the model, the optimal number of clusters of point sets can be determined. We illustrate this work in the context of understanding the anatomical phenotype of the left and right ventricles in the heart. To this end, we use a database containing hearts of healthy subjects, patients with Pulmonary Hypertension (PH), and patients with Hypertrophic Cardiomyopathy (HCM). We demonstrate that our method can outperform traditional PCA in both generalization and specificity measures.
Statistical theory of dislocation configurations in a random array of point obstacles
International Nuclear Information System (INIS)
Labusch, R.
1977-01-01
The stable configurations of a dislocation in an infinite random array of point obstacles are analyzed using the mathematical methods of statistical mechanics. The theory provides exact distribution functions of the forces on pinning points and of the link lengths between points on the line. The expected number of stable configurations is a function of the applied stress; this number drops to zero at the critical stress. Due to a degeneracy problem in the line count, the value of the flow stress cannot be determined rigorously, but we can give a good approximation that is very close to the empirical value.
Theory of superfluidity of helium II near the lambda point
International Nuclear Information System (INIS)
Ginzburg, V.L.; Sobyanin, A.A.
1982-01-01
The present state of the Psi theory of superfluidity of helium II near the lambda point is reviewed. The basic assumptions underlying the theory and the limits of its applicability are discussed. The results of the solution of some problems in the framework of the theory are presented and compared with experimental data. The necessity and possibility of further comparison of the theory with experiment are emphasized.
Hong, Peilong; Li, Liming; Liu, Jianji; Zhang, Guoquan
2016-03-29
Young's double-slit, or two-beam, interference is of fundamental importance for understanding various interference effects, in which the stationary phase difference between the two beams plays the key role in first-order coherence. Different from the case of first-order coherence, in high-order optical coherence it is the statistical behavior of the optical phase that plays the key role. In this article, employing a fundamental interfering configuration with two classical point sources, we show that the high-order optical coherence between two classical point sources can be actively designed by controlling the statistical behavior of the relative phase difference between the two point sources. Synchronous-position Nth-order subwavelength interference with an effective wavelength of λ/M is demonstrated, where λ is the wavelength of the point sources and M is an integer not larger than N. Interestingly, we find that the synchronous-position Nth-order interference fringe fingerprints the statistical trace of the random phase fluctuation of the two classical point sources; it therefore provides an effective way to characterize the statistical properties of phase fluctuation for incoherent light sources.
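The mechanism can be illustrated for the simplest case N = M = 2 (a numerical sketch under a paraxial two-source model, not the authors' experiment): a relative phase drawn from {0, π} makes the first-order fringe average away, while the second-order correlation at synchronized detector positions keeps a fringe at twice the spatial frequency, i.e. an effective wavelength of λ/2.

```python
import numpy as np

rng = np.random.default_rng(1)

# Paraxial two-source model: at screen position x the path-difference phase
# is u(x) = k*d*x/L; each "shot" draws a relative source phase dphi from
# {0, pi}.  This choice gives <exp(i*dphi)> = 0 (no first-order fringe)
# but <exp(2i*dphi)> = 1, so a second-order fringe survives at TWICE the
# spatial frequency: an effective wavelength of lambda/2.
u = np.linspace(0.0, 4 * np.pi, 400)
dphi = rng.choice([0.0, np.pi], size=(10000, 1))   # random phase per shot
I = 2.0 + 2.0 * np.cos(u + dphi)                   # instantaneous intensity
I1 = I.mean(axis=0)                                # first-order average
I2 = (I * I).mean(axis=0)                          # synchronized x1 = x2

vis1 = (I1.max() - I1.min()) / (I1.max() + I1.min())
vis2 = (I2.max() - I2.min()) / (I2.max() + I2.min())
print(f"first-order visibility ~ {vis1:.3f} (flat), "
      f"second-order visibility ~ {vis2:.3f} (lambda/2 fringe)")
```

Analytically, ⟨I⟩ = 2 (flat) while ⟨I²⟩ = 6 + 2cos(2u), so the second-order visibility tends to 1/3; the fringe period in u has halved, which is the subwavelength effect described above.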
Multiple point statistical simulation using uncertain (soft) conditional data
Hansen, Thomas Mejer; Vu, Le Thanh; Mosegaard, Klaus; Cordua, Knud Skou
2018-05-01
Geostatistical simulation methods have been used to quantify the spatial variability of reservoir models since the 1980s. In the last two decades, state-of-the-art simulation methods have changed from being based on covariance-based two-point statistics to multiple-point statistics (MPS), which allow simulation of more realistic Earth structures. In addition, increasing amounts of geo-information (geophysical, geological, etc.) from multiple sources are being collected. This poses the problem of integrating these different sources of information, such that decisions related to reservoir models can be taken on as informed a basis as possible. In principle, though difficult in practice, this can be achieved using computationally expensive Monte Carlo methods. Here we investigate the use of sequential-simulation-based MPS conditional to uncertain (soft) data as a computationally efficient alternative. First, it is demonstrated that current implementations of sequential simulation based on MPS (e.g. SNESIM, ENESIM and Direct Sampling) do not properly account for uncertain conditional information, due to a combination of using only co-located information and a random simulation path. Then, we suggest two approaches that better account for the available uncertain information. The first makes use of a preferential simulation path, where more informed model parameters are visited before less informed ones. The second approach involves using non-co-located uncertain information. For different types of available data, these approaches are demonstrated to produce simulation results similar to those obtained by the general Monte Carlo based approach. These methods allow MPS simulation to condition properly on uncertain (soft) data, and hence provide a computationally attractive approach for integrating information about a reservoir model.
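The first suggested approach orders the simulation path by how informed each cell is. A minimal sketch of such a preferential path (entropy of hypothetical two-facies soft probabilities, with random tie-breaking; not the paper's implementation) might look like:

```python
import numpy as np

def preferential_path(soft_probs, rng=None):
    """Order grid cells from most to least informed, using the entropy of
    the soft (probabilistic) facies data in each cell; a tiny random
    perturbation breaks ties so equally informed cells are visited in
    random order, as in an ordinary random path."""
    rng = rng or np.random.default_rng()
    p = np.clip(soft_probs, 1e-12, 1.0)
    entropy = -(p * np.log(p)).sum(axis=-1)      # 0 = fully informed cell
    noise = rng.random(entropy.shape) * 1e-9     # random tie-breaking
    return np.argsort((entropy + noise).ravel()) # visit low entropy first

# Hypothetical 2x2 grid, two facies: cell 0 carries hard-data-like
# information (p = 1/0), cell 3 is totally uninformed (p = 0.5/0.5).
soft = np.array([[[1.0, 0.0], [0.7, 0.3]],
                 [[0.6, 0.4], [0.5, 0.5]]])
path = preferential_path(soft)
print(path)  # [0 1 2 3]: most informed cell first, uninformed cell last
```

Visiting the well-constrained cells first means their simulated values propagate into the conditioning of the poorly constrained ones, rather than the reverse, which is the point of the preferential path.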
A note on the statistical analysis of point judgment matrices
Directory of Open Access Journals (Sweden)
MG Kabera
2013-06-01
The Analytic Hierarchy Process is a multicriteria decision-making technique developed by Saaty in the 1970s. The core of the approach is the pairwise comparison of objects according to a single criterion using a 9-point ratio scale, and the estimation of the weights associated with these objects based on the resultant judgment matrix. In the present paper some statistical approaches to extracting the weights of objects from a judgment matrix are reviewed, and new ideas rooted in the traditional method of paired comparisons are introduced.
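Saaty's classical eigenvector method, against which such statistical approaches are compared, can be sketched as a power iteration on the judgment matrix (the 3×3 matrix below is a made-up, perfectly consistent example):

```python
import numpy as np

def ahp_weights(J, tol=1e-12):
    """Principal right eigenvector of a positive reciprocal judgment
    matrix (Saaty's eigenvector method), plus the consistency index
    CI = (lambda_max - n) / (n - 1)."""
    n = J.shape[0]
    w = np.ones(n) / n
    for _ in range(1000):
        w_new = J @ w
        w_new /= w_new.sum()
        done = np.abs(w_new - w).max() < tol
        w = w_new
        if done:
            break
    lam = (J @ w / w).mean()      # principal eigenvalue estimate
    ci = (lam - n) / (n - 1)      # 0 for a perfectly consistent matrix
    return w, ci

# 9-point-scale pairwise judgments for three objects (consistent: J_ij = w_i/w_j):
J = np.array([[1.0, 3.0, 9.0],
              [1/3, 1.0, 3.0],
              [1/9, 1/3, 1.0]])
w, ci = ahp_weights(J)
print(np.round(w, 3), round(ci, 9))  # weights ~ [0.692 0.231 0.077], CI ~ 0
```

A real elicited matrix is rarely consistent; the size of CI (relative to Saaty's random index) is what signals whether the judgments, and hence the extracted weights, can be trusted.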
Statistically based reevaluation of PISC-II round robin test data
International Nuclear Information System (INIS)
Heasler, P.G.; Taylor, T.T.; Doctor, S.R.
1993-05-01
This report presents a re-analysis of international PISC-II (Programme for Inspection of Steel Components, Phase 2) round-robin inspection results using formal statistical techniques to account for experimental error. The analysis examines US team performance versus that of the other participants, flaw sizing performance and the errors associated with flaw sizing, factors influencing flaw detection probability, and the performance of all participants with respect to the recently adopted ASME Section XI flaw detection performance demonstration requirements, and develops conclusions concerning ultrasonic inspection capability. Inspection data were gathered on four heavy-section steel components, comprising two plates and two nozzle configurations.
Jandrisevits, Carmen; Marschallinger, Robert
2014-05-01
Quaternary sediments in overdeepened alpine valleys and basins in the Eastern Alps bear substantial groundwater resources. The associated aquifer systems are generally geometrically complex, with highly variable hydraulic properties. 3D geological models provide predictions of both the geometry and the properties of the subsurface required for subsequent modelling of groundwater flow and transport. In hydrology, geostatistical Kriging and Kriging-based conditional simulations are widely used to predict the spatial distribution of hydrofacies. In the course of investigating the shallow aquifer structures in the Zell basin in the Upper Salzach valley (Salzburg, Austria), a benchmark of available geostatistical modelling and simulation methods was performed: traditional variogram-based geostatistical methods, i.e. Indicator Kriging, Sequential Indicator Simulation and Sequential Indicator Co-Simulation, were used as well as Multiple Point Statistics. The ~6 km² investigation area is sampled by 56 drillings with depths of 5 to 50 m; in addition, there are 2 geophysical sections with lengths of 2 km and depths of 50 m. Due to clustered drilling sites, Indicator Kriging models failed to consistently model the spatial variability of hydrofacies. Using classical variogram-based geostatistical simulation (SIS), equally probable realizations were generated, with differences among the realizations providing an uncertainty measure. The yielded models are unstructured from a geological point of view: they do not portray the shapes and lateral extensions of the associated sedimentary units. Since variograms consider only two-point spatial correlations, they are unable to capture the spatial variability of complex geological structures. The Multiple Point Statistics approach overcomes these limitations of two-point statistics by using a training image instead of variograms. The 3D training image can be seen as a reference facies model where geological knowledge about depositional
The Statistical point of view of Quality: the Lean Six Sigma methodology.
Bertolaccini, Luca; Viti, Andrea; Terzi, Alberto
2015-04-01
Six Sigma and Lean are two quality improvement methodologies. The Lean Six Sigma methodology is applicable to repetitive procedures; therefore, its use in the health-care arena has focused mainly on areas of business operations, throughput, and case management, and on efficiency outcomes. After reviewing the methodology, the paper presents a brief clinical example of the use of Lean Six Sigma as a quality improvement method for reducing complications during and after lobectomies. Using the Lean Six Sigma methodology, multidisciplinary teams could identify multiple modifiable points across the surgical process. These process improvements could be applied to different surgical specialties and could result in a measurement, from a statistical point of view, of surgical quality.
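The "sigma" in Six Sigma is a defect-rate metric. As a small illustration (the complication counts are hypothetical, and the conventional 1.5-sigma long-term shift is assumed), converting a complication rate into DPMO and a sigma level:

```python
from statistics import NormalDist

def sigma_level(defects, opportunities, shift=1.5):
    """Convert a defect rate to DPMO (defects per million opportunities)
    and to the conventional 'sigma level'; the 1.5-sigma long-term shift
    is the usual Six Sigma convention."""
    rate = defects / opportunities
    dpmo = 1e6 * rate
    level = NormalDist().inv_cdf(1 - rate) + shift
    return dpmo, level

# Hypothetical audit: 8 complicated cases in 500 lobectomies.
dpmo, level = sigma_level(8, 500)
print(f"DPMO = {dpmo:.0f}, sigma level ~ {level:.2f}")
```

Tracking this number before and after a process change is how a Lean Six Sigma project turns "fewer complications" into a measurable, statistically interpretable claim.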
Kholeif, S A
2001-06-01
A new method, belonging to the differential category, for determining end points from potentiometric titration curves is presented. It uses a preprocess to find first-derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locates the end point, usually as a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method using linear least-squares validation and multifactor data analysis is covered. The new method is applicable to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method at almost all factor levels and gives accurate results comparable to the true or estimated true end points. Calculated end points from selected experimental titration curves are also compared between the new method and equivalence-point-category methods such as those of Gran or Fortuin.
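As a simplified stand-in for the procedure (three derivative points and an analytical parabola vertex, rather than the four-point non-linear fit the paper describes), locating the end point of a synthetic sigmoid titration curve:

```python
import math

def end_point(volumes, potentials):
    """Locate a titration end point: numerical first derivative dE/dV at
    interval midpoints, then inverse parabolic interpolation through the
    three points around the derivative maximum (analytical vertex)."""
    dv = [(volumes[i + 1] + volumes[i]) / 2 for i in range(len(volumes) - 1)]
    de = [(potentials[i + 1] - potentials[i]) / (volumes[i + 1] - volumes[i])
          for i in range(len(volumes) - 1)]
    i = max(range(1, len(de) - 1), key=lambda k: abs(de[k]))
    x0, x1, x2 = dv[i - 1], dv[i], dv[i + 1]
    y0, y1, y2 = de[i - 1], de[i], de[i + 1]
    # vertex of the parabola through the three points
    num = (x1 - x0) ** 2 * (y1 - y2) - (x1 - x2) ** 2 * (y1 - y0)
    den = (x1 - x0) * (y1 - y2) - (x1 - x2) * (y1 - y0)
    return x1 - 0.5 * num / den

# Hypothetical sigmoid titration curve with a true end point at 25.00 mL:
vols = [24.0 + 0.2 * k for k in range(11)]
emfs = [200 + 150 * math.tanh((v - 25.0) / 0.15) for v in vols]
print(round(end_point(vols, emfs), 3))  # ~ 25.0 mL
```

Because the vertex has a closed-form solution, no iterative root-finding is needed, which mirrors the "analytical solution" property highlighted in the abstract.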
Point defect characterization in HAADF-STEM images using multivariate statistical analysis
International Nuclear Information System (INIS)
Sarahan, Michael C.; Chi, Miaofang; Masiel, Daniel J.; Browning, Nigel D.
2011-01-01
Quantitative analysis of point defects is demonstrated through the use of multivariate statistical analysis. This analysis consists of principal component analysis for dimensional estimation and reduction, followed by independent component analysis to obtain physically meaningful, statistically independent factor images. Results from these analyses are presented in the form of factor images and scores. Factor images show characteristic intensity variations corresponding to physical structure changes, while scores relate how much those variations are present in the original data. The application of this technique is demonstrated on a set of experimental images of dislocation cores along a low-angle tilt grain boundary in strontium titanate. A relationship between chemical composition and lattice strain is highlighted in the analysis results, with picometer-scale shifts in several columns measurable from compositional changes in a separate column. Research Highlights: Multivariate analysis of HAADF-STEM images. Distinct structural variations among SrTiO3 dislocation cores. Picometer atomic column shifts correlated with atomic column population changes.
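The PCA half of the pipeline (dimensional estimation and reduction into factor images and scores) can be sketched with a plain SVD; the ICA step and the real HAADF-STEM data are beyond this toy, so the 8×8 "images" and the injected column-intensity variation below are fabricated:

```python
import numpy as np

rng = np.random.default_rng(2)

def principal_components(images, n_keep):
    """PCA via SVD on flattened, mean-centred images: returns factor
    images (components), the score of each image on each component, and
    the explained variances for dimensional estimation."""
    X = images.reshape(len(images), -1).astype(float)
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    factors = Vt[:n_keep].reshape((n_keep,) + images.shape[1:])
    scores = U[:, :n_keep] * s[:n_keep]
    return factors, scores, s**2 / (len(images) - 1)

# Hypothetical 8x8 'column intensity' images: one systematic variation
# (a column whose brightness changes across the series) plus noise.
images = np.stack([rng.normal(0, 0.01, (8, 8)) for _ in range(40)])
for k in range(40):
    images[k, :, 3] += 0.5 + 0.02 * k   # systematic intensity change

factors, scores, var = principal_components(images, n_keep=2)
print("variance of PC1 relative to PC2:", var[0] / var[1])
```

The large gap between the first and second explained variance is the "dimensional estimation" signal: one statistically meaningful factor, the rest noise, exactly the situation where a subsequent ICA rotation can assign physical meaning to the retained components.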
Statistical inference for extended or shortened phase II studies based on Simon's two-stage designs.
Zhao, Junjun; Yu, Menggang; Feng, Xi-Ping
2015-06-07
Simon's two-stage designs are popular choices for conducting phase II clinical trials, especially oncology trials, to reduce the number of patients placed on ineffective experimental therapies. Recently, Koyama and Chen (2008) discussed how to conduct proper inference for such studies, because they found that inference procedures used with Simon's designs almost always ignore the actual sampling plan used. In particular, they proposed an inference method for studies whose actual second-stage sample sizes differ from the planned ones. We consider an alternative inference method based on the likelihood ratio. In particular, we order the permissible sample paths under Simon's two-stage designs using their corresponding conditional likelihood. In this way, we can calculate p-values using the common definition: the probability of obtaining a test statistic value at least as extreme as that observed under the null hypothesis. In addition to providing inference for a couple of scenarios where Koyama and Chen's method can be difficult to apply, the resulting estimate based on our method appears to have a certain advantage in terms of inference properties in many numerical simulations: it generally led to smaller biases and narrower confidence intervals while maintaining similar coverage. We also illustrate the two methods in a real data setting. Inference procedures used with Simon's designs almost always ignore the actual sampling plan; reported p-values, point estimates and confidence intervals for the response rate are not usually adjusted for the design's adaptiveness. Proper statistical inference procedures should be used.
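For orientation, the exact operating characteristics of a Simon design can be computed by direct binomial enumeration. The sketch below uses the well-known optimal design for p0 = 0.1 versus p1 = 0.3 (stop after 10 patients if at most 1 response; otherwise enroll 19 more and require more than 5 responses in 29); it illustrates the sampling plan that proper inference must respect, not the paper's likelihood-ratio ordering itself:

```python
from math import comb

def simon_probabilities(n1, r1, n2, r, p):
    """Exact probabilities under a Simon two-stage design: early stopping
    (X1 <= r1 after n1 patients) and the overall 'declare ineffective'
    outcome (stop early, or X1 + X2 <= r after n1 + n2 patients)."""
    def binom(k, n):
        return comb(n, k) * p**k * (1 - p) ** (n - k)
    pet = sum(binom(k, n1) for k in range(r1 + 1))   # P(early termination)
    declare_ineffective = pet + sum(
        binom(k, n1) * sum(binom(j, n2) for j in range(r - k + 1))
        for k in range(r1 + 1, min(r, n1) + 1))
    return pet, declare_ineffective

pet0, ineff0 = simon_probabilities(10, 1, 19, 5, 0.10)  # under null p0
pet1, ineff1 = simon_probabilities(10, 1, 19, 5, 0.30)  # under alternative p1
print(f"under p0: PET = {pet0:.3f}, P(declare ineffective) = {ineff0:.3f}")
print(f"under p1: P(declare ineffective) = {ineff1:.3f}")
```

These enumeration probabilities are exactly the sample-path weights that get ignored when naive binomial inference is applied to the pooled data, which is the problem the abstract addresses.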
Multiple-point statistical prediction on fracture networks at Yucca Mountain
International Nuclear Information System (INIS)
Liu, X.Y; Zhang, C.Y.; Liu, Q.S.; Birkholzer, J.T.
2009-01-01
In many underground nuclear waste repository systems, such as Yucca Mountain, the water flow rate and the amount of water seeping into the waste emplacement drifts are mainly determined by the hydrological properties of the fracture network in the surrounding rock mass. A natural fracture network system is not easy to describe, especially with respect to its connectivity, which is critically important for simulating the water flow field. In this paper, we introduce a new method for fracture network description and prediction, termed multiple-point statistics (MPS). The MPS method records multiple-point statistics concerning the connectivity patterns of a fracture network from a known fracture map, and reproduces multiple-scale training fracture patterns in a stochastic manner, implicitly and directly. It is applied to fracture data to study flow-field behavior in the Yucca Mountain waste repository system. First, the MPS method is used to create a fracture network from an original fracture training image from the Yucca Mountain dataset. After adopting harmonic and arithmetic averaging to upscale the permeability to a coarse grid, a THM simulation is carried out to study near-field water flow around the waste emplacement drifts. Our study shows that the connectivity or patterns of fracture networks can be grasped and reconstructed by MPS methods; in principle, this will lead to better prediction of fracture system characteristics and flow behavior. Meanwhile, we can obtain the variance of the flow field, which gives us a way to quantify model uncertainty even in complicated coupled THM simulations. This indicates that MPS can potentially characterize and reconstruct natural fracture networks in a fractured rock mass, with the advantage of quantifying the connectivity of the fracture system and its simulation uncertainty simultaneously.
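The harmonic/arithmetic upscaling step mentioned above can be illustrated directly (the fine-grid permeability values are hypothetical): flow along layers is bounded by the arithmetic mean and flow across layers by the harmonic mean, so the two bracket the effective coarse-cell permeability.

```python
def arithmetic_mean(ks):
    """Upper (flow-parallel) bound on effective permeability."""
    return sum(ks) / len(ks)

def harmonic_mean(ks):
    """Lower (flow-perpendicular) bound: series flow through layers."""
    return len(ks) / sum(1.0 / k for k in ks)

# Hypothetical fine-grid permeabilities (mD) in one coarse cell; a single
# tight layer barely affects the arithmetic mean but dominates the
# harmonic mean, i.e. it throttles flow across the layering.
fine_k = [100.0, 100.0, 1.0, 100.0]
print("arithmetic:", arithmetic_mean(fine_k))          # 75.25
print("harmonic  :", round(harmonic_mean(fine_k), 2))  # ~3.88
```

The strong asymmetry between the two means for fractured media is precisely why preserving fracture connectivity (which MPS targets) matters so much for the upscaled flow field.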
A Statistical Hypothesis Testing Method for Revealing “Fraud” Points of Sale
Directory of Open Access Journals (Sweden)
T. M. Bolotskaya
2011-06-01
The application of statistical hypothesis testing to reveal "fraud" points of sale that work with purchasing cards and are suspected of carrying out unauthorized operations is analyzed. On the basis of the results obtained, an algorithm is developed that provides an assessment of terminal operation in off-line mode.
Business Statistics Education: Content and Software in Undergraduate Business Statistics Courses.
Tabatabai, Manouchehr; Gamble, Ralph
1997-01-01
Survey responses from 204 of 500 business schools identified the topics most often covered in business statistics I and II courses. The most popular software at both levels was Minitab. Most schools required both statistics I and II. (SK)
International Nuclear Information System (INIS)
Ghaedi, Mehrorang; Shokrollahi, Ardeshir; Niknam, Khodabakhsh; Niknam, Ebrahim; Najibi, Asma; Soylak, Mustafa
2009-01-01
The phase-separation phenomenon of non-ionic surfactants in aqueous solution was used for the extraction of cadmium(II), lead(II), palladium(II) and silver(I). The analytical procedure involved the formation of complexes of the metals under study with bis((1H-benzo[d]imidazol-2-yl)ethyl) sulfane (BIES), which were quantitatively extracted into the octylphenoxypolyethoxyethanol (Triton X-114)-rich phase after centrifugation. Methanol acidified with 1 mol L⁻¹ HNO₃ was added to the surfactant-rich phase prior to its analysis by flame atomic absorption spectrometry (FAAS). The concentration of BIES, the pH and the amount of surfactant (Triton X-114) were optimized. At optimum conditions, detection limits (3s_b/m) of 1.4, 2.8, 1.6 and 1.4 ng mL⁻¹ for Cd²⁺, Pb²⁺, Pd²⁺ and Ag⁺ were obtained, along with preconcentration factors of 30 and enrichment factors of 48, 39, 32 and 42 for Cd²⁺, Pb²⁺, Pd²⁺ and Ag⁺, respectively. The proposed cloud point extraction has been successfully applied to the determination of these metal ions in real samples with complicated matrices, such as radiology waste, vegetable, blood and urine samples.
International Nuclear Information System (INIS)
Heinrich, S.
2006-01-01
The nuclear fission process is a very complex phenomenon and, even nowadays, no realistic model describing the overall process is available. The work presented here deals with a theoretical description of fission fragment distributions in mass, charge, energy and deformation. We have reconsidered and updated the B.D. Wilkins scission point model. Our purpose was to test whether this statistical model, applied at the scission point and supplied with new results from modern microscopic calculations, allows a quantitative description of the fission fragment distributions. We calculate the surface energy available at the scission point as a function of the fragment deformations. This surface is obtained from a Hartree-Fock-Bogoliubov microscopic calculation, which guarantees a realistic description of the dependence of the potential on the deformation of each fragment. The statistical balance is described by the level densities of the fragments. We have tried to avoid as much as possible the input of empirical parameters in the model. Our only parameter, the distance between the fragments at the scission point, is discussed by comparison with scission configurations obtained from fully dynamical microscopic calculations. The comparison between our results and experimental data is very satisfactory and allows us to discuss the successes and limitations of our approach. We finally propose ideas to improve the model, in particular by applying dynamical corrections. (author)
Farey Statistics in Time n^{2/3} and Counting Primitive Lattice Points in Polygons
Patrascu, Mihai
2007-01-01
We present algorithms for computing ranks and order statistics in the Farey sequence, taking time O(n^{2/3}). This improves on the recent algorithms of Pawlewicz [European Symp. Alg. 2007], running in time O(n^{3/4}). We also initiate the study of a more general algorithmic problem: counting primitive lattice points in planar shapes.
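The rank of a fraction x in the Farey sequence F_n (the number of reduced fractions p/q ≤ x with q ≤ n) can be computed by Möbius inversion. The following is a naive O(n)-time sketch of that identity, not the paper's O(n^{2/3}) algorithm:

```python
from fractions import Fraction

def mobius_sieve(n):
    """Linear sieve for the Moebius function mu(1..n)."""
    mu = [1] * (n + 1)
    is_comp = [False] * (n + 1)
    primes = []
    for i in range(2, n + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > n:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0
                break
            mu[i * p] = -mu[i]
    return mu

def farey_rank(x, n):
    """Count fractions p/q in (0, x] with 1 <= q <= n and gcd(p, q) = 1,
    i.e. the rank of x in the Farey sequence F_n (excluding 0/1).
    Moebius inversion over the common divisor d of (p, q) gives
    rank = sum_d mu(d) * sum_{q=1}^{n//d} floor(q*x).  O(n) time."""
    mu = mobius_sieve(n)
    rank = 0
    for d in range(1, n + 1):
        if mu[d] == 0:
            continue
        m = n // d
        # exact rational arithmetic: floor(q*x) for a Fraction x
        rank += mu[d] * sum((q * x.numerator) // x.denominator
                            for q in range(1, m + 1))
    return rank
```

For example, `farey_rank(Fraction(1, 2), 5)` counts 1/5, 1/4, 1/3, 2/5 and 1/2, giving 5. The sublinear algorithms of the paper speed up the inner divisor sums; the identity itself is unchanged.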
Energy Technology Data Exchange (ETDEWEB)
Ghaedi, Mehrorang, E-mail: m_ghaedi@mail.yu.ac.ir [Chemistry Department, Yasouj University, Yasouj 75914-353 (Iran, Islamic Republic of); Shokrollahi, Ardeshir [Chemistry Department, Yasouj University, Yasouj 75914-353 (Iran, Islamic Republic of); Niknam, Khodabakhsh [Chemistry Department, Persian Gulf University, Bushehr (Iran, Islamic Republic of); Niknam, Ebrahim; Najibi, Asma [Chemistry Department, Yasouj University, Yasouj 75914-353 (Iran, Islamic Republic of); Soylak, Mustafa [Chemistry Department, University of Erciyes, 38039 Kayseri (Turkey)
2009-09-15
The phase-separation phenomenon of non-ionic surfactants in aqueous solution was used for the extraction of cadmium(II), lead(II), palladium(II) and silver(I). The analytical procedure involved the formation of complexes of the metals under study with bis((1H-benzo[d]imidazol-2-yl)ethyl) sulfane (BIES), which were quantitatively extracted into the octylphenoxypolyethoxyethanol (Triton X-114)-rich phase after centrifugation. Methanol acidified with 1 mol L^-1 HNO3 was added to the surfactant-rich phase prior to its analysis by flame atomic absorption spectrometry (FAAS). The concentration of BIES, the pH and the amount of surfactant (Triton X-114) were optimized. Under optimum conditions, detection limits (3 s_b/m) of 1.4, 2.8, 1.6 and 1.4 ng mL^-1, a preconcentration factor of 30, and enrichment factors of 48, 39, 32 and 42 for Cd^2+, Pb^2+, Pd^2+ and Ag^+, respectively, were obtained. The proposed cloud point extraction has been successfully applied to the determination of these metal ions in real samples with complicated matrices, such as radiology waste, vegetables, blood and urine.
Statistical MOSFET Parameter Extraction with Parameter Selection for Minimal Point Measurement
Directory of Open Access Journals (Sweden)
Marga Alisjahbana
2013-11-01
Full Text Available A method is presented for statistically extracting MOSFET model parameters from a minimal number of transistor I(V) characteristic curve measurements taken during fabrication process monitoring. It includes a sensitivity analysis of the model, test/measurement point selection, and a parameter extraction experiment on the process data. The actual extraction is based on a linear error model, the sensitivity of the MOSFET model with respect to the parameters, and Newton-Raphson iterations. Simulated results showed good accuracy of parameter extraction and I(V) curve fit for parameter deviations of up to 20% from nominal values, including for a process shift of 10% from nominal.
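The sensitivity-plus-Newton-Raphson extraction loop described above can be sketched in miniature. The example below uses a hypothetical square-law saturation model Id = K(Vgs - Vth)^2 as a stand-in for the full MOSFET model of the paper; the structure (residual, sensitivity Jacobian, linearized update) is the same:

```python
import numpy as np

def extract_params(vgs, id_meas, theta0, iters=30):
    """Gauss-Newton (Newton-Raphson on the linearized error model)
    extraction of (K, Vth) for the hypothetical square-law model
    Id = K * (Vgs - Vth)**2, assuming all points have Vgs > Vth."""
    K, Vth = theta0
    for _ in range(iters):
        resid = id_meas - K * (vgs - Vth) ** 2
        # Sensitivity (Jacobian) of the model w.r.t. the parameters
        J = np.column_stack([(vgs - Vth) ** 2,          # d Id / d K
                             -2.0 * K * (vgs - Vth)])   # d Id / d Vth
        # Solve the linear error model J @ delta ~ resid in least squares
        delta, *_ = np.linalg.lstsq(J, resid, rcond=None)
        K += delta[0]
        Vth += delta[1]
    return K, Vth
```

With four measurement points and a starting guess off by tens of percent, the loop recovers the generating parameters; the paper's contribution is choosing which few points to measure so that this system stays well conditioned.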
Energy Technology Data Exchange (ETDEWEB)
Schmidt, Tobias M. [Department of Physics, University of California, Santa Barbara, Santa Barbara, CA (United States); Max-Planck-Institut für Astronomie, Heidelberg (Germany); Worseck, Gabor [Max-Planck-Institut für Astronomie, Heidelberg (Germany); Hennawi, Joseph F. [Department of Physics, University of California, Santa Barbara, Santa Barbara, CA (United States); Max-Planck-Institut für Astronomie, Heidelberg (Germany); Prochaska, J. Xavier [Department of Astronomy and Astrophysics, UCO/Lick Observatory, University of California, Santa Cruz, Santa Cruz, CA (United States); Crighton, Neil H. M. [Centre for Astrophysics and Supercomputing, Swinburne University of Technology, Melbourne, VIC (Australia); Lukić, Zarija [Lawrence Berkeley National Laboratory, Berkeley, CA (United States); Oñorbe, Jose, E-mail: tschmidt@mpia.de [Max-Planck-Institut für Astronomie, Heidelberg (Germany)
2017-10-17
The reionization of helium at z ~ 3 is the final phase transition of the intergalactic medium and is thought to be driven purely by quasars. The He II transverse proximity effect—enhanced He II transmission in a background sightline caused by the ionizing radiation of a foreground quasar—therefore offers a unique opportunity to probe the morphology of He II reionization and to investigate the emission properties of quasars, e.g., ionizing emissivity, lifetime and beaming geometry. We use the most recent HST/COS far-UV dataset of 22 He II absorption spectra and conduct our own dedicated optical spectroscopic survey to find foreground quasars around these He II sightlines. Based on a set of 66 foreground quasars, we perform the first statistical analysis of the He II transverse proximity effect. Despite a large object-to-object variance, our stacking analysis reveals an excess in the average He II transmission near the foreground quasars at 3σ significance. This statistical evidence for the transverse proximity effect is corroborated by a clear dependence of the signal strength on the inferred He II ionization rate at the background sightline. Based on the transverse light crossing time, our detection places a geometrical limit on the quasar lifetime of t_Q > 25 Myr. This evidence for sustained activity of luminous quasars is relevant for the morphology of H I and He II reionization and helps to constrain AGN triggering mechanisms, accretion physics and models of black hole mass assembly. We show how future modeling of the transverse proximity effect can additionally constrain quasar emission geometries and, e.g., clarify whether the large observed object-to-object variance can be explained by current models of quasar obscuration.
The conscious city II: traffic congestion and the tipping point in greater Vancouver
Holt, Rebecca
2007-01-01
The Conscious City II explores how broad, long-term change toward sustainability in cities can be fostered, nurtured and facilitated. Using a qualitative, mixed-method approach, this research adapts a model from Malcolm Gladwell’s Tipping Point framework to explore how social consciousness can be mobilized to achieve change toward sustainability through an analysis of traffic congestion in Greater Vancouver. The results demonstrate the important influence of leadership, context and message on...
Statistical Modeling of Antenna: Urban Equipment Interactions for LTE Access Points
Directory of Open Access Journals (Sweden)
Xin Zeng
2012-01-01
Full Text Available The latest standards for wireless networks, such as LTE, are essentially based on small cells in order to achieve a large network capacity. This requires antennas to be deployed at street level or even within buildings. However, antennas are commonly designed, simulated, and measured in ideal conditions, which is not the real situation for most applications, where antennas are often deployed in proximity to objects acting as disturbers. In this paper, three typical wireless access point scenarios (antenna-wall, antenna-shelter, and antenna-lamppost) are investigated for directional or omnidirectional antennas. The paper first defines three performance indicators for such scenarios and then uses these parameters for a statistical analysis of the interactions between the wall and the antennas.
Statistics for Locally Scaled Point Patterns
DEFF Research Database (Denmark)
Prokesová, Michaela; Hahn, Ute; Vedel Jensen, Eva B.
2006-01-01
scale factor. The main emphasis of the present paper is on analysis of such models. Statistical methods are developed for estimation of scaling function and template parameters as well as for model validation. The proposed methods are assessed by simulation and used in the analysis of a vegetation...
Attitude towards statistics and performance among post-graduate students
Rosli, Mira Khalisa; Maat, Siti Mistima
2017-05-01
Mastering statistics is a necessity for students, especially post-graduates involved in research. The purpose of this research was to identify the attitude towards statistics among post-graduates and to determine the relationship between attitude towards statistics and the performance of post-graduates of the Faculty of Education, UKM, Bangi. 173 post-graduate students were chosen randomly to participate in the study. These students were registered in the Research Methodology II course offered by the faculty. A survey of attitude towards statistics using a 5-point Likert scale was used for data collection. The instrument consists of four components: affective, cognitive competency, value and difficulty. The data were analyzed using SPSS version 22 to produce descriptive and inferential statistics. The results showed a moderate, positive relationship between attitude towards statistics and students' performance. In conclusion, educators need to assess students' attitudes towards the course in order to accomplish the learning outcomes.
Energy Technology Data Exchange (ETDEWEB)
Guasp, J.
1995-07-01
The distribution of the exit points on the plasma border of fast ions lost during tangential balanced NBI in the TJ-II helical-axis stellarator is theoretically analysed, for both direct and delayed losses. The link between the position of those exit points and the corresponding birth positions, orbits and drifts is also analysed. It is shown that this relation is rather independent of beam energy and plasma density and is mainly related to the characteristics of the magnetic configuration. This study is a necessary intermediate step towards the analysis of the impacts of those ions on the vacuum vessel of TJ-II. (Author) 2 refs.
Vali Ahmadi, Mohammad; Doostparast, Mahdi; Ahmadi, Jafar
2015-04-01
In manufacturing industries, the lifetime of an item is usually characterised by a random variable X and considered satisfactory if X exceeds a given lower lifetime limit L. The probability of a satisfactory item is then ηL := P(X ≥ L), called the conforming rate. In industrial companies, however, the lifetime performance index, proposed by Montgomery and denoted by CL, is widely used as a process capability index instead of the conforming rate. Assuming a parametric model for the random variable X, we show that there is a connection between the conforming rate and the lifetime performance index. Consequently, statistical inferences about ηL and CL are equivalent. Hence, we restrict ourselves to statistical inference for CL based on generalised order statistics, which contain several ordered data models such as usual order statistics, progressively Type-II censored data and records. Various point and interval estimators for the parameter CL are obtained and optimal critical regions for hypothesis testing problems concerning CL are proposed. Finally, two real data sets on the lifetimes of insulating fluid and ball bearings, due to Nelson (1982) and Caroni (2002), respectively, and a simulated sample are analysed.
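The connection between ηL and CL can be made concrete for one illustrative parametric choice. Assuming exponentially distributed lifetimes with mean θ (an assumption for this sketch; the paper covers general models and generalised order statistics), the index is CL = (μ - L)/σ = 1 - L/θ and the conforming rate is ηL = exp(-L/θ) = exp(CL - 1), so point estimation of one determines the other:

```python
import math

def lifetime_performance(sample, L):
    """Point estimates under an exponential lifetime model (an
    illustrative assumption): MLE of the lifetime performance index
    C_L = 1 - L/theta and the implied conforming rate
    eta_L = P(X >= L) = exp(-L/theta) = exp(C_L - 1)."""
    theta_hat = sum(sample) / len(sample)  # MLE of the exponential mean
    c_l = 1.0 - L / theta_hat
    eta_l = math.exp(c_l - 1.0)            # one-to-one function of C_L
    return c_l, eta_l
```

Because ηL is a strictly increasing function of CL, a hypothesis test or confidence interval for one transforms directly into one for the other, which is the equivalence the abstract refers to.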
International Nuclear Information System (INIS)
1977-12-01
Results are reported from an engineering assessment of the problems resulting from the existence of radioactive uranium mill tailings at Ray Point, Texas. The Phase II--Title I services generally include the preparation of topographic maps, the performance of soil sampling and radiometric measurements sufficient to determine areas and volumes of tailings and other radium-contaminated materials, the evaluation of resulting radiation exposures of individuals and nearby populations, the investigation of site hydrology and meteorology and the evaluation and costing of alternative corrective actions. About 490,000 tons of ore were processed at this mill, with all of the uranium sold on the commercial market. None was sold to the AEC; therefore, this report focuses on a physical description of the site and the identification of radiation pathways. No remedial action options were formulated for the site, inasmuch as none of the uranium was sold to the AEC and Exxon Corporation has agreed to perform all actions required by the State of Texas. Radon gas release from the tailings at the Ray Point site constitutes the most significant environmental impact. Windblown tailings, external gamma radiation and localized contamination of surface waters are other environmental effects. Exxon is also studying the feasibility of reprocessing the tailings.
Johnson, Norman
This is the third volume of a collection of seminal papers in the statistical sciences written during the past 110 years. These papers have each had an outstanding influence on the development of statistical theory and practice over the last century. Each paper is preceded by an introduction written by an authority in the field, providing background information and assessing its influence. Volume III concentrates on articles from the 1980's while including some earlier articles not included in Volumes I and II. Samuel Kotz is Professor of Statistics in the College of Business and Management at the University of Maryland. Norman L. Johnson is Professor Emeritus of Statistics at the University of North Carolina. Also available: Breakthroughs in Statistics Volume I: Foundations and Basic Theory Samuel Kotz and Norman L. Johnson, Editors 1993. 631 pp. Softcover. ISBN 0-387-94037-5 Breakthroughs in Statistics Volume II: Methodology and Distribution Samuel Kotz and Norman L. Johnson, Edi...
Czech Academy of Sciences Publication Activity Database
Netopilík, Miloš; Kratochvíl, Pavel
2006-01-01
Roč. 55, č. 2 (2006), s. 196-203 ISSN 0959-8103 R&D Projects: GA AV ČR IAA100500501; GA AV ČR IAA4050403; GA AV ČR IAA4050409; GA ČR GA203/03/0617 Institutional research plan: CEZ:AV0Z40500505 Keywords : statistical branching * tetrafunctional branch points * molecular-weight distribution Subject RIV: CD - Macromolecular Chemistry Impact factor: 1.475, year: 2006
Structure Learning and Statistical Estimation in Distribution Networks - Part II
Energy Technology Data Exchange (ETDEWEB)
Deka, Deepjyoti [Univ. of Texas, Austin, TX (United States); Backhaus, Scott N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Chertkov, Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-02-13
Limited placement of real-time monitoring devices in the distribution grid, recent trends notwithstanding, has prevented the easy implementation of demand-response and other smart grid applications. Part I of this paper discusses the problem of learning the operational structure of the grid from nodal voltage measurements. In this work (Part II), the learning of the operational radial structure is coupled with the problem of estimating nodal consumption statistics and inferring the line parameters in the grid. Based on a Linear-Coupled (LC) approximation of the AC power flow equations, polynomial-time algorithms are designed to identify the structure and estimate nodal load characteristics and/or line parameters in the grid using the available nodal voltage measurements. The structure learning algorithm is then extended to cases with missing data, where available observations are limited to a fraction of the grid nodes. The efficacy of the presented algorithms is demonstrated through simulations on several distribution test cases.
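The flavor of structure learning from nodal voltages can be illustrated with a toy sketch: on a radial network, voltages at electrically adjacent nodes tend to be the most strongly correlated, so a spanning tree built over pairwise voltage correlations can recover the topology. This is only a correlation-based caricature under an assumed noise model, not the LC-flow algorithm of the paper:

```python
import numpy as np

def learn_radial_structure(v):
    """Toy sketch: recover a radial (tree) topology from nodal voltage
    samples v (shape: samples x nodes) by building a maximum spanning
    tree over pairwise voltage correlations (Kruskal with union-find).
    Assumes adjacent nodes are the most strongly correlated."""
    corr = np.corrcoef(v.T)
    n = corr.shape[0]
    # Candidate edges sorted by correlation, strongest first
    edges = sorted(((corr[i, j], i, j)
                    for i in range(n) for j in range(i + 1, n)),
                   reverse=True)
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:          # accept edge only if it joins two components
            parent[ri] = rj
            tree.append((i, j))
    return tree
```

On synthetic data where each node's voltage equals its parent's plus independent noise, this recovers the generating chain; real grids need the flow-physics-based statistics the paper develops.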
Statistical quality control a loss minimization approach
Trietsch, Dan
1999-01-01
While many books on quality espouse the Taguchi loss function, they do not examine its impact on statistical quality control (SQC). But using the Taguchi loss function sheds new light on questions relating to SQC and calls for some changes. This book covers SQC in a way that conforms with the need to minimize loss. Subjects often not covered elsewhere include: (i) measurements, (ii) determining how many points to sample to obtain reliable control charts (for which purpose a new graphic tool, diffidence charts, is introduced), (iii) the connection between process capability and tolerances, (iv)
Directory of Open Access Journals (Sweden)
Petrov Verica D.
2011-01-01
Full Text Available The influence of wheat black point kernels on selected indicators of wheat flour quality - farinograph and extensograph indicators, amylolytic activity, wet gluten and flour ash content - was examined in this study. The examinations were conducted on samples of wheat harvested in 2007 and 2008 in the area of Central Banat, in four treatments: a control (without black point flour) and treatments with 2, 4 and 10% of black point flour added as a replacement for part of the control sample. Statistically significant differences between treatments were observed in dough stability, falling number and extensibility. The samples with 10% black point flour had the lowest dough stability and the highest amylolytic activity and extensibility. There was a trend of increasing 15 min drop and water absorption with increasing share of black point flour. Extensograph area, resistance and the ratio of resistance to extensibility decreased with the addition of black point flour, but not uniformly. The Mahalanobis distance indicates that the addition of 10% black point flour had the greatest influence on the observed quality indicators, demonstrating that black point affects the technological quality of wheat, i.e. flour.
Modern Statistics for Spatial Point Processes
DEFF Research Database (Denmark)
Møller, Jesper; Waagepetersen, Rasmus
2007-01-01
We summarize and discuss the current state of spatial point process theory and directions for future research, making an analogy with generalized linear models and random effect models, and illustrating the theory with various examples of applications. In particular, we consider Poisson, Gibbs...
Modern statistics for spatial point processes
DEFF Research Database (Denmark)
Møller, Jesper; Waagepetersen, Rasmus
We summarize and discuss the current state of spatial point process theory and directions for future research, making an analogy with generalized linear models and random effect models, and illustrating the theory with various examples of applications. In particular, we consider Poisson, Gibbs...
International Nuclear Information System (INIS)
Duek, E.E.
1976-09-01
A method is described for determining the amount of mercury(II) in radioactively labelled chlormerodrin and mercuric chloride. By measuring the absolute activity in an ionization chamber, the specific activity is immediately obtained. The determination of Hg(II) is based on a complexometric titration. For reasons of convenience and speed, the end point is detected by means of a pH meter. A comparison is made with a determination in which the end point is detected with colour-change indicators. The error is estimated, and the results are statistically interpreted. (author) [es
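Instrumental end-point detection from a recorded pH curve can be sketched as follows. The criterion used here - taking the end point at the volume of steepest pH change, estimated by centred finite differences - is an assumed one for illustration; the abstract does not specify the exact rule:

```python
def end_point(volumes, ph):
    """Locate the titration end point as the titrant volume of steepest
    pH change, using centred finite differences on the recorded curve.
    (Taking the end point at max |dpH/dV| is an assumed criterion.)"""
    best_v, best_slope = None, 0.0
    for i in range(1, len(volumes) - 1):
        slope = abs((ph[i + 1] - ph[i - 1]) /
                    (volumes[i + 1] - volumes[i - 1]))
        if slope > best_slope:
            best_slope, best_v = slope, volumes[i]
    return best_v
```

On a typical sigmoidal titration curve the slope peaks sharply at the equivalence point, which is why a pH meter gives a less subjective end point than a colour-change indicator.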
Directory of Open Access Journals (Sweden)
Francisco Javier ALEJO MONTES
2013-07-01
Full Text Available This paper is a comparative statistical study of the students who matriculated at the University of Salamanca and those who obtained each of the three degrees that could be earned at the University. It fills a gap in the historiography of the University of Salamanca, since a comparative study of the era of Philip II was needed. The study analyses university matriculation, the requirements to formalize it, the order in which new students were enrolled, and the statistical information recorded. Likewise, the statistical data for each of the three degrees that could be obtained at the University of Salamanca - bachelor, licentiate, and doctor or master - are analysed and compared. This clearly shows the importance of each of the faculties, the most highly valued clearly being the Faculty of Canon Law and the least, that of Medicine.
Statistical inferences with jointly type-II censored samples from two Pareto distributions
Abu-Zinadah, Hanaa H.
2017-08-01
In several industries the product comes from more than one production line, which calls for comparative life tests. This requires sampling from the different production lines, and the joint censoring scheme then arises. In this article we consider the Pareto lifetime distribution under a jointly type-II censoring scheme. The maximum likelihood estimators (MLE) and the corresponding approximate confidence intervals, as well as the bootstrap confidence intervals, of the model parameters are obtained. Bayesian point and credible intervals of the model parameters are also presented. A lifetime data set is analyzed for illustrative purposes. Monte Carlo results from simulation studies are presented to assess the performance of the proposed method.
Three-point statistics of cosmological stochastic gravitational waves
International Nuclear Information System (INIS)
Adshead, Peter; Lim, Eugene A.
2010-01-01
We consider the three-point function (i.e. the bispectrum or non-Gaussianity) for stochastic backgrounds of gravitational waves. We estimate the amplitude of this signal for the primordial inflationary background, gravitational waves generated during preheating, and for gravitational waves produced by self-ordering scalar fields following a global phase transition. To assess detectability, we describe how to extract the three-point signal from an idealized interferometric experiment and compute the signal to noise ratio as a function of integration time. The three-point signal for the stochastic gravitational wave background generated by inflation is unsurprisingly tiny. For gravitational radiation generated by purely causal, classical mechanisms we find that, no matter how nonlinear the process is, the three-point correlations produced vanish in direct detection experiments. On the other hand, we show that in scenarios where the B-mode of the cosmic microwave background is sourced by gravitational waves generated by a global phase transition, a strong three-point signal among the polarization modes is also produced. This may provide another method of distinguishing inflationary B-modes. To carry out this computation, we have developed a diagrammatic approach to the calculation of stochastic gravitational waves sourced by scalar fluids, which has applications beyond the present scenario.
Statistical thermodynamics -- A tool for understanding point defects in intermetallic compounds
International Nuclear Information System (INIS)
Ipser, H.; Krachler, R.
1996-01-01
The principles of the derivation of statistical-thermodynamic models to interpret the compositional variation of thermodynamic properties in non-stoichiometric intermetallic compounds are discussed. Two types of models are distinguished: the Bragg-Williams type, where the total energy of the crystal is taken as the sum of the interaction energies of all nearest-neighbor pairs of atoms, and the Wagner-Schottky type, where the internal energy, the volume, and the vibrational entropy of the crystal are assumed to be linear functions of the numbers of atoms or vacancies on the different sublattices. A Wagner-Schottky type model is used for the description of two examples with different crystal structures: for β'-FeAl (with B2 structure), defect concentrations and their variation with composition are derived from measurements of the aluminum vapor pressure, and the resulting values are compared with the results of other independent experimental methods; for Rh3Te4 (with an NiAs-derivative structure), the defect mechanism responsible for non-stoichiometry is worked out by applying a theoretical model to the results of tellurium vapor pressure measurements. In addition, it is shown that the shape of the activity curve indicates a certain sequence of superstructures. In principle, there are no limitations to the application of statistical thermodynamics to experimental thermodynamic data as long as these are available with sufficient accuracy, and as long as it is ensured that the distribution of the point defects is truly random, i.e. that there are no aggregates of defects.
On statistical analysis of compound point process
Czech Academy of Sciences Publication Activity Database
Volf, Petr
2006-01-01
Roč. 35, 2-3 (2006), s. 389-396 ISSN 1026-597X R&D Projects: GA ČR(CZ) GA402/04/1294 Institutional research plan: CEZ:AV0Z10750506 Keywords : counting process * compound process * hazard function * Cox model Subject RIV: BB - Applied Statistics, Operational Research
On Quantum Statistical Inference, II
Barndorff-Nielsen, O. E.; Gill, R. D.; Jupp, P. E.
2003-01-01
Interest in problems of statistical inference connected to measurements of quantum systems has recently increased substantially, in step with dramatic new developments in experimental techniques for studying small quantum systems. Furthermore, theoretical developments in the theory of quantum measurements have brought the basic mathematical framework for the probability calculations much closer to that of classical probability theory. The present paper reviews this field and proposes and inte...
Dombrowski, M. P.; Labelle, J. W.; Kletzing, C.; Bounds, S. R.; Kaeppler, S. R.
2014-12-01
Langmuir-mode electron plasma waves are frequently observed by spacecraft in active plasma environments such as the ionosphere. Ionospheric Langmuir waves may be excited by the bump-on-tail instability generated by impinging beams of electrons traveling parallel to the background magnetic field (B). The Correlation of High-frequencies and Auroral Roar Measurement (CHARM II) sounding rocket was launched into a substorm at 9:49 UT on 17 February 2010, from the Poker Flat Research Range in Alaska. The primary instruments included the University of Iowa Wave-Particle Correlator (WPC), the Dartmouth High-Frequency Experiment (HFE), several charged particle detectors, low-frequency wave instruments, and a magnetometer. The HFE is a receiver system which effectively yields continuous (100% duty cycle) electric-field waveform measurements from 100 kHz to 5 MHz, and which had its detection axis aligned nominally parallel to B. The HFE output was fed on-payload to the WPC, which uses a phase-locked loop to track the incoming wave frequency with the most power and then sorts incoming electrons at eight energy levels into sixteen wave-phase bins. CHARM II encountered several regions of strong Langmuir wave activity throughout its 15-minute flight, and the WPC showed wave-lock and statistically significant particle correlation distributions during several time periods. We show results of an in-depth analysis of the CHARM II WPC data for the entire flight, including statistical analysis of correlations which show evidence of direct interaction with the Langmuir waves, indicating (at various times) trapping of particles and both driving and damping of Langmuir waves by particles. In particular, the sign of the gradient in particle flux appears to correlate with the phase relation between the electrons and the wave field, with possible implications for the wave physics.
Statistical phenomena - experiments results. II
International Nuclear Information System (INIS)
Schnell, W.
1977-01-01
The stochastic cooling of proton and antiproton beams is discussed. Stochastic cooling is the gradual reduction of emittance of a coasting beam by a feedback system, sensing and correcting the statistical fluctuations of the beam's position or momentum. The correction at every turn can be partial or complete. Transverse and longitudinal emittance of the beam are considered and the systems designed to cool the beams are described. (B.D.)
DEFF Research Database (Denmark)
Thorndahl, Søren; Korup Andersen, Aske; Larsen, Anders Badsberg
2017-01-01
Continuous and long rainfall series are a necessity in rural and urban hydrology for analysis and design purposes. Local historical point rainfall series often cover several decades, which makes it possible to estimate rainfall means at different timescales, and to assess return periods of extreme...... includes climate changes projected to a specific future period. This paper presents a framework for resampling of historical point rainfall series in order to generate synthetic rainfall series, which has the same statistical properties as an original series. Using a number of key target predictions...... for the future climate, such as winter and summer precipitation, and representation of extreme events, the resampled historical series are projected to represent rainfall properties in a future climate. Climate-projected rainfall series are simulated by brute force randomization of model parameters, which leads...
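The core resampling step described above can be sketched in a few lines. This is a minimal block-bootstrap caricature: drawing blocks of consecutive values preserves short-range temporal structure, and a single hypothetical scaling factor stands in for the paper's climate-projection targets (winter/summer precipitation, extreme-event representation):

```python
import random

def resample_series(hist, block, n_blocks, factor, seed=0):
    """Brute-force block resampling of a historical point-rainfall
    series: blocks of consecutive values are drawn at random
    (preserving short-range temporal structure) and scaled by
    `factor`, a hypothetical stand-in for climate-projection targets."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_blocks):
        start = rng.randrange(len(hist) - block + 1)
        out.extend(v * factor for v in hist[start:start + block])
    return out
```

The framework in the paper goes much further - it randomizes model parameters by brute force and accepts realizations whose statistics match the target climate - but the resample-then-project structure is the same.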
Energy Technology Data Exchange (ETDEWEB)
Heinrich, S
2006-07-01
The nuclear fission process is a very complex phenomenon and, even nowadays, no realistic model describing the overall process is available. The work presented here deals with a theoretical description of fission fragment distributions in mass, charge, energy and deformation. We have reconsidered and updated the B.D. Wilkins scission point model. Our purpose was to test whether this statistical model, applied at the scission point and supplied with new results from modern microscopic calculations, allows a quantitative description of the fission fragment distributions. We calculate the surface energy available at the scission point as a function of the fragment deformations. This surface is obtained from a Hartree-Fock-Bogoliubov microscopic calculation, which guarantees a realistic description of the dependence of the potential on the deformation of each fragment. The statistical balance is described by the level densities of the fragments. We have tried to avoid as much as possible the input of empirical parameters in the model. Our only parameter, the distance between the fragments at the scission point, is discussed by comparison with scission configurations obtained from fully dynamical microscopic calculations. The comparison between our results and experimental data is very satisfactory and allows us to discuss the successes and limitations of our approach. We finally propose ideas to improve the model, in particular by applying dynamical corrections. (author)
BetaTPred: prediction of beta-TURNS in a protein using statistical algorithms.
Kaur, Harpreet; Raghava, G P S
2002-03-01
beta-turns play an important role from a structural and functional point of view. beta-turns are the most common type of non-repetitive structure in proteins and comprise, on average, 25% of the residues. In the past, numerous methods have been developed to predict beta-turns in a protein, most of them based on statistical approaches. In order to utilize the full potential of these methods, there is a need to develop a web server. This paper describes a web server called BetaTPred, developed for predicting beta-turns in a protein from its amino acid sequence. BetaTPred allows the user to predict turns in a protein using existing statistical algorithms. It also allows the user to predict different types of beta-turns, e.g. types I, I', II, II', VI, VIII and non-specific. This server assists users in predicting the consensus beta-turns in a protein. The server is accessible from http://imtech.res.in/raghava/betatpred/
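A crude propensity-window predictor in the spirit of the statistical approaches mentioned above might look as follows; the propensity values, threshold, and sequence are hypothetical illustrations, not BetaTPred's actual parameters.

```python
# Hypothetical per-residue turn propensities (illustrative values only).
TURN_PROPENSITY = {"G": 1.56, "P": 1.52, "N": 1.46, "D": 1.46, "S": 1.43,
                   "A": 0.66, "L": 0.59, "V": 0.50, "I": 0.47, "F": 0.60}

def predict_turns(seq, threshold=1.0):
    """Flag every 4-residue window whose geometric-mean turn propensity
    exceeds the threshold -- a minimal statistical beta-turn predictor."""
    hits = []
    for i in range(len(seq) - 3):
        p = 1.0
        for aa in seq[i:i + 4]:
            p *= TURN_PROPENSITY.get(aa, 1.0)  # unknown residues are neutral
        if p ** 0.25 > threshold:
            hits.append(i)  # window start index of a predicted turn
    return hits

print(predict_turns("AVGPNGSLIV"))
```

Real predictors refine this with position-specific (not just residue-specific) statistics for each of the four turn positions.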
Directory of Open Access Journals (Sweden)
M. Padma Latha
2007-04-01
Chemical speciation of Pb(II), Cd(II), Hg(II), Co(II), Ni(II), Cu(II) and Zn(II) complexes of L-methionine in 0.0-60% v/v 1,2-propanediol-water mixtures, maintaining an ionic strength of 0.16 M at 303 K, has been studied pH-metrically. The active forms of the ligand are LH2+, LH and L-. The predominant species detected are ML, MLH, ML2, ML2H, ML2H2 and MLOH. Models containing different numbers of species were refined by using the computer program MINIQUAD 75. The best-fit chemical models were arrived at based on statistical parameters. The trend in the variation of complex stability constants with change in the dielectric constant of the medium is explained on the basis of electrostatic and non-electrostatic forces.
Handbook of Spatial Statistics
Gelfand, Alan E
2010-01-01
Offers an introduction detailing the evolution of the field of spatial statistics. This title focuses on the three main branches of spatial statistics: continuous spatial variation (point referenced data); discrete spatial variation, including lattice and areal unit data; and, spatial point patterns.
Fei, Fucong; Bo, Xiangyan; Wang, Pengdong; Ying, Jianghua; Chen, Bo; Liu, Qianqian; Zhang, Yong; Sun, Zhe; Qu, Fanming; Zhang, Yi; Li, Jian; Song, Fengqi; Wan, Xiangang; Wang, Baigeng; Wang, Guanghou
2017-01-01
Topological semimetals are a topic of general interest in materials science. Recently, a new kind of topological semimetal, the type-II Dirac semimetal with tilted Dirac cones, was discovered in the PtSe2 family. However, further investigation is hindered by the large energy difference between the Dirac points and the Fermi level, and by the irrelevant conducting pockets at the Fermi surface. Here we characterize the optimized type-II Dirac dispersions in a metastable 1T phase of IrTe2. Our strategy of Pt doping...
Fundamentals of modern statistical methods substantially improving power and accuracy
Wilcox, Rand R
2001-01-01
Conventional statistical methods have a very serious flaw: they routinely miss differences among groups or associations among variables that are detected by more modern techniques, even under very small departures from normality. Hundreds of journal articles have described the reasons standard techniques can be unsatisfactory, but simple, intuitive explanations are generally unavailable. Improved methods have been derived, but they are far from obvious or intuitive based on the training most researchers receive. Situations arise where even highly nonsignificant results become significant when analyzed with more modern methods. Without assuming any prior training in statistics, Part I of this book describes basic statistical principles from a point of view that makes their shortcomings intuitive and easy to understand. The emphasis is on verbal and graphical descriptions of concepts. Part II describes modern methods that address the problems covered in Part I. Using data from actual studies, many examples are include...
Taylor, Charles J.; Williamson, Tanja N.; Newson, Jeremy K.; Ulery, Randy L.; Nelson, Hugh L.; Cinotto, Peter J.
2012-01-01
This report describes Phase II modifications made to the Water Availability Tool for Environmental Resources (WATER), which applies the process-based TOPMODEL approach to simulate or predict stream discharge in surface basins in the Commonwealth of Kentucky. The previous (Phase I) version of WATER did not provide a means of identifying sinkhole catchments or accounting for the effects of karst (internal) drainage in a TOPMODEL-simulated basin. In the Phase II version of WATER, sinkhole catchments are automatically identified and delineated as internally drained subbasins, and a modified TOPMODEL approach (called the sinkhole drainage process, or SDP-TOPMODEL) is applied that calculates mean daily discharges for the basin based on summed area-weighted contributions from sinkhole drainage (SD) areas and non-karstic topographically drained (TD) areas. Results obtained using the SDP-TOPMODEL approach were evaluated for 12 karst test basins located in each of the major karst terrains in Kentucky. Visual comparison of simulated hydrographs and flow-duration curves, along with statistical measures applied to the simulated discharge data (bias, correlation, root mean square error, and Nash-Sutcliffe efficiency coefficients), indicates that the SDP-TOPMODEL approach provides acceptably accurate estimates of discharge for most flow conditions and typically provides more accurate simulation of stream discharge in karstic basins compared to the standard TOPMODEL approach. Additional programming modifications made to the Phase II version of WATER included implementation of a point-and-click graphical user interface (GUI), which fully automates the delineation of simulation-basin boundaries and improves the speed of input-data processing. The Phase II version of WATER enables the user to select a pour point anywhere on a stream reach of interest, and the program will automatically delineate all upstream areas that contribute drainage to that point. This capability enables
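The summed area-weighted combination at the heart of the SDP-TOPMODEL idea can be sketched in one line; the function name, units, and numbers below are hypothetical illustrations.

```python
def sdp_topmodel_discharge(q_sd, area_sd, q_td, area_td):
    """Basin mean daily discharge as the area-weighted sum of the
    sinkhole-drained (SD) and topographically drained (TD) contributions."""
    total_area = area_sd + area_td
    return (q_sd * area_sd + q_td * area_td) / total_area

# Hypothetical per-unit-area discharges (mm/day) and areas (km^2).
print(sdp_topmodel_discharge(q_sd=0.2, area_sd=30.0, q_td=0.5, area_td=70.0))
```

The SD term would itself come from routing internally drained sinkhole catchments, while the TD term follows the standard TOPMODEL formulation.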
Goodman, Joseph W.
2000-07-01
The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson, The Statistical Analysis of Time Series; T. S. Arthanari & Yadolah Dodge, Mathematical Programming in Statistics; Emil Artin, Geometric Algebra; Norman T. J. Bailey, The Elements of Stochastic Processes with Applications to the Natural Sciences; Robert G. Bartle, The Elements of Integration and Lebesgue Measure; George E. P. Box & Norman R. Draper, Evolutionary Operation: A Statistical Method for Process Improvement; George E. P. Box & George C. Tiao, Bayesian Inference in Statistical Analysis; R. W. Carter, Finite Groups of Lie Type: Conjugacy Classes and Complex Characters; R. W. Carter, Simple Groups of Lie Type; William G. Cochran & Gertrude M. Cox, Experimental Designs, Second Edition; Richard Courant, Differential and Integral Calculus, Volume I; Richard Courant, Differential and Integral Calculus, Volume II; Richard Courant & D. Hilbert, Methods of Mathematical Physics, Volume I; Richard Courant & D. Hilbert, Methods of Mathematical Physics, Volume II; D. R. Cox, Planning of Experiments; Harold S. M. Coxeter, Introduction to Geometry, Second Edition; Charles W. Curtis & Irving Reiner, Representation Theory of Finite Groups and Associative Algebras; Charles W. Curtis & Irving Reiner, Methods of Representation Theory with Applications to Finite Groups and Orders, Volume I; Charles W. Curtis & Irving Reiner, Methods of Representation Theory with Applications to Finite Groups and Orders, Volume II; Cuthbert Daniel, Fitting Equations to Data: Computer Analysis of Multifactor Data, Second Edition; Bruno de Finetti, Theory of Probability, Volume I; Bruno de Finetti, Theory of Probability, Volume 2; W. Edwards Deming, Sample Design in Business Research
Directory of Open Access Journals (Sweden)
Verónica Loera-Castañeda
2014-01-01
Mitochondrial dysfunction has been thought to contribute to Alzheimer disease (AD) pathogenesis through the accumulation of mitochondrial DNA mutations and net production of reactive oxygen species (ROS). Mitochondrial cytochrome c-oxidase plays a key role in the regulation of aerobic production of energy and is composed of 13 subunits. The 3 largest subunits (I, II, and III), forming the catalytic core, are encoded by mitochondrial DNA. The aim of this work was to look for mutations in the mitochondrial cytochrome c-oxidase gene II (MTCO II) in blood samples from probable AD Mexican patients. The MTCO II gene was sequenced in 33 patients with a diagnosis of probable AD. Four patients (12%) harbored the A8027G polymorphism, and three of them were early-onset (EO) AD cases with a familial history of the disease. In addition, four other patients with EOAD had only one of the following point mutations: A8003C, T8082C, C8201T, or G7603A. None of the point mutations found in this work has been described previously for AD patients; the A8027G polymorphism has been described before, but it has not been related to AD. Further investigation will be needed to demonstrate the role of point mutations of mitochondrial DNA in the pathogenesis of AD.
Statistics & probability for dummies
Rumsey, Deborah J
2013-01-01
Two complete eBooks for one low price! Created and compiled by the publisher, this Statistics I and Statistics II bundle brings together two math titles in one, e-only bundle. With this special bundle, you'll get the complete text of the following two titles: Statistics For Dummies, 2nd Edition Statistics For Dummies shows you how to interpret and critique graphs and charts, determine the odds with probability, guesstimate with confidence using confidence intervals, set up and carry out a hypothesis test, compute statistical formulas, and more. Tra
Directory of Open Access Journals (Sweden)
David Normando
2010-02-01
Selecting appropriate methods for statistical analysis may be difficult, especially for students and others in the early phases of a research career. On the other hand, PowerPoint is a very common presentation tool among researchers and dental students, so a statistical guide based on PowerPoint could narrow the gap between the orthodontist and Biostatistics. This guide provides objective and useful information about several statistical methods, using examples related to dentistry and, more specifically, to Orthodontics. A PowerPoint presentation is employed to assist the user in finding answers to common questions regarding Biostatistics, such as the most appropriate statistical test to compare groups, to examine correlations and regressions, or to analyze the method error. Help is also available for checking the distribution of the data (normal or non-normal) and for choosing the most suitable graph to present the results. This guide can also be quite useful for journal reviewers to quickly examine the adequacy of the statistical method presented in a submitted article.
Høyer, Anne-Sophie; Vignoli, Giulio; Mejer Hansen, Thomas; Thanh Vu, Le; Keefer, Donald A.; Jørgensen, Flemming
2017-12-01
Most studies on the application of geostatistical simulations based on multiple-point statistics (MPS) to hydrogeological modelling focus on relatively fine-scale models and concentrate on the estimation of facies-level structural uncertainty. Much less attention is paid to the use of input data and optimal construction of training images. For instance, even though the training image should capture a set of spatial geological characteristics to guide the simulations, the majority of the research still relies on 2-D or quasi-3-D training images. In the present study, we demonstrate a novel strategy for 3-D MPS modelling characterized by (i) realistic 3-D training images and (ii) an effective workflow for incorporating a diverse group of geological and geophysical data sets. The study covers an area of 2810 km2 in the southern part of Denmark. MPS simulations are performed on a subset of the geological succession (the lower to middle Miocene sediments) which is characterized by relatively uniform structures and dominated by sand and clay. The simulated domain is large and each of the geostatistical realizations contains approximately 45 million voxels with size 100 m × 100 m × 5 m. Data used for the modelling include water well logs, high-resolution seismic data, and a previously published 3-D geological model. We apply a series of different strategies for the simulations based on data quality, and develop a novel method to effectively create observed spatial trends. The training image is constructed as a relatively small 3-D voxel model covering an area of 90 km2. We use an iterative training image development strategy and find that even slight modifications in the training image create significant changes in simulations. Thus, this study shows how to include both the geological environment and the type and quality of input information in order to achieve optimal results from MPS modelling. We present a practical workflow to build the training image and
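A quick back-of-the-envelope check of the quoted model size, assuming the 100 m × 100 m × 5 m voxels tile the full 2810 km2 area uniformly (the real model need not be a perfect box):

```python
# Lateral cell count implied by the stated area and voxel footprint.
area_m2 = 2810 * 1_000_000          # 2810 km^2 in m^2
columns = area_m2 / (100 * 100)     # 100 m x 100 m voxel footprint

# Vertical layers implied by ~45 million voxels per realization.
layers = 45_000_000 / columns
thickness_m = layers * 5            # 5 m voxel height

print(int(columns), round(layers), round(thickness_m))
```

So roughly 281,000 lateral columns and on the order of 160 vertical layers (about 800 m of section) are consistent with the stated 45 million voxels.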
Directory of Open Access Journals (Sweden)
Agalar M. Agalarov
2018-01-01
In this article, the possibility of using a bispectrum in the investigation of the regular and chaotic behaviour of one-dimensional point mappings is discussed. The effectiveness of transferring this concept to nonlinear dynamics is demonstrated with the Feigenbaum mapping as an example. The application of the Kullback-Leibler entropy in the theory of point mappings is also considered. It is shown that this information-like quantity is able to describe the behaviour of statistical ensembles of one-dimensional mappings, and some general properties of its behaviour are established within this framework. The constructiveness of the Kullback-Leibler entropy in the theory of point mappings is shown by means of its direct calculation for the "saw tooth" mapping with a linear initial probability density. Moreover, for this mapping, the denumerable set of initial probability densities that hit its stationary probability density after a finite number of steps is pointed out.
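As a sketch of the Kullback-Leibler idea for point mappings, the code below evolves an ensemble with a linear initial density under a doubling ("saw tooth") map and tracks a binned KL distance to the uniform stationary density; the bin count, ensemble size, and the specific map are illustrative choices, not the article's exact setup.

```python
import math
import random

def kl_to_uniform(samples, bins=16):
    """Binned estimate of D_KL(empirical density || uniform) on [0, 1)."""
    counts = [0] * bins
    for x in samples:
        counts[min(int(x * bins), bins - 1)] += 1
    n = len(samples)
    return sum((c / n) * math.log(c * bins / n) for c in counts if c)

rng = random.Random(0)
# Linear initial density p(x) = 2x, sampled by inverse transform x = sqrt(u).
ens = [math.sqrt(rng.random()) for _ in range(200_000)]

kls = []
for _ in range(5):
    kls.append(kl_to_uniform(ens))
    ens = [(2.0 * x) % 1.0 for x in ens]  # one "saw tooth" (doubling) step

print([round(k, 4) for k in kls])
```

The KL distance shrinks geometrically toward zero as the ensemble relaxes to the uniform stationary density.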
International Nuclear Information System (INIS)
Pal, T.; Jana, N.R.
1993-01-01
The individual determination of Be(II), Mg(II) or Ca(II) by conventional spectrophotometry, and the simultaneous determination of Mg(II) and Ca(II) in mixtures by first-derivative spectrophotometry, are possible at trace levels using emodin (1,3,8-trihydroxy-6-methylanthraquinone) as the spectrophotometric reagent. Interference from other metal species, application of these methods to rock samples and statistical analysis of the results are discussed. (author)
Computer simulation of vortex pinning in type II superconductors. II. Random point pins
International Nuclear Information System (INIS)
Brandt, E.H.
1983-01-01
Pinning of vortices in a type II superconductor by randomly positioned identical point pins is simulated using the two-dimensional method described in a previous paper (Part I). The system is characterized by the vortex and pin numbers (N_v, N_p), the vortex and pin interaction ranges (R_v, R_p), and the amplitude of the pin potential A_p. The computation is performed for many cases: dilute or dense, sharp or soft, attractive or repulsive, weak or strong pins, and ideal or amorphous vortex lattice. The total pinning force F as a function of the mean vortex displacement X increases first linearly (over a distance usually much smaller than the vortex spacing and than R_p) and then saturates, fluctuating about its average F̄. We interpret F̄ as the maximum pinning force j_c B of a large specimen. For weak pins the prediction of Larkin and Ovchinnikov for two-dimensional collective pinning is confirmed: F̄ = const × W̄/(R_p c_66), where W̄ is the mean square pinning force and c_66 is the shear modulus of the vortex lattice. If the initial vortex lattice is chosen highly defective ("amorphous") the constant is 1.3-3 times larger than for the ideal triangular lattice. This finding may explain the often observed "history effect." The function F̄(A_p) exhibits a jump, which for dilute, sharp, attractive pins occurs close to the "threshold value" predicted for isolated pins by Labusch. This jump reflects the onset of plastic deformation of the vortex lattice, and in some cases of vortex trapping, but is not a genuine threshold
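The collective-pinning scaling quoted above can be illustrated numerically; the units, the constant, and the assumption that the mean square pinning force scales as W̄ = n_p f² are illustrative choices for this sketch, not values from the paper.

```python
def collective_pinning_force(n_p, f_pin, R_p, c66, const=1.0):
    """2-D Larkin-Ovchinnikov-type estimate: F ~ const * W / (R_p * c66),
    with W = n_p * f_pin**2 taken as the mean square pinning force."""
    W = n_p * f_pin ** 2
    return const * W / (R_p * c66)

# Doubling the individual pin strength quadruples the collective force,
# since F is linear in W and W is quadratic in f_pin (arbitrary units).
weak = collective_pinning_force(n_p=1e3, f_pin=1.0, R_p=0.5, c66=10.0)
strong = collective_pinning_force(n_p=1e3, f_pin=2.0, R_p=0.5, c66=10.0)
print(strong / weak)
```

This quadratic dependence on pin strength is the signature that distinguishes collective pinning from direct (single-pin) summation, which is linear in f_pin.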
Stamhuis, I.H.; Klep, P.M.M.; Maarseveen, J.G.S.J. van
2008-01-01
In the period 1850-1940 statistics developed as a new combination of theory and practice. A wide range of phenomena were looked at in a novel way and this statistical mindset had a pervasive influence in contemporary society. This development of statistics is closely interlinked with the process of
Green, K. N.; van Alstine, R. L.
This paper presents the current performance levels of the SDG-5 gyro, a high performance two-axis dynamically tuned gyro, and the DRIRU II redundant inertial reference unit relating to stabilization and pointing applications. Also presented is a discussion of a product improvement program aimed at further noise reductions to meet the demanding requirements of future space defense applications.
Accelerating simulation for the multiple-point statistics algorithm using vector quantization
Zuo, Chen; Pan, Zhibin; Liang, Hao
2018-03-01
Multiple-point statistics (MPS) is a prominent algorithm to simulate categorical variables based on a sequential simulation procedure. Assuming training images (TIs) as prior conceptual models, MPS extracts patterns from TIs using a template and records their occurrences in a database. However, complex patterns increase the size of the database and require considerable time to retrieve the desired elements. In order to speed up simulation and improve simulation quality over state-of-the-art MPS methods, we propose an accelerated simulation method for MPS using vector quantization (VQ), called VQ-MPS. First, a variable representation is presented to make categorical variables applicable for vector quantization. Second, we adopt a tree-structured VQ to compress the database so that stationary simulations are realized. Finally, a transformed template and classified VQ are used to address nonstationarity. A two-dimensional (2D) stationary channelized reservoir image is used to validate the proposed VQ-MPS. In comparison with several existing MPS programs, our method exhibits significantly better performance in terms of computational time, pattern reproduction, and spatial uncertainty. Further demonstrations consist of a 2D four-facies simulation, two 2D nonstationary channel simulations, and a three-dimensional (3D) rock simulation. The results reveal that our proposed method is also capable of solving multifacies, nonstationarity, and 3D simulations based on 2D TIs.
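The database-building step that VQ-MPS compresses can be sketched in its plain, uncompressed form: scan the training image with a template and count pattern occurrences. The tiny training image and 2 × 2 template below are illustrative; real TIs and templates are far larger, which is exactly why compression pays off.

```python
from collections import Counter

def extract_patterns(ti, ty, tx):
    """Scan a 2-D categorical training image with a ty-by-tx template and
    record each pattern's occurrence count -- the core MPS database step."""
    rows, cols = len(ti), len(ti[0])
    db = Counter()
    for r in range(rows - ty + 1):
        for c in range(cols - tx + 1):
            pat = tuple(tuple(ti[r + i][c:c + tx]) for i in range(ty))
            db[pat] += 1
    return db

# Hypothetical two-facies training image (0 = background, 1 = channel).
ti = [[0, 0, 1, 1],
      [0, 0, 1, 1],
      [1, 1, 0, 0]]

db = extract_patterns(ti, 2, 2)
print(len(db), sum(db.values()))
```

During sequential simulation, the pattern matching the already-simulated neighbourhood is retrieved from this database; tree-structured VQ replaces the exhaustive lookup with a logarithmic-depth search over quantized pattern codes.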
Strong Statistical Convergence in Probabilistic Metric Spaces
Şençimen, Celaleddin; Pehlivan, Serpil
2008-01-01
In this article, we introduce the concepts of strongly statistically convergent sequence and strong statistically Cauchy sequence in a probabilistic metric (PM) space endowed with the strong topology, and establish some basic facts. Next, we define the strong statistical limit points and the strong statistical cluster points of a sequence in this space and investigate the relations between these concepts.
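For orientation, the metric-space analogue of statistical convergence reads as follows; the PM-space definitions in the article replace the distance condition with one on the distribution functions (schematically, d(x_k, L) ≥ ε becomes F_{x_k, L}(ε) ≤ 1 − λ for some λ ∈ (0, 1)), so the formula below is an illustrative reference point rather than the article's exact statement:

```latex
\[
x_k \xrightarrow{\;\mathrm{stat}\;} L
\quad\Longleftrightarrow\quad
\lim_{n\to\infty} \frac{1}{n}\,
\bigl|\{\, k \le n : d(x_k, L) \ge \varepsilon \,\}\bigr| = 0
\quad \text{for every } \varepsilon > 0 .
\]
```

The set of "bad" indices may be infinite, but it must have natural density zero; statistical limit and cluster points are then defined through the densities of such index sets.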
Industrial commodity statistics yearbook 2001. Production statistics (1992-2001)
International Nuclear Information System (INIS)
2003-01-01
This is the thirty-fifth in a series of annual compilations of statistics on world industry designed to meet both the general demand for information of this kind and the special requirements of the United Nations and related international bodies. Beginning with the 1992 edition, the title of the publication was changed to Industrial Commodity Statistics Yearbook as the result of a decision made by the United Nations Statistical Commission at its twenty-seventh session to discontinue, effective 1994, publication of the Industrial Statistics Yearbook, volume I, General Industrial Statistics, by the Statistics Division of the United Nations. The United Nations Industrial Development Organization (UNIDO) has become responsible for the collection and dissemination of general industrial statistics, while the Statistics Division of the United Nations continues to be responsible for industrial commodity production statistics. The previous title, Industrial Statistics Yearbook, volume II, Commodity Production Statistics, was introduced in the 1982 edition. The first seven editions in this series were published under the title 'The Growth of World Industry' and the next eight editions under the title 'Yearbook of Industrial Statistics'. This edition of the Yearbook contains annual quantity data on production of industrial commodities by country, geographical region, economic grouping and for the world. A standard list of about 530 commodities (about 590 statistical series) has been adopted for the publication. The statistics refer to the ten-year period 1992-2001 for about 200 countries and areas
Industrial commodity statistics yearbook 2002. Production statistics (1993-2002)
International Nuclear Information System (INIS)
2004-01-01
This is the thirty-sixth in a series of annual compilations of statistics on world industry designed to meet both the general demand for information of this kind and the special requirements of the United Nations and related international bodies. Beginning with the 1992 edition, the title of the publication was changed to Industrial Commodity Statistics Yearbook as the result of a decision made by the United Nations Statistical Commission at its twenty-seventh session to discontinue, effective 1994, publication of the Industrial Statistics Yearbook, volume I, General Industrial Statistics, by the Statistics Division of the United Nations. The United Nations Industrial Development Organization (UNIDO) has become responsible for the collection and dissemination of general industrial statistics, while the Statistics Division of the United Nations continues to be responsible for industrial commodity production statistics. The previous title, Industrial Statistics Yearbook, volume II, Commodity Production Statistics, was introduced in the 1982 edition. The first seven editions in this series were published under the title 'The Growth of World Industry' and the next eight editions under the title 'Yearbook of Industrial Statistics'. This edition of the Yearbook contains annual quantity data on production of industrial commodities by country, geographical region, economic grouping and for the world. A standard list of about 530 commodities (about 590 statistical series) has been adopted for the publication. The statistics refer to the ten-year period 1993-2002 for about 200 countries and areas
Industrial commodity statistics yearbook 2000. Production statistics (1991-2000)
International Nuclear Information System (INIS)
2002-01-01
This is the thirty-third in a series of annual compilations of statistics on world industry designed to meet both the general demand for information of this kind and the special requirements of the United Nations and related international bodies. Beginning with the 1992 edition, the title of the publication was changed to Industrial Commodity Statistics Yearbook as the result of a decision made by the United Nations Statistical Commission at its twenty-seventh session to discontinue, effective 1994, publication of the Industrial Statistics Yearbook, volume I, General Industrial Statistics, by the Statistics Division of the United Nations. The United Nations Industrial Development Organization (UNIDO) has become responsible for the collection and dissemination of general industrial statistics, while the Statistics Division of the United Nations continues to be responsible for industrial commodity production statistics. The previous title, Industrial Statistics Yearbook, volume II, Commodity Production Statistics, was introduced in the 1982 edition. The first seven editions in this series were published under the title 'The Growth of World Industry' and the next eight editions under the title 'Yearbook of Industrial Statistics'. This edition of the Yearbook contains annual quantity data on production of industrial commodities by country, geographical region, economic grouping and for the world. A standard list of about 530 commodities (about 590 statistical series) has been adopted for the publication. Most of the statistics refer to the ten-year period 1991-2000 for about 200 countries and areas
On the Relation Between Facular Bright Points and the Magnetic Field
Berger, Thomas; Shine, Richard; Tarbell, Theodore; Title, Alan; Scharmer, Goran
1994-12-01
Multi-spectral images of magnetic structures in the solar photosphere are presented. The images were obtained in the summers of 1993 and 1994 at the Swedish Solar Telescope on La Palma using the tunable birefringent Solar Optical Universal Polarimeter (SOUP) filter, a 10 Angstrom wide interference filter tuned to 4304 Angstrom in the band head of the CH radical (the Fraunhofer G band), and a 3 Angstrom wide interference filter centered on the Ca II K absorption line. Three large-format CCD cameras with shuttered exposures on the order of 10 msec and frame rates of up to 7 frames per second were used to create time series of both quiet and active region evolution. The full field of view is 60 × 80 arcseconds (44 × 58 Mm). With the best seeing, structures as small as 0.22 arcseconds (160 km) in diameter are clearly resolved. Post-processing of the images results in rigid coalignment of the image sets to an accuracy comparable to the spatial resolution. Facular bright points with mean diameters of 0.35 arcseconds (250 km) and elongated filaments with lengths on the order of arcseconds (10^3 km) are imaged with contrast values of up to 60% by the G-band filter. Overlay of these images on cotemporal Fe I 6302 Angstrom magnetograms and Ca II K images reveals that the bright points occur, without exception, on sites of magnetic flux through the photosphere. However, instances of concentrated and diffuse magnetic flux and Ca II K emission without associated bright points are common, leading to the conclusion that the presence of magnetic flux is a necessary but not sufficient condition for the occurrence of resolvable facular bright points. Comparison of the G-band and continuum images shows a complex relation between structures in the two bandwidths: bright points exceeding 350 km in extent correspond to distinct bright structures in the continuum; smaller bright points show no clear relation to continuum structures. Size and contrast statistical cross
Soluyanov, Alexey A; Gresch, Dominik; Wang, Zhijun; Wu, QuanSheng; Troyer, Matthias; Dai, Xi; Bernevig, B Andrei
2015-11-26
Fermions--elementary particles such as electrons--are classified as Dirac, Majorana or Weyl. Majorana and Weyl fermions had not been observed experimentally until the recent discovery of condensed matter systems such as topological superconductors and semimetals, in which they arise as low-energy excitations. Here we propose the existence of a previously overlooked type of Weyl fermion that emerges at the boundary between electron and hole pockets in a new phase of matter. This particle was missed by Weyl because it breaks the stringent Lorentz symmetry in high-energy physics. Lorentz invariance, however, is not present in condensed matter physics, and by generalizing the Dirac equation, we find the new type of Weyl fermion. In particular, whereas Weyl semimetals--materials hosting Weyl fermions--were previously thought to have standard Weyl points with a point-like Fermi surface (which we refer to as type-I), we discover a type-II Weyl point, which is still a protected crossing, but appears at the contact of electron and hole pockets in type-II Weyl semimetals. We predict that WTe2 is an example of a topological semimetal hosting the new particle as a low-energy excitation around such a type-II Weyl point. The existence of type-II Weyl points in WTe2 means that many of its physical properties are very different to those of standard Weyl semimetals with point-like Fermi surfaces.
Residual analysis for spatial point processes
DEFF Research Database (Denmark)
Baddeley, A.; Turner, R.; Møller, Jesper
We define residuals for point process models fitted to spatial point pattern data, and propose diagnostic plots based on these residuals. The techniques apply to any Gibbs point process model, which may exhibit spatial heterogeneity, interpoint interaction and dependence on spatial covariates. Ou...... or covariate effects. Q-Q plots of the residuals are effective in diagnosing interpoint interaction. Some existing ad hoc statistics of point patterns (quadrat counts, scan statistic, kernel smoothed intensity, Berman's diagnostic) are recovered as special cases....
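One of the ad hoc statistics recovered as a special case above, the quadrat-count statistic, can be sketched directly; the grid size and point pattern below are illustrative, and the chi-squared value would normally be referred to its null distribution under complete spatial randomness.

```python
def quadrat_counts(points, nx, ny):
    """Count points of a pattern on the unit square in an nx-by-ny grid
    of quadrats; the counts feed the classical chi-squared test of CSR."""
    counts = [[0] * nx for _ in range(ny)]
    for x, y in points:
        counts[min(int(y * ny), ny - 1)][min(int(x * nx), nx - 1)] += 1
    return counts

def chi2_statistic(counts):
    """Pearson chi-squared statistic against a uniform expected count."""
    flat = [c for row in counts for c in row]
    expected = sum(flat) / len(flat)
    return sum((c - expected) ** 2 / expected for c in flat)

# Hypothetical clustered pattern: two clumps in opposite corners.
pts = [(0.1, 0.1), (0.2, 0.15), (0.8, 0.9), (0.9, 0.85), (0.5, 0.5)]
counts = quadrat_counts(pts, 2, 2)
print(counts, round(chi2_statistic(counts), 2))
```

In the residual framework of the paper, each quadrat count minus its fitted expectation is exactly a raw residual aggregated over that quadrat, which is why the classical statistic drops out as a special case.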
DEFF Research Database (Denmark)
He, Xiulan; Sonnenborg, Torben; Jørgensen, Flemming
2017-01-01
Stationarity has traditionally been a requirement of geostatistical simulations. A common way to deal with non-stationarity is to divide the system into stationary sub-regions and subsequently merge the realizations for each region. Recently, the so-called partition approach, which has the flexibility to model non-stationary systems directly, was developed for multiple-point statistics simulation (MPS). The objective of this study is to apply the MPS partition method, with conventional borehole logs and high-resolution airborne electromagnetic (AEM) data, to the simulation of a real-world non-stationary geological system characterized by a network of connected buried valleys that incise deeply into layered Miocene sediments (case study in Denmark). The results show that, based on fragmented information on the formation boundaries, the MPS partition method is able to simulate a non-stationary system …
Farnsworth, G.L.; Nichols, J.D.; Sauer, J.R.; Fancy, S.G.; Pollock, K.H.; Shriner, S.A.; Simons, T.R.; Ralph, C. John; Rich, Terrell D.
2005-01-01
Point counts are a standard sampling procedure for many bird species, but lingering concerns still exist about the quality of information produced from the method. It is well known that variation in observer ability and environmental conditions can influence the detection probability of birds in point counts, but many biologists have been reluctant to abandon point counts in favor of more intensive approaches to counting. However, over the past few years a variety of statistical and methodological developments have begun to provide practical ways of overcoming some of the problems with point counts. We describe some of these approaches, and show how they can be integrated into standard point count protocols to greatly enhance the quality of the information. Several tools now exist for estimation of detection probability of birds during counts, including distance sampling, double observer methods, time-depletion (removal) methods, and hybrid methods that combine these approaches. Many counts are conducted in habitats that make auditory detection of birds much more likely than visual detection. As a framework for understanding detection probability during such counts, we propose separating two components of the probability a bird is detected during a count into (1) the probability a bird vocalizes during the count and (2) the probability this vocalization is detected by an observer. In addition, we propose that some measure of the area sampled during a count is necessary for valid inferences about bird populations. This can be done by employing fixed-radius counts or more sophisticated distance-sampling models. We recommend any studies employing point counts be designed to estimate detection probability and to include a measure of the area sampled.
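The proposed decomposition lends itself to a back-of-the-envelope sketch. The toy Python snippet below (our illustration, with invented numbers and helper names) multiplies the two components of detection probability, corrects a raw count for imperfect detection, and converts the result to a density using the area of a fixed-radius plot.

```python
import math

def detection_probability(p_vocalize: float, p_detect_given_vocal: float) -> float:
    """P(detected) = P(vocalizes during count) * P(observer detects the call)."""
    return p_vocalize * p_detect_given_vocal

def estimate_abundance(count: int, p: float) -> float:
    """Correct a raw count C for imperfect detection: N_hat = C / p."""
    return count / p

p = detection_probability(0.8, 0.5)    # overall detection probability
n_hat = estimate_abundance(12, p)      # 12 birds counted -> ~30 present

# A measure of the area sampled: a 100 m fixed-radius count covers
# pi * r^2 square metres, converted here to hectares.
radius_m = 100.0
area_ha = math.pi * radius_m ** 2 / 10_000.0
density_per_ha = n_hat / area_ha
```

Distance sampling and double-observer software replace the fixed constants here with fitted detection functions; the snippet only illustrates why both components of detectability and the sampled area are needed for valid density inference.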
Origin of chaos near critical points of quantum flow.
Efthymiopoulos, C; Kalapotharakos, C; Contopoulos, G
2009-03-01
The general theory of motion in the vicinity of a moving quantum nodal point (vortex) is studied in the framework of the de Broglie-Bohm trajectory method of quantum mechanics. Using an adiabatic approximation, we find that near any nodal point of an arbitrary wave function psi there is an unstable point (called the X point) in a frame of reference moving with the nodal point. The local phase portrait always forms a characteristic pattern called the "nodal-point-X-point complex." We find general formulas for this complex as well as necessary and sufficient conditions for the validity of the adiabatic approximation. We demonstrate that chaos emerges from consecutive scattering events of the orbits with nodal-point-X-point complexes. The scattering events are of two types (called type I and type II). A theoretical model is constructed yielding the local value of the Lyapunov characteristic numbers in scattering events of both types. The local Lyapunov characteristic number scales as an inverse power of the speed of the nodal point in the rest frame, implying that it scales proportionally to the size of the nodal-point-X-point complex. It also scales as an inverse power of the distance of a trajectory from the X point's stable manifold far from the complex. This distance plays the role of an effective "impact parameter." The results of detailed numerical experiments with different wave functions, possessing one, two, or three moving nodal points, are reported. Examples are given of regular and chaotic trajectories, and the statistics of the Lyapunov characteristic numbers of the orbits are found and compared to the number of encounter events of each orbit with the nodal-point-X-point complexes. The numerical results are in agreement with the theory, and various phenomena that at first appear counterintuitive find a straightforward explanation.
International Nuclear Information System (INIS)
Dumazet, Gerard
1965-01-01
Previous works, notably by Ericson, showed that the compound nucleus model results in fluctuations of the cross sections about their average values, and that these fluctuations are not at all negligible, contrary to what had previously been assumed. This research thesis aims at establishing theoretical predictions and at showing that Ericson's predictions can be extended to polarization. After qualitatively and quantitatively recalling the underlying concepts of the compound nucleus and direct interaction models, the author shows the relevance of a statistical point of view on nuclei, which must not be confused with the statistical model itself. Then, after a recall of the results obtained by Ericson, the author reports the study of the fluctuations of differential polarization, addresses the experimental aspect of the fluctuations, and identifies the main factors in this kind of study.
ON STATISTICAL CONVERGENCE IN FINITE DIMENSIONAL SPACES
GÜNCAN, Ayşe Nur
2009-01-01
Abstract: In this paper, the notion of statistical convergence, introduced by Steinhaus (1951), is studied in Rm; some concepts and theorems whose statistical counterparts for real number sequences were already known are carried over to Rm. In addition, the concepts of the statistical limit point and the statistical cluster point are given, and it is noted that Fridy (1993) showed that these two concepts are not equivalent. These concepts are given in Rm and the i…
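The notion at the center of this record can be illustrated numerically. Below is a small, self-contained Python sketch (our own example, not taken from the paper): the indicator sequence of the perfect squares converges statistically to 0, because the natural density of its exceptional set is zero, even though the sequence does not converge in the ordinary sense.

```python
import math

# x_n = 1 if n is a perfect square, else 0: not convergent, but the set of
# indices where |x_n - 0| >= eps has natural density 0, so the statistical
# limit is 0.
def x(n: int) -> int:
    r = math.isqrt(n)
    return 1 if r * r == n else 0

def density_of_exceptions(N: int, limit: float = 0.0, eps: float = 0.5) -> float:
    """Fraction of n <= N with |x_n - limit| >= eps (natural-density estimate)."""
    bad = sum(1 for n in range(1, N + 1) if abs(x(n) - limit) >= eps)
    return bad / N

# The exception density behaves like sqrt(N)/N and tends to 0.
d_1e4 = density_of_exceptions(10_000)      # 100 squares / 10_000 terms
d_1e6 = density_of_exceptions(1_000_000)   # 1000 squares / 1_000_000 terms
```

The same density computation carries over unchanged to sequences in Rm by replacing the absolute value with a norm, which is essentially how the paper transfers the real-sequence definitions.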
Coordination compounds of cobalt(II), nickel(II), copper(II), and zinc(II) with pantothenic acid
Energy Technology Data Exchange (ETDEWEB)
Shabilalov, A.A.; Yunuskhodzhaev, A.N.; Khodzhaev, O.F.; Azizov, M.A.
1986-11-01
The compounds Ni(PANA-H)₂·4H₂O (PANA stands for pantothenic acid, and -H indicates a deprotonated ligand), Cu(PANA-H)₂·2H₂O, Zn(PANA-H)₂·H₂O, Co(PANA-H)Cl·H₂O, and Ni(PANA-H)Cl·3H₂O have been synthesized on the basis of pantothenic acid and Co(II), Ni(II), Cu(II), and Zn(II) salts in aqueous media. The compounds have been identified by elemental and X-ray diffraction analysis. Some physicochemical properties (solubility, melting point, molar conductivity) of the compounds obtained have been studied. The structure of the compounds isolated has been established on the basis of an analysis of their IR, ESR, and electronic spectra, as well as their derivatograms.
On-line statistical processing of radiation detector pulse trains with time-varying count rates
International Nuclear Information System (INIS)
Apostolopoulos, G.
2008-01-01
Statistical analysis is of primary importance for the correct interpretation of nuclear measurements, due to the inherent random nature of radioactive decay processes. This paper discusses the application of statistical signal processing techniques to the random pulse trains generated by radiation detectors. The aims of the presented algorithms are: (i) continuous, on-line estimation of the underlying time-varying count rate θ(t) and its first-order derivative dθ/dt; (ii) detection of abrupt changes in both of these quantities and estimation of their new value after the change point. Maximum-likelihood techniques, based on the Poisson probability distribution, are employed for the on-line estimation of θ and dθ/dt. Detection of abrupt changes is achieved on the basis of the generalized likelihood ratio statistical test. The properties of the proposed algorithms are evaluated by extensive simulations and possible applications for on-line radiation monitoring are discussed
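As a rough illustration of the two aims listed in the abstract (this is our sketch, not the author's actual on-line algorithm), the following Python fragment computes the Poisson maximum-likelihood estimate of a constant count rate from binned detector counts, and a generalized likelihood-ratio statistic for an abrupt rate change between two adjacent windows.

```python
import math

def ml_rate(counts: int, duration: float) -> float:
    """MLE of a constant Poisson rate: total counts / observation time."""
    return counts / duration

def glr_change_statistic(n1: int, t1: float, n2: int, t2: float) -> float:
    """2 * log-likelihood ratio for H1 (two distinct rates) vs H0 (one rate).

    Under H0 this statistic is asymptotically chi-squared with 1 degree of
    freedom, so large values flag an abrupt change between the windows.
    """
    r1, r2 = n1 / t1, n2 / t2
    r0 = (n1 + n2) / (t1 + t2)
    # Poisson log-likelihood up to terms that cancel in the ratio.
    ll = lambda n, t, r: (n * math.log(r) - r * t) if n > 0 else -r * t
    return 2.0 * (ll(n1, t1, r1) + ll(n2, t2, r2)
                  - ll(n1, t1, r0) - ll(n2, t2, r0))

rate = ml_rate(480, 60.0)                            # 8 counts per second
g_same = glr_change_statistic(100, 10.0, 100, 10.0)  # identical windows
g_jump = glr_change_statistic(100, 10.0, 200, 10.0)  # rate doubled
```

A true on-line implementation would slide or grow the windows as pulses arrive and also track the derivative of the rate; the snippet only shows the likelihood machinery that such a scheme is built on.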
DEFF Research Database (Denmark)
Barfod, Adrian
The PhD thesis presents a new method for analyzing the relationship between resistivity and lithology, as well as a method for quantifying the hydrostratigraphic modeling uncertainty related to Multiple-Point Statistical (MPS) methods. Three-dimensional (3D) geological models are im… is to improve analysis and research of the resistivity-lithology relationship and ensemble geological/hydrostratigraphic modeling. The groundwater mapping campaign in Denmark, beginning in the 1990's, has resulted in the collection of large amounts of borehole and geophysical data. The data have been compiled in two publicly available databases, the JUPITER and GERDA databases, which contain borehole and geophysical data, respectively. The large amounts of available data provided a unique opportunity for studying the resistivity-lithology relationship. The method for analyzing the resistivity…
International Nuclear Information System (INIS)
Vega, H.J. de; Sanchez, N.
2002-01-01
We complete our study of the self-gravitating gas by computing the fluctuations around the saddle point solution for the three statistical ensembles (grand canonical, canonical and microcanonical). Although the saddle point is the same for the three ensembles, the fluctuations change from one ensemble to the other. The zeroes of the small-fluctuations determinant determine the position of the critical points for each ensemble. This yields the domains of validity of the mean field approach. Only the S-wave determinant exhibits critical points. Closed formulae for the S- and P-wave determinants of fluctuations are derived. The local properties of the self-gravitating gas in thermodynamic equilibrium are studied in detail. The pressure, energy density, particle density and speed of sound are computed and analyzed as functions of the position. The equation of state turns out to be locally p(r) = T ρ_V(r), as for the ideal gas. Starting from the partition function of the self-gravitating gas, we prove in this microscopic calculation that the hydrostatic description yielding locally the ideal gas equation of state is exact in the N = ∞ limit. The dilute nature of the thermodynamic limit (N ∼ L → ∞ with N/L fixed) together with the long-range nature of the gravitational forces plays a crucial role in obtaining such an ideal gas equation. The self-gravitating gas being inhomogeneous, we have PV/(NT) = f(η) ≤ 1 for any finite volume V. The inhomogeneous particle distribution in the ground state suggests a fractal distribution with Hausdorff dimension D, which decreases slowly with increasing density (1 < D < 3). The average distance between particles is computed in Monte Carlo simulations and analytically in the mean field approach. A dramatic drop at the phase transition is exhibited, clearly illustrating the properties of the collapse.
Kogut, A.; Banday, A. J.; Bennett, C. L.; Hinshaw, G.; Lubin, P. M.; Smoot, G. F.
1995-01-01
We use the two-point correlation function of the extrema points (peaks and valleys) in the Cosmic Background Explorer (COBE) Differential Microwave Radiometers (DMR) 2 year sky maps as a test for non-Gaussian temperature distribution in the cosmic microwave background anisotropy. A maximum-likelihood analysis compares the DMR data to n = 1 toy models whose random-phase spherical harmonic components a(sub lm) are drawn from either Gaussian, chi-square, or log-normal parent populations. The likelihood of the 53 GHz (A+B)/2 data is greatest for the exact Gaussian model. There is less than 10% chance that the non-Gaussian models tested describe the DMR data, limited primarily by type II errors in the statistical inference. The extrema correlation function is a stronger test for this class of non-Gaussian models than topological statistics such as the genus.
Polewski, Przemyslaw; Yao, Wei; Heurich, Marco; Krzystek, Peter; Stilla, Uwe
2017-07-01
This paper introduces a statistical framework for detecting cylindrical shapes in dense point clouds. We target the application of mapping fallen trees in datasets obtained through terrestrial laser scanning. This is a challenging task due to the presence of ground vegetation, standing trees, DTM artifacts, as well as the fragmentation of dead trees into non-collinear segments. Our method shares the concept of voting in parameter space with the generalized Hough transform; however, two of its significant drawbacks are improved upon. First, the need to generate samples on the shape's surface is eliminated. Instead, pairs of nearby input points lying on the surface cast a vote for the cylinder's parameters based on the intrinsic geometric properties of cylindrical shapes. Second, no discretization of the parameter space is required: the voting is carried out in continuous space by means of constructing a kernel density estimator and obtaining its local maxima, using automatic, data-driven kernel bandwidth selection. Furthermore, we show how the detected cylindrical primitives can be efficiently merged to obtain object-level (entire tree) semantic information using graph-cut segmentation and a tailored dynamic algorithm for eliminating cylinder redundancy. Experiments were performed on three plots from the Bavarian Forest National Park, with ground truth obtained through visual inspection of the point clouds. It was found that, relative to sample consensus (SAC) cylinder fitting, the proposed voting framework can improve the detection completeness by up to 10 percentage points while maintaining the correctness rate.
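The continuous-space voting idea can be caricatured in one dimension. The toy Python sketch below (synthetic data and our own helper names, not the authors' pipeline) pools noisy votes for a single cylinder parameter into a kernel density estimator with a data-driven Silverman bandwidth, and takes the location of the density maximum as the detected value, so no parameter-space bins are needed.

```python
import math
import random

def silverman_bandwidth(votes):
    """Silverman's rule-of-thumb bandwidth: 1.06 * std * n^(-1/5)."""
    n = len(votes)
    mean = sum(votes) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in votes) / n)
    return 1.06 * std * n ** (-1 / 5)

def kde(votes, h):
    """Gaussian kernel density estimator over the pooled votes."""
    c = 1.0 / (len(votes) * h * math.sqrt(2 * math.pi))
    return lambda x: c * sum(math.exp(-0.5 * ((x - v) / h) ** 2) for v in votes)

random.seed(0)
true_radius = 0.25
votes = [random.gauss(true_radius, 0.02) for _ in range(300)]  # inlier votes
votes += [random.uniform(0.0, 1.0) for _ in range(100)]        # clutter votes

h = silverman_bandwidth(votes)
f = kde(votes, h)
grid = [i / 1000 for i in range(1001)]   # evaluation points, not bins
radius_hat = max(grid, key=f)
```

The paper's method does this in the full cylinder parameter space, with votes cast by pairs of surface points; the 1-D version above only shows why a continuous density maximum can replace a discretized Hough accumulator.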
Franchi, Lorenzo; Pavoni, Chiara; Faltin, Kurt; Bigliazzi, Renato; Gazzani, Francesca; Cozza, Paola
2016-09-01
The purpose of this work was to evaluate the long-term morphological mandibular changes induced by functional treatment of Class II malocclusion with mandibular retrusion. Forty patients (20 females, 20 males) with Class II malocclusion consecutively treated with either a Bionator or an Activator followed by fixed appliances were compared with a control group of 40 subjects (19 females, 21 males) with untreated Class II malocclusion. Lateral cephalograms were available at the start of treatment (T1, mean age 9.9 years), at the end of treatment with functional appliances (T2, mean age 12.2 years), and at long-term follow-up (T3, mean age 18.3 years). Mandibular shape changes were analyzed on the lateral cephalograms of the subjects in both groups via thin-plate spline (TPS) analysis. Shape differences were statistically analyzed by conducting permutation tests on Goodall F statistics. In the long term, both the treated and control groups exhibited significant longitudinal mandibular shape changes characterized by upward and forward dislocation of point Co associated with a vertical extension in the gonial region and backward dislocation of point B. Functional appliances induced a significant posterior morphogenetic rotation of the mandible over the short term. The treated and control groups demonstrated similar mandibular shape over the long term.
Mazaheri, H; Ghaedi, M; Ahmadi Azqhandi, M H; Asfaram, A
2017-05-10
Analytical chemists apply statistical methods for both the validation and prediction of proposed models. Methods are required that are adequate for finding the typical features of a dataset, such as nonlinearities and interactions. Boosted regression trees (BRTs), as an ensemble technique, are fundamentally different from conventional techniques, which aim to fit a single parsimonious model. In this work, BRT, artificial neural network (ANN) and response surface methodology (RSM) models have been used for the optimization and/or modeling of the stirring time (min), pH, adsorbent mass (mg) and concentrations of MB and Cd²⁺ ions (mg L⁻¹) in order to develop predictive equations for simulating the efficiency of MB and Cd²⁺ adsorption based on the experimental data set. Activated carbon, as an adsorbent, was synthesized from walnut wood waste, which is abundant, non-toxic, cheap and locally available. This adsorbent was characterized using different techniques such as FT-IR, BET, SEM, point of zero charge (pH_pzc) and the determination of oxygen-containing functional groups. The influence of the various parameters (i.e. pH, stirring time, adsorbent mass and concentrations of MB and Cd²⁺ ions) on the percentage removal was assessed by investigation of the sensitivity function, variable importance rankings (BRT) and analysis of variance (RSM). Furthermore, a central composite design (CCD) combined with a desirability function approach (DFA) as a global optimization technique was used for the simultaneous optimization of the effective parameters. The applicability of the BRT, ANN and RSM models for the description of the experimental data was examined using four statistical criteria (absolute average deviation (AAD), mean absolute error (MAE), root mean square error (RMSE) and coefficient of determination (R²)). All three models demonstrated good predictions in this study. The BRT model was more precise compared to the other models, and this showed …
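Of the techniques this record lists, the desirability function approach is simple enough to sketch. The Python fragment below (illustrative bounds and response values, not the paper's data) maps each response to a Derringer-type "larger is better" desirability on [0, 1] and combines them by a geometric mean, which is the score a CCD/DFA optimization would maximize over the operating conditions.

```python
import math

def desirability(y, low, target, weight=1.0):
    """Derringer-type 'larger is better' desirability on [0, 1]."""
    if y <= low:
        return 0.0
    if y >= target:
        return 1.0
    return ((y - low) / (target - low)) ** weight

def overall_desirability(ds):
    """Geometric mean: any unacceptable response (d = 0) zeroes the score."""
    return math.prod(ds) ** (1.0 / len(ds))

# Two simulated responses at one operating condition:
# percentage removal of MB and of Cd(II), scaled between 50% and 100%.
d_mb = desirability(92.0, low=50.0, target=100.0)
d_cd = desirability(78.0, low=50.0, target=100.0)
D = overall_desirability([d_mb, d_cd])
```

In a full optimization, `D` is evaluated on the fitted response-surface (or BRT/ANN) predictions across the design space and the condition with the highest `D` is reported.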
Jana, Madhusudan
2015-01-01
This statistical mechanics text is self-sufficient and written in a lucid manner, keeping in mind the examination system of the universities. The need to study this subject and its relation to thermodynamics is discussed in detail. Starting from the Liouville theorem, statistical mechanics is developed thoroughly. All three types of statistical distribution functions are derived separately, with their range of applications and limitations. Non-interacting ideal Bose and Fermi gases are discussed thoroughly. Properties of liquid He-II and the corresponding models are depicted. White dwarfs and condensed matter physics, transport phenomena (thermal and electrical conductivity, Hall effect, magnetoresistance, viscosity, diffusion, etc.) are discussed. A basic understanding of the Ising model is given to explain phase transitions. The book ends with a detailed coverage of the method of ensembles (namely microcanonical, canonical and grand canonical) and their applications. Various numerical and conceptual problems ar…
Nonequilibrium statistical averages and thermo field dynamics
International Nuclear Information System (INIS)
Marinaro, A.; Scarpetta, Q.
1984-01-01
An extension of thermo field dynamics is proposed, which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion, it is pointed out that the proposed procedure for computing time-dependent statistical averages gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles.
Directory of Open Access Journals (Sweden)
Yin Yanshu
2017-12-01
In this paper, a location-based multiple-point statistics method is developed to model a non-stationary reservoir. The proposed method characterizes the relationship between the sedimentary pattern and the deposit location using the relative central position distance function, which alleviates the requirement that the training image and the simulated grids have the same dimension. The weights in every direction of the distance function can be changed to characterize the reservoir heterogeneity in various directions. The local integral replacements of data events, structured random path, distance tolerance and multi-grid strategy are applied to reproduce the sedimentary patterns and obtain a more realistic result. This method is compared with the traditional Snesim method using a synthesized 3-D training image of Poyang Lake and a reservoir model of Shengli Oilfield in China. The results indicate that the new method can reproduce the non-stationary characteristics better than the traditional method and is more suitable for simulation of delta-front deposits. These results show that the new method is a powerful tool for modelling a reservoir with non-stationary characteristics.
Mazzotta, Cosimo; Traversi, Claudio; Mellace, Pierfrancesco; Bagaglia, Simone A; Zuccarini, Silvio; Mencucci, Rita; Jacob, Soosan
2017-10-04
To assess keratoconus (KC) progression in patients with allergies who also tested positive on the surface matrix metalloproteinase 9 (MMP-9) point-of-care test. Prospective comparative study including 100 stage I-II keratoconic patients, mean age 16.7±4.6 years. All patients underwent an anamnestic questionnaire for concomitant allergic diseases and were screened with the MMP-9 point-of-care test. Patients were divided into two groups: KC patients with allergies (KC AL) and KC patients without allergies (KC NAL). Severity of allergy was established by papillary subtarsal response (PSR) grade, and KC progression was assessed by Scheimpflug corneal tomography and corrected distance visual acuity (CDVA) measurement over a 12-month follow-up. The KC AL group included 52 patients and the KC NAL group 48. In the KC AL group, 42/52 patients (81%) were positive on the MMP-9 point-of-care test, versus two positive patients in the KC NAL group (4%). The KC AL group data showed a statistically significant decrease of average CDVA, from 0.155±0.11 to 0.301±0.2 logarithm of the minimum angle of resolution (P … average. The KC NAL group revealed a slight KC progression without statistically significant changes. The Pearson correlation test showed a high correlation between Kmax worsening and severity of PSR in the KC AL group. The study demonstrated a statistically significant progression of KC in patients with concomitant allergies who were positive on the MMP-9 point-of-care test versus negative patients. A high correlation between severity of allergy and KC progression was documented.
DEFF Research Database (Denmark)
Peters, Christian Daugaard; Kjaergaard, Krista D; Jensen, Jens D
2014-01-01
Agents blocking the renin-angiotensin-aldosterone system are frequently used in patients with end-stage renal disease, but whether they exert beneficial cardiovascular effects is unclear. Here, the long-term effects of the angiotensin II receptor blocker irbesartan were studied in hemodialysis …, and residual renal function. Brachial blood pressure decreased significantly in both groups, but there was no significant difference between placebo and irbesartan. Use of additional antihypertensive medication, ultrafiltration volume, and dialysis dosage were not different. Intermediate cardiovascular end points such as central aortic blood pressure, carotid-femoral pulse wave velocity, left ventricular mass index, N-terminal brain natriuretic prohormone, heart rate variability, and plasma catecholamines were not significantly affected by irbesartan treatment. Changes in systolic blood pressure during …
Statistical mechanics in the context of special relativity. II.
Kaniadakis, G
2005-09-01
The special relativity laws emerge as one-parameter (light speed) generalizations of the corresponding laws of classical physics. These generalizations, imposed by the Lorentz transformations, affect both the definition of the various physical observables (e.g., momentum, energy, etc.), as well as the mathematical apparatus of the theory. Here, following the general lines of [Phys. Rev. E 66, 056125 (2002)], we show that the Lorentz transformations impose also a proper one-parameter generalization of the classical Boltzmann-Gibbs-Shannon entropy. The obtained relativistic entropy permits us to construct a coherent and self-consistent relativistic statistical theory, preserving the main features of the ordinary statistical theory, which is recovered in the classical limit. The predicted distribution function is a one-parameter continuous deformation of the classical Maxwell-Boltzmann distribution and has a simple analytic form, showing power law tails in accordance with the experimental evidence. Furthermore, this statistical mechanics can be obtained as the stationary case of a generalized kinetic theory governed by an evolution equation obeying the H theorem and reproducing the Boltzmann equation of the ordinary kinetics in the classical limit.
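For background (these expressions follow Kaniadakis' standard definitions rather than being quoted from the abstract itself), the one-parameter deformation underlying this relativistic statistics can be written as:

```latex
% \kappa-deformed exponential and logarithm (Kaniadakis statistics)
\exp_{\kappa}(x) = \left(\sqrt{1+\kappa^{2}x^{2}} + \kappa x\right)^{1/\kappa},
\qquad
\ln_{\kappa}(x) = \frac{x^{\kappa} - x^{-\kappa}}{2\kappa}.

% The deformed entropy generalizing Boltzmann-Gibbs-Shannon,
% recovered in the classical limit \kappa \to 0:
S_{\kappa} = -\sum_{i} \frac{p_i^{1+\kappa} - p_i^{1-\kappa}}{2\kappa},
\qquad
\lim_{\kappa \to 0} S_{\kappa} = -\sum_{i} p_i \ln p_i .

% Stationary distribution with power-law tails, since
% \exp_{\kappa}(-x) \sim (2\kappa x)^{-1/\kappa} as x \to \infty:
n(E) \propto \exp_{\kappa}\!\left(-\frac{E-\mu}{T}\right).
```

The asymptotic power-law behaviour of exp_κ(−x) is what produces the power-law tails of the predicted distribution mentioned above, in place of the exponential tails of Maxwell-Boltzmann.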
Quantum field theory of point particles and strings
Hatfield, Brian
1992-01-01
The purpose of this book is to introduce string theory without assuming any background in quantum field theory. Part I of this book follows the development of quantum field theory for point particles, while Part II introduces strings. All of the tools and concepts that are needed to quantize strings are developed first for point particles. Thus, Part I presents the main framework of quantum field theory and provides for a coherent development of the generalization and application of quantum field theory for point particles to strings.Part II emphasizes the quantization of the bosonic string.
A course in mathematical statistics and large sample theory
Bhattacharya, Rabi; Patrangenaru, Victor
2016-01-01
This graduate-level textbook is primarily aimed at graduate students of statistics, mathematics, science, and engineering who have had an undergraduate course in statistics, an upper division course in analysis, and some acquaintance with measure theoretic probability. It provides a rigorous presentation of the core of mathematical statistics. Part I of this book constitutes a one-semester course on basic parametric mathematical statistics. Part II deals with the large sample theory of statistics, parametric and nonparametric, and its contents may be covered in one semester as well. Part III provides brief accounts of a number of topics of current interest for practitioners and other disciplines whose work involves statistical methods. Large sample theory is presented with many worked examples, numerical calculations, and simulations to illustrate the theory. Appendices provide ready access to a number of standard results, with many proofs. Solutions are given to a number of selected exercises from Part I. Part II exercises with …
Automating Exams for a Statistics Course: II. A Case Study.
Michener, R. Dean; And Others
A specific application of the process of automating exams for any introductory statistics course is described. The process of automating exams was accomplished by using the Statistical Test Item Collection System (STICS). This system was first used to select a set of questions based on course requirements established in advance; afterward, STICS…
Directory of Open Access Journals (Sweden)
A. Calantropio
2018-05-01
Due to the increasing number of low-cost sensors widely accessible on the market, and because of the assumed correctness of the semi-automatic workflows for 3D reconstruction implemented in recent commercial software, more and more users nowadays operate without following the rigour of classical photogrammetric methods. This behaviour often naively leads to 3D products that lack metric quality assessment. This paper proposes and analyses an approach that gives users the possibility to preserve the trustworthiness of the metric information inherent in the 3D model, without sacrificing the automation offered by modern photogrammetry software. First, the importance of data quality assessment is outlined, together with a recall of photogrammetry best practices. With the purpose of guiding the user through a correct pipeline for a certified 3D model reconstruction, an operative workflow is proposed, focusing on the first part of the object reconstruction steps (tie-point extraction, camera calibration, and relative orientation). A new GUI (graphical user interface) developed for the open source MicMac suite is then presented, and a sample dataset is used for the evaluation of the photogrammetric block orientation using statistically obtained quality descriptors. The results and future directions are then presented and discussed.
Pecan nutshell as biosorbent to remove Cu(II), Mn(II) and Pb(II) from aqueous solutions.
Vaghetti, Julio C P; Lima, Eder C; Royer, Betina; da Cunha, Bruna M; Cardoso, Natali F; Brasil, Jorge L; Dias, Silvio L P
2009-02-15
In the present study, we report for the first time the feasibility of pecan nutshell (PNS, Carya illinoensis) as an alternative biosorbent to remove Cu(II), Mn(II) and Pb(II) metal ions from aqueous solutions. The ability of PNS to remove the metal ions was investigated using a batch biosorption procedure. The effects of pH and biosorbent dosage on the adsorption capacities of PNS were studied. Four kinetic models were tested, with the adsorption kinetics best fitted by a fractionary-order kinetic model. In addition, the kinetic data were fitted to an intra-particle diffusion model, presenting three linear regions, indicating that the kinetics of adsorption follow multiple sorption rates. The equilibrium data were fitted to the Langmuir, Freundlich, Sips and Redlich-Peterson isotherm models. Taking into account a statistical error function, the data were best fitted by the Sips isotherm model. The maximum biosorption capacities of PNS were 1.35, 1.78 and 0.946 mmol g⁻¹ for Cu(II), Mn(II) and Pb(II), respectively.
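For reference, the Sips isotherm that best described the equilibrium data has a simple closed form. The sketch below (illustrative parameter values and our own helper names, not the paper's fitted numbers) evaluates it, shows its Langmuir special case, and defines a sum-of-squared-errors criterion of the kind used to compare isotherm models against (C, q) data.

```python
def sips(c, q_max, k, n):
    """Sips isotherm: q = q_max * (K*C)^n / (1 + (K*C)^n)."""
    t = (k * c) ** n
    return q_max * t / (1 + t)

def langmuir(c, q_max, k):
    """Langmuir isotherm, the n = 1 special case of Sips."""
    return sips(c, q_max, k, 1.0)

def sse(model, params, data):
    """Sum-of-squared-errors between a model and (C, q) equilibrium data."""
    return sum((model(c, *params) - q) ** 2 for c, q in data)

# Sips saturates at q_max for large C (here q_max mimics the 1.35 mmol/g
# capacity reported for Cu(II); K and n are invented).
q_low = sips(0.01, 1.35, 10.0, 1.0)
q_sat = sips(1e6, 1.35, 10.0, 1.0)   # approaches q_max
```

In practice the parameters (q_max, K, n) are obtained by minimizing `sse` (or another statistical error function, as in the paper) over the measured equilibrium points.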
Understanding Computational Bayesian Statistics
Bolstad, William M
2011-01-01
A hands-on introduction to computational statistics from a Bayesian point of view Providing a solid grounding in statistics while uniquely covering the topics from a Bayesian perspective, Understanding Computational Bayesian Statistics successfully guides readers through this new, cutting-edge approach. With its hands-on treatment of the topic, the book shows how samples can be drawn from the posterior distribution when the formula giving its shape is all that is known, and how Bayesian inferences can be based on these samples from the posterior. These ideas are illustrated on common statistic
An Improved Statistical Point-source Foreground Model for the Epoch of Reionization
Energy Technology Data Exchange (ETDEWEB)
Murray, S. G.; Trott, C. M.; Jordan, C. H. [ARC Centre of Excellence for All-sky Astrophysics (CAASTRO) (Australia)
2017-08-10
We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distributions as a function of flux density and spatial position (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions, and shows that for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can yield bias in the final power-spectrum and under-estimate uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it is dependent on the relative abundance of faint sources, to the effect that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.
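The "arbitrarily broken power law" generalization of the source counts can be sketched directly. The Python fragment below (invented slopes, break fluxes, and normalization, not the paper's fitted values) evaluates a piecewise differential count dN/dS that is kept continuous across each break by carrying the normalization from segment to segment.

```python
def broken_power_law(s, edges, slopes, norm):
    """dN/dS at flux s for segments [edges[i], edges[i+1]) with slope slopes[i].

    edges[0] is the pivot flux where dN/dS = norm; above the last edge the
    final slope simply continues. Continuity is enforced at every break.
    """
    amp, lo = norm, edges[0]
    for brk, alpha in zip(edges[1:], slopes):
        if s < brk:
            return amp * (s / lo) ** alpha
        amp *= (brk / lo) ** alpha   # carry the normalization across the break
        lo = brk
    return amp * (s / lo) ** slopes[-1]

# One break at 0.1 flux units: slope -1.5 below it, a steeper -2.5 above.
edges, slopes = [1e-3, 1e-1], [-1.5, -2.5]
below = broken_power_law(0.099, edges, slopes, 1.0)
at_break = broken_power_law(0.1, edges, slopes, 1.0)
above = broken_power_law(0.2, edges, slopes, 1.0)
```

Adding more entries to `edges` and `slopes` gives the arbitrarily broken case; the foreground covariance then involves integrals of this dN/dS over the flux range of unsubtracted sources.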
Braid group, knot theory and statistical mechanics II
Yang Chen Ning
1994-01-01
The present volume is an updated version of the book edited by C N Yang and M L Ge on the topics of braid groups and knot theory, which are related to statistical mechanics. This book is based on the 1989 volume but has new material included and new contributors.
Micellar effect on metal-ligand complexes of Co(II), Ni(II), Cu(II) and Zn(II) with citric acid
Directory of Open Access Journals (Sweden)
Nageswara Rao Gollapalli
2009-12-01
Chemical speciation of citric acid complexes of Co(II), Ni(II), Cu(II) and Zn(II) was investigated pH-metrically in 0.0-2.5% anionic, cationic and neutral micellar media. The primary alkalimetric data were pruned with the SCPHD program. The existence of different binary species was established from modeling studies using the computer program MINIQUAD75. Alkalimetric titrations were carried out at different relative concentrations (M:L:X = 1:2:5, 1:3:5, 1:5:3) of metal (M) to citric acid. The selection of the best chemical models was based on statistical parameters and residual analysis. The species detected were MLH, ML2, ML2H and ML2H2. The trend in the variation of stability constants with the mole fraction of the medium is explained on the basis of electrostatic and non-electrostatic forces. Distributions of the species with pH at different compositions of micellar media are also presented.
Fractional statistics of the vortex in two-dimensional superfluids
International Nuclear Information System (INIS)
Chiao, R.Y.; Hansen, A.; Moulthrop, A.A.
1985-01-01
The quantum behavior of two identical point vortices (e.g., in a superfluid ⁴He thin film) is studied. It is argued that this system obeys neither Bose nor Fermi statistics, but intermediate or θ statistics: we find that a single vortex in this system possesses quarter-fractional statistics (i.e., θ = π/2 or 3π/2). The source of the θ statistics is identified in the relative zero-point motion of the vortices
Statistical correlations in an ideal gas of particles obeying fractional exclusion statistics.
Pellegrino, F M D; Angilella, G G N; March, N H; Pucci, R
2007-12-01
After a brief discussion of the concepts of fractional exchange and fractional exclusion statistics, we report partly analytical and partly numerical results on thermodynamic properties of assemblies of particles obeying fractional exclusion statistics. The effect of dimensionality is one focal point, the ratio μ/k_BT of chemical potential to thermal energy being obtained numerically as a function of a scaled particle density. Pair correlation functions are also presented as a function of the statistical parameter, with Friedel oscillations developing close to the fermion limit, for sufficiently large density.
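The mean occupation numbers underlying such thermodynamic calculations follow Wu's transcendental relation for exclusion statistics. A small sketch solving it by bisection (the function name and solver choice are illustrative, not the authors' code):

```python
import math

def occupation(x, g, iters=200):
    """Mean occupation n(x) under fractional exclusion statistics, x = (eps - mu)/kT >= 0.

    Solves Wu's relation w^g * (1 + w)^(1 - g) = exp(x) for w by bisection,
    then returns n = 1/(w + g). g = 0 reproduces Bose-Einstein statistics,
    g = 1 Fermi-Dirac, and intermediate g (e.g. 1/2, "semions") lies between."""
    target = math.exp(x)
    f = lambda w: w**g * (1.0 + w)**(1.0 - g)   # strictly increasing in w
    lo, hi = 1e-300, target + 10.0               # f(hi) >= hi > target
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) < target:
            lo = mid
        else:
            hi = mid
    return 1.0 / (0.5 * (lo + hi) + g)

x = 1.0
bose = occupation(x, 0.0)    # should match 1/(e^x - 1)
fermi = occupation(x, 1.0)   # should match 1/(e^x + 1)
print(bose, fermi, occupation(x, 0.5))
```

Checking the two limits against the closed-form Bose and Fermi distributions is a quick sanity test of any such solver.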
Directory of Open Access Journals (Sweden)
E. E. Woodfield
2002-12-01
A statistical investigation of the Doppler spectral width parameter routinely observed by HF coherent radars has been conducted between the Northern and Southern Hemispheres for the nightside ionosphere. Data from the SuperDARN radars at Thykkvibær, Iceland and Syowa East, Antarctica have been employed for this purpose. Both radars frequently observe regions of high (>200 m s⁻¹) spectral width polewards of low (<200 m s⁻¹) spectral width. Three years of data from both radars have been analysed for both the spectral width and the line-of-sight velocity. The pointing direction of these two radars is such that the flow reversal boundary may be estimated from the velocity data; therefore, we have an estimate of the open/closed field line boundary location for comparison with the high spectral widths. Five key observations regarding the behaviour of the spectral width on the nightside have been made. These are (i) the two radars observe similar characteristics on a statistical basis; (ii) a latitudinal dependence related to magnetic local time is found in both hemispheres; (iii) a seasonal dependence of the spectral width is observed by both radars, with a marked absence of latitudinal dependence during the summer months; (iv) in general, the Syowa East spectral width tends to be larger than that from Iceland East; and (v) the highest spectral widths seem to appear on both open and closed field lines. Points (i) and (ii) indicate that the cause of high spectral width is magnetospheric in origin. Point (iii) suggests that either the propagation of the HF radio waves to regions of high spectral width or the generating mechanism(s) for high spectral width is affected by solar illumination or other seasonal effects. Point (iv) suggests that the radar beams from each of the radars are subject either to different instrumental or propagation effects, or to different geophysical conditions due to their locations, although we suggest that this result is more likely to
International Nuclear Information System (INIS)
Hjerpe, T.; Samuelsson, C.
1999-01-01
There is a potential risk that hazardous radioactive sources could enter the environment, e.g. via satellite debris, smuggled radioactive goods or lost metal scrap. From a radiation protection point of view there is a need for rapid and reliable methods for locating and identifying such sources. Car-borne and air-borne detector systems are suitable for the task. The premise of this work is a situation where the missing radionuclide is known, which is not an unlikely scenario. The probability that the source is located near a road can be high, thus motivating a car-borne spectrometer system. The main objective is to optimise on-line statistical methods in order to achieve a high probability of locating point sources, or hot spots, while still having reasonably few false alarms from variations in the natural background radiation. Data were obtained from a car-borne 3 litre NaI(Tl) detector and two point sources, located at various distances from the road. The nuclides used were ¹³⁷Cs and ¹³¹I. Spectra were measured stationary on the road. From these measured spectra we have reconstructed spectra applicable to different speeds and sampling times; a sampling time of 3 seconds and a speed of 50 km/h are used in this work. The maximum distance at which a source can be located from the road and still be detected is estimated with four different statistical analysis methods. This distance is called the detection distance, DD. Each method is applied to gross counts in the full-energy peak window. For each method, alarm thresholds have been calculated from background data obtained in Scania (Skåne), in the south of Sweden. The results show a 30-50% difference in DDs. With this semi-theoretical approach, the two sources could be detected from 250 m (¹³⁷Cs, 6 GBq) and 200 m (¹³¹I, 4 GBq). (au)
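The trade-off between detection probability and false-alarm rate described above can be illustrated with a Gaussian approximation to the Poisson background. A sketch with hypothetical count rates, not the study's actual Scanian background data:

```python
import math
from statistics import NormalDist

def alarm_threshold(mean_bg_counts, false_alarms_per_hour, samples_per_hour=1200):
    """Gross-count alarm threshold for one counting window of a mobile spectrometer.

    Uses the Gaussian approximation to the Poisson background (reasonable for
    means well above ~10 counts). samples_per_hour = 1200 corresponds to the
    3 s sampling time used in the study."""
    p_fa = false_alarms_per_hour / samples_per_hour   # per-sample false-alarm probability
    k = NormalDist().inv_cdf(1.0 - p_fa)              # one-sided z-score
    return mean_bg_counts + k * math.sqrt(mean_bg_counts)

# Hypothetical: 100 background counts per 3 s window, one tolerated false alarm per hour
t = alarm_threshold(100.0, 1.0)
print(round(t, 1))
```

Raising the tolerated false-alarm rate lowers the threshold, which in turn increases the detection distance; the four methods compared in the study differ precisely in how this threshold is set.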
Kursk Operation Simulation and Validation Exercise - Phase II (KOSAVE II)
National Research Council Canada - National Science Library
Bauman, Walter
1998-01-01
... (KOSAVE) Study (KOSAVE II) documents, in this report, a statistical record of the Kursk battle, as represented in the KDB, for use both as a standalone descriptive record for historians and as a baseline for a subsequent Phase...
Parametric methods for spatial point processes
DEFF Research Database (Denmark)
Møller, Jesper
(This text is submitted for the volume ‘A Handbook of Spatial Statistics', edited by A.E. Gelfand, P. Diggle, M. Fuentes, and P. Guttorp, to be published by Chapman and Hall/CRC Press, and planned to appear as Chapter 4.4 with the title ‘Parametric methods'.) 1 Introduction This chapter considers inference procedures for parametric spatial point process models. The widespread use of sensible but ad hoc methods based on summary statistics of the kind studied in Chapter 4.3 has over the last two decades been supplemented by likelihood-based methods for parametric spatial point process models. Likelihood-based inference is studied in Section 4, and Bayesian inference in Section 5. As the development in computer technology and computational statistics continues, computationally intensive simulation-based methods for likelihood inference will probably play an increasing role in the statistical analysis of spatial point processes.
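Likelihood inference for a parametric point process can be illustrated in the simplest interaction-free case: a Poisson process on [0, 1] with exponential intensity, fitted by grid-search maximum likelihood. A toy stand-in for the chapter's methods; all parameter values here are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an inhomogeneous Poisson process on [0, 1] with intensity
# lam(x) = exp(b0 + b1 * x), by thinning a homogeneous dominating process.
b0_true, b1_true = 4.0, 1.5
lam = lambda x, b0, b1: np.exp(b0 + b1 * x)
lam_max = lam(1.0, b0_true, b1_true)
n_prop = rng.poisson(lam_max)
x_prop = rng.uniform(size=n_prop)
keep = rng.uniform(size=n_prop) < lam(x_prop, b0_true, b1_true) / lam_max
pts = x_prop[keep]

def loglik(b0, b1):
    """Poisson process log-likelihood: sum_i log lam(x_i) - integral of lam over [0, 1]."""
    integral = (np.exp(b0 + b1) - np.exp(b0)) / b1   # closed form for the exponential intensity
    return b0 * len(pts) + b1 * pts.sum() - integral

# Crude maximum likelihood by grid search (real code would use a proper optimizer)
grid = [(b0, b1) for b0 in np.linspace(3.0, 5.0, 81) for b1 in np.linspace(0.5, 2.5, 101)]
b0_hat, b1_hat = max(grid, key=lambda b: loglik(*b))
print(b0_hat, b1_hat)
```

For Markov (interacting) point processes the normalizing constant of the likelihood is intractable, which is exactly why the simulation-based methods mentioned in the abstract are needed.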
Energy Technology Data Exchange (ETDEWEB)
Liang, Xing; Su, Yibing [College of Chemistry and Chemical Engineering, Lanzhou University, Lanzhou (China); Yang, Ying, E-mail: Yangying@lzu.edu.cn [College of Chemistry and Chemical Engineering, Lanzhou University, Lanzhou (China); Qin, Wenwu [College of Chemistry and Chemical Engineering, Lanzhou University, Lanzhou (China)
2012-02-15
Highlights: ▶ Separation and recovery of Pb(II) from a solution of Pb(II) and Zn(II) was performed. ▶ Pb(II) can be recovered using the hydrolysis product of poly(styrene-co-maleic anhydride). ▶ The adsorption capacity of the PSMA resin for Pb(II) is 641.62 mg g⁻¹. ▶ Pb(II) can be recovered through desorption of Pb-PSMA into Pb(II) ion and the solid PSMA resin. ▶ The resin can be repeatedly used through desorption under inorganic acid conditions (6 M H₂SO₄). - Abstract: A Pb-Zn separation/preconcentration technique, based on the complex formation reaction of Pb(II) and Zn(II) with the copolymer poly(styrene-co-maleic anhydride) (PSMA), without adding any carrier element, was developed. The effects of several experimental parameters such as solution pH, temperature and adsorption time were studied. The experimental results show that PSMA resin-Pb equilibrium was achieved in 2 min and that the Pb(II) loading capacity reaches 641.62 mg g⁻¹ in aqueous solution under optimum conditions, much higher than the Zn(II) loading capacity attained within 80 min. The adsorption test for Pb(II) indicates that PSMA can recover Pb(II) from a mixed solution of Pb(II), Zn(II) and light metals such as Ca(II) and Mg(II) with a higher adsorption rate and a larger selectivity coefficient. A further study indicates that the PSMA chelating resin used to recover Pb(II) can be regenerated with mineral acid (6 M H₂SO₄). PSMA was synthesized by radical polymerization and tested as an adsorbent for the selective recovery of Pb(II). In addition, the formation procedure and structure of the Pb-PSMA complex were also studied. Both PSMA and the Pb-PSMA complex were characterized by means of FTIR spectroscopy, elemental analysis, gel permeation chromatography (GPC) and atomic absorption spectrometry (AAS).
Statistical mechanics of solitons
International Nuclear Information System (INIS)
Bishop, A.
1980-01-01
The status of statistical mechanics theory (classical and quantum, statics and dynamics) is reviewed for 1-D soliton or solitary-wave-bearing systems. Primary attention is given to (i) perspective for existing results with evaluation and representative literature guide; (ii) motivation and status report for remaining problems; (iii) discussion of connections with other 1-D topics
Nonparametric Change Point Diagnosis Method of Concrete Dam Crack Behavior Abnormality
Directory of Open Access Journals (Sweden)
Zhanchao Li
2013-01-01
The diagnosis of concrete dam crack behavior abnormality has always been a hot spot and a difficulty in the safety monitoring of hydraulic structures. Based on the manifestation of concrete dam crack behavior abnormality in parametric and nonparametric statistical models, the internal relation between concrete dam crack behavior abnormality and statistical change-point theory is analyzed in depth, from the model structure instability of the parametric statistical model and the change in sequence distribution law of the nonparametric statistical model. On this basis, through the reduction of the change-point problem, the establishment of a basic nonparametric change-point model, and asymptotic analysis of the test method for the basic change-point problem, a nonparametric change-point diagnosis method for concrete dam crack behavior abnormality is created, taking into account that in practice concrete dam crack behavior may exhibit multiple abnormality points. The method is applied to an actual project, demonstrating its effectiveness and scientific reasonableness. The method has a complete theoretical basis and strong practicality, with broad application prospects in actual projects.
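A classic nonparametric change-point device of the kind the abstract builds on is the Mann-Whitney/Pettitt statistic, which assumes no distribution law for the sequence. A minimal sketch, not the paper's specific test:

```python
import numpy as np

def pettitt_change_point(x):
    """Nonparametric (Pettitt/Mann-Whitney type) change-point locator.

    For each split k, U_k sums the signs of all pairwise differences between the
    post-split and pre-split sub-sequences; the index maximizing |U_k| is the
    most likely single change-point. No distributional form is assumed."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    u = np.empty(n - 1)
    for k in range(1, n):
        s = np.sign(x[k:, None] - x[None, :k])   # pairwise comparisons across the split
        u[k - 1] = s.sum()
    k_hat = int(np.argmax(np.abs(u))) + 1
    return k_hat, u[k_hat - 1]

# Synthetic monitoring series with a mean shift at index 60
rng = np.random.default_rng(42)
series = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(1.5, 1.0, 60)])
k, u = pettitt_change_point(series)
print(k)
```

Handling multiple abnormality points, as the paper does, typically means applying such a locator recursively to the sub-sequences on either side of each detected change-point.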
Martinet, Nicolas; Schneider, Peter; Hildebrandt, Hendrik; Shan, HuanYuan; Asgari, Marika; Dietrich, Jörg P.; Harnois-Déraps, Joachim; Erben, Thomas; Grado, Aniello; Heymans, Catherine; Hoekstra, Henk; Klaes, Dominik; Kuijken, Konrad; Merten, Julian; Nakajima, Reiko
2018-02-01
We study the statistics of peaks in a weak-lensing reconstructed mass map of the first 450 deg² of the Kilo Degree Survey (KiDS-450). The map is computed with aperture masses directly applied to the shear field with an NFW-like compensated filter. We compare the peak statistics in the observations with those of simulations for various cosmologies to constrain the cosmological parameter S₈ = σ₈√(Ω_m/0.3), which probes the (Ω_m, σ₈) plane perpendicularly to its main degeneracy. We estimate S₈ = 0.750 ± 0.059, using peaks in the signal-to-noise range 0 ≤ S/N ≤ 4, and accounting for various systematics, such as multiplicative shear bias, mean redshift bias, baryon feedback, intrinsic alignment, and shear-position coupling. These constraints are ~25 per cent tighter than the constraints from the high-significance peaks alone (3 ≤ S/N ≤ 4), which typically trace single massive haloes. This demonstrates the gain of information from low-S/N peaks. However, we find that including S/N KiDS-450. Combining shear peaks with non-tomographic measurements of the shear two-point correlation functions yields a ~20 per cent improvement in the uncertainty on S₈ compared to the shear two-point correlation functions alone, highlighting the great potential of peaks as a cosmological probe.
ROBUST CYLINDER FITTING IN THREE-DIMENSIONAL POINT CLOUD DATA
Directory of Open Access Journals (Sweden)
A. Nurunnabi
2017-05-01
This paper investigates the problem of cylinder fitting in laser scanning three-dimensional Point Cloud Data (PCD). Most existing methods require full cylinder data, do not study the presence of outliers, and are not statistically robust. Yet mobile laser scanning in particular often yields incomplete data, as street poles for example are scanned only from the road. Moreover, outliers are common; they may occur as random or systematic errors, and may be scattered and/or clustered. In this paper, we present a statistically robust cylinder fitting algorithm for PCD that combines Robust Principal Component Analysis (RPCA) with robust regression. Robust principal components as obtained by RPCA allow cylinder directions to be estimated more accurately, and an existing efficient circle fitting algorithm following robust regression principles properly fits the cylinder. We demonstrate the performance of the proposed method on artificial and real PCD. Results show that the proposed method provides more accurate and robust results: (i) in the presence of noise and a high percentage of outliers, (ii) for incomplete as well as complete data, (iii) for small and large numbers of points, and (iv) for different sizes of radius. On 1000 simulated quarter cylinders of 1 m radius with 10% outliers, a PCA-based method fitted cylinders with an average radius of 3.63 m; the proposed method, in contrast, fitted cylinders of 1.02 m average radius. The algorithm has potential in applications such as fitting cylindrical objects (e.g., light and traffic poles), diameter at breast height estimation for trees, and building and bridge information modelling.
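The direction-estimation step can be illustrated with classical, non-robust PCA; the paper's contribution is to replace this step with Robust PCA, which this sketch deliberately does not implement:

```python
import numpy as np

def cylinder_axis_pca(points):
    """Estimate a cylinder's axis direction from surface points via plain PCA.

    For a cylinder that is long relative to its radius, the direction of largest
    variance of the surface points aligns with the axis. This classical estimate
    breaks down under outliers, which is why the paper uses Robust PCA instead."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]   # unit vector along the principal direction

# Synthetic cylinder: radius 1, axis along z, height 10, mild noise
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 2000)
z = rng.uniform(0.0, 10.0, 2000)
pts = np.column_stack([np.cos(theta), np.sin(theta), z]) + rng.normal(0.0, 0.01, (2000, 3))
axis = cylinder_axis_pca(pts)
print(abs(axis[2]))   # close to 1 when the z-axis is recovered
```

Once the axis is known, projecting the points onto the plane perpendicular to it reduces the problem to the (robust) circle fit mentioned in the abstract.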
DEFF Research Database (Denmark)
Hansen, Jens Zangenberg; Brøndsted, Povl
2013-01-01
In a previous study, Trias et al. [1] determined the minimum size of a statistical representative volume element (SRVE) of a unidirectional fibre-reinforced composite, primarily based on numerical analyses of the stress/strain field. In continuation of this, the present study determines the minimum size of an SRVE based on a statistical analysis of the spatial statistics of the fibre packing patterns found in genuine laminates, and of those generated numerically using a microstructure generator. © 2012 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Inhomogeneous Markov point processes by transformation
DEFF Research Database (Denmark)
Jensen, Eva B. Vedel; Nielsen, Linda Stougaard
2000-01-01
We construct parametrized models for point processes, allowing for both inhomogeneity and interaction. The inhomogeneity is obtained by applying parametrized transformations to homogeneous Markov point processes. An interesting model class, which can be constructed by this transformation approach, is that of exponential inhomogeneous Markov point processes. Statistical inference for such processes is discussed in some detail.
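The transformation idea can be sketched in the simplest interaction-free case: a homogeneous Poisson process on [0, 1] pushed through a parametrized power transformation. The Markov (interaction) part of the paper's models is omitted here, and the specific transformation is only an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(3)

# Homogeneous Poisson process on [0, 1] with rate rho
rho, beta = 500.0, 3.0
n = rng.poisson(rho)
x = rng.uniform(size=n)

# Parametrized transformation y = x^(1/beta). By the change-of-variables formula,
# the image process is inhomogeneous Poisson with intensity
#   lam(y) = rho * beta * y^(beta - 1),
# i.e. points thin out near 0 and concentrate near 1 for beta > 1.
y = x ** (1.0 / beta)

# Check: the expected number of transformed points in [0, 0.5] is rho * 0.5^beta
expected_low = rho * 0.5 ** beta
print(expected_low, (y < 0.5).sum())
```

Fitting such a model amounts to estimating the transformation parameter (here beta) jointly with the parameters of the underlying homogeneous process.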
Detecting Change-Point via Saddlepoint Approximations
Institute of Scientific and Technical Information of China (English)
Zhaoyuan LI; Maozai TIAN
2017-01-01
It is well known that the change-point problem is an important part of statistical model analysis. Most existing methods are not robust to the criteria used to evaluate change-point problems. In this article, we consider the "mean-shift" problem in change-point studies. A test based on a single quantile is proposed using the saddlepoint approximation method. In order to utilize the information at different quantiles of the sequence, we further construct a "composite quantile test" to calculate the probability of each location in the sequence being a change-point. The location of a change-point can thus be pinpointed rather than estimated within an interval. The proposed tests make no assumptions about the functional form of the sequence distribution and work sensitively for both large and small sample sizes, for change-points in the tails, and in multiple change-point situations. The good performance of the tests is confirmed by simulations and real data analysis. The saddlepoint-approximation-based distribution of the test statistic developed in the paper may be of independent interest to readers in this research area.
Statistical hadronization and hadronic micro-canonical ensemble II
International Nuclear Information System (INIS)
Becattini, F.; Ferroni, L.
2004-01-01
We present a Monte Carlo calculation of the micro-canonical ensemble of the ideal hadron-resonance gas including all known states up to a mass of about 1.8 GeV and full quantum statistics. The micro-canonical average multiplicities of the various hadron species are found to converge to the canonical ones for moderately low values of the total energy, around 8 GeV, thus bearing out previous analyses of hadronic multiplicities in the canonical ensemble. The main numerical computing method is an importance sampling Monte Carlo algorithm using the product of Poisson distributions to generate multi-hadronic channels. It is shown that the use of this multi-Poisson distribution allows for an efficient and fast computation of averages, which can be further improved in the limit of very large clusters. We have also studied the fitness of a previously proposed computing method, based on the Metropolis Monte Carlo algorithm, for event generation in the statistical hadronization model. We find that the use of the multi-Poisson distribution as proposal matrix dramatically improves the computation performance. However, due to the correlation of subsequent samples, this method proves to be generally less robust and effective than the importance sampling method. (orig.)
Change detection in polarimetric SAR data over several time points
DEFF Research Database (Denmark)
Conradsen, Knut; Nielsen, Allan Aasbjerg; Skriver, Henning
2014-01-01
A test statistic for the equality of several variance-covariance matrices following the complex Wishart distribution is introduced. The test statistic is applied successfully to detect change in C-band EMISAR polarimetric SAR data over four time points.
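For real Gaussian data, the analogous equality test is the classical likelihood-ratio (Bartlett-type) statistic for several covariance matrices. A sketch of that real-valued analogue; the paper's statistic for the complex Wishart distribution has the same structure but different degrees-of-freedom corrections, which this does not reproduce:

```python
import numpy as np

def equality_test_stat(covs, ns):
    """Likelihood-ratio statistic for equality of k covariance matrices.

    -2 ln(Lambda) = n * ln|S_pooled| - sum_i n_i * ln|S_i|, where S_pooled is the
    sample-size-weighted average. Large values reject equality, i.e. a change
    is detected between the acquisitions."""
    n = sum(ns)
    pooled = sum(ni * c for ni, c in zip(ns, covs)) / n
    stat = n * np.log(np.linalg.det(pooled))
    stat -= sum(ni * np.log(np.linalg.det(c)) for ni, c in zip(ns, covs))
    return stat

rng = np.random.default_rng(0)
a = np.cov(rng.normal(0.0, 1.0, (3, 500)))   # two samples with the same covariance
b = np.cov(rng.normal(0.0, 1.0, (3, 500)))
c = np.cov(rng.normal(0.0, 3.0, (3, 500)))   # a sample with changed covariance
print(equality_test_stat([a, b], [500, 500]), equality_test_stat([a, c], [500, 500]))
```

Applying the test over several time points, as in the abstract, uses the same statistic with k > 2 matrices, plus pairwise tests to localize when the change occurred.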
Sevgi, Fatih; Bagkesici, Ugur; Kursunlu, Ahmed Nuri; Guler, Ersin
2018-02-01
Zinc(II), copper(II), nickel(II), cobalt(II) and iron(III) complexes of the Schiff bases (LG, LP) derived from 2-hydroxynaphthaldehyde with glycine and phenylalanine were prepared and characterized by ¹H NMR, ¹³C NMR, elemental analysis, melting point, FT-IR, magnetic susceptibility and thermal analysis (TGA). The TGA data show that the iron and cobalt complexes contain coordinated water and have a metal:ligand ratio of 1:2, while the stoichiometry of the Ni(II), Cu(II) and Zn(II) complexes is 1:1. As expected, the Ni(II) and Zn(II) complexes are diamagnetic; the Cu(II), Co(II) and Fe(III) complexes are paramagnetic in character owing to the strong ligands LG and LP. LG, LP and their metal complexes were screened for antimicrobial activity against five Gram-positive bacteria (Staphylococcus aureus, methicillin-resistant Staphylococcus aureus (MRSA), Bacillus cereus, Streptococcus mutans and Enterococcus faecalis), three Gram-negative bacteria (Escherichia coli, Klebsiella pneumoniae and Pseudomonas aeruginosa) and one fungus (Candida albicans) using broth microdilution techniques. The activity data show that the ligands and their metal complexes exhibited moderate to good activity against Gram-positive bacteria and fungi.
Directory of Open Access Journals (Sweden)
Amina Godinjak
2016-11-01
Objective. The aim was to determine SAPS II and APACHE II scores in medical intensive care unit (MICU) patients, to compare their ability to predict patient outcome, and to compare them with actual hospital mortality rates for different subgroups of patients. Methods. One hundred and seventy-four patients were included in this analysis over a one-year period in the MICU, Clinical Center, University of Sarajevo. The following patient data were obtained: demographics, admission diagnosis, SAPS II and APACHE II scores, and final outcome. Results. Of the 174 patients, 70 (40.2%) died. Mean SAPS II and APACHE II scores in all patients were 48.4±17.0 and 21.6±10.3 respectively, and both were significantly different between survivors and non-survivors. SAPS II >50.5 and APACHE II >27.5 predicted the risk of mortality in these patients. There was no statistically significant difference in the clinical values of SAPS II vs APACHE II (p=0.501). A statistically significant positive correlation was established between the values of SAPS II and APACHE II (r=0.708; p=0.001). Patients with an admission diagnosis of sepsis/septic shock had the highest values of both SAPS II and APACHE II scores, and also the highest hospital mortality rate, 55.1%. Conclusion. Both APACHE II and SAPS II had an excellent ability to discriminate between survivors and non-survivors. There was no significant difference in their clinical values, and a positive correlation was established between them. Sepsis/septic shock patients had the highest predicted and observed hospital mortality rates.
Point-to-point radio link variation at E-band and its effect on antenna design
Al-Rawi, A.; Dubok, A.; Herben, M.H.A.J.; Smolders, A.B.
2015-01-01
Radio propagation will strongly influence the design of the antenna and front-end components of E-band point-to-point communication systems. Based on the ITU rain model, the rain attenuation is estimated in a statistical sense and it is concluded that for backhaul links of 1–10 km, antennas with a
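The ITU power-law rain model underlying such link budgets has the form γ = kR^α (dB/km). A sketch with placeholder coefficients of roughly E-band magnitude; the real k and α must be taken from the ITU-R P.838 tables for the actual frequency and polarization:

```python
def rain_attenuation_db(rain_rate_mm_h, path_km, k=1.2, alpha=0.70):
    """Total path rain attenuation from the ITU-R power-law model.

    gamma = k * R^alpha gives the specific attenuation in dB/km for rain rate R
    in mm/h. k and alpha here are illustrative placeholders, NOT the tabulated
    ITU-R P.838 values. No effective-path-length correction is applied."""
    gamma = k * rain_rate_mm_h ** alpha   # specific attenuation, dB/km
    return gamma * path_km

# Heavy rain (25 mm/h) over a hypothetical 5 km backhaul hop
print(round(rain_attenuation_db(25.0, 5.0), 1))
```

Even this crude estimate shows why rain dominates the fade margin, and hence the antenna gain requirement, for multi-kilometre E-band hops.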
Statistics 101 for Radiologists.
Anvari, Arash; Halpern, Elkan F; Samir, Anthony E
2015-10-01
Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced. © RSNA, 2015.
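The diagnostic-accuracy quantities reviewed above follow directly from a 2x2 confusion table. A minimal sketch with made-up study counts:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Core diagnostic-test statistics from a 2x2 confusion table."""
    sens = tp / (tp + fn)            # sensitivity: true-positive rate
    spec = tn / (tn + fp)            # specificity: true-negative rate
    acc = (tp + tn) / (tp + fp + fn + tn)
    lr_pos = sens / (1 - spec)       # positive likelihood ratio
    lr_neg = (1 - sens) / spec       # negative likelihood ratio
    return {"sensitivity": sens, "specificity": spec, "accuracy": acc,
            "LR+": lr_pos, "LR-": lr_neg}

# Hypothetical screening study: 90 true positives, 10 false negatives,
# 30 false positives, 170 true negatives
m = diagnostic_metrics(tp=90, fp=30, fn=10, tn=170)
print(m["sensitivity"], m["specificity"], round(m["LR+"], 2))
```

A positive likelihood ratio of 6 means a positive result raises the odds of disease sixfold, which is how these table-level quantities feed into post-test probability.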
Better prognostic marker in ICU - APACHE II, SOFA or SAP II!
Naqvi, Iftikhar Haider; Mahmood, Khalid; Ziaullaha, Syed; Kashif, Syed Mohammad; Sharif, Asim
2016-01-01
This study was designed to determine the comparative efficacy of different scoring systems in assessing the prognosis of critically ill patients. This was a retrospective study conducted in the medical intensive care unit (MICU) and high dependency unit (HDU), Medical Unit III, Civil Hospital, from April 2012 to August 2012. All patients over 16 years of age who fulfilled the criteria for MICU admission were included. Predicted mortality under APACHE II, SAP II and SOFA was calculated. Calibration and discrimination were used to assess the validity of each scoring model. A total of 96 patients with equal gender distribution were enrolled. The average APACHE II score in non-survivors (27.97±8.53) was higher than in survivors (15.82±8.79), with a statistically significant p value; APACHE II showed better discrimination power than SAP II and SOFA.
Index summation in real time statistical field theory
International Nuclear Information System (INIS)
Carrington, M.E.; Fugleberg, T.; Irvine, D.S.; Pickering, D.
2007-01-01
We have written a Mathematica program that calculates the integrand corresponding to any amplitude in the closed-time-path formulation of real-time statistical field theory. The program is designed so that it can be used by someone with no previous experience with Mathematica. It performs the contractions over the tensor indices that appear in real-time statistical field theory and gives the result in the 1-2, Keldysh or RA basis. The program treats all fields as scalars, but the result can be applied to theories with Dirac and Lorentz structure by making simple adjustments. As an example, we have used the program to calculate the Ward identity for the QED 3-point function, the QED 4-point function for two photons and two fermions, and the QED 5-point function for three photons and two fermions. In real-time statistical field theory, there are seven 3-point functions, 15 4-point functions and 31 5-point functions. We produce a table that gives the results for all of these functions. In addition, we give a simple general expression for the KMS conditions between n-point Green functions and vertex functions, in both the Keldysh and RA bases. (orig.)
Research methodology in dentistry: Part II — The relevance of statistics in research
Krithikadatta, Jogikalmat; Valarmathi, Srinivasan
2012-01-01
The lifeline of original research depends on adept statistical analysis. However, there have been reports of statistical misconduct in studies, which can arise from an inadequate understanding of the fundamentals of statistics; such reports span both the medical and the dental literature. This article aims to encourage the reader to approach statistics through its logic rather than through a purely theoretical perspective. The article also provides information on statistical misuse in the Journal of Conservative Dentistry between the years 2008 and 2011. PMID:22876003
Einstein's statistical mechanics
Energy Technology Data Exchange (ETDEWEB)
Baracca, A; Rechtman S, R
1985-08-01
The foundations of equilibrium classical statistical mechanics were laid down in 1902 independently by Gibbs and Einstein. The latter's contribution, developed in three papers published between 1902 and 1904, is usually forgotten and, when not, rapidly dismissed as equivalent to Gibbs's. We review in detail Einstein's ideas on the foundations of statistical mechanics and show that they constitute the beginning of a research program that led Einstein to quantum theory. We also show how these ideas may be used as a starting point for an introductory course on the subject.
Induced Temporal Signatures for Point-Source Detection
International Nuclear Information System (INIS)
Stephens, Daniel L.; Runkle, Robert C.; Carlson, Deborah K.; Peurrung, Anthony J.; Seifert, Allen; Wyatt, Cory R.
2005-01-01
Detection of radioactive point-sized sources is inherently divided into two regimes encompassing stationary and moving detectors. The two cases differ in their treatment of background radiation and its influence on detection sensitivity. In the stationary detector case the statistical fluctuation of the background determines the minimum detectable quantity. In the moving detector case the detector may be subjected to widely and irregularly varying background radiation, as a result of geographical and environmental variation. This significant systematic variation, in conjunction with the statistical variation of the background, requires a conservative threshold to be selected to yield the same false-positive rate as in the stationary detection case. This results in lost detection sensitivity for real sources. This work focuses on a simple and practical modification of the detector geometry that increases point-source recognition via a distinctive temporal signature. A key part of this effort is the integrated development of both detector geometries that induce a highly distinctive signature for point sources and statistical algorithms able to optimize detection of this signature amidst varying background. The identification of temporal signatures for point sources has been demonstrated and compared with the canonical method, showing good results. This work demonstrates that temporal signatures are efficient at increasing point-source discrimination in a moving detector system.
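The stationary-detector regime described above, where the statistical fluctuation of the background sets the minimum detectable quantity, is commonly quantified with Currie-style detection limits. The sketch below is an illustrative assumption, not this paper's method; the function name and the choice k = 1.645 (equal 5% false-positive and false-negative rates) are ours:

```python
from math import sqrt

def minimum_detectable_counts(background, k=1.645):
    """Currie-style limits for a stationary counting measurement with a
    paired background measurement of B counts:
    decision threshold L_C = k*sqrt(2B), detection limit L_D = k^2 + 2*L_C.
    k = 1.645 corresponds to 5% false-positive and false-negative rates."""
    l_c = k * sqrt(2.0 * background)
    l_d = k * k + 2.0 * l_c
    return l_c, l_d
```

With B = 100 background counts this gives roughly L_C ≈ 23.3 and L_D ≈ 49.2 net counts, illustrating how the background fluctuation, not the background level itself, drives the threshold.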
Chen, Yongqin David; Jiang, Jianmin; Zhu, Yuxiang; Huang, Changxing; Zhang, Qiang
2018-05-01
This article, as part II, illustrates applications of two other algorithms, i.e., the scanning F test for change points in trend and the scanning t test for change points in mean, to both the series of the normalized streamflow index (NSI) at the Makou section of the Xijiang River and the normalized precipitation index (NPI) over the Xijiang River watershed. The results from these two tests show mainly positive coherency of changes between the NSI and NPI. However, some minor patches of negative coherency may expose some impacts of human activities, though these were often associated with nearly normal climate periods. These results suggest that runoff in the Xijiang catchment still depends closely on precipitation, and that anthropogenic disturbances have not, on the whole, reached the point of violating this natural relationship in the river.
Energy Technology Data Exchange (ETDEWEB)
Zhao, Lingling; Zhong, Shuxian; Fang, Keming; Qian, Zhaosheng [College of Chemistry and Life Sciences, Zhejiang Normal University, Jinhua 321004 (China); Chen, Jianrong, E-mail: cjr@zjnu.cn [College of Chemistry and Life Sciences, Zhejiang Normal University, Jinhua 321004 (China); College of Geography and Environmental Sciences, Zhejiang Normal University, Jinhua 321004 (China)
2012-11-15
Highlights: • A dual-cloud point extraction (d-CPE) procedure was first developed for simultaneous pre-concentration and separation of trace metal ions, combined with ICP-OES. • The developed d-CPE can significantly eliminate the surfactant Triton X-114 and was successfully extended to the determination of water samples with good performance. • The designed method is simple, highly efficient, low cost, and in accordance with the green chemistry concept. - Abstract: A dual-cloud point extraction (d-CPE) procedure has been developed for simultaneous pre-concentration and separation of heavy metal ions (Cd2+, Co2+, Ni2+, Pb2+, Zn2+, and Cu2+) in water samples by inductively coupled plasma optical emission spectrometry (ICP-OES). The procedure is based on forming complexes of the metal ions with 8-hydroxyquinoline (8-HQ) in the as-formed Triton X-114 surfactant-rich phase. Instead of direct injection or analysis, the surfactant-rich phase containing the complexes was treated with nitric acid, the ions were back-extracted into the aqueous phase in a second cloud point extraction stage, and finally determined by ICP-OES. Under the optimum conditions (pH = 7.0, Triton X-114 = 0.05% (w/v), 8-HQ = 2.0 × 10^-4 mol L^-1, HNO3 = 0.8 mol L^-1), the detection limits for Cd2+, Co2+, Ni2+, Pb2+, Zn2+, and Cu2+ were 0.01, 0.04, 0.01, 0.34, 0.05, and 0.04 μg L^-1, respectively. Relative standard deviation (RSD) values for 10 replicates at 100 μg L^-1 were lower than 6.0%. The proposed method was successfully applied to the determination of Cd2+, Co2+, Ni2+, Pb2+, Zn2+, and Cu2+ ions in water samples.
Frontiers in statistical quality control 11
Schmid, Wolfgang
2015-01-01
The main focus of this edited volume is on three major areas of statistical quality control: statistical process control (SPC), acceptance sampling and design of experiments. The majority of the papers deal with statistical process control, while acceptance sampling and design of experiments are also treated to a lesser extent. The book is organized into four thematic parts, with Part I addressing statistical process control. Part II is devoted to acceptance sampling. Part III covers the design of experiments, while Part IV discusses related fields. The twenty-three papers in this volume stem from The 11th International Workshop on Intelligent Statistical Quality Control, which was held in Sydney, Australia from August 20 to August 23, 2013. The event was hosted by Professor Ross Sparks, CSIRO Mathematics, Informatics and Statistics, North Ryde, Australia and was jointly organized by Professors S. Knoth, W. Schmid and Ross Sparks. The papers presented here were carefully selected and reviewed by the scientifi...
Securing wide appreciation of health statistics.
PYRRAIT A M DO, A; AUBENQUE, M J; BENJAMIN, B; DE GROOT, M J; KOHN, R
1954-01-01
All the authors are agreed on the need for a certain publicizing of health statistics, but do Amaral Pyrrait points out that the medical profession prefers to convince itself rather than to be convinced. While there is great utility in articles and reviews in the professional press (especially for paramedical personnel) Aubenque, de Groot, and Kohn show how appreciation can effectively be secured by making statistics more easily understandable to the non-expert by, for instance, including readable commentaries in official publications, simplifying charts and tables, and preparing simple manuals on statistical methods. Aubenque and Kohn also stress the importance of linking health statistics to other economic and social information. Benjamin suggests that the principles of market research could to advantage be applied to health statistics to determine the precise needs of the "consumers". At the same time, Aubenque points out that the value of the ultimate results must be clear to those who provide the data; for this, Kohn suggests that the enumerators must know exactly what is wanted and why. There is general agreement that some explanation of statistical methods and their uses should be given in the curricula of medical schools and that lectures and postgraduate courses should be arranged for practising physicians.
International Nuclear Information System (INIS)
Whelan, David G.; Johnson, Kelsey E.; Indebetouw, Rémy; Lebouteiller, Vianney; Galliano, Frédéric; Peeters, Els; Bernard-Salas, Jeronimo; Brandl, Bernhard R.
2013-01-01
The focus of this work is to study mid-infrared point sources and the diffuse interstellar medium (ISM) in the low-metallicity (∼0.2 Z☉) giant H II region N66 in order to determine properties that may shed light on star formation in these conditions. Using the Spitzer Space Telescope's Infrared Spectrograph, we study polycyclic aromatic hydrocarbon (PAH), dust continuum, silicate, and ionic line emission from 14 targeted infrared point sources as well as spectra of the diffuse ISM that is representative of both the photodissociation regions (PDRs) and the H II regions. Among the point source spectra, we spectroscopically confirm that the brightest mid-infrared point source is a massive embedded young stellar object, we detect silicates in emission associated with two young stellar clusters, and we see spectral features of a known B[e] star that are commonly associated with Herbig Be stars. In the diffuse ISM, we provide additional evidence that the very small grain population is being photodestroyed in the hard radiation field. The 11.3 μm PAH complex emission exhibits an unexplained centroid shift in both the point source and ISM spectra that should be investigated at higher signal-to-noise and resolution. Unlike studies of other regions, the 6.2 μm and 7.7 μm band fluxes are decoupled; the data points cover a large range of I(7.7)/I(11.3) PAH ratio values within a narrow band of I(6.2)/I(11.3) ratio values. Furthermore, there is a spread in PAH ionization, being more neutral in the dense PDR where the radiation field is relatively soft, but ionized in the diffuse ISM/PDR. By contrast, the PAH size distribution appears to be independent of local ionization state. Important to unresolved studies of extragalactic low-metallicity star-forming regions, we find that emission from the infrared-bright point sources accounts for only 20%-35% of the PAH emission from the entire region. These results make a comparative data set to other star-forming regions with
Statistical Surface Recovery: A Study on Ear Canals
DEFF Research Database (Denmark)
Jensen, Rasmus Ramsbøl; Olesen, Oline Vinter; Paulsen, Rasmus Reinhold
2012-01-01
We present a method for surface recovery in partial surface scans based on a statistical model. The framework is based on multivariate point prediction, where the distribution of the points is learned from an annotated data set. The training set consists of surfaces with dense correspondence...... that are Procrustes aligned. The average shape and point covariances can be estimated from this set. It is shown how missing data in a new given shape can be predicted using the learned statistics. The method is evaluated on a data set of 29 scans of ear canal impressions. By using a leave-one-out approach we...
Hazard rate model and statistical analysis of a compound point process
Czech Academy of Sciences Publication Activity Database
Volf, Petr
2005-01-01
Roč. 41, č. 6 (2005), s. 773-786 ISSN 0023-5954 R&D Projects: GA ČR(CZ) GA402/04/1294 Institutional research plan: CEZ:AV0Z10750506 Keywords: counting process * compound process * Cox regression model * intensity Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.343, year: 2005
ϕ-statistically quasi Cauchy sequences
Directory of Open Access Journals (Sweden)
Bipan Hazarika
2016-04-01
Full Text Available Let P denote the space whose elements are finite sets of distinct positive integers. Given any element σ of P, we denote by p(σ) the sequence {pn(σ)} such that pn(σ) = 1 for n ∈ σ and pn(σ) = 0 otherwise. Further, Ps = {σ ∈ P : ∑n=1∞ pn(σ) ≤ s}, i.e. Ps is the set of those σ whose support has cardinality at most s. Let (ϕn) be a non-decreasing sequence of positive integers such that nϕn+1 ≤ (n+1)ϕn for all n ∈ N; the class of all such sequences (ϕn) is denoted by Φ. Let E ⊆ N. The number δϕ(E) = lims→∞ (1/ϕs)|{k ∈ σ, σ ∈ Ps : k ∈ E}| is said to be the ϕ-density of E. A sequence (xn) of points in R is ϕ-statistically convergent (or Sϕ-convergent) to a real number ℓ if, for every ε > 0, the set {n ∈ N : |xn − ℓ| ≥ ε} has ϕ-density zero. We introduce ϕ-statistically ward continuity of a real function. A real function is ϕ-statistically ward continuous if it preserves ϕ-statistically quasi Cauchy sequences, where a sequence (xn) is called ϕ-statistically quasi Cauchy (or Sϕ-quasi Cauchy) when (Δxn) = (xn+1 − xn) is ϕ-statistically convergent to 0, i.e. a sequence (xn) of points in R is called ϕ-statistically quasi Cauchy if, for every ε > 0, the set {n ∈ N : |xn+1 − xn| ≥ ε} has ϕ-density zero. We also introduce the concept of ϕ-statistically ward compactness and obtain results related to ϕ-statistically ward continuity, ϕ-statistically ward compactness, statistically ward continuity, ward continuity, ward compactness, ordinary compactness, uniform continuity, ordinary continuity, δ-ward continuity, and slowly oscillating continuity.
A MOSUM procedure for the estimation of multiple random change points
Eichinger, Birte; Kirch, Claudia
2018-01-01
In this work, we investigate statistical properties of change point estimators based on moving sum statistics. We extend results for testing in a classical situation with multiple deterministic change points by allowing for random exogenous change points that arise in Hidden Markov or regime switching models among others. To this end, we consider a multiple mean change model with possible time series errors and prove that the number and location of change points are estimated consistently by ...
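The moving-sum statistic underlying this kind of change point estimation can be sketched in a few lines; this is an illustrative simplification (unit-variance errors, no boundary kernel, no threshold calibration), not the authors' estimator:

```python
def mosum(x, h):
    """MOSUM statistic: scaled difference of adjacent length-h window sums.
    Large absolute values indicate candidate mean-change locations k."""
    stats = []
    for k in range(h, len(x) - h + 1):
        left = sum(x[k - h:k])    # window just before position k
        right = sum(x[k:k + h])   # window just after position k
        stats.append((right - left) / (2 * h) ** 0.5)
    return stats
```

On a series with a single mean shift, the statistic peaks at the shift location, which is how the number and location of change points are read off after thresholding.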
International Nuclear Information System (INIS)
Arnold, V.I.
2006-03-01
To describe the topological structure of a real smooth function one associates to it the graph, formed by the topological variety whose points are the connected components of the level hypersurfaces of the function. For a Morse function, such a graph is a tree. Generically, it has T triple vertices, T + 2 endpoints, 2T + 2 vertices and 2T + 1 arrows. The main goal of the present paper is to study the statistics of the graphs corresponding to T triple points: what is the growth rate of the number φ(T) of different graphs? Which part of these graphs is representable by polynomial functions of corresponding degree? A generic polynomial of degree n has at most (n − 1)² critical points on R², corresponding to 2T + 2 = (n − 1)² + 1, that is, to T = 2k(k − 1) saddle points for degree n = 2k.
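The saddle-point count quoted in the abstract follows from a one-line calculation, written out here for clarity (consistent with the abstract's own numbers):

```latex
% 2T + 2 vertices of the Morse tree match the critical-point count:
2T + 2 = (n-1)^2 + 1
\quad\Longrightarrow\quad
T = \frac{(n-1)^2 - 1}{2} = \frac{n(n-2)}{2},
% and for even degree n = 2k:
T = \frac{2k(2k-2)}{2} = 2k(k-1).
```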
Examples and problems in mathematical statistics
Zacks, Shelemyahu
2013-01-01
This book presents examples that illustrate the theory of mathematical statistics and details how to apply the methods for solving problems. While other books on the topic contain problems and exercises, they do not focus on problem solving. This book fills an important niche in the statistical theory literature by providing a theory/example/problem approach. Each chapter is divided into four parts: Part I provides the needed theory so readers can become familiar with the concepts, notations, and proven results; Part II presents examples from a variety of fields including engineering, mathem
New selection effect in statistical investigations of supernova remnants
Allakhverdiev, A. O.; Guseinov, O. Kh.; Kasumov, F. K.
1986-01-01
The influence of H II regions on the parameters of supernova remnants (SNRs) is investigated. It is shown that the projection of such regions onto SNRs leads to: a) local changes in the morphological structure of young shell-type SNRs and b) considerable distortions of the integral parameters of evolved shell-type SNRs (with D > 10 pc) and plerions, up to their complete undetectability against the background of classical and giant H II regions. A new selection effect thus arises from these factors, connected with the additional limitations that the real structure of the interstellar medium places on statistical investigations of SNRs. The influence of this effect on the statistical completeness of objects has been estimated.
Effects of quantum coherence on work statistics
Xu, Bao-Ming; Zou, Jian; Guo, Li-Sha; Kong, Xiang-Mu
2018-05-01
In the conventional two-point measurement scheme of quantum thermodynamics, quantum coherence is destroyed by the first measurement. But coherence plays an important role in quantum thermodynamic processes, and how to describe the work statistics of a quantum coherent process is still an open question. In this paper, we use the full counting statistics method to investigate the effects of quantum coherence on work statistics. First, we give a general discussion and show that for a quantum coherent process, work statistics is very different from that of the two-point measurement scheme: the average work can be increased or decreased and the work fluctuation can be decreased by quantum coherence, depending strongly on the relative phase, the energy level structure, and the external protocol. Then, we concretely consider a quenched one-dimensional transverse Ising model and show that quantum coherence has a more significant influence on work statistics in the ferromagnetic regime than in the paramagnetic regime, so that due to the presence of quantum coherence the work statistics can exhibit critical phenomena even at high temperature.
Statistical Power in Plant Pathology Research.
Gent, David H; Esker, Paul D; Kriss, Alissa B
2018-01-01
In null hypothesis testing, failure to reject a null hypothesis may have two potential interpretations. One interpretation is that the treatments being evaluated do not have a significant effect, and a correct conclusion was reached in the analysis. Alternatively, a treatment effect may have existed but the conclusion of the study was that there was none. This is termed a Type II error, which is most likely to occur when studies lack sufficient statistical power to detect a treatment effect. In basic terms, the power of a study is the ability to identify a true effect through a statistical test. The power of a statistical test is 1 - (the probability of Type II errors), and depends on the size of treatment effect (termed the effect size), variance, sample size, and significance criterion (the probability of a Type I error, α). Low statistical power is prevalent in scientific literature in general, including plant pathology. However, power is rarely reported, creating uncertainty in the interpretation of nonsignificant results and potentially underestimating small, yet biologically significant relationships. The appropriate level of power for a study depends on the impact of Type I versus Type II errors and no single level of power is acceptable for all purposes. Nonetheless, by convention 0.8 is often considered an acceptable threshold and studies with power less than 0.5 generally should not be conducted if the results are to be conclusive. The emphasis on power analysis should be in the planning stages of an experiment. Commonly employed strategies to increase power include increasing sample sizes, selecting a less stringent threshold probability for Type I errors, increasing the hypothesized or detectable effect size, including as few treatment groups as possible, reducing measurement variability, and including relevant covariates in analyses. Power analysis will lead to more efficient use of resources and more precisely structured hypotheses, and may even
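The power calculation described above can be made concrete. The sketch below approximates the power of a two-sided two-sample z-test; the function names are ours and the normal approximation (rather than the noncentral t) is an assumption made to keep the example self-contained:

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def norm_ppf(p):
    """Inverse standard normal CDF by bisection (adequate for this sketch)."""
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def power_two_sample(effect, sigma, n, alpha=0.05):
    """Power = 1 - P(Type II error) for a two-sided two-sample z-test
    with true mean difference `effect`, common SD `sigma`, n per group."""
    z_alpha = norm_ppf(1.0 - alpha / 2.0)            # Type I error criterion
    ncp = effect / (sigma * sqrt(2.0 / n))           # standardized effect
    return norm_cdf(ncp - z_alpha) + norm_cdf(-ncp - z_alpha)
```

For example, detecting a half-standard-deviation effect with 64 observations per group at α = 0.05 yields power near the conventional 0.8 threshold, while 16 per group falls below the 0.5 floor the authors warn about.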
Statistical imitation system using relational interest points and Gaussian mixture models
CSIR Research Space (South Africa)
Claassens, J
2009-11-01
Full Text Available The author proposes an imitation system that uses relational interest points (RIPs) and Gaussian mixture models (GMMs) to characterize a behaviour. The system's structure is inspired by the Robot Programming by Demonstration (RPD) paradigm...
A New Statistical Tool: Scalar Score Function
Czech Academy of Sciences Publication Activity Database
Fabián, Zdeněk
2011-01-01
Roč. 2, - (2011), s. 109-116 ISSN 1934-7332 R&D Projects: GA ČR GA205/09/1079 Institutional research plan: CEZ:AV0Z10300504 Keywords : statistics * inference function * data characteristics * point estimates * heavy tails Subject RIV: BB - Applied Statistics, Operational Research
A statistical approach to instrument calibration
Robert R. Ziemer; David Strauss
1978-01-01
Summary - It has been found that two instruments will yield different numerical values when used to measure identical points. A statistical approach is presented that can be used to approximate the error associated with the calibration of instruments. Included are standard statistical tests that can be used to determine if a number of successive calibrations of the...
Statistical separability and the impossibility of the superluminal quantum communication
International Nuclear Information System (INIS)
Zhang Qiren
2004-01-01
The authors analyse the relation and the difference between the quantum correlation of two points in space and communication between them. The statistical separability of two points in space is defined and proven. From this statistical separability, the authors prove that superluminal quantum communication between different points is impossible. To emphasize the compatibility between quantum theory and relativity, the authors write the von Neumann equation of density-operator evolution in multi-time form. (author)
Extracurricular Activities and Their Effect on the Student's Grade Point Average: Statistical Study
Bakoban, R. A.; Aljarallah, S. A.
2015-01-01
Extracurricular activities (ECA) are part of students' everyday life; they play important roles in students' lives. Few studies have addressed the question of how student engagements to ECA affect student's grade point average (GPA). This research was conducted to know whether the students' grade point average in King Abdulaziz University,…
Statistical measurement of the gamma-ray source-count distribution as a function of energy
Zechlin, H.-S.; Cuoco, A.; Donato, F.; Fornengo, N.; Regis, M.
2017-01-01
Photon count statistics have recently been proven to provide a sensitive observable for characterizing gamma-ray source populations and for measuring the composition of the gamma-ray sky. In this work, we generalize the use of the standard 1-point probability distribution function (1pPDF) to decompose the high-latitude gamma-ray emission observed with Fermi-LAT into: (i) point-source contributions, (ii) the Galactic foreground contribution, and (iii) a diffuse isotropic background contribution. We analyze gamma-ray data in five adjacent energy bands between 1 and 171 GeV. We measure the source-count distribution dN/dS as a function of energy, and demonstrate that our results extend current measurements from source catalogs to the regime of so far undetected sources. Our method improves the sensitivity for resolving point-source populations by about one order of magnitude in flux. The dN/dS distribution as a function of flux is found to be compatible with a broken power law. We derive upper limits on further possible breaks as well as the angular power of unresolved sources. We discuss the composition of the gamma-ray sky and capabilities of the 1pPDF method.
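The broken power law used for dN/dS has a simple functional form; the sketch below is illustrative (the slope parameters and normalization are placeholders, not the paper's fitted values):

```python
def dnds_broken_power_law(s, s_break, n1, n2, norm=1.0):
    """Differential source-count distribution dN/dS: a power law with index
    n1 above the break flux s_break and index n2 below it, continuous at
    the break where it equals `norm`."""
    index = n1 if s >= s_break else n2
    return norm * (s / s_break) ** (-index)
```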
Modeling fixation locations using spatial point processes.
Barthelmé, Simon; Trukenbrod, Hans; Engbert, Ralf; Wichmann, Felix
2013-10-01
Whenever eye movements are measured, a central part of the analysis has to do with where subjects fixate and why they fixated where they fixated. To a first approximation, a set of fixations can be viewed as a set of points in space; this implies that fixations are spatial data and that the analysis of fixation locations can be beneficially thought of as a spatial statistics problem. We argue that thinking of fixation locations as arising from point processes is a very fruitful framework for eye-movement data, helping turn qualitative questions into quantitative ones. We provide a tutorial introduction to some of the main ideas of the field of spatial statistics, focusing especially on spatial Poisson processes. We show how point processes help relate image properties to fixation locations. In particular we show how point processes naturally express the idea that image features' predictability for fixations may vary from one image to another. We review other methods of analysis used in the literature, show how they relate to point process theory, and argue that thinking in terms of point processes substantially extends the range of analyses that can be performed and clarify their interpretation.
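The homogeneous spatial Poisson process, the baseline model in such a tutorial, is simple to simulate: draw a Poisson point count for the window, then place that many i.i.d. uniform points. A minimal sketch (homogeneous case only; realistic fixation modeling would use an inhomogeneous, image-dependent intensity):

```python
import math
import random

def simulate_poisson_process(intensity, width=1.0, height=1.0, rng=None):
    """Simulate a homogeneous spatial Poisson process on a rectangle:
    N ~ Poisson(intensity * area), then N i.i.d. uniform locations."""
    rng = rng or random.Random()
    mean = intensity * width * height
    # Poisson draw by CDF inversion (fine for moderate means);
    # the term > 0 guard avoids looping on floating-point underflow.
    n, term, u = 0, math.exp(-mean), rng.random()
    cdf = term
    while u > cdf and term > 0.0:
        n += 1
        term *= mean / n
        cdf += term
    return [(rng.uniform(0, width), rng.uniform(0, height)) for _ in range(n)]
```

Averaged over many simulated windows, the point count matches intensity × area, which is the property that lets observed fixation counts be compared against the Poisson baseline.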
Kinetic Simulations of Type II Radio Burst Emission Processes
Ganse, U.; Spanier, F. A.; Vainio, R. O.
2011-12-01
The fundamental emission process of Type II Radio Bursts has been under discussion for many decades. While analytic deliberations point to three-wave interaction as the source of fundamental and harmonic radio emissions, sparse in-situ observational data and the high computational demands of kinetic simulations have not allowed a definite conclusion to be reached. A popular model puts the radio emission in the foreshock region of a coronal mass ejection's shock front, where shock drift acceleration can create electron beam populations in the otherwise quiescent foreshock plasma. Beam-driven instabilities are then assumed to create waves, forming the starting point of three-wave interaction processes. Using our kinetic particle-in-cell code, we have studied a number of emission scenarios based on electron beam populations in a CME foreshock, with a focus on wave-interaction microphysics at kinetic scales. The self-consistent, fully kinetic simulations with a fully physical mass ratio show fundamental and harmonic emission of transverse electromagnetic waves and allow for detailed statistical analysis of all contributing wave modes and their couplings.
Statistics of stationary points of random finite polynomial potentials
International Nuclear Information System (INIS)
Mehta, Dhagash; Niemerg, Matthew; Sun, Chuang
2015-01-01
The stationary points (SPs) of the potential energy landscapes (PELs) of multivariate random potentials (RPs) have found applications in many areas of physics, chemistry, and mathematical biology. However, few reliable methods are available that can find all the SPs accurately. Hence, one has to rely on indirect methods such as random matrix theory. With a combination of the numerical polynomial homotopy continuation method and a certification method, we obtain all the certified SPs of the most general polynomial RP for each sample chosen from the Gaussian distribution with mean 0 and variance 1. While obtaining many novel results for the finite-size case of the RP, we also discuss the implications of our results for the mathematics of random systems and string theory landscapes. (paper)
Improved selection criteria for H II regions, based on IRAS sources
Yan, Qing-Zeng; Xu, Ye; Walsh, A. J.; Macquart, J. P.; MacLeod, G. C.; Zhang, Bo; Hancock, P. J.; Chen, Xi; Tang, Zheng-Hong
2018-05-01
We present new criteria for selecting H II regions from the Infrared Astronomical Satellite (IRAS) Point Source Catalogue (PSC), based on an H II region catalogue derived manually from the all-sky Wide-field Infrared Survey Explorer (WISE). The criteria are used to augment the number of H II region candidates in the Milky Way. The criteria are defined by the linear decision boundary of two samples: IRAS point sources associated with known H II regions, which serve as the H II region sample, and IRAS point sources at high Galactic latitudes, which serve as the non-H II region sample. A machine learning classifier, specifically a support vector machine, is used to determine the decision boundary. We investigate all combinations of the four IRAS bands and suggest that the optimal criterion is log(F60/F12) ≥ −0.19 × log(F100/F25) + 1.52, with detections at 60 and 100 μm. This selects 3041 H II region candidates from the IRAS PSC. We find that IRAS H II region candidates show evidence of evolution on the two-colour diagram. Merging the WISE H II catalogue with the IRAS H II region candidates, we estimate a lower limit of approximately 10 200 for the number of H II regions in the Milky Way.
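The published selection cut is straightforward to apply to a single IRAS source. This sketch only encodes the stated inequality; the function name is ours, and the caller is assumed to have already required genuine detections at 60 and 100 μm:

```python
from math import log10

def is_hii_candidate(f12, f25, f60, f100):
    """IRAS two-colour cut from the abstract:
    log10(F60/F12) >= -0.19 * log10(F100/F25) + 1.52.
    Fluxes are in the same (arbitrary) units; ratios cancel them."""
    return log10(f60 / f12) >= -0.19 * log10(f100 / f25) + 1.52
```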
Data points visualization that means something
Yau, Nathan
2013-01-01
A fresh look at visualization from the author of Visualize This Whether it's statistical charts, geographic maps, or the snappy graphical statistics you see on your favorite news sites, the art of data graphics or visualization is fast becoming a movement of its own. In Data Points: Visualization That Means Something, author Nathan Yau presents an intriguing complement to his bestseller Visualize This, this time focusing on the graphics side of data analysis. Using examples from art, design, business, statistics, cartography, and online media, he explores both
Statistical Mechanics of Turbulent Flows
International Nuclear Information System (INIS)
Cambon, C
2004-01-01
This is a handbook for a computational approach to reacting flows, including background material on statistical mechanics. In this sense, the title is somewhat misleading with respect to other books dedicated to the statistical theory of turbulence (e.g. Monin and Yaglom). In the present book, emphasis is placed on modelling (engineering closures) for computational fluid dynamics. The probabilistic (pdf) approach is applied to the local scalar field, motivated first by the nonlinearity of chemical source terms which appear in the transport equations of reacting species. The probabilistic and stochastic approaches are also used for the velocity field and particle position; nevertheless they are essentially limited to Lagrangian models for a local vector, with only single-point statistics, as for the scalar. Accordingly, conventional techniques, such as single-point closures for RANS (Reynolds-averaged Navier-Stokes) and subgrid-scale models for LES (large-eddy simulations), are described and in some cases reformulated using underlying Langevin models and filtered pdfs. Even if the theoretical approach to turbulence is not discussed in general, the essentials of probabilistic and stochastic-processes methods are described, with a useful reminder concerning statistics at the molecular level. The book comprises 7 chapters. Chapter 1 briefly states the goals and contents, with a very clear synoptic scheme on page 2. Chapter 2 presents definitions and examples of pdfs and related statistical moments. Chapter 3 deals with stochastic processes, pdf transport equations, from Kramers-Moyal to Fokker-Planck (for Markov processes), and moment equations. Stochastic differential equations are introduced and their relationship to pdfs described. This chapter ends with a discussion of stochastic modelling. The equations of fluid mechanics and thermodynamics are addressed in chapter 4. Classical conservation equations (mass, velocity, internal energy) are derived from their
Rweb:Web-based Statistical Analysis
Directory of Open Access Journals (Sweden)
Jeff Banfield
1999-03-01
Full Text Available Rweb is a freely accessible statistical analysis environment that is delivered through the World Wide Web (WWW). It is based on R, a well known statistical analysis package. The only requirement to run the basic Rweb interface is a WWW browser that supports forms. If you want graphical output you must, of course, have a browser that supports graphics. The interface provides access to WWW accessible data sets, so you may run Rweb on your own data. Rweb can provide a four-window statistical computing environment (code input, text output, graphical output, and error information) through browsers that support Javascript. There is also a set of point and click modules under development for use in introductory statistics courses.
DEFF Research Database (Denmark)
Elleby, Anita; Ingwersen, Peter
2010-01-01
The paper presents comparative analyses of two publication point systems, the Norwegian system and the in-house system of the interdisciplinary Danish Institute for International Studies (DIIS), used as the case in the study, for publications published in 2006, and compares central citation-based indicators...... with novel publication point indicators (PPIs) that are formalized and exemplified. Two diachronic citation windows are applied: 2006-07 and 2006-08. Web of Science (WoS) as well as Google Scholar (GS) are applied to observe the citation delay and citedness for the different document types published by DIIS...... for all document types. Statistically significant correlations were only found between WoS and GS and between the two publication point systems, respectively. The study demonstrates how the nCPPI can be applied to institutions as evaluation tools supplementary to JCI in various combinations...
Combining statistical inference and decisions in ecology
Williams, Perry J.; Hooten, Mevin B.
2016-01-01
Statistical decision theory (SDT) is a sub-field of decision theory that formally incorporates statistical investigation into a decision-theoretic framework to account for uncertainties in a decision problem. SDT provides a unifying analysis of three types of information: statistical results from a data set, knowledge of the consequences of potential choices (i.e., loss), and prior beliefs about a system. SDT links the theoretical development of a large body of statistical methods including point estimation, hypothesis testing, and confidence interval estimation. The theory and application of SDT have mainly been developed and published in the fields of mathematics, statistics, operations research, and other decision sciences, but have had limited exposure in ecology. Thus, we provide an introduction to SDT for ecologists and describe its utility for linking the conventionally separate tasks of statistical investigation and decision making in a single framework. We describe the basic framework of both Bayesian and frequentist SDT, its traditional use in statistics, and discuss its application to decision problems that occur in ecology. We demonstrate SDT with two types of decisions: Bayesian point estimation, and an applied management problem of selecting a prescribed fire rotation for managing a grassland bird species. Central to SDT, and decision theory in general, are loss functions. Thus, we also provide basic guidance and references for constructing loss functions for an SDT problem.
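The link between loss functions and Bayesian point estimation described in this abstract can be made concrete: under squared-error loss the Bayes estimator is the posterior mean, and under absolute-error loss it is the posterior median. A minimal sketch (the posterior values are invented for illustration):

```python
# A discrete posterior over a parameter grid (numbers invented for illustration)
grid = [0.0, 1.0, 2.0, 3.0, 10.0]
post = [0.1, 0.3, 0.3, 0.2, 0.1]          # probabilities, summing to 1

def bayes_risk(action, loss):
    """Posterior expected loss of choosing 'action' as the point estimate."""
    return sum(p * loss(action, theta) for theta, p in zip(grid, post))

squared = lambda a, t: (a - t) ** 2
absolute = lambda a, t: abs(a - t)

post_mean = sum(t * p for t, p in zip(grid, post))     # 2.5
actions = [i / 100 for i in range(0, 1001)]            # brute-force search
best_sq = min(actions, key=lambda a: bayes_risk(a, squared))   # posterior mean
best_ab = min(actions, key=lambda a: bayes_risk(a, absolute))  # posterior median
```

The brute-force minimisation recovers the two classical estimators, which is the SDT view of point estimation: an optimal action under a stated loss.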
2010-04-01
... INSTITUTIONS AND MORTGAGEES Title I and Title II Specific Requirements § 202.12 Title II. (a) Tiered pricing—(1... rate up to two percentage points under the mortgagee's customary lending practices must be based on... after accounting for the value of servicing rights generated by making the loan and other income to the...
Statistical Methods in Integrative Genomics
Richardson, Sylvia; Tseng, George C.; Sun, Wei
2016-01-01
Statistical methods in integrative genomics aim to answer important biology questions by jointly analyzing multiple types of genomic data (vertical integration) or aggregating the same type of data across multiple studies (horizontal integration). In this article, we introduce different types of genomic data and data resources, and then review statistical methods of integrative genomics, with emphasis on the motivation and rationale of these methods. We conclude with some summary points and future research directions. PMID:27482531
Directory of Open Access Journals (Sweden)
Jing-Chzi Hsieh
2016-05-01
Full Text Available The recycled Kevlar®/polyester/low-melting-point polyester (recycled Kevlar®/PET/LPET) nonwoven geotextiles are immersed in neutral, strong acid, and strong alkali solutions, respectively, at different temperatures for four months. Their tensile strength is then tested according to various immersion periods at various temperatures, in order to determine their durability to chemicals. For the purpose of analyzing the possible factors that influence the mechanical properties of geotextiles under diverse environmental conditions, the experimental results and statistical analyses are incorporated in this study. The influences of the content of recycled Kevlar® fibers, implementation of thermal treatment, and immersion periods on the tensile strength of recycled Kevlar®/PET/LPET nonwoven geotextiles are examined, after which their influential levels are statistically determined by performing multiple regression analyses. According to the results, the tensile strength of nonwoven geotextiles can be enhanced by adding recycled Kevlar® fibers and by thermal treatment.
Shot Group Statistics for Small Arms Applications
2017-06-01
If the probability distribution of a dispersion measure is known with sufficient accuracy, then it can be used to make a sound statistical inference on the unknown population standard deviations of the x and y impact-point positions. The dispersion measures treated in this report...
78 FR 58570 - Environmental Assessment; Entergy Nuclear Operations, Inc., Big Rock Point
2013-09-24
... Assessment; Entergy Nuclear Operations, Inc., Big Rock Point AGENCY: Nuclear Regulatory Commission. ACTION... applicant or the licensee), for the Big Rock Point (BRP) Independent Spent Fuel Storage Installation (ISFSI)... II. Environmental Assessment (EA...
Fully Convolutional Networks for Ground Classification from LIDAR Point Clouds
Rizaldy, A.; Persello, C.; Gevaert, C. M.; Oude Elberink, S. J.
2018-05-01
Deep Learning has been massively used for image classification in recent years. The use of deep learning for ground classification from LIDAR point clouds has also been recently studied. However, point clouds need to be converted into an image in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image. This approach leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques. On the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method results in 5.22 % of total error, 4.10 % of type I error, and 15.07 % of type II error. Compared to the previous CNN-based technique and LAStools software, the proposed method reduces the total error and type I error (while type II error is slightly higher). The method was also tested on a very high point density LIDAR point clouds resulting in 4.02 % of total error, 2.15 % of type I error and 6.14 % of type II error.
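The key efficiency idea in this abstract, converting the whole point cloud into a single image rather than one image per point, can be sketched as a minimum-height rasterisation (a hypothetical simplification; the paper's actual feature images are richer):

```python
import math

def cloud_to_image(points, cell=1.0):
    """Rasterise an (x, y, z) point cloud into one grid, keeping the lowest
    z per cell (the lowest return is a common proxy for the ground surface)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, y0 = min(xs), min(ys)
    ncol = int(math.floor((max(xs) - x0) / cell)) + 1
    nrow = int(math.floor((max(ys) - y0) / cell)) + 1
    img = [[None] * ncol for _ in range(nrow)]
    for x, y, z in points:
        r = int((y - y0) / cell)
        c = int((x - x0) / cell)
        if img[r][c] is None or z < img[r][c]:
            img[r][c] = z
    return img

cloud = [(0.2, 0.3, 5.0), (0.8, 0.1, 1.0), (1.5, 0.4, 2.0)]
img = cloud_to_image(cloud, cell=1.0)
# img == [[1.0, 2.0]]: two cells, each holding its lowest elevation
```

A single pass like this replaces the per-point image generation that makes the older pipeline slow; a network can then classify all pixels of the one image at once.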
DEFF Research Database (Denmark)
Møller, Jesper; Ghorbani, Mohammad; Rubak, Ege Holger
We show how a spatial point process, where each point has an associated random quantitative mark, can be identified with a spatio-temporal point process specified by a conditional intensity function. For instance, the points can be tree locations, the marks can express the size of trees......, and the conditional intensity function can describe the distribution of a tree (i.e., its location and size) conditionally on the larger trees. This enables us to construct parametric statistical models which are easily interpretable and where likelihood-based inference is tractable. In particular, we consider maximum...
Toda 3-point functions from topological strings II
Energy Technology Data Exchange (ETDEWEB)
Isachenkov, Mikhail [DESY Hamburg, Theory Group,Notkestrasse 85, D-22607 Hamburg (Germany); Mitev, Vladimir [Institut für Mathematik und Institut für Physik, Humboldt-Universität zu Berlin,IRIS Haus, Zum Großen Windkanal 6, 12489 Berlin (Germany); Pomoni, Elli [DESY Hamburg, Theory Group,Notkestrasse 85, D-22607 Hamburg (Germany); Physics Division, National Technical University of Athens,15780 Zografou Campus, Athens (Greece)
2016-08-09
In http://dx.doi.org/10.1007/JHEP06(2015)049 we proposed a formula for the 3-point structure constants of generic primary fields in the Toda field theory, derived using topological strings and the AGT-W correspondence from the partition functions of the non-Lagrangian T_N theories on S^4. In this article, we obtain from it the well-known formula by Fateev and Litvinov and show that degeneration at the first level of one of the three primary fields on the Toda side corresponds to a particular Higgsing of the T_N theories.
Numerical integration subprogrammes in Fortran II-D
Energy Technology Data Exchange (ETDEWEB)
Fry, C. R.
1966-12-15
This note briefly describes some integration subprogrammes written in FORTRAN II-D for the IBM 1620-II at CARDE. Those presented are two Newton-Cotes formulae, Chebyshev polynomial summation, Filon's, Nordsieck's, and optimum Runge-Kutta and predictor-corrector methods. A few miscellaneous numerical integration procedures are also mentioned, covering statistical functions, oscillating integrands and functions occurring in electrical engineering.
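As a modern illustration of the closed Newton-Cotes formulae catalogued in this note, here is a composite Simpson's rule in Python (the original routines were in FORTRAN II-D; this sketch only mirrors the idea):

```python
def simpson(f, a, b, n=100):
    """Composite Simpson's rule, a closed Newton-Cotes formula; n must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

# Simpson's rule integrates cubics exactly: int_0^2 x**3 dx = 4
val = simpson(lambda x: x ** 3, 0.0, 2.0, n=4)
```

Even with only four panels the cubic is integrated exactly, which is the characteristic advantage of this Newton-Cotes order.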
Quantum mechanics as applied mathematical statistics
International Nuclear Information System (INIS)
Skala, L.; Cizek, J.; Kapsa, V.
2011-01-01
The basic mathematical apparatus of quantum mechanics (the wave function, probability density, probability density current, the coordinate and momentum operators, the corresponding commutation relation, the Schroedinger equation, kinetic energy, the uncertainty relations and the continuity equation) is discussed from the point of view of mathematical statistics. It is shown that the basic structure of quantum mechanics can be understood as a generalization of classical mechanics in which the statistical character of the results of measurement of the coordinate and momentum is taken into account and the most important general properties of statistical theories are correctly respected.
PVeStA: A Parallel Statistical Model Checking and Quantitative Analysis Tool
AlTurki, Musab
2011-01-01
Statistical model checking is an attractive formal analysis method for probabilistic systems such as cyber-physical systems, which are often probabilistic in nature. This paper is about drastically increasing the scalability of statistical model checking, and making such scalability of analysis available to tools like Maude, where probabilistic systems can be specified at a high level as probabilistic rewrite theories. It presents PVeStA, an extension and parallelization of the VeStA statistical model checking tool [10]. PVeStA supports statistical model checking of probabilistic real-time systems specified as either: (i) discrete or continuous Markov chains; or (ii) probabilistic rewrite theories in Maude. Furthermore, the properties that it can model check can be expressed in either: (i) PCTL/CSL, or (ii) the QuaTEx quantitative temporal logic. As our experiments show, the performance gains obtained from parallelization can be very high. © 2011 Springer-Verlag.
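The statistical model checking idea behind tools like (P)VeStA can be sketched in a few lines: simulate independent runs of a probabilistic model and bound the error of the estimated probability with the Chernoff-Hoeffding inequality. This is a generic sketch, not PVeStA's actual algorithm or API:

```python
import math
import random

def sample_size(eps, delta):
    """Chernoff-Hoeffding bound: number of i.i.d. runs needed so the estimate
    is within eps of the true probability with confidence 1 - delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

def check(property_holds, eps=0.01, delta=0.05, seed=0):
    """Estimate P(property) by simple Monte Carlo over independent runs."""
    rng = random.Random(seed)
    n = sample_size(eps, delta)
    hits = sum(property_holds(rng) for _ in range(n))
    return hits / n

# toy probabilistic model: does a six appear in three fair die rolls?
# true probability is 1 - (5/6)**3, about 0.4213
p_hat = check(lambda rng: any(rng.randint(1, 6) == 6 for _ in range(3)))
```

Because the runs are independent, they parallelize trivially, which is exactly the scalability lever the paper exploits.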
Second Language Experience Facilitates Statistical Learning of Novel Linguistic Materials.
Potter, Christine E; Wang, Tianlin; Saffran, Jenny R
2017-04-01
Recent research has begun to explore individual differences in statistical learning, and how those differences may be related to other cognitive abilities, particularly their effects on language learning. In this research, we explored a different type of relationship between language learning and statistical learning: the possibility that learning a new language may also influence statistical learning by changing the regularities to which learners are sensitive. We tested two groups of participants, Mandarin Learners and Naïve Controls, at two time points, 6 months apart. At each time point, participants performed two different statistical learning tasks: an artificial tonal language statistical learning task and a visual statistical learning task. Only the Mandarin-learning group showed significant improvement on the linguistic task, whereas both groups improved equally on the visual task. These results support the view that there are multiple influences on statistical learning. Domain-relevant experiences may affect the regularities that learners can discover when presented with novel stimuli. Copyright © 2016 Cognitive Science Society, Inc.
Testing the statistical compatibility of independent data sets
International Nuclear Information System (INIS)
Maltoni, M.; Schwetz, T.
2003-01-01
We discuss a goodness-of-fit method which tests the compatibility between statistically independent data sets. The method gives sensible results even in cases where the χ² minima of the individual data sets are very low or when several parameters are fitted to a large number of data points. In particular, it avoids the problem that a possible disagreement between data sets becomes diluted by data points which are insensitive to the crucial parameters. A formal derivation of the probability distribution function for the proposed test statistic is given, based on standard theorems of statistics. The application of the method is illustrated on data from neutrino oscillation experiments, and its complementarity to the standard goodness-of-fit is discussed.
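The construction can be sketched numerically: the "parameter goodness-of-fit" statistic is the joint χ² minimum minus the sum of the individual minima, so data points that do not constrain the shared parameters cancel out of the difference. A toy example with two data sets sharing a single mean (numbers invented):

```python
def chi2(data, mu, sigma=1.0):
    return sum((y - mu) ** 2 / sigma ** 2 for y in data)

def parameter_goodness_of_fit(sets):
    """chi2_PG = (joint chi2 minimum) - (sum of individual minima); data
    points insensitive to the shared parameter cancel out of the difference."""
    mins = []
    for d in sets:
        mu_i = sum(d) / len(d)               # per-set best-fit mean
        mins.append(chi2(d, mu_i))
    all_pts = [y for d in sets for y in d]
    mu_all = sum(all_pts) / len(all_pts)     # joint best-fit mean
    chi2_pg = chi2(all_pts, mu_all) - sum(mins)
    dof = len(sets) - 1                      # one shared parameter here
    return chi2_pg, dof

# two slightly discrepant toy data sets that share one parameter (a mean)
pg, dof = parameter_goodness_of_fit([[0.9, 1.1, 1.0], [2.0, 2.2, 2.1]])
# pg picks up only the tension between the sets, not the number of points
```

Adding many points that agree with either individual fit would leave pg essentially unchanged, which is the dilution problem the method avoids.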
PHOTOMETRIC SUPERNOVA COSMOLOGY WITH BEAMS AND SDSS-II
Energy Technology Data Exchange (ETDEWEB)
Hlozek, Renee [Oxford Astrophysics, Department of Physics, University of Oxford, Keble Road, Oxford, OX1 3RH (United Kingdom); Kunz, Martin [Department de physique theorique, Universite de Geneve, 30, quai Ernest-Ansermet, CH-1211 Geneve 4 (Switzerland); Bassett, Bruce; Smith, Mat; Newling, James [African Institute for Mathematical Sciences, 68 Melrose Road, Muizenberg 7945 (South Africa); Varughese, Melvin [Department of Mathematics and Applied Mathematics, University of Cape Town, Rondebosch, Cape Town, 7700 (South Africa); Kessler, Rick; Frieman, Joshua [The Kavli Institute for Cosmological Physics, The University of Chicago, 933 East 56th Street, Chicago, IL 60637 (United States); Bernstein, Joseph P.; Kuhlmann, Steve; Marriner, John [Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439 (United States); Campbell, Heather; Lampeitl, Hubert; Nichol, Robert C. [Institute of Cosmology and Gravitation, Dennis Sciama Building Burnaby Road Portsmouth PO1 3FX (United Kingdom); Dilday, Ben [Las Cumbres Observatory Global Telescope Network, 6740 Cortona Drive, Suite 102, Goleta, CA 93117 (United States); Falck, Bridget; Riess, Adam G. [Department of Physics and Astronomy, Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218 (United States); Sako, Masao [Department of Physics and Astronomy, University of Pennsylvania, 203 South 33rd Street, Philadelphia, PA 19104 (United States); Schneider, Donald P., E-mail: rhlozek@astro.princeton.edu [Department of Astronomy and Astrophysics, Pennsylvania State University, University Park, PA 16802 (United States)
2012-06-20
Supernova (SN) cosmology without spectroscopic confirmation is an exciting new frontier, which we address here with the Bayesian Estimation Applied to Multiple Species (BEAMS) algorithm and the full three years of data from the Sloan Digital Sky Survey II Supernova Survey (SDSS-II SN). BEAMS is a Bayesian framework for using data from multiple species in statistical inference when one has the probability that each data point belongs to a given species, corresponding in this context to different types of SNe with their probabilities derived from their multi-band light curves. We run the BEAMS algorithm on both Gaussian and more realistic SNANA simulations with of order 10^4 SNe, testing the algorithm against various pitfalls one might expect in the new and somewhat uncharted territory of photometric SN cosmology. We compare the performance of BEAMS to that of both mock spectroscopic surveys and photometric samples that have been cut using typical selection criteria. The latter typically either are biased due to contamination or have significantly larger contours in the cosmological parameters due to small data sets. We then apply BEAMS to the 792 SDSS-II photometric SNe with host spectroscopic redshifts. In this case, BEAMS reduces the area of the Ω_m, Ω_Λ contours by a factor of three relative to the case where only spectroscopically confirmed data are used (297 SNe). In the case of flatness, the constraints obtained on the matter density applying BEAMS to the photometric SDSS-II data are Ω_m^BEAMS = 0.194 ± 0.07. This illustrates the potential power of BEAMS for future large photometric SN surveys such as the Large Synoptic Survey Telescope.
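The BEAMS likelihood can be sketched for a one-parameter toy problem: each data point enters as a mixture p_i·L_Ia + (1 − p_i)·L_CC weighted by its type probability. All numbers and distribution choices below are illustrative assumptions, not the paper's actual model:

```python
import math

def gauss(x, mu, sig):
    return math.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * math.sqrt(2 * math.pi))

def beams_log_posterior(data, probs, grid, sig_ia=0.2, mu_cc=1.0, sig_cc=0.5):
    """Log-posterior (flat prior) for the type-Ia parameter mu: each point
    contributes the mixture p_i * L_Ia + (1 - p_i) * L_CC."""
    out = []
    for mu in grid:
        logp = 0.0
        for d, p in zip(data, probs):
            logp += math.log(p * gauss(d, mu, sig_ia)
                             + (1 - p) * gauss(d, mu_cc, sig_cc))
        out.append(logp)
    return out

data  = [-0.1, 0.0, 0.1, 0.05, 1.0, 1.1]    # last two are likely contaminants
probs = [0.95, 0.95, 0.95, 0.95, 0.10, 0.10]
grid  = [i / 100 for i in range(-50, 151)]
logpost = beams_log_posterior(data, probs, grid)
mu_beams = grid[logpost.index(max(logpost))]   # stays near the Ia population
mu_naive = sum(data) / len(data)               # pulled away by contaminants
```

The mixture downweights likely contaminants automatically instead of cutting them, which is why BEAMS can use the whole photometric sample.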
PHOTOMETRIC SUPERNOVA COSMOLOGY WITH BEAMS AND SDSS-II
International Nuclear Information System (INIS)
Hlozek, Renée; Kunz, Martin; Bassett, Bruce; Smith, Mat; Newling, James; Varughese, Melvin; Kessler, Rick; Frieman, Joshua; Bernstein, Joseph P.; Kuhlmann, Steve; Marriner, John; Campbell, Heather; Lampeitl, Hubert; Nichol, Robert C.; Dilday, Ben; Falck, Bridget; Riess, Adam G.; Sako, Masao; Schneider, Donald P.
2012-01-01
Supernova (SN) cosmology without spectroscopic confirmation is an exciting new frontier, which we address here with the Bayesian Estimation Applied to Multiple Species (BEAMS) algorithm and the full three years of data from the Sloan Digital Sky Survey II Supernova Survey (SDSS-II SN). BEAMS is a Bayesian framework for using data from multiple species in statistical inference when one has the probability that each data point belongs to a given species, corresponding in this context to different types of SNe with their probabilities derived from their multi-band light curves. We run the BEAMS algorithm on both Gaussian and more realistic SNANA simulations with of order 10^4 SNe, testing the algorithm against various pitfalls one might expect in the new and somewhat uncharted territory of photometric SN cosmology. We compare the performance of BEAMS to that of both mock spectroscopic surveys and photometric samples that have been cut using typical selection criteria. The latter typically either are biased due to contamination or have significantly larger contours in the cosmological parameters due to small data sets. We then apply BEAMS to the 792 SDSS-II photometric SNe with host spectroscopic redshifts. In this case, BEAMS reduces the area of the Ω_m, Ω_Λ contours by a factor of three relative to the case where only spectroscopically confirmed data are used (297 SNe). In the case of flatness, the constraints obtained on the matter density applying BEAMS to the photometric SDSS-II data are Ω_m^BEAMS = 0.194 ± 0.07. This illustrates the potential power of BEAMS for future large photometric SN surveys such as the Large Synoptic Survey Telescope.
Vortex dynamics and Lagrangian statistics in a model for active turbulence.
James, Martin; Wilczek, Michael
2018-02-14
Cellular suspensions such as dense bacterial flows exhibit a turbulence-like phase under certain conditions. We study this phenomenon of "active turbulence" statistically by using numerical tools. Following Wensink et al. (Proc. Natl. Acad. Sci. U.S.A. 109, 14308 (2012)), we model active turbulence by means of a generalized Navier-Stokes equation. Two-point velocity statistics of active turbulence, both in the Eulerian and the Lagrangian frame, is explored. We characterize the scale-dependent features of two-point statistics in this system. Furthermore, we extend this statistical study with measurements of vortex dynamics in this system. Our observations suggest that the large-scale statistics of active turbulence is close to Gaussian with sub-Gaussian tails.
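A minimal example of the two-point velocity statistics studied here is the second-order structure function S2(r) = ⟨(u(x+r) − u(x))²⟩. For a synthetic field u(x) = sin(x) on a periodic grid it can be checked against the exact result S2(r) = 1 − cos(r); this is a generic sketch, unrelated to the paper's simulation data:

```python
import math

def structure_function(u, r_index):
    """Second-order two-point statistic S2(r) = <(u(x + r) - u(x))**2>
    on a periodic 1-D velocity record."""
    n = len(u)
    return sum((u[(i + r_index) % n] - u[i]) ** 2 for i in range(n)) / n

n = 1024
u = [math.sin(2 * math.pi * i / n) for i in range(n)]
s2 = structure_function(u, r_index=n // 4)   # separation r = pi/2
# exact result for u = sin(x): S2(r) = 1 - cos(r) = 1.0 at r = pi/2
```

Evaluating S2 over a range of separations is what reveals the scale-dependent features referred to in the abstract.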
Directory of Open Access Journals (Sweden)
Pawankumar Dnyandeo Tekale
2014-01-01
Full Text Available Objective: The aim of the study was to investigate the hyoid bone position and the head posture using lateral cephalograms in subjects with skeletal Class I and skeletal Class II patterns and to investigate gender differences. Materials and Methods: The study used lateral cephalograms of 40 subjects (20 with a skeletal Class I pattern; 20 with a skeletal Class II pattern). Lateral cephalograms were traced and analyzed for evaluation of the hyoid bone position and the head posture using 34 parameters. An independent-sample t-test was performed to compare the differences between the two groups and between genders in each group. Statistical tests were performed using NCSS 2007 software (NCSS, Kaysville, Utah, USA). Results: The linear measurements between the hyoid bone (H) and the cervical spine (CV2ia), the nasion-sella line, the palatal line, the nasion line, and the anterior nasal spine (ANS) to the perpendicular projection of H on the nasal line plane (H-NLP/ANS), as well as to the posterior cranial points (Bo, Ar and S), were found to be smaller in skeletal Class II subjects. The measurement H-CV2ia was found to be smaller in males with a skeletal Class I pattern and H-CV4ia was found to be smaller in males with a skeletal Class II pattern. The natural head posture showed no significant gender differences. Conclusion: The position of the hyoid bone was closer to the cervical vertebrae horizontally in skeletal Class II subjects when compared with skeletal Class I subjects. In males, the hyoid bone position was closer to the cervical vertebrae horizontally in both skeletal Class I and skeletal Class II subjects.
Matrix algebra theory, computations and applications in statistics
Gentle, James E
2017-01-01
This textbook for graduate and advanced undergraduate students presents the theory of matrix algebra for statistical applications, explores various types of matrices encountered in statistics, and covers numerical linear algebra. Matrix algebra is one of the most important areas of mathematics in data science and statistical theory, and this second edition of a very popular textbook provides essential updates and comprehensive coverage of critical topics in these fields. Part I offers a self-contained description of relevant aspects of the theory of matrix algebra for applications in statistics. It begins with fundamental concepts of vectors and vector spaces; covers basic algebraic properties of matrices and analytic properties of vectors and matrices in multivariate calculus; and concludes with a discussion of operations on matrices in solutions of linear systems and in eigenanalysis. Part II considers various types of matrices encountered in statistics, such as...
International Nuclear Information System (INIS)
Robinson, M.T.
1993-01-01
The MARLOWE program was used to study the statistics of sputtering, taking as an example 1- to 100-keV Au atoms normally incident on static (001) and (111) Au crystals. The yield of sputtered atoms was examined as a function of the impact point of the incident particles ("ions") on the target surfaces. There were variations on two scales. The effects of the axial and planar channeling of the ions could be traced, the details depending on the orientation of the target and the energies of the ions. Locally, the sputtering yield was very sensitive to the impact point, small changes in position often producing large changes in yield. The results indicate strongly that the sputtering yield is a random ("chaotic") function of the impact point.
Testing statistical hypotheses
Lehmann, E L
2005-01-01
The third edition of Testing Statistical Hypotheses updates and expands upon the classic graduate text, emphasizing optimality theory for hypothesis testing and confidence sets. The principal additions include a rigorous treatment of large sample optimality, together with the requisite tools. In addition, an introduction to the theory of resampling methods such as the bootstrap is developed. The sections on multiple testing and goodness of fit testing are expanded. The text is suitable for Ph.D. students in statistics and includes over 300 new problems out of a total of more than 760. E.L. Lehmann is Professor of Statistics Emeritus at the University of California, Berkeley. He is a member of the National Academy of Sciences and the American Academy of Arts and Sciences, and the recipient of honorary degrees from the University of Leiden, The Netherlands and the University of Chicago. He is the author of Elements of Large-Sample Theory and (with George Casella) he is also the author of Theory of Point Estimat...
End-point sharpness in thermometric titrimetry.
Tyrrell, H J
1967-07-01
It is shown that the sharpness of an end-point in a thermometric titration in which the simple reaction A + B ⇌ AB takes place depends on Kc(A'), where K is the equilibrium constant for the reaction and c(A') is the total concentration of the titrand (A) in the reaction mixture. The end-point is sharp if (i) the enthalpy change in the reaction is not negligible, and (ii) Kc(A') > 10³. This shows that it should, for example, be possible to titrate 0.1 M acid, pK(A) = 10, using a thermometric end-point. Some aspects of thermometric titrimetry when Kc(A') < 10³ are also considered.
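The sharpness criterion can be verified directly: for A + B ⇌ AB at the equivalence point, the free titrand concentration x satisfies K = (c − x)/x², i.e. a quadratic Kx² + x − c = 0. A short sketch (variable names are mine):

```python
import math

def fraction_unreacted(K, c):
    """For A + B <=> AB at the equivalence point with total titrand
    concentration c, the free concentration x solves K*x**2 + x - c = 0."""
    x = (-1.0 + math.sqrt(1.0 + 4.0 * K * c)) / (2.0 * K)
    return x / c

# Tyrrell's criterion: the end-point is sharp when K * c(A') > 1e3
sharp = fraction_unreacted(K=1e4, c=0.1)   # K*c = 1e3 -> only ~3% unreacted
blunt = fraction_unreacted(K=1e2, c=0.1)   # K*c = 10  -> ~27% unreacted
```

At Kc(A') = 10³ the reaction is essentially complete at the end-point, so the temperature-volume curve bends sharply; at Kc(A') = 10 a quarter of the titrand remains free and the end-point is rounded.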
An introduction to inferential statistics: A review and practical guide
International Nuclear Information System (INIS)
Marshall, Gill; Jonker, Leon
2011-01-01
Building on the first part of this series regarding descriptive statistics, this paper demonstrates why it is advantageous for radiographers to understand the role of inferential statistics in deducing conclusions from a sample and their application to a wider population. This is necessary so radiographers can understand the work of others, can undertake their own research and evidence base their practice. This article explains p values and confidence intervals. It introduces the common statistical tests that comprise inferential statistics, and explains the use of parametric and non-parametric statistics. To do this, the paper reviews relevant literature, and provides a checklist of points to consider before and after applying statistical tests to a data set. The paper provides a glossary of relevant terms and the reader is advised to refer to this when any unfamiliar terms are used in the text. Together with the information provided on descriptive statistics in an earlier article, it can be used as a starting point for applying statistics in radiography practice and research.
An introduction to inferential statistics: A review and practical guide
Energy Technology Data Exchange (ETDEWEB)
Marshall, Gill, E-mail: gill.marshall@cumbria.ac.u [Faculty of Health, Medical Sciences and Social Care, University of Cumbria, Lancaster LA1 3JD (United Kingdom); Jonker, Leon [Faculty of Health, Medical Sciences and Social Care, University of Cumbria, Lancaster LA1 3JD (United Kingdom)
2011-02-15
Building on the first part of this series regarding descriptive statistics, this paper demonstrates why it is advantageous for radiographers to understand the role of inferential statistics in deducing conclusions from a sample and their application to a wider population. This is necessary so radiographers can understand the work of others, can undertake their own research and evidence base their practice. This article explains p values and confidence intervals. It introduces the common statistical tests that comprise inferential statistics, and explains the use of parametric and non-parametric statistics. To do this, the paper reviews relevant literature, and provides a checklist of points to consider before and after applying statistical tests to a data set. The paper provides a glossary of relevant terms and the reader is advised to refer to this when any unfamiliar terms are used in the text. Together with the information provided on descriptive statistics in an earlier article, it can be used as a starting point for applying statistics in radiography practice and research.
Poisson branching point processes
International Nuclear Information System (INIS)
Matsuo, K.; Teich, M.C.; Saleh, B.E.A.
1984-01-01
We investigate the statistical properties of a special branching point process. The initial process is assumed to be a homogeneous Poisson point process (HPP). The initiating events at each branching stage are carried forward to the following stage. In addition, each initiating event independently contributes a nonstationary Poisson point process (whose rate is a specified function) located at that point. The additional contributions from all points of a given stage constitute a doubly stochastic Poisson point process (DSPP) whose rate is a filtered version of the initiating point process at that stage. The process studied is a generalization of a Poisson branching process in which random time delays are permitted in the generation of events. Particular attention is given to the limit in which the number of branching stages is infinite while the average number of added events per event of the previous stage is infinitesimal. In the special case when the branching is instantaneous this limit of continuous branching corresponds to the well-known Yule--Furry process with an initial Poisson population. The Poisson branching point process provides a useful description for many problems in various scientific disciplines, such as the behavior of electron multipliers, neutron chain reactions, and cosmic ray showers
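A count-only sketch of the branching construction described above (ignoring the temporal placement of events, which the paper treats in full): starting from a Poisson number of initiating events, each stage carries every event forward and adds a Poisson number of new events per event, so the expected count grows geometrically. Parameter values are illustrative:

```python
import math
import random

def branching_poisson_count(lam, mu, stages, rng):
    """Counts only: start with Poisson(lam) initiating events; at each stage
    every event is carried forward and independently adds Poisson(mu) new
    events, so E[N after m stages] = lam * (1 + mu)**m."""
    def poisson(mean):
        # Knuth's multiplication method; adequate for small means
        limit, k, p = math.exp(-mean), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1
    n = poisson(lam)
    for _ in range(stages):
        n += sum(poisson(mu) for _ in range(n))
    return n

rng = random.Random(42)
runs = [branching_poisson_count(lam=4.0, mu=0.5, stages=3, rng=rng)
        for _ in range(4000)]
mean = sum(runs) / len(runs)
# theory: 4.0 * 1.5**3 = 13.5
```

Letting the number of stages grow while the per-stage mean shrinks is the continuous-branching limit the abstract relates to the Yule-Furry process.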
Identification of Influential Points in a Linear Regression Model
Directory of Open Access Journals (Sweden)
Jan Grosz
2011-03-01
Full Text Available The article deals with the detection and identification of influential points in the linear regression model. Three methods for the detection of outliers and leverage points are described. These procedures can also be used for one-sample (independent) datasets. The paper also briefly describes theoretical aspects of several robust methods. Robust statistics is a powerful tool for increasing the reliability and accuracy of statistical modelling and data analysis. A simulation model of simple linear regression is presented.
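One standard detection recipe of the kind surveyed in the article combines leverage (from the hat matrix) with residuals into Cook's distance. A self-contained sketch for simple linear regression (the data are invented):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return my - b * mx, b

def cooks_distances(xs, ys):
    """Cook's distance D_i = r_i**2 * h_i / (p * s**2 * (1 - h_i)**2) for a
    simple linear regression (p = 2), with leverage h_i from the hat matrix."""
    n = len(xs)
    a, b = fit_line(xs, ys)
    res = [y - (a + b * x) for x, y in zip(xs, ys)]
    mx = sum(xs) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    s2 = sum(r * r for r in res) / (n - 2)     # residual variance estimate
    out = []
    for x, r in zip(xs, res):
        h = 1.0 / n + (x - mx) ** 2 / sxx      # leverage of observation x
        out.append(r * r * h / (2 * s2 * (1 - h) ** 2))
    return out

xs = [1, 2, 3, 4, 5, 6, 7, 20]
ys = [1.1, 2.0, 2.9, 4.2, 5.1, 5.8, 7.1, 2.0]  # last point is off-trend
d = cooks_distances(xs, ys)                     # the last distance dominates
```

The off-trend high-leverage point combines a large residual with a leverage near one, so its Cook's distance dwarfs the rest; this is exactly the kind of point the robust methods in the article are designed to resist.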
Matusiak, Agnieszka; Kuczer, Mariola; Czarniewska, Elżbieta; Rosiński, Grzegorz; Kowalik-Jankowska, Teresa
2014-09-01
Mono- and polynuclear copper(II) complexes of alloferon 1 with the point mutations (H1A) A(1)GVSGH(6)GQH(9)GVH(12)G (Allo1A) and (H9A) H(1)GVSGH(6)GQA(9)GVH(12)G (Allo9A) have been studied by potentiometric, UV-visible, CD, EPR spectroscopic and mass spectrometry (MS) methods. To obtain a complete complex speciation, different metal-to-ligand molar ratios ranging from 1:1 to 4:1 for Allo1A and to 3:1 for Allo9A were studied. The presence of a His residue in the first position of the peptide chain changes the coordination abilities of the Allo9A peptide in comparison to those of the Allo1A. The imidazole-N3 atom of the N-terminal His residue of the Allo9A peptide forms a stable 6-membered chelate with the terminal amino group. Furthermore, the presence of two additional histidine residues in the Allo9A peptide (H(6), H(12)) leads to the formation of the CuL complex with the 4N {NH2, NIm-H(1), NIm-H(6), NIm-H(12)} binding site over a wide pH range (5-8). For the Cu(II)-Allo1A system, the results demonstrated that at physiological pH 7.4 the predominant complex, CuH-1L, has the 3N {NH2, N(-), CO, NIm} coordination mode. The induction of phenoloxidase activity and of apoptosis in vivo in Tenebrio molitor cells by the ligands and their copper(II) complexes at pH 7.4 was studied. The Allo1A and Allo1K peptides and their copper(II) complexes displayed the lowest hemocytotoxic activity, while the most active was the Cu(II)-Allo9A complex formed at pH 7.4. The results may suggest that the N-terminal His(1) and His(6) residues are more important for proapoptotic properties in insects than those at positions 9 and 12 in the peptide chain. Copyright © 2014 Elsevier Inc. All rights reserved.
Statistical Decision Support Tools for System-Oriented Runway Management, Phase II
National Aeronautics and Space Administration — The feasibility of developing a statistical decision support system for traffic flow management in the terminal area and runway load balancing was demonstrated in...
Studies in Theoretical and Applied Statistics
Pratesi, Monica; Ruiz-Gazen, Anne
2018-01-01
This book includes a wide selection of the papers presented at the 48th Scientific Meeting of the Italian Statistical Society (SIS2016), held in Salerno on 8-10 June 2016. Covering a wide variety of topics ranging from modern data sources and survey design issues to measuring sustainable development, it provides a comprehensive overview of the current Italian scientific research in the fields of open data and big data in public administration and official statistics, survey sampling, ordinal and symbolic data, statistical models and methods for network data, time series forecasting, spatial analysis, environmental statistics, economic and financial data analysis, statistics in the education system, and sustainable development. Intended for researchers interested in theoretical and empirical issues, this volume provides interesting starting points for further research.
FULLY CONVOLUTIONAL NETWORKS FOR GROUND CLASSIFICATION FROM LIDAR POINT CLOUDS
Directory of Open Access Journals (Sweden)
A. Rizaldy
2018-05-01
Deep learning has been used massively for image classification in recent years, and its use for ground classification from LIDAR point clouds has also been studied recently. However, point clouds need to be converted into images in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image, which leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification. This is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of a CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques: on the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method yields 5.22% total error, 4.10% type I error, and 15.07% type II error. Compared with a previous CNN-based technique and the LAStools software, the proposed method reduces the total error and the type I error, while the type II error is slightly higher. The method was also tested on a very high point density LIDAR point cloud, resulting in 4.02% total error, 2.15% type I error and 6.14% type II error.
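The total, type I and type II error rates quoted above are the standard metrics for ground-filter evaluation; the sketch below shows how they are computed from ground-truth and predicted labels. The function name and toy labels are hypothetical, not from the paper.

```python
import numpy as np

def filter_errors(truth, pred):
    """Ground-filter error metrics (as in the ISPRS filter test).

    truth, pred: boolean arrays, True = ground point.
    Type I error: true ground points rejected as non-ground.
    Type II error: true non-ground points accepted as ground.
    """
    truth = np.asarray(truth, dtype=bool)
    pred = np.asarray(pred, dtype=bool)
    total = np.mean(truth != pred)        # all misclassified points
    type1 = np.mean(~pred[truth])         # among true ground points
    type2 = np.mean(pred[~truth])         # among true non-ground points
    return total, type1, type2

# Toy example: 3 ground points, 2 non-ground points
total, t1, t2 = filter_errors([1, 1, 1, 0, 0], [1, 0, 1, 0, 1])
```

Note that type I and type II errors are normalised by the ground and non-ground class sizes respectively, which is why a filter can lower one while raising the other, as reported above.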
A basic introduction to statistics for the orthopaedic surgeon.
Bertrand, Catherine; Van Riet, Roger; Verstreken, Frederik; Michielsen, Jef
2012-02-01
Orthopaedic surgeons should review the orthopaedic literature in order to keep pace with the latest insights and practices. A good understanding of basic statistical principles is of crucial importance to the ability to read articles critically, to interpret results and to arrive at correct conclusions. This paper explains some of the key concepts in statistics, including hypothesis testing, Type I and Type II errors, testing of normality, sample size and p values.
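The key concepts listed above (hypothesis testing, Type I and Type II errors, normality testing, p values) can be illustrated with a short sketch. The data, group sizes and significance threshold are hypothetical, and SciPy is assumed to be available.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical outcome scores for two patient groups
group_a = rng.normal(10.0, 2.0, size=30)
group_b = rng.normal(11.5, 2.0, size=30)

# Test normality before applying a parametric test
_, p_norm_a = stats.shapiro(group_a)
_, p_norm_b = stats.shapiro(group_b)

# Two-sample t-test: the null hypothesis H0 says the group means are equal
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Rejecting H0 at alpha = 0.05 caps the Type I error rate at 5%;
# failing to reject H0 when a real difference exists is a Type II error,
# whose probability shrinks as the sample size grows.
reject_h0 = p_value < 0.05
```

The p value is the probability, under H0, of data at least as extreme as that observed; it is not the probability that H0 is true.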
Overview of the TJ-II remote participation system
International Nuclear Information System (INIS)
Vega, J.; Sanchez, E.; Portas, A.; Pereira, A.; Mollinedo, A.; Munoz, J.A.; Ruiz, M.; Barrera, E.; Lopez, S.; Machon, D.; Castro, R.; Lopez, D.
2006-01-01
The TJ-II remote participation system (RPS) is focused on providing remote access to elements that depend exclusively on characteristics of the TJ-II environment: data acquisition, diagnostics control systems and TJ-II operation tracking. Four key points were taken into account prior to starting the software design: access security, software execution platforms, software maintenance and distribution and delivery of operation events. The first, access security, was addressed by means of a distributed authentication and authorization system, PAPI. Regarding the other points, the development was based on the use of web servers (due to their standard character, flexibility and scalability) and Java technologies (due to their open nature, security properties and technological maturity). Software deployment was prepared to make use of the Java Network Launching Protocol (JNLP). On-line message distribution was planned according to a message-oriented middleware. At present, the TJ-II RPS manages over 1000 digitization channels and 20 diagnostic control systems. The TJ-II RPS architecture is flexible, scalable and powerful enough to be applied to distributed environments and, in particular, it could be used in the ITER environment.
Kittiwisit, Piyanat; Bowman, Judd D.; Jacobs, Daniel C.; Beardsley, Adam P.; Thyagarajan, Nithyanandan
2018-03-01
We present a baseline sensitivity analysis of the Hydrogen Epoch of Reionization Array (HERA) and its build-out stages to one-point statistics (variance, skewness, and kurtosis) of redshifted 21 cm intensity fluctuation from the Epoch of Reionization (EoR) based on realistic mock observations. By developing a full-sky 21 cm light-cone model, taking into account the proper field of view and frequency bandwidth, utilizing a realistic measurement scheme, and assuming perfect foreground removal, we show that HERA will be able to recover statistics of the sky model with high sensitivity by averaging over measurements from multiple fields. All build-out stages will be able to detect variance, while skewness and kurtosis should be detectable for HERA128 and larger. We identify sample variance as the limiting constraint of the measurements at the end of reionization. The sensitivity can also be further improved by performing frequency windowing. In addition, we find that strong sample variance fluctuation in the kurtosis measured from an individual field of observation indicates the presence of outlying cold or hot regions in the underlying fluctuations, a feature that can potentially be used as an EoR bubble indicator.
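The one-point statistics named above (variance, skewness, kurtosis) reduce, for a single map, to moments of the pixel distribution. A minimal illustration on a hypothetical Gaussian brightness-temperature field, not the HERA pipeline, with SciPy assumed available:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical 21 cm fluctuation map (one frequency channel)
field = rng.normal(0.0, 5.0, size=(64, 64))

delta = field - field.mean()              # zero-mean fluctuations
variance = delta.var()                    # second moment
skewness = stats.skew(delta.ravel())      # third standardised moment
kurtosis = stats.kurtosis(delta.ravel())  # excess kurtosis: 0 for Gaussian
```

For a Gaussian field the skewness and excess kurtosis vanish on average, which is why non-zero values during reionization carry non-Gaussian information beyond the power spectrum.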
International Nuclear Information System (INIS)
Krivoruchenko, M.I.
1989-01-01
A detailed statistical analysis is carried out of the angular distribution of neutrino events observed in the Kamiokande II and IMB detectors at 07:35 UT on 23 February 1987. Distribution functions of the mean scattering angles in the reactions ν̄_e p → e⁺n and νe⁻ → νe⁻ are constructed, taking into account multiple Coulomb scattering and the experimental angular errors. The Smirnov and Wald-Wolfowitz run tests are used to test the hypothesis that the angular distributions of events from the two detectors agree with each other. Using the Kolmogorov and von Mises statistical criteria, we test the hypothesis that the recorded events all represent ν̄_e p → e⁺n inelastic scattering. The Neyman-Pearson test is then applied to each event to test the hypothesis ν̄_e p → e⁺n against the alternative νe⁻ → νe⁻. The hypotheses that the number of elastic events equals s=0, 1, 2, ... against the alternatives s≠0, 1, 2, ... are tested on the basis of the generalized likelihood ratio criterion, and confidence intervals for the number of elastic events are constructed. The current supernova models fail to give a satisfactory account of the angular distribution data. (orig.)
Mantovani, Daniela; Sutherland, Holly
2003-01-01
This paper reports an exercise to validate EUROMOD output for 1998 by comparing income statistics calculated from the baseline micro-output with comparable statistics from other sources, including the European Community Household Panel. The main potential reasons for discrepancies are identified. While there are some specific national issues that arise, there are two main general points to consider in interpreting EUROMOD estimates of social indicators across EU member States: (a) the method ...
Directory of Open Access Journals (Sweden)
Andry Leonard Je
2006-03-01
Calcium hydroxide points and chlorhexidine points are new drugs for eliminating bacteria in the root canal: the points release calcium hydroxide and chlorhexidine slowly, in a controlled manner, into the root canal. The purpose of the study was to determine the effectiveness of the calcium hydroxide point (Calcium Hydroxide Plus Point) and the chlorhexidine point in eliminating root canal bacteria in teeth with necrotic pulp. In this study 14 subjects were divided into 2 groups: the first group was treated with the calcium hydroxide point and the second with the chlorhexidine point. The bacteriological samples were measured by spectrophotometry. Paired t-test analysis (before and after) showed a significant difference in both the first and second groups, while the independent t-test comparing the effectiveness of the two groups showed no significant difference. Although the difference was not statistically significant, the second group eliminated more bacteria than the first. The present findings indicate that over a seven-day period the chlorhexidine point performed better than the calcium hydroxide point. The conclusion is that both the chlorhexidine point and the calcium hydroxide point, used as root canal medicaments, effectively eliminate root canal bacteria in teeth with necrotic pulp.
The importance of topographically corrected null models for analyzing ecological point processes.
McDowall, Philip; Lynch, Heather J
2017-07-01
Analyses of point process patterns and related techniques (e.g., MaxEnt) make use of the expected number of occurrences per unit area and second-order statistics based on the distance between occurrences. Ecologists working with point process data often assume that points exist on a two-dimensional x-y plane or within a three-dimensional volume, when in fact many observed point patterns are generated on a two-dimensional surface existing within three-dimensional space. For many surfaces, however, such as the topography of landscapes, the projection from the surface to the x-y plane preserves neither area nor distance. As such, when these point patterns are implicitly projected to and analyzed in the x-y plane, our expectations of the point pattern's statistical properties may not be met. When used in hypothesis testing, we find that the failure to account for the topography of the generating surface may bias statistical tests that incorrectly identify clustering and, furthermore, may bias coefficients in inhomogeneous point process models that incorporate slope as a covariate. We demonstrate the circumstances under which this bias is significant, and present simple methods that allow point processes to be simulated with corrections for topography. These point patterns can then be used to generate "topographically corrected" null models against which observed point processes can be compared. © 2017 by the Ecological Society of America.
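One way to build the "topographically corrected" null model described above is to simulate complete spatial randomness with respect to surface area rather than planar area, so that steeper terrain receives proportionally more points per planar unit. The sketch below uses rejection sampling on an illustrative surface z = sin(x); the surface and function names are hypothetical, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def surface_area_factor(x, y):
    """Local area scaling sqrt(1 + fx^2 + fy^2) for the toy surface z = sin(x)."""
    fx = np.cos(x)            # dz/dx
    fy = np.zeros_like(y)     # dz/dy = 0 for this surface
    return np.sqrt(1.0 + fx**2 + fy**2)

def csr_on_surface(n, xmax=2 * np.pi, ymax=1.0):
    """Sample n points uniform per unit *surface* area via rejection sampling."""
    gmax = np.sqrt(2.0)       # maximum of the area factor on this surface
    pts = []
    while len(pts) < n:
        x = rng.uniform(0.0, xmax)
        y = rng.uniform(0.0, ymax)
        # Accept with probability proportional to local surface area
        if rng.uniform(0.0, gmax) < surface_area_factor(x, y):
            pts.append((x, y))
    return np.array(pts)

pts = csr_on_surface(500)
```

Realizations generated this way can serve as the null model against which an observed pattern's second-order statistics are compared, instead of planar complete spatial randomness.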
Statistical mechanics for a system with imperfections: pt. 1
International Nuclear Information System (INIS)
Choh, S.T.; Kahng, W.H.; Um, C.I.
1982-01-01
Statistical mechanics is extended to treat a system where parts of the Hamiltonian are randomly varying. As the starting point of the theory, the statistical correlation among energy levels is neglected, allowing use of the central limit theorem of the probability theory. (Author)
Statistical mechanics for solitons in liquid Helium. II
International Nuclear Information System (INIS)
Evangelista, L.R.; Ventura, I.
1988-06-01
The thermal cloud is refined through the introduction of a second condensate field ψ_c, which condenses in the instantaneous packet wave function and provides a coherent envelope to modulate the bound states. The squared amplitude of the second classical field, |ψ_c|², is equal to the thermal cloud density. The bound-state zero-point kinetic energy now belongs to the classical field kinetic term, and this leads us to subtract another counter-term from the thermal cloud Hamiltonian. This results in a new gap, given by the kinetic energy (1/2)m ṽ² due to the soliton's motion. Besides the superfluid and the normal liquid, we report the theoretical existence of two other phases. (author)
International Nuclear Information System (INIS)
French, J.B.
1974-01-01
The concepts of statistical behavior and symmetry are presented from the point of view of many body spectroscopy. Remarks are made on methods for the evaluation of moments, particularly widths, for the purpose of giving a feeling for the types of mathematical structures encountered. Applications involving ground state energies, spectra, and level densities are discussed. The extent to which Hamiltonian eigenstates belong to irreducible representations is mentioned. (4 figures, 1 table) (U.S.)
Statistical Yearbook of Norway 2012
Energy Technology Data Exchange (ETDEWEB)
NONE
2012-07-01
The Statistical Yearbook of Norway 2012 contains statistics on Norway and main figures for the Nordic countries and other countries selected from international statistics. The international over-views are integrated with the other tables and figures. The selection of tables in this edition is mostly the same as in the 2011 edition. The yearbook's 480 tables and figures present the main trends in official statistics in most areas of society. The list of tables and figures and an index at the back of the book provide easy access to relevant information. In addition, source information and Internet addresses below the tables make the yearbook a good starting point for those who are looking for more detailed statistics. The statistics are based on data gathered in statistical surveys and from administrative data, which, in cooperation with other public institutions, have been made available for statistical purposes. Some tables have been prepared in their entirety by other public institutions. The statistics follow approved principles, standards and classifications that are in line with international recommendations and guidelines. Content: 00. General subjects; 01. Environment; 02. Population; 03. Health and social conditions; 04. Education; 05. Personal economy and housing conditions; 06. Labour market; 07. Recreational, cultural and sporting activities; 08. Prices and indices; 09. National Economy and external trade; 10. Industrial activities; 11. Financial markets; 12. Public finances; Geographical survey.(eb)
Quantum statistics and liquid helium 3 - helium 4 mixtures
International Nuclear Information System (INIS)
Cohen, E.G.D.
1979-01-01
The behaviour of liquid helium 3 - helium 4 mixtures is considered from the point of view of the manifestation of quantum statistics effects in macroscopic physics. Bose-Einstein statistics is shown to be of great importance for understanding the properties of superfluid helium 4, whereas Fermi-Dirac statistics is of importance for understanding the properties of helium 3. Without taking into consideration the interaction between the helium atoms, it is impossible to understand the basic properties of liquid helium 3 - helium 4 mixtures at constant pressure. A simple model of the liquid helium 3 - helium 4 mixture is proposed, namely a binary mixture consisting of hard spheres of two types, obeying Fermi-Dirac and Bose-Einstein statistics respectively. This model correctly predicts the most surprising peculiarities of the phase diagrams of concentration versus temperature for helium mixtures. In particular, the Bose-Einstein statistics of helium 4 is responsible for the phase separation of helium mixtures at low temperatures, which begins at a peculiar critical point. The Fermi-Dirac statistics of helium 3 results in incomplete phase separation close to absolute zero, which permits the operation of a powerful cooling device, the helium dilution refrigerator.
Statistics and Data Interpretation for Social Work
Rosenthal, James
2011-01-01
"Without question, this text will be the most authoritative source of information on statistics in the human services. From my point of view, it is a definitive work that combines a rigorous pedagogy with a down to earth (commonsense) exploration of the complex and difficult issues in data analysis (statistics) and interpretation. I welcome its publication.". -Praise for the First Edition. Written by a social worker for social work students, this is a nuts and bolts guide to statistics that presents complex calculations and concepts in clear, easy-to-understand language. It includes
International Nuclear Information System (INIS)
Chen, Lin; Fan, Xiangtao; Du, Xiaoping
2014-01-01
Point cloud filtering is the basic and key step in LiDAR data processing. The Adaptive Triangulated Irregular Network Modelling (ATINM) algorithm and the Threshold Segmentation on Elevation Statistics (TSES) algorithm are among the mature algorithms. However, few studies concentrate on the parameter selection of ATINM and the iteration condition of TSES, which can greatly affect the filtering results. The paper first examines these two key problems in two different terrain environments: for a flat area, small height and angle parameters perform well, while for areas with complex feature changes, large height and angle parameters perform well. A single segmentation pass is enough for flat areas, whereas repeated segmentations are essential for complex areas. The paper then compares and analyses the results of the two methods. ATINM has a larger type I error on both data sets, as it sometimes removes excessive points; TSES has a larger type II error on both data sets, as it ignores topological relations between points. ATINM performs well even over a large region with dramatic topography, while TSES is more suitable for small regions with flat topography. Different parameters and iterations can thus cause relatively large filtering differences.
Randomised clinical trial of five ear acupuncture points for the treatment of overweight people.
Yeo, Sujung; Kim, Kang Sik; Lim, Sabina
2014-04-01
To evaluate the efficacy of the five ear acupuncture points (Shen-men, Spleen, Stomach, Hunger, Endocrine) generally used in Korean clinics for treating obesity, and compare them with the Hunger acupuncture point alone. A randomised controlled clinical trial was conducted in 91 Koreans (16 male and 75 female, body mass index (BMI)≥23) who had not received any other weight control treatment within the past 6 months. Subjects were divided randomly into treatment I, treatment II or sham control groups and received unilateral auricular acupuncture with indwelling needles replaced weekly for 8 weeks. Treatment group I received acupuncture at the five ear acupuncture points, treatment group II at the Hunger acupuncture point only, and the sham control group received acupuncture at the same five ear acupuncture points used in treatment I, but with the needles removed immediately after insertion. BMI, waist circumference, weight, body fat mass (BFM), percentage body fat and blood pressure were measured at baseline and at 4 and 8 weeks after treatment. For the 58 participants who provided data at 8 weeks, significant differences in BMI, weight and BFM were found between the treatment and control groups. Treatment groups I and II showed 6.1% and 5.7% reductions in BMI, respectively. Both the five ear acupuncture points generally used in Korean clinics and the Hunger point alone are effective for treating overweight people.
Some biodiversity points and suggestions for the Myanmar Protected Area System
Daniel H. Henning
2007-01-01
This paper is divided into a brief background section followed by Part I: Biodiversity Points, and Part II: Suggestions that are needed for the ecological integrity of actual and potential protected areas in Myanmar. Part I consists of general and Myanmar biodiversity considerations, and Part II consists of the following suggestions: (1) international financial and...
Directory of Open Access Journals (Sweden)
Hua-Fang Liao
2010-03-01
Conclusion: If only one cutoff point can be chosen, the authors suggest that clinicians choose cutoff point B when using the Taipei II for screening. However, the two cutoff points of the Taipei II, a combination of strategies A and B, can also be used clinically.
Directory of Open Access Journals (Sweden)
Ya-Sha Zhou
2017-09-01
AIM: To assess the effect of Qingguang'an II on the expression of PAX6, Ngn1 and Ngn2 mRNA in rats with chronic high intraocular pressure. METHODS: Forty male SD rats were randomly divided into six groups: A, blank group; B, model group; C, Qingguang'an II low dose group; D, Qingguang'an II moderate dose group; E, Qingguang'an II high dose group; F, Yimaikang disket group. In groups B, C, D, E and F, a model of chronic high intraocular pressure (IOP) was established by cauterizing the superficial scleral vein; the model was considered successful when the monitored IOP remained above 25 mmHg for 8 weeks. Eye tissues were obtained after intragastric administration for 2 and 4 weeks, and the expression of PAX6, Ngn1 and Ngn2 mRNA was investigated by real-time PCR. RESULTS: At the 2-week time point, PAX6, Ngn1 and Ngn2 mRNA in group B were expressed at significantly lower levels than in the other groups. CONCLUSION: In summary, Qingguang'an II and Yimaikang disket can remarkably increase the expression of PAX6, Ngn1 and Ngn2, suggesting protection of the optic nerve of rats damaged by chronic high IOP. Moreover, this study indicated that, for protection of the optic nerve in rats with chronic high IOP, the high dose of Qingguang'an II at the 4-week time point was the better choice.
International Nuclear Information System (INIS)
Jemikalajah, Johnson D.; Okogun, Godwin Ray A.
2009-01-01
To assess the prevalence of human immunodeficiency virus (HIV) and pulmonary tuberculosis (PTB) in the study population in Delta State of Nigeria. Two hundred and five patients suspected of HIV and TB were prospectively studied in Kwale, Agbor and Eku in Delta State of Nigeria from February 2006 to February 2008. Human immunodeficiency virus status was determined using the World Health Organization system II, and the Ziehl-Neelsen staining technique was used for TB screening. A point prevalence rate of 53.2% was obtained for HIV, 49.3% for TB, and 16.6% for HIV/TB. The proportions of HIV-positive (p=0.890, p=0.011, p=0.006) and TB-positive (p=0.135, p=0.0003, p=0.0001) subjects were statistically significant among the suspected subjects, while the HIV/TB-positive cases were not statistically significant (p=0.987, p=0.685, p=0.731). Our study showed that HIV and PTB infections remain high in parts of Delta State in Nigeria. (author)
The statistical analysis of anisotropies
International Nuclear Information System (INIS)
Webster, A.
1977-01-01
One of the many uses to which a radio survey may be put is an analysis of the distribution of the radio sources on the celestial sphere, to find out whether they are bunched into clusters or lie in preferred regions of space. There are many methods of testing for clustering in point processes, and since they are not all equally good, this contribution is presented as a brief guide to what seem to be the best of them. The radio sources certainly do not show very strong clustering and may well be entirely unclustered, so if a statistical method is to be useful it must be both powerful and flexible. A statistic is powerful in this context if it can efficiently distinguish a weakly clustered distribution of sources from an unclustered one, and it is flexible if it can be applied in a way which avoids mistaking defects in the survey for true peculiarities in the distribution of sources. The paper divides clustering statistics into two classes: number density statistics and log N/log S statistics. (Auth.)
Zero-point field in curved spaces
International Nuclear Information System (INIS)
Hacyan, S.; Sarmiento, A.; Cocho, G.; Soto, F.
1985-01-01
Boyer's conjecture that the thermal effects of acceleration are manifestations of the zero-point field is further investigated within the context of quantum field theory in curved spaces. The energy-momentum current for a spinless field is defined rigorously and used as the basis for investigating the energy density observed in a noninertial frame. The following examples are considered: (i) uniformly accelerated observers, (ii) two-dimensional Schwarzschild black holes, (iii) the Einstein universe. The energy spectra which have been previously calculated appear in the present formalism as an additional contribution to the energy of the zero-point field, but particle creation does not occur. It is suggested that the radiation produced by gravitational fields or by acceleration is a manifestation of the zero-point field and of the same nature (whether real or virtual)
International Nuclear Information System (INIS)
Duri, D.
2012-01-01
This experimental work focuses on the statistical study of the high Reynolds number turbulent velocity field in an inertially driven liquid helium axisymmetric round jet, at temperatures above and below the lambda transition (between 2.3 K and 1.78 K), in a cryogenic wind tunnel. The ability to finely tune the fluid temperature allows us to perform a comparative study of quantum He II turbulence within the classical framework of the Kolmogorov turbulent cascade, in order to better understand the energy cascade process in a superfluid. In particular, we focus on intermittency phenomena in both the He I and He II phases by measuring high-order statistics of the longitudinal velocity increments by means of the flatness and skewness statistical estimators. A first phase consisted in developing the cryogenic facility, a closed-loop pressurized and temperature-regulated wind tunnel, and adapting the classic hot-wire anemometry technique to such a challenging low-temperature environment. A detailed calibration of the fully developed turbulent flow was then carried out at 2.3 K, at Reynolds numbers based on the Taylor length scale of up to 2600, in order to qualify our test set-up and to identify possible facility-related spurious phenomena. This procedure showed that the statistical properties of the longitudinal velocity increments are in good agreement with previous results. By further reducing the temperature of the working fluid (at constant pressure) below the lambda point, down to 1.78 K, local velocity measurements were performed at different superfluid density fractions. The results show a classic behaviour of the He II energy cascade at large scales while, at smaller scales, a deviation is observed. The occurrence of this phenomenon, which requires further investigation and modelling, is highlighted by the observed change of sign of the third-order structure function.
Statistical models based on conditional probability distributions
International Nuclear Information System (INIS)
Narayanan, R.S.
1991-10-01
We present a formulation of statistical mechanics models based on conditional probability distribution rather than a Hamiltonian. We show that it is possible to realize critical phenomena through this procedure. Closely linked with this formulation is a Monte Carlo algorithm, in which a configuration generated is guaranteed to be statistically independent from any other configuration for all values of the parameters, in particular near the critical point. (orig.)
The statistics of galaxies: beyond correlation functions
International Nuclear Information System (INIS)
Lachieze-Rey, M.
1988-01-01
I mention some normalization problems encountered when estimating the 2-point correlation functions in samples of galaxies of different average densities. I present some aspects of the void probability function as a statistical indicator, free of such normalization problems. Finally I suggest a new statistical approach to give an account in a synthetic way of those aspects of the galaxy distribution that a conventional method is unable to characterize
Statistical Learning Theory: Models, Concepts, and Results
von Luxburg, Ulrike; Schoelkopf, Bernhard
2008-01-01
Statistical learning theory provides the theoretical basis for many of today's machine learning algorithms. In this article we attempt to give a gentle, non-technical overview of the key ideas and insights of statistical learning theory. We target a broad audience, not necessarily machine learning researchers. This paper can serve as a starting point for people who want to get an overview of the field before diving into technical details.
Optical spectrum of HDE 226868 = Cygnus X-1. II. Spectrophotometry and mass estimates
International Nuclear Information System (INIS)
Gies, D.R.; Bolton, C.T.
1986-01-01
In part I of this series, Gies and Bolton (1982) presented the results of radial velocity measurements of 78 high-dispersion spectrograms of HDE 226868 = Cyg X-1. For the present study, 55 of the best plates considered by Gies and Bolton were selected to form 10 average spectra. An overall mean spectrum with S/N ratio = 300 was formed by coadding the 10 averaged spectra. There is no evidence for statistically significant variations of the spectral type about the mean value of O9.7 Iab, and all the absorption line strengths are normal for the spectral type. Evidence is presented that the He II lambda 4686 emission line is formed in the stellar wind above the substellar point on the visible star. Probable masses for the visible star and its companion are 33 and 16 solar masses, respectively. Theoretical He II lambda 4686 emission line profiles are computed for the focused stellar wind model for the Cyg X-1 system considered by Friend and Castor (1982). 105 references
Statistical properties of earthquakes clustering
Directory of Open Access Journals (Sweden)
A. Vecchio
2008-04-01
Full Text Available Often in nature the temporal distribution of inhomogeneous stochastic point processes can be modeled as a realization of renewal Poisson processes with a variable rate. Here we investigate one of the classical examples, namely, the temporal distribution of earthquakes. We show that this process strongly departs from Poisson statistics for both catalogue and sequence data sets. This indicates the presence of correlations in the system, probably related to the stressing perturbation characterizing the seismicity in the area under analysis. As shown by this analysis, the catalogues, at variance with the sequences, show common statistical properties.
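The departure from Poisson statistics reported in this abstract is commonly checked by comparing inter-event waiting times against an exponential law. A minimal sketch of such a test on synthetic event times (not the authors' catalogue or code; the Kolmogorov-Smirnov statistic is computed by hand):

```python
import math
import random

random.seed(1)

def ks_exponential(times):
    """One-sample KS statistic for inter-event waiting times against an
    exponential distribution whose rate is fitted from the data."""
    waits = sorted(t2 - t1 for t1, t2 in zip(times, times[1:]))
    rate = len(waits) / sum(waits)          # MLE of the Poisson rate
    n = len(waits)
    d = 0.0
    for i, w in enumerate(waits):
        cdf = 1.0 - math.exp(-rate * w)     # exponential CDF
        d = max(d, abs((i + 1) / n - cdf), abs(cdf - i / n))
    return d

# A genuine Poisson process: waiting times are exponential, so D stays small.
poisson_times = []
t = 0.0
for _ in range(2000):
    t += random.expovariate(1.0)
    poisson_times.append(t)

# A strongly clustered process: bursts of events separated by long gaps.
clustered_times = []
t = 0.0
for _ in range(200):
    t += random.expovariate(0.05)           # long inter-burst gap
    for _ in range(10):
        t += random.expovariate(50.0)       # short intra-burst waits
        clustered_times.append(t)

d_poisson = ks_exponential(poisson_times)
d_clustered = ks_exponential(clustered_times)
```

For a true Poisson process the KS distance stays near zero (of order n^(-1/2)), while clustering of the kind described in the abstract inflates it by an order of magnitude.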
Probability, statistics, and computational science.
Beerenwinkel, Niko; Siebourg, Juliane
2012-01-01
In this chapter, we review basic concepts from probability theory and computational statistics that are fundamental to evolutionary genomics. We provide a very basic introduction to statistical modeling and discuss general principles, including maximum likelihood and Bayesian inference. Markov chains, hidden Markov models, and Bayesian network models are introduced in more detail as they occur frequently and in many variations in genomics applications. In particular, we discuss efficient inference algorithms and methods for learning these models from partially observed data. Several simple examples are given throughout the text, some of which point to models that are discussed in more detail in subsequent chapters.
First impressions and beyond: marketing your practice in touch points--Part II.
Bisera, Cheryl
2012-01-01
When calling in a marketing expert to boost a practice's numbers, administrators and providers are usually looking for external marketing strategies--ways to attract new patients to the practice. However, one of the most important, yet often overlooked, elements to successfully marketing a practice is the very important work of retaining current patients and turning them into enthusiastic referrers. When new patients are simply filling the place of previous patients that have moved on, you are not building solid practice growth. You can create an atmosphere of loyal referring patients by providing positive touch points that fulfill the needs of your patients. This article will cover touch points that occur before a patient has chosen your practice. Laying the groundwork for positive touch points will give your marketing efforts a snowball effect, build growth, and deliver the most bang for your marketing bucks.
Lindskog, Marcus; Winman, Anders; Juslin, Peter
2013-01-01
The capacity of short-term memory is a key constraint when people make online judgments requiring them to rely on samples retrieved from memory (e.g., Dougherty & Hunter, 2003). In this article, the authors compare 2 accounts of how people use knowledge of statistical distributions to make point estimates: either by retrieving precomputed…
Statistical evaluation and measuring strategy for extremely small line shifts
International Nuclear Information System (INIS)
Hansen, P.G.
1978-01-01
For a measuring situation limited by counting statistics, but where the level of precision is such that possible systematic errors are a major concern, it is proposed to determine the position of a spectral line from a measured line segment by applying a bias correction to the centre of gravity of the segment. This procedure is statistically highly efficient and not sensitive to small errors in assumptions about the line shape. The counting strategy for an instrument that takes data point by point is also considered. It is shown that an optimum ("two-point") strategy exists; a scan of the central part of the line is 68% efficient by this standard. (Auth.)
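The centre-of-gravity estimator described above, and the effect of a bias correction, can be illustrated on a noise-free Gaussian line measured over an asymmetric segment (a toy reconstruction; the paper's actual bias-correction formula and counting statistics are not reproduced):

```python
import math

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def cog(xs, ys):
    """Centre of gravity of a measured line segment."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(ys)

# True line: Gaussian centred at mu = 100 with sigma = 2, sampled point by
# point over the segment 99..103, which is asymmetric about the line centre
# and therefore biases the raw centre of gravity towards the segment centre.
mu, sigma = 100.0, 2.0
xs = [99.0 + 0.25 * i for i in range(17)]
ys = [gaussian(x, mu, sigma) for x in xs]

raw = cog(xs, ys)                 # biased estimate

# Bias correction by fixed-point iteration: shift the estimate until the
# assumed line shape, placed at the estimate, reproduces the measured
# centre of gravity over the same segment.
est = raw
for _ in range(60):
    est += raw - cog(xs, [gaussian(x, est, sigma) for x in xs])
```

On this noise-free example the raw centre of gravity is off by several tenths of the sampling step, while the corrected estimate recovers the true line position.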
Extending statistical boosting. An overview of recent methodological developments.
Mayr, A; Binder, H; Gefeller, O; Schmid, M
2014-01-01
Boosting algorithms to simultaneously estimate and select predictor effects in statistical models have gained substantial interest during the last decade. This review highlights recent methodological developments regarding boosting algorithms for statistical modelling especially focusing on topics relevant for biomedical research. We suggest a unified framework for gradient boosting and likelihood-based boosting (statistical boosting) which have been addressed separately in the literature up to now. The methodological developments on statistical boosting during the last ten years can be grouped into three different lines of research: i) efforts to ensure variable selection leading to sparser models, ii) developments regarding different types of predictor effects and how to choose them, iii) approaches to extend the statistical boosting framework to new regression settings. Statistical boosting algorithms have been adapted to carry out unbiased variable selection and automated model choice during the fitting process and can nowadays be applied in almost any regression setting in combination with a large amount of different types of predictor effects.
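The fit-residuals-refit loop that underlies gradient boosting can be sketched in a few lines. This toy version uses squared-error loss and one-dimensional decision stumps as base learners; it illustrates the principle only, not any of the packages reviewed in the article:

```python
def fit_stump(x, r):
    """Best single-split stump on a 1-D feature x for residuals r."""
    best = None
    order = sorted(range(len(x)), key=lambda i: x[i])
    for k in range(1, len(x)):
        thr = 0.5 * (x[order[k - 1]] + x[order[k]])
        left = [r[i] for i in range(len(x)) if x[i] < thr]
        right = [r[i] for i in range(len(x)) if x[i] >= thr]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((v - lm) ** 2 for v in left)
               + sum((v - rm) ** 2 for v in right))
        if best is None or sse < best[0]:
            best = (sse, thr, lm, rm)
    _, thr, lm, rm = best
    return lambda v, thr=thr, lm=lm, rm=rm: lm if v < thr else rm

def boost(x, y, n_rounds=200, nu=0.1):
    """Gradient boosting for squared error: repeatedly fit a stump to the
    current residuals and add it with a small learning rate nu."""
    f0 = sum(y) / len(y)
    learners = []
    pred = [f0] * len(y)
    for _ in range(n_rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        h = fit_stump(x, resid)
        learners.append(h)
        pred = [pi + nu * h(xi) for pi, xi in zip(pred, x)]
    return lambda v: f0 + nu * sum(h(v) for h in learners)

# Step-function target: easy for stumps, impossible for a single line.
x = [i / 20 for i in range(40)]
y = [0.0 if xi < 1.0 else 1.0 for xi in x]
model = boost(x, y)
```

After a couple of hundred rounds the boosted ensemble reproduces the step almost exactly; shrinking nu slows the fit but is what gives boosting its regularizing, variable-selecting character discussed in the review.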
Brouwer, D.; Meijer, R.R.; Zevalkink, D.J.
2013-01-01
Several researchers have emphasized that item response theory (IRT)-based methods should be preferred over classical approaches in measuring change for individual patients. In the present study we discuss and evaluate the use of IRT-based statistics to measure statistically significant individual
Topology and statistics in zero dimensions
International Nuclear Information System (INIS)
Aneziris, Charilaos.
1992-05-01
It has been suggested that space-time may be intrinsically discrete rather than continuous. Here we review some topological notions of discrete manifolds, in particular ones made out of a finite number of points, and discuss the possibilities for statistics in such spaces. (author)
Game Related Statistics Which Discriminate Between Winning and Losing Under-16 Male Basketball Games
Lorenzo, Alberto; Gómez, Miguel Ángel; Ortega, Enrique; Ibáñez, Sergio José; Sampaio, Jaime
2010-01-01
The aim of the present study was to identify the game-related statistics that discriminate between winning and losing teams in under-16 male basketball games. The sample comprised all 122 games of the 2004 and 2005 Under-16 European Championships. The game-related statistics analysed were free-throws (successful and unsuccessful), 2- and 3-point field-goals (successful and unsuccessful), offensive and defensive rebounds, blocks, assists, fouls, turnovers and steals. The winning teams exhibited lower ball possessions per game and better offensive and defensive efficacy coefficients than the losing teams. Results from discriminant analysis were statistically significant and emphasized several structure coefficients (SC). In close games (final score differences below 9 points), the discriminant variables were turnovers (SC = -0.47) and assists (SC = 0.33). In balanced games (final score differences between 10 and 29 points), the variables that discriminated between the groups were successful 2-point field-goals (SC = -0.34) and defensive rebounds (SC = -0.36); and in unbalanced games (final score differences above 30 points) the variable that best discriminated both groups was successful 2-point field-goals (SC = 0.37). These results show that these players' specific characteristics result in a different game-related statistical profile and point out the importance of the perceptive and decision-making processes in practice and in competition. Key points: The players' game-related statistical profile varied according to game type, game outcome and formative category in basketball. The results of this work help to point out the different player performance described in U-16 men's basketball teams compared with senior and professional men's basketball teams. The results obtained enhance the importance of the perceptive and decision-making processes in practice and in competition.
Isotopic safeguards statistics
International Nuclear Information System (INIS)
Timmerman, C.L.; Stewart, K.B.
1978-06-01
The methods and results of our statistical analysis of isotopic data using isotopic safeguards techniques are illustrated using example data from the Yankee Rowe reactor. The statistical methods used in this analysis are paired comparisons and regression analyses. A paired comparison results when a sample from a batch is analyzed by two different laboratories. Paired comparison techniques can be used with regression analysis to detect and identify outlier batches. The second analysis tool, linear regression, involves comparing various regression approaches. These approaches use two basic types of models: the intercept model (y = α + βx) and the initial point model [y − y₀ = β(x − x₀)]. The intercept model fits strictly the exposure or burnup values of isotopic functions, while the initial point model uses the exposure values plus the initial or fabricator's data values in the regression analysis. Two fitting methods are applied to each of these models: (1) the usual least squares approach, in which x is measured without error, and (2) Deming's approach, which uses the variance estimates obtained from the paired comparison results and considers both x and y to be measured with error. The Yankee Rowe data were first measured by Nuclear Fuel Services (NFS) and remeasured by Nuclear Audit and Testing Company (NATCO). The isotopic function illustrated is the ratio of Pu/U versus ²³⁵D (in which ²³⁵D is the amount of depleted ²³⁵U expressed in weight percent), using actual numbers. Statistical results using the Yankee Rowe data indicate the attractiveness of Deming's regression model over the usual approach, by simple comparison of the given regression variances with the random variance from the paired comparison results.
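Deming's fitting approach mentioned above has a closed form when the ratio δ of the y-error variance to the x-error variance is taken as known (the paper estimates these variances from the paired comparisons; here δ = 1 and the data are synthetic, not the Yankee Rowe measurements):

```python
import math

def deming_fit(x, y, delta=1.0):
    """Deming regression for errors in both x and y, with known
    error-variance ratio delta = var(err_y) / var(err_x).
    Returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    slope = (syy - delta * sxx
             + math.sqrt((syy - delta * sxx) ** 2
                         + 4 * delta * sxy ** 2)) / (2 * sxy)
    return my - slope * mx, slope

# Noise-free line y = 1 + 2x: the fit must recover intercept 1 and slope 2.
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1.0 + 2.0 * xi for xi in x]
a, b = deming_fit(x, y)
```

Unlike ordinary least squares, the fitted slope depends on δ, which is precisely why the variance estimates from the paired comparisons matter in the analysis described above.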
Bias expansion of spatial statistics and approximation of differenced ...
Indian Academy of Sciences (India)
Investigations of spatial statistics, computed from lattice data in the plane, can lead to a special lattice point counting problem. The statistical goal is to expand the asymptotic expectation or large-sample bias of certain spatial covariance estimators, where this bias typically depends on the shape of a spatial sampling region.
International Nuclear Information System (INIS)
Zhao, Lingling; Zhong, Shuxian; Fang, Keming; Qian, Zhaosheng; Chen, Jianrong
2012-01-01
Highlights: ► A dual-cloud point extraction (d-CPE) procedure was first developed for simultaneous pre-concentration and separation of trace metal ions in combination with ICP-OES. ► The developed d-CPE can significantly eliminate the surfactant Triton X-114 and was successfully extended to the determination of water samples with good performance. ► The designed method is simple, highly efficient, low cost, and in accordance with the green chemistry concept. - Abstract: A dual-cloud point extraction (d-CPE) procedure has been developed for simultaneous pre-concentration and separation of heavy metal ions (Cd²⁺, Co²⁺, Ni²⁺, Pb²⁺, Zn²⁺, and Cu²⁺) in water samples by inductively coupled plasma optical emission spectrometry (ICP-OES). The procedure is based on forming complexes of the metal ions with 8-hydroxyquinoline (8-HQ) in the as-formed Triton X-114 surfactant-rich phase. Instead of direct injection or analysis, the surfactant-rich phase containing the complexes was treated with nitric acid, and the ions were back-extracted into the aqueous phase in a second cloud point extraction stage, and finally determined by ICP-OES. Under the optimum conditions (pH = 7.0, Triton X-114 = 0.05% (w/v), 8-HQ = 2.0 × 10⁻⁴ mol L⁻¹, HNO₃ = 0.8 mol L⁻¹), the detection limits for Cd²⁺, Co²⁺, Ni²⁺, Pb²⁺, Zn²⁺, and Cu²⁺ were 0.01, 0.04, 0.01, 0.34, 0.05, and 0.04 μg L⁻¹, respectively. Relative standard deviation (RSD) values for 10 replicates at 100 μg L⁻¹ were lower than 6.0%. The proposed method was successfully applied to the determination of Cd²⁺, Co²⁺, Ni²⁺, Pb²⁺, Zn²⁺, and Cu²⁺ in water samples.
Statistical models of a gas diffusion electrode: II. Flow resistance
Energy Technology Data Exchange (ETDEWEB)
Proksch, D B; Winsel, O W
1965-07-01
The authors describe an apparatus for measuring the flow resistance of gas diffusion electrodes, a mechanical analog of the Wheatstone bridge for measuring electrical resistance. The flow resistance of a circular DSK electrode sheet, consisting of two covering layers with a working layer between them, was measured as a function of gas pressure. As the pressure was first increased and then decreased, a hysteresis occurred, which is discussed and explained by a statistical model of a porous electrode.
Comparison of stability statistics for yield in barley (Hordeum vulgare ...
African Journals Online (AJOL)
STORAGESEVER
2010-03-15
statistics and yield indicated that only TOP method would be useful for simultaneously selecting for high yield and ... metric stability methods; i) they reduce the bias caused by outliers, ii) ... Biometrics, 43: 45-53. Sabaghnia N ...
Armstrong, Kelly; Gokal, Raman; Chevalier, Antoine; Todorsky, William; Lim, Mike
2017-04-01
Although acupuncture and microcurrent are widely used for chronic pain, there remains considerable controversy as to their therapeutic value for neck pain. We aimed to determine the effect size of microcurrent applied to lower back acupuncture points on neck pain. This was a cohort analysis of treatment outcomes pre- and post-microcurrent stimulation, involving 34 patients with a history of nonspecific chronic neck pain. Consenting patients were enrolled from a group of therapists attending educational seminars and were asked to report pain levels before treatment, immediately after treatment, and 48 hours after a single MPS application. Direct-current microcurrent point stimulation (MPS) applied to standardized lower back acupuncture protocol points was used. Evaluations entailed a baseline pain assessment using a visual analog scale (VAS), repeated twice after therapy: once immediately post-electrotherapy and again after a 48-h follow-up period. All 34 patients received a single MPS session. Results were analyzed using paired t tests. Results and Outcomes: Pain intensity showed an initial statistically significant reduction of 68% [3.9050 points; 95% CI (2.9480, 3.9050); p = 0.0001] in mean neck pain levels after the standard protocol treatment, compared to initial pain levels. There was a further statistically significant reduction of 35% in mean neck pain levels at 48 h compared to pain levels immediately after the standard protocol treatment [0.5588 points; 95% CI (0.2001, 0.9176); p = 0.03], for a total average pain relief of 80%. The positive results in this study could have applications for patients impacted by chronic neck pain.
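The paired t test used in this study is straightforward to reproduce; a sketch with hypothetical VAS pain scores (not the study's data):

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic and degrees of freedom for pre/post measurements."""
    d = [a - b for a, b in zip(pre, post)]
    n = len(d)
    t = mean(d) / (stdev(d) / math.sqrt(n))   # stdev = sample SD (n - 1)
    return t, n - 1

# Hypothetical VAS pain scores (0-10) before and after a single treatment.
pre = [7.0, 6.5, 8.0, 5.5, 7.5, 6.0, 8.5, 7.0]
post = [3.0, 2.5, 4.0, 2.0, 3.5, 2.5, 4.5, 3.0]
t, df = paired_t(pre, post)
```

The test is on the within-patient differences, which is what makes the pre/post design in the abstract sensitive even with only 34 patients.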
Localization of Usher syndrome type II to chromosome 1q.
Kimberling, W J; Weston, M D; Möller, C; Davenport, S L; Shugart, Y Y; Priluck, I A; Martini, A; Milani, M; Smith, R J
1990-06-01
Usher syndrome is characterized by congenital hearing loss, progressive visual impairment due to retinitis pigmentosa, and variable vestibular problems. The two subtypes of Usher syndrome, types I and II, can be distinguished by the degree of hearing loss and by the presence or absence of vestibular dysfunction. Type I is characterized by a profound hearing loss and totally absent vestibular responses, while type II has a milder hearing loss and normal vestibular function. Fifty-five members of eight type II Usher syndrome families were typed for three DNA markers in the distal region of chromosome 1q: D1S65 (pEKH7.4), REN (pHRnES1.9), and D1S81 (pTHH33). Statistically significant linkage was observed for Usher syndrome type II with a maximum multipoint lod score of 6.37 at the position of the marker THH33, thus localizing the Usher type II (USH2) gene to 1q. Nine families with type I Usher syndrome failed to show linkage to the same three markers. The statistical test for heterogeneity of linkage between Usher syndrome types I and II was highly significant, thus demonstrating that they are due to mutations at different genetic loci.
International Nuclear Information System (INIS)
Kleijnen, J.P.C.; Helton, J.C.
1999-01-01
The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (i) linear relationships with correlation coefficients, (ii) monotonic relationships with rank correlation coefficients, (iii) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (iv) trends in variability as defined by variances and interquartile ranges, and (v) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from analysis include: (i) Type I errors are unavoidable, (ii) Type II errors can occur when inappropriate analysis procedures are used, (iii) physical explanations should always be sought for why statistical procedures identify variables as being important, and (iv) the identification of important variables tends to be stable for independent Latin hypercube samples
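The contrast between the first two procedures in the list above, linear association via the correlation coefficient and monotonic association via the rank correlation, can be seen on a toy monotone but nonlinear relationship (hand-rolled Pearson and Spearman for tie-free data; not the two-phase-flow model):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x)
                    * sum((b - my) ** 2 for b in y))
    return num / den

def ranks(v):
    """0-based ranks (assumes no ties)."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def spearman(x, y):
    """Rank correlation: Pearson correlation of the ranks."""
    return pearson(ranks(x), ranks(y))

x = [i / 10 for i in range(1, 21)]
y = [xi ** 5 for xi in x]        # strongly monotone but very nonlinear

r_lin = pearson(x, y)
r_rank = spearman(x, y)
```

The rank correlation is exactly 1 for any monotone trend, while the linear coefficient is noticeably degraded; this is why the procedures in the abstract are applied in order of increasing pattern complexity.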
BOOK REVIEW: Statistical Mechanics of Turbulent Flows
Cambon, C.
2004-10-01
This is a handbook for a computational approach to reacting flows, including background material on statistical mechanics. In this sense, the title is somewhat misleading with respect to other books dedicated to the statistical theory of turbulence (e.g. Monin and Yaglom). In the present book, emphasis is placed on modelling (engineering closures) for computational fluid dynamics. The probabilistic (pdf) approach is applied to the local scalar field, motivated first by the nonlinearity of chemical source terms which appear in the transport equations of reacting species. The probabilistic and stochastic approaches are also used for the velocity field and particle position; nevertheless they are essentially limited to Lagrangian models for a local vector, with only single-point statistics, as for the scalar. Accordingly, conventional techniques, such as single-point closures for RANS (Reynolds-averaged Navier-Stokes) and subgrid-scale models for LES (large-eddy simulations), are described and in some cases reformulated using underlying Langevin models and filtered pdfs. Even if the theoretical approach to turbulence is not discussed in general, the essentials of probabilistic and stochastic-processes methods are described, with a useful reminder concerning statistics at the molecular level. The book comprises 7 chapters. Chapter 1 briefly states the goals and contents, with a very clear synoptic scheme on page 2. Chapter 2 presents definitions and examples of pdfs and related statistical moments. Chapter 3 deals with stochastic processes, pdf transport equations, from Kramers-Moyal to Fokker-Planck (for Markov processes), and moments equations. Stochastic differential equations are introduced and their relationship to pdfs described. This chapter ends with a discussion of stochastic modelling. The equations of fluid mechanics and thermodynamics are addressed in chapter 4. Classical conservation equations (mass, velocity, internal energy) are derived from their
Plasma parameters for alternate operating modes of TIBER-II
International Nuclear Information System (INIS)
Fenstermacher, M.E.; Devoto, R.S.; Logan, B.G.; Perkins, L.J.
1987-01-01
Parameters for operating points of TIBER-II, different from the baseline steady-state operation, are presented. These results have been generated with the MUMAK tokamak power balance code. Pulsed ignited and high performance steady-state operating points are described. 20 refs
Intermediate statistics in quantum maps
Energy Technology Data Exchange (ETDEWEB)
Giraud, Olivier [H H Wills Physics Laboratory, University of Bristol, Tyndall Avenue, Bristol BS8 1TL (United Kingdom); Marklof, Jens [School of Mathematics, University of Bristol, University Walk, Bristol BS8 1TW (United Kingdom); O' Keefe, Stephen [School of Mathematics, University of Bristol, University Walk, Bristol BS8 1TW (United Kingdom)
2004-07-16
We present a one-parameter family of quantum maps whose spectral statistics are of the same intermediate type as observed in polygonal quantum billiards. Our central result is the evaluation of the spectral two-point correlation form factor at small argument, which in turn yields the asymptotic level compressibility for macroscopic correlation lengths. (letter to the editor)
Compound nucleus decay: Comparison between saddle point and scission point barriers
Energy Technology Data Exchange (ETDEWEB)
Santos, T. J.; Carlson, B. V. [Depto. de Física, Instituto Tecnológico de Aeronáutica, São José dos Campos, SP (Brazil)
2014-11-11
One of the principal characteristics of nuclear multifragmentation is the emission of complex fragments of intermediate mass. An extension of the statistical multifragmentation model has been developed, in which the process can be interpreted as the near simultaneous limit of a series of sequential binary decays. In this extension, intermediate mass fragment emissions are described by expressions almost identical to those of light particle emission. At lower temperatures, similar expressions have been shown to furnish a good description of very light intermediate mass fragment emission but not of the emission of heavier fragments, which seems to be determined by the transition density at the saddle point rather than at the scission point. Here, we wish to compare these different formulations of intermediate mass fragment emission and analyze the extent to which they remain distinguishable at high excitation energy.
International Nuclear Information System (INIS)
Romero, Vicente J.; Burkardt, John V.; Gunzburger, Max D.; Peterson, Janet S.
2006-01-01
A recently developed centroidal Voronoi tessellation (CVT) sampling method is investigated here to assess its suitability for use in statistical sampling applications. CVT efficiently generates a highly uniform distribution of sample points over arbitrarily shaped M-dimensional parameter spaces. On several 2-D test problems, CVT has recently been found to provide exceedingly effective and efficient point distributions for response surface generation. Additionally, for statistical function integration and estimation of response statistics associated with uniformly distributed random-variable inputs (uncorrelated), initial investigations have found CVT to provide superior point sets when compared against Latin hypercube and simple random Monte Carlo methods and Halton and Hammersley quasi-random sequence methods. In this paper, the performance of all these sampling methods and a new variant ('Latinized' CVT) are further compared for non-uniform input distributions. Specifically, given uncorrelated normal inputs in a 2-D test problem, statistical sampling efficiencies are compared for resolving various statistics of response: mean, variance, and exceedance probabilities.
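CVT point sets of the kind compared above can be generated with a probabilistic Lloyd iteration: scatter trial generators, assign a cloud of random probe points to their nearest generator, and move each generator to the centroid of its probes. A small 2-D sketch on the unit square (plain CVT only; the 'Latinized' variant and non-uniform input distributions are not reproduced):

```python
import random

random.seed(0)

def cvt(n_gen, n_probes=3000, n_iters=30):
    """Approximate centroidal Voronoi tessellation points in the unit
    square via probabilistic Lloyd iteration."""
    gens = [(random.random(), random.random()) for _ in range(n_gen)]
    for _ in range(n_iters):
        acc = [[0.0, 0.0, 0] for _ in gens]      # x-sum, y-sum, probe count
        for _ in range(n_probes):
            px, py = random.random(), random.random()
            # index of the nearest generator to this probe
            j = min(range(n_gen),
                    key=lambda k: (gens[k][0] - px) ** 2
                                  + (gens[k][1] - py) ** 2)
            acc[j][0] += px
            acc[j][1] += py
            acc[j][2] += 1
        gens = [(a[0] / a[2], a[1] / a[2]) if a[2] else g
                for a, g in zip(acc, gens)]
    return gens

pts = cvt(10)
# Uniformity check: CVT pushes the minimum pairwise distance well above
# what a plain random scatter of 10 points typically achieves (~0.07).
min_dist = min(((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
               for i, a in enumerate(pts) for b in pts[i + 1:])
```

The large spacing between the resulting points is exactly the uniformity property that makes CVT competitive with Latin hypercube and quasi-random sequences in the comparisons described above.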
[Cross-sectional survey of characteristics of reaction point Jingtong in balance acupuncture].
Wu, Dong; Hou, Zhong-Wei; Wang, Chen-Fei; Li, Shuai-Shuai; Liu, Yi-Rong; Liu, Qing-Guo
2014-04-01
To explore the performance patterns of reaction point Jingtong in balance acupuncture through multi-center and big-sample clinical investigation. Methods The Jingtong points of balance acupuncture on healthy side and affected side were observed among 230 cases of cervical spondylosis and scores of self-discomfort in reaction point, color of skin, changes of skin, morphology of subcutaneous tissue and abnormal pressing pain were recorded. The software SPSS 15.0 was applied to statistically analyze the recorded scores. Among 230 cases, the reaction point appeared in 226 cases, accounting for 98. 3%. Among the 226 cases who had reaction point, the total score of symptom and sign was (1.08+/-1.09) on the healthy side and (0. 84+/-1. 36) on the affected side, which had statistical significance (Ppoint was (0. 76 +/-0. 83) on the healthy side and (0. 40+/-0.80) on the affected side, which had statistical significance (Ppoint Jingtong on the healthy side is higher than that on the affected side, indicating positive reaction of Jingtong on the healthy side has specificity for cervical spondylosis. When patient has cervical spondylosis on either side of neck, the other side will have anomaly in Jingtong.
Energy Technology Data Exchange (ETDEWEB)
Wang, Song; Qiu, Yanli; Liu, Jifeng [Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012 (China); Bregman, Joel N., E-mail: songw@bao.ac.cn, E-mail: jfliu@bao.ac.cn [University of Michigan, Ann Arbor, MI 48109 (United States)
2016-09-20
Based on the recently completed Chandra/ACIS survey of X-ray point sources in nearby galaxies, we study the X-ray luminosity functions (XLFs) of X-ray point sources in different types of galaxies and the statistical properties of ultraluminous X-ray sources (ULXs). Uniform procedures are developed to compute the detection threshold, to estimate the foreground/background contamination, and to calculate the XLFs for individual galaxies and groups of galaxies, resulting in an XLF library of 343 galaxies of different types. With the large number of surveyed galaxies, we have studied the XLFs and ULX properties across different host galaxy types, and confirm with good statistics that the XLF slope flattens from lenticulars (α ∼ 1.50 ± 0.07) to ellipticals (∼1.21 ± 0.02), to spirals (∼0.80 ± 0.02), to peculiars (∼0.55 ± 0.30), and to irregulars (∼0.26 ± 0.10). The XLF break dividing the neutron star and black hole binaries is also confirmed, albeit at quite different break luminosities for different types of galaxies. A radial dependence is found for ellipticals, with a flatter XLF slope for sources located between D₂₅ and 2D₂₅, suggesting that the XLF slopes in the outer regions of early-type galaxies are dominated by low-mass X-ray binaries in globular clusters. This study shows that the ULX rate in early-type galaxies is 0.24 ± 0.05 ULXs per surveyed galaxy, at a 5σ confidence level. The XLF for ULXs in late-type galaxies extends smoothly until it drops abruptly around 4 × 10⁴⁰ erg s⁻¹, and this break may suggest a mild boundary between the stellar black hole population, possibly including 30 M⊙ black holes with super-Eddington radiation, and intermediate mass black holes.
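XLF slopes such as those quoted above are typically obtained by maximum likelihood. For an unbroken power law p(L) ∝ L^(−α−1) above a completeness limit L_min, the estimator has the closed form α̂ = n / Σ ln(L_i/L_min); a sketch on synthetic luminosities (not the survey data, and ignoring the survey's detection thresholds and luminosity breaks):

```python
import math
import random

random.seed(7)

def powerlaw_mle(ls, lmin):
    """MLE of alpha for p(L) proportional to L^(-alpha-1), L >= lmin."""
    return len(ls) / sum(math.log(l / lmin) for l in ls)

# Draw luminosities from a power law by inverse-transform sampling:
# survival function S(L) = (L/lmin)^(-alpha)  =>  L = lmin * U^(-1/alpha).
alpha_true, lmin = 0.8, 1e38
sample = [lmin * random.random() ** (-1.0 / alpha_true) for _ in range(5000)]
alpha_hat = powerlaw_mle(sample, lmin)
```

With n sources the statistical error on the slope scales as α/√n, which is why the slopes for the rare peculiar and irregular classes above carry much larger uncertainties than those for spirals.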
International Nuclear Information System (INIS)
Sanford, T.W.L.
1982-06-01
At LAMPF II, intense beams of kaons will be available that will enable the rare kaon-decay processes to be investigated. This note explores some of the possibilities, which divide into two classes: (1) those that test the standard model of Weinberg and Salam and (2) those that are sensitive to new interactions. For both classes, experiments have been limited not by systematic errors but rather by statistical ones. LAMPF II with its intense flux of kaons thus will enable the frontier of rare kaon decay to be realistically probed
A formalism for scattering of complex composite structures. II. Distributed reference points
DEFF Research Database (Denmark)
Svaneborg, Carsten; Pedersen, Jan Skov
2012-01-01
Recently we developed a formalism for the scattering from linear and acyclic branched structures built of mutually non-interacting sub-units [C. Svaneborg and J. S. Pedersen, J. Chem. Phys. 136, 104105 (2012)]. We assumed each sub-unit has reference points associated with it. These are well-defined positions where sub-units can be linked together. In the present paper, we generalize the formalism to the case where each reference point can represent a distribution of potential link positions. We also present a generalized diagrammatic representation of the formalism. Scattering expressions required...
Nonparametric Change Point Diagnosis Method of Concrete Dam Crack Behavior Abnormality
Li, Zhanchao; Gu, Chongshi; Wu, Zhongru
2013-01-01
The diagnosis of abnormal crack behavior in concrete dams has long been a key topic and difficulty in the safety monitoring of hydraulic structures. Based on the performance of concrete dam crack behavior abnormality in parametric and nonparametric statistical models, the internal relation between concrete dam crack behavior abnormality and statistical change point theory is analyzed in depth, starting from the model structure instability of the parametric statistical model ...
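A standard nonparametric change-point statistic of the kind this abstract alludes to is Pettitt's Mann-Whitney-based test: for every candidate split, count the sign imbalance between the two segments and keep the split that maximizes it. A toy sketch (not necessarily the statistic used in the paper; the monitoring series here is synthetic):

```python
import random

def pettitt(x):
    """Pettitt's nonparametric change-point statistic: the split index t
    maximizing |U_t|, where U_t sums sign(x_i - x_j) over i < t <= j."""
    n = len(x)
    best_t, best_u = 1, 0
    for t in range(1, n):
        u = sum((xi > xj) - (xi < xj)
                for xi in x[:t] for xj in x[t:])
        if abs(u) > abs(best_u):
            best_t, best_u = t, u
    return best_t, best_u

# Synthetic monitoring series with a level shift after index 30.
random.seed(3)
series = ([random.gauss(0.0, 1.0) for _ in range(30)]
          + [random.gauss(3.0, 1.0) for _ in range(30)])
t_hat, u = pettitt(series)
```

Because the statistic uses only signs of pairwise differences, it needs no distributional assumption on the monitoring data, which is the appeal of the nonparametric route discussed in the abstract.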
Accuracy and reliability of China's energy statistics
Energy Technology Data Exchange (ETDEWEB)
Sinton, Jonathan E.
2001-09-14
Many observers have raised doubts about the accuracy and reliability of China's energy statistics, which show an unprecedented decline in recent years, while reported economic growth has remained strong. This paper explores the internal consistency of China's energy statistics from 1990 to 2000, coverage and reporting issues, and the state of the statistical reporting system. Available information suggests that, while energy statistics were probably relatively good in the early 1990s, their quality has declined since the mid-1990s. China's energy statistics should be treated as a starting point for analysis, and explicit judgments regarding ranges of uncertainty should accompany any conclusions.
Developing points-based risk-scoring systems in the presence of competing risks.
Austin, Peter C; Lee, Douglas S; D'Agostino, Ralph B; Fine, Jason P
2016-09-30
Predicting the occurrence of an adverse event over time is an important issue in clinical medicine. Clinical prediction models and associated points-based risk-scoring systems are popular statistical methods for summarizing the relationship between a multivariable set of patient risk factors and the risk of the occurrence of an adverse event. Points-based risk-scoring systems are popular amongst physicians as they permit a rapid assessment of patient risk without the use of computers or other electronic devices. The use of such points-based risk-scoring systems facilitates evidence-based clinical decision making. There is a growing interest in cause-specific mortality and in non-fatal outcomes. However, when considering these types of outcomes, one must account for competing risks whose occurrence precludes the occurrence of the event of interest. We describe how points-based risk-scoring systems can be developed in the presence of competing events. We illustrate the application of these methods by developing risk-scoring systems for predicting cardiovascular mortality in patients hospitalized with acute myocardial infarction. Code in the R statistical programming language is provided for the implementation of the described methods. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
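The conversion of regression coefficients into an integer points system is the generic step behind such scoring schemes; the competing-risks regression itself (for which the paper supplies R code) is not reproduced here. A sketch with hypothetical coefficients:

```python
def make_points(betas, increments, base=None):
    """Framingham-style points: one point equals the effect of one
    increment of the reference (first) predictor; every predictor's
    per-increment effect is rounded to a whole number of points."""
    if base is None:
        base = betas[0] * increments[0]
    return [round(b * inc / base) for b, inc in zip(betas, increments)]

# Hypothetical log-hazard-ratio coefficients (illustrative only):
#   age (per 5 years), systolic BP (per 10 mmHg), diabetes (yes/no)
betas = [0.06, 0.02, 0.55]
increments = [5.0, 10.0, 1.0]
points = make_points(betas, increments)

# Hypothetical patient: 4 age increments above reference, 2 BP increments,
# diabetic -> total score is a weighted sum of integer points.
total = sum(p * v for p, v in zip(points, [4, 2, 1]))
```

The rounding is what makes the score computable at the bedside, at the cost of a small, quantifiable loss of discrimination relative to the underlying regression model.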
Aspects of statistical model for multifragmentation
International Nuclear Information System (INIS)
Bhattacharyya, P.; Das Gupta, S.; Mekjian, A. Z.
1999-01-01
We deal with two different aspects of an exactly soluble statistical model of fragmentation. First we show, using a zero-range force and finite-temperature Thomas-Fermi theory, that a common link can be found between finite-temperature mean-field theory and the statistical fragmentation model. We show that the latter arises naturally in the spinodal region. Next we show that, although the exact statistical model is a canonical model formulated at constant temperature, microcanonical results at constant energy can also be obtained from it using a saddle-point approximation. The methodology is extremely simple to implement and, at least in all the examples studied in this work, is very accurate. (c) 1999 The American Physical Society
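The canonical-to-microcanonical step can be checked on a system where both routes are available in closed form. The sketch below (Python; a gas of N classical harmonic oscillators is an illustrative choice, not the paper's fragmentation model) compares the saddle-point entropy beta*E + ln Z(beta*) with the exact log density of states:

```python
import math

def entropy_saddle(N, E):
    """Microcanonical entropy via the saddle point of the canonical model:
    S(E) ~ beta* E + ln Z(beta*), with beta* fixed by E = -d ln Z / d beta.
    For N classical oscillators Z(beta) = beta**(-N) (units hbar*omega = kB = 1),
    so beta* = N / E and S = N + N ln(E / N)."""
    beta = N / E
    return beta * E - N * math.log(beta)

def entropy_exact(N, E):
    """Exact log density of states, Omega(E) = E**(N - 1) / (N - 1)!."""
    return (N - 1) * math.log(E) - math.lgamma(N)
```

For N = 1000 oscillators the two entropies agree to well under one percent, consistent with the paper's observation that the saddle-point approximation is very accurate in practice.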
Adiós al Vaticano II? Tres superaciones del Concilio Vaticano II
Directory of Open Access Journals (Sweden)
José María Vigil
2009-05-01
The author belongs to the generation that has dedicated its life to implementing the heritage of the Second Vatican Council, and that has taken the Council as its most important point of reference, ecclesially speaking, over the last 40 years. However, he ventures the hypothesis that the problematics of Vatican II have become obsolete, and justifies this view by presenting three waves of new signs of the times that have radically transformed the theological and pastoral panorama: the theology of liberation, religious pluralism, and the crisis of religion. The article deals with the great waves that support this surpassing: the theology and spirituality of liberation, the theology of religious pluralism, and the paradigm revision brought about by the current crisis of religion. These three "surpassings" of Vatican II suggest that what was settled at the Council is no longer the principal point of reference for the life of the Church, nor for the most urgent religious needs of humanity. Keywords: Second Vatican Council; Paradigm shift; Theology of liberation; Religious pluralism; Crisis of religion.
Statistical hot spot analysis of reactor cores
International Nuclear Information System (INIS)
Schaefer, H.
1974-05-01
This report is an introduction to statistical hot spot analysis. After the definition of the term 'hot spot', a statistical analysis is outlined. The mathematical method is presented; in particular, the formula for the probability of no hot spots in a reactor core is evaluated. The boundary conditions of a statistical hot spot analysis are discussed (technological limits, nominal situation, uncertainties). The application of the hot spot analysis to the linear power of pellets and the temperature rise in cooling channels is demonstrated for the test zone of KNK II. Basic quantities, such as the probability of no hot spots, the hot spot potential, the expected hot spot diagram, and the cumulative distribution function of hot spots, are discussed. It is shown that the risk of hot channels can be spread evenly over all subassemblies by an adequate choice of the nominal temperature distribution in the core
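The report's formula is not reproduced in the abstract; assuming the standard form in which each potential hot spot independently stays below its limit with Gaussian uncertainty, the probability of no hot spots is a product of normal CDFs. A minimal sketch (Python, hypothetical names):

```python
import math

def prob_no_hot_spot(nominal, sigma, limit):
    """P(no hot spot) assuming independent Gaussian uncertainty per spot:
    the product over all spots i of Phi((limit - nominal_i) / sigma_i),
    where Phi is the standard normal CDF."""
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    p = 1.0
    for m, s in zip(nominal, sigma):
        p *= phi((limit - m) / s)
    return p
```

The point of a core-wide statistical treatment is visible immediately: with 1000 spots each nominally 3 sigma below the limit, the probability that no spot exceeds it is only about 0.26, even though each individual spot is safe with probability 0.99865.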
A modern course in statistical physics
Reichl, Linda E
2016-01-01
"A Modern Course in Statistical Physics" is a textbook that illustrates the foundations of equilibrium and non-equilibrium statistical physics, and the universal nature of thermodynamic processes, from the point of view of contemporary research problems. The book treats such diverse topics as the microscopic theory of critical phenomena, superfluid dynamics, quantum conductance, light scattering, transport processes, and dissipative structures, all in the framework of the foundations of statistical physics and thermodynamics. It shows the quantum origins of problems in classical statistical physics. One focus of the book is fluctuations that occur due to the discrete nature of matter, a topic of growing importance for nanometer scale physics and biophysics. Another focus concerns classical and quantum phase transitions, in both monatomic and mixed particle systems. This fourth edition extends the range of topics considered to include, for example, entropic forces, electrochemical processes in biological syste...
Statistical methods for spatio-temporal systems
Finkenstadt, Barbel
2006-01-01
Statistical Methods for Spatio-Temporal Systems presents current statistical research issues on spatio-temporal data modeling and will promote advances in research and a greater understanding between the mechanistic and the statistical modeling communities.Contributed by leading researchers in the field, each self-contained chapter starts with an introduction of the topic and progresses to recent research results. Presenting specific examples of epidemic data of bovine tuberculosis, gastroenteric disease, and the U.K. foot-and-mouth outbreak, the first chapter uses stochastic models, such as point process models, to provide the probabilistic backbone that facilitates statistical inference from data. The next chapter discusses the critical issue of modeling random growth objects in diverse biological systems, such as bacteria colonies, tumors, and plant populations. The subsequent chapter examines data transformation tools using examples from ecology and air quality data, followed by a chapter on space-time co...
Quenched spin tunneling and diabolical points in magnetic molecules. II. Asymmetric configurations
Garg, Anupam
2001-09-01
The perfect quenching of spin tunneling first predicted for a model with biaxial symmetry, and recently observed in the magnetic molecule Fe8, is further studied using the discrete phase integral or WKB (Wentzel-Kramers-Brillouin) method. The analysis of the previous paper is extended to the case where the magnetic field has both hard and easy components, so that the Hamiltonian has no obvious symmetry. Herring's formula is now inapplicable, so the problem is solved by finding the wave function and using connection formulas at every turning point. A general formula for the energy surface in the vicinity of the diabolo is obtained in this way. This formula gives the tunneling amplitude between two wells unrelated by symmetry in terms of a small number of action integrals, and appears to be generally valid, even for problems where the recursion contains more than five terms. Explicit results are obtained for the diabolical points in the model for Fe8 that closely parallel the experimental observations. The leading semiclassical results for the diabolical points are found to agree precisely with exact results.
Elliott, P; Westlake, A J; Hills, M; Kleinschmidt, I; Rodrigues, L; McGale, P; Marshall, K; Rose, G
1992-01-01
STUDY OBJECTIVE--The Small Area Health Statistics Unit (SAHSU) was established at the London School of Hygiene and Tropical Medicine in response to a recommendation of the enquiry into the increased incidence of childhood leukaemia near Sellafield, the nuclear reprocessing plant in West Cumbria. The aim of this paper is to describe the Unit's methods for the investigation of health around point sources of environmental pollution in the United Kingdom. DESIGN--Routine data, currently including deaths and cancer registrations, are held in a large national database which uses a postcode-based retrieval system to locate cases geographically and link them to the underlying census enumeration districts, and hence to their populations at risk. The main outcome measures were comparisons of observed/expected ratios (based on national rates) within bands delineated by concentric circles around point sources of environmental pollution located anywhere in Britain. MAIN RESULTS--The system is illustrated by a study of mortality from mesothelioma and asbestosis near the Plymouth naval dockyards during 1981-87. Within a 3 km radius of the docks the mortality rate for mesothelioma was higher than the national rate by a factor of 8.4, and that for asbestosis was higher by a factor of 13.6. CONCLUSIONS--SAHSU is a new national facility which is rapidly able to provide rates of mortality and cancer incidence for arbitrary circles drawn around any point in Britain. The example of mesothelioma and asbestosis around Plymouth demonstrates the ability of the system to detect an unusual excess of disease in a small locality, although in this case the findings are likely to be related to occupational rather than environmental exposure. PMID:1431704
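The abstract does not give SAHSU's computational details; a minimal sketch of the core comparison (observed cases in a band around a point source versus cases expected from national rates, with a one-sided exact Poisson p-value; Python, hypothetical names) might be:

```python
import math

def oe_ratio(observed, expected):
    """Observed/expected ratio for cases in a band around a point source,
    with a one-sided exact Poisson p-value for seeing at least `observed`
    cases when `expected` are predicted from national rates
    (O ~ Poisson(E) under the null hypothesis)."""
    ratio = observed / expected
    cdf = sum(math.exp(-expected) * expected ** k / math.factorial(k)
              for k in range(observed))
    return ratio, 1.0 - cdf
```

An O/E ratio well above 1 combined with a small p-value flags an excess such as the factor-8.4 mesothelioma finding above; whether the excess is environmental or, as here, occupational is a separate question.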
Ten-year clinico-statistical study of oral squamous cell carcinoma
International Nuclear Information System (INIS)
Aoki, Shinjiro; Kawabe, Ryoichi; Chikumaru, Hiroshi; Saito, Tomokatsu; Hirota, Makoto; Miyake, Tetsumi; Omura, Susumu; Fujita, Kiyohide
2003-01-01
This clinico-statistical study includes 232 cases of oral squamous cell carcinoma that underwent radical treatment in the Department of Oral and Maxillofacial Surgery, Yokohama City University Hospital, during the decade from 1991 to 2000. Surgery was adopted as the first-line treatment in 199 cases, and radiotherapy in 33 cases. The 5-year overall survival rate was 73.4%. The results by stage were as follows: Stage I, 87.5%; Stage II, 77.9%; Stage III, 63.5%; and Stage IVA, 44.7%. The 5-year overall survival rates by primary site were: upper gingiva, 85.2%; tongue, 73.7%; floor of mouth, 68.9%; lower gingiva, 66.3%; buccal mucosa, 63.9%; and hard palate, 50%. For tongue cancer, the 5-year overall survival rates by stage were: Stage I, 90.8%; Stage II, 82.1%; Stage III, 40.3%; and Stage IVA, 45.7%. The difference between Stages I-II and Stages III-IVA was statistically significant. For lower gingival cancer, the 5-year overall survival rates by stage were: Stage I, 90.8%; Stage II, 82.1%; Stage III, 40.3%; and Stage IVA, 45.7%. Even Stage I lower gingival cancers had unfavorable clinical outcomes. Preventive neck dissections were performed on 52 N0 neck patients; however, the clinically negative nodes showed metastasis in 14 patients (26.9%). (author)
Directory of Open Access Journals (Sweden)
H. D. Yin
2014-03-01
Cellular retinol-binding protein II (CRBP II) belongs to the family of cellular retinol-binding proteins and plays a major role in the absorption, transport, and metabolism of vitamin A. Because vitamin A is correlated with reproductive performance, we measured CRBP II mRNA abundance in Erlang mountainous chickens by real-time PCR using the relative quantification method. The expression of CRBP II showed a tissue-specific pattern and egg-production-rate-dependent changes. Expression was very high (p<0.05) in jejunum and liver, intermediate in kidney, ovary, and oviduct, and lowest (p<0.05) in heart, hypothalamus, and pituitary. In the hypothalamus, oviduct, ovary, and pituitary, CRBP II mRNA abundance was correlated with egg production rate: it increased from 12 wk to 32 wk, peaked at 32 wk relative to the other time points, and then decreased from 32 wk to 45 wk. In contrast, the expression of CRBP II mRNA in heart, jejunum, kidney, and liver did not differ at any of the ages evaluated in this study. These data may help in understanding the genetic basis of vitamin A metabolism, and suggest that CRBP II may be a candidate gene affecting egg production traits in chickens.
A J–function for inhomogeneous point processes
M.N.M. van Lieshout (Marie-Colette)
2010-01-01
We propose new summary statistics for intensity-reweighted moment stationary point processes that generalise the well known J-, empty space, and nearest-neighbour distance distribution functions, represent them in terms of generating functionals and conditional intensities, and relate
Enhanced echolocation via robust statistics and super-resolution of sonar images
Kim, Kio
Echolocation is a process in which an animal uses acoustic signals to exchange information with its environment. In a recent study, Neretti et al. have shown that the use of robust statistics can significantly improve the resiliency of echolocation against noise and enhance its accuracy by suppressing the development of sidelobes in the processing of an echo signal. In this research, the use of robust statistics is extended to problems in underwater exploration. The dissertation consists of two parts. Part I describes how robust statistics can enhance the identification of target objects, which in this case are cylindrical containers filled with four different liquids. In particular, this work employs a variation of an existing robust estimator called an L-estimator, which was first suggested by Koenker and Bassett. As pointed out by Au et al., a 'highlight interval' is an important feature, and it is closely related to many other features known to be crucial for dolphin echolocation. The varied L-estimator described in this text is used to enhance the detection of highlight intervals, which eventually leads to a successful classification of echo signals. Part II extends the problem into two dimensions. Thanks to advances in material and computer technology, various sonar imaging modalities are available on the market. By registering acoustic images from such video sequences, one can extract more information on the region of interest. Computer vision and image processing allow the application of robust statistics to the acoustic images produced by forward-looking sonar systems, such as Dual-frequency Identification Sonar and ProViewer. The first use of robust statistics for sonar image enhancement in this text is in image registration. Random Sample Consensus (RANSAC) is widely used for image registration. The registration algorithm using RANSAC is optimized for sonar image registration, and its performance is studied. The second use of robust
Exact Identification of a Quantum Change Point
Sentís, Gael; Calsamiglia, John; Muñoz-Tapia, Ramon
2017-10-01
The detection of change points is a pivotal task in statistical analysis. In the quantum realm, it is a new primitive where one aims at identifying the point where a source that supposedly prepares a sequence of particles in identical quantum states starts preparing a mutated one. We obtain the optimal procedure to identify the change point with certainty—naturally at the price of having a certain probability of getting an inconclusive answer. We obtain the analytical form of the optimal probability of successful identification for any length of the particle sequence. We show that the conditional success probabilities of identifying each possible change point show an unexpected oscillatory behavior. We also discuss local (online) protocols and compare them with the optimal procedure.
Single-point incremental forming and formability-failure diagrams
DEFF Research Database (Denmark)
Silva, M.B.; Skjødt, Martin; Atkins, A.G.
2008-01-01
In a recent work [1], the authors constructed a closed-form analytical model that is capable of dealing with the fundamentals of single point incremental forming and explaining the experimental and numerical results published in the literature over the past couple of years. The model is based...... of deformation that are commonly found in general single point incremental forming processes; and (ii) to investigate the formability limits of SPIF in terms of ductile damage mechanics and the question of whether necking does, or does not, precede fracture. Experimentation by the authors together with data...
OXIDATION OF CYCLIC AMINES BY MOLYBDENUM(II) AND ...
African Journals Online (AJOL)
Preferred Customer
metal is in formal oxidation state +2. Since no reduction can take place without oxidation and vice versa, we can reasonably say the reduction of Mo(II) and W(II) species is accompanied by oxidation of the amine. At this juncture, we should point out that C=N bonds are also known to absorb IR radiation in the same spectral ...
A new instrument for statistical process control of thermoset molding
International Nuclear Information System (INIS)
Day, D.R.; Lee, H.L.; Shepard, D.D.; Sheppard, N.F.
1991-01-01
The recent development of a rugged ceramic mold-mounted dielectric sensor and high-speed dielectric instrumentation now enables monitoring and statistical process control of production molding over thousands of runs. In this work, special instrumentation and software (ICAM-1000) were utilized that automatically extract critical points during the molding process, including flow point, viscosity minimum, gel inflection, and reaction endpoint. In addition, other sensors were incorporated to measure temperature and pressure. The critical points, as well as temperature and pressure, were then recorded during normal production and plotted in the form of statistical process control (SPC) charts. Experiments have been carried out in RIM, SMC, and RTM type molding operations. The influence of temperature, pressure, chemistry, and other variables has been investigated. In this paper examples of both RIM and SMC are discussed
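The ICAM-1000 internals are not described in the abstract; as a generic illustration of the SPC step (not the actual instrument software), a Shewhart individuals chart with 3-sigma limits estimated from the average moving range can be sketched as:

```python
import statistics

def spc_limits(values):
    """Shewhart individuals-chart limits: centre line at the mean, control
    limits at +/-3 sigma, with sigma estimated from the average moving
    range (MR-bar / d2, where d2 = 1.128 for moving ranges of size 2)."""
    centre = statistics.fmean(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    sigma = statistics.fmean(moving_ranges) / 1.128
    return centre - 3 * sigma, centre, centre + 3 * sigma

def out_of_control(values):
    """Indices of points falling outside the control limits."""
    lcl, _, ucl = spc_limits(values)
    return [i for i, v in enumerate(values) if v < lcl or v > ucl]
```

A run of molding cycles whose extracted flow point or gel inflection drifts past the control limits would be flagged for investigation before scrap parts accumulate.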
Automated extraction and analysis of rock discontinuity characteristics from 3D point clouds
Bianchetti, Matteo; Villa, Alberto; Agliardi, Federico; Crosta, Giovanni B.
2016-04-01
A reliable characterization of fractured rock masses requires an exhaustive geometrical description of discontinuities, including orientation, spacing, and size. These are required to describe discontinuum rock mass structure, perform Discrete Fracture Network and DEM modelling, or provide input for rock mass classification or equivalent continuum estimate of rock mass properties. Although several advanced methodologies have been developed in the last decades, a complete characterization of discontinuity geometry in practice is still challenging, due to scale-dependent variability of fracture patterns and difficult accessibility to large outcrops. Recent advances in remote survey techniques, such as terrestrial laser scanning and digital photogrammetry, allow a fast and accurate acquisition of dense 3D point clouds, which promoted the development of several semi-automatic approaches to extract discontinuity features. Nevertheless, these often need user supervision on algorithm parameters which can be difficult to assess. To overcome this problem, we developed an original Matlab tool, allowing fast, fully automatic extraction and analysis of discontinuity features with no requirements on point cloud accuracy, density and homogeneity. The tool consists of a set of algorithms which: (i) process raw 3D point clouds, (ii) automatically characterize discontinuity sets, (iii) identify individual discontinuity surfaces, and (iv) analyse their spacing and persistence. The tool operates in either a supervised or unsupervised mode, starting from an automatic preliminary exploration data analysis. The identification and geometrical characterization of discontinuity features is divided in steps. First, coplanar surfaces are identified in the whole point cloud using K-Nearest Neighbor and Principal Component Analysis algorithms optimized on point cloud accuracy and specified typical facet size. Then, discontinuity set orientation is calculated using Kernel Density Estimation and
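The tool itself is Matlab; the coplanar-surface identification step described above (K-Nearest-Neighbour search plus Principal Component Analysis) can be sketched in Python as follows, where the local normal is the eigenvector of the neighbourhood covariance with the smallest eigenvalue (brute-force k-NN; names and parameters are illustrative):

```python
import numpy as np

def plane_normals(points, k=8):
    """Estimate a unit normal at each point by PCA over its k nearest
    neighbours: the eigenvector of the local covariance matrix with the
    smallest eigenvalue approximates the local surface normal."""
    pts = np.asarray(points, dtype=float)
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    normals = np.empty_like(pts)
    for i in range(len(pts)):
        nbrs = pts[np.argsort(d2[i])[:k]]        # k nearest, incl. the point itself
        cov = np.cov((nbrs - nbrs.mean(0)).T)
        w, v = np.linalg.eigh(cov)               # eigenvalues in ascending order
        normals[i] = v[:, 0]                     # smallest-eigenvalue direction
    return normals
```

Grouping points whose normals cluster together (e.g., by kernel density estimation over orientations, as in the tool) then yields the discontinuity sets.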
The matchmaking paradox: a statistical explanation
International Nuclear Information System (INIS)
Eliazar, Iddo I; Sokolov, Igor M
2010-01-01
Medical surveys regarding the number of heterosexual partners per person yield different female and male averages - a result which, from a physical standpoint, is impossible. In this paper we term this puzzle the 'matchmaking paradox', and establish a statistical model explaining it. We consider a bipartite graph with N male and N female nodes (N >> 1), and B bonds connecting them (B >> 1). Each node is assigned a random 'attractiveness level', and the bonds connect to the nodes randomly, with probabilities proportional to the nodes' attractiveness levels. The population's average bonds-per-node B/N is estimated via a sample average calculated from a survey of size n (n >> 1). A comprehensive statistical analysis of this model is carried out, asserting that (i) the sample average estimates the population average well if and only if the attractiveness levels possess a finite mean; (ii) if the attractiveness levels are governed by a 'fat-tailed' probability law then the sample average displays wild fluctuations and strong skew - thus providing a statistical explanation of the matchmaking paradox.
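A toy simulation of the bipartite model (Python; parameter sizes are illustrative and well below the paper's asymptotic regime) shows claim (i): when the attractiveness law has a finite mean, a uniform survey's sample average tracks the population average B/N.

```python
import random

def simulate_survey(N=2_000, B=10_000, n=500, alpha=3.0, seed=1):
    """Bipartite matchmaking model: B bonds are attached to N nodes with
    probability proportional to i.i.d. Pareto(alpha) attractiveness levels;
    alpha > 1 gives a finite-mean law, alpha < 1 a fat-tailed one.
    Returns (population average B/N, survey sample average over n nodes)."""
    rng = random.Random(seed)
    attract = [rng.paretovariate(alpha) for _ in range(N)]
    degrees = [0] * N
    # attach each bond to a node, proportionally to attractiveness
    for i in rng.choices(range(N), weights=attract, k=B):
        degrees[i] += 1
    survey = rng.sample(range(N), n)
    return B / N, sum(degrees[i] for i in survey) / n
```

Rerunning with alpha < 1 (an infinite-mean Pareto law) reproduces claim (ii): the sample average then varies wildly from seed to seed instead of settling near B/N.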
Baró, Jordi; Davidsen, Jörn
2018-03-01
The hypothesis of critical failure relates the presence of an ultimate stability point in the structural constitutive equation of materials to a divergence of characteristic scales in the microscopic dynamics responsible for deformation. Avalanche models involving critical failure have determined common universality classes for stick-slip processes and fracture. However, not all empirical failure processes exhibit the trademarks of criticality. The rheological properties of materials introduce dissipation, usually reproduced in conceptual models as a hardening of the coarse grained elements of the system. Here, we investigate the effects of transient hardening on (i) the activity rate and (ii) the statistical properties of avalanches. We find the explicit representation of transient hardening in the presence of generalized viscoelasticity and solve the corresponding mean-field model of fracture. In the quasistatic limit, the accelerated energy release is invariant with respect to rheology and the avalanche propagation can be reinterpreted in terms of a stochastic counting process. A single universality class can be defined from such analogy, and all statistical properties depend only on the distance to criticality. We also prove that interevent correlations emerge due to the hardening—even in the quasistatic limit—that can be interpreted as "aftershocks" and "foreshocks."
Energy Technology Data Exchange (ETDEWEB)
Perez, M.R.; Crespo, I.; Ulibarri, M.A.; Barriga, C. [Departamento de Quimica Inorganica e Ingenieria Quimica, Campus de Rabanales, Universidad de Cordoba, Cordoba (Spain); Rives, V. [GIR-QUESCAT, Departamento de Quimica Inorganica, Universidad de Salamanca, Salamanca (Spain); Fernandez, J.M., E-mail: um1feroj@uco.es [Departamento de Quimica Inorganica e Ingenieria Quimica, Campus de Rabanales, Universidad de Cordoba, Cordoba (Spain)
2012-02-15
Highlights: • LDHs M(II)-Al-Cr (M = Zn, Cd) with Cr in the layer or interlayer have been prepared. • LDHs Zn-Al or Zn-Cr decompose on heating to form ZnO and ZnAl2O4, or ZnO and ZnCr2O4. • LDHs Zn-Al-Cr give rise to the formation of ZnO and the mixed spinel ZnAl2-xCrxO4. • The LDH Cd-Al-Cr shows the formation of CdO, CdCr2-xAlxO4, and (Al,Cr)2O3 mixed oxide. • Calcination of the CdAl-CrO4 gives rise to (Al,Cr)2O3 as the majority phase. - Abstract: Layered double hydroxides (LDHs) containing M(II), Al(III), and Cr(III) in the brucite-like layers (M = Cd, Zn), with different starting Al/Cr molar ratios and nitrate/carbonate as the interlayer anion, have been prepared following the coprecipitation method at constant pH: Zn(II)-Al(III)-Cr(III)-CO3(2-) at pH 10, and Cd(II)-Al(III)-Cr(III)-NO3(-) at pH 8. Two additional M(II),Al(III)-LDH samples (M = Cd, Zn) with chromate ions (CrO4(2-)) in the interlayer have been prepared by ionic exchange at pH 9 and 8, respectively, starting from M(II)-Al(III)-NO3(-). The samples have been characterised by atomic absorption spectrometry, powder X-ray diffraction (PXRD), FT-IR spectroscopy and transmission electron microscopy (TEM). Their thermal stability has been assessed by DTA-TG and mass spectrometric analysis of the evolved gases. The PXRD patterns of the solids calcined at 800 °C show diffraction lines corresponding to ZnO and ZnAl2-xCrxO4 for the Zn-containing samples, and diffraction lines attributed to CdO, CdCr2O4, and (Al,Cr)2O3 for the Cd-containing ones. Additionally, a minority oxide, Cd2CrO5, is observed for the Cd(II)-Al(III
Sensitivity analysis and related analysis : A survey of statistical techniques
Kleijnen, J.P.C.
1995-01-01
This paper reviews the state of the art in five related types of analysis, namely (i) sensitivity or what-if analysis, (ii) uncertainty or risk analysis, (iii) screening, (iv) validation, and (v) optimization. The main question is: when should which type of analysis be applied; which statistical
Using statistics to understand the environment
Cook, Penny A
2000-01-01
Using Statistics to Understand the Environment covers all the basic tests required for environmental practicals and projects and points the way to the more advanced techniques that may be needed in more complex research designs. Following an introduction to project design, the book covers methods to describe data, to examine differences between samples, and to identify relationships and associations between variables.Featuring: worked examples covering a wide range of environmental topics, drawings and icons, chapter summaries, a glossary of statistical terms and a further reading section, this book focuses on the needs of the researcher rather than on the mathematics behind the tests.
Patron Preference in Reference Service Points.
Morgan, Linda
1980-01-01
Behavior of patrons choosing between a person sitting at a counter and one sitting at a desk at each of two reference points was observed in the reference department during remodeling at the M. D. Anderson Library of the University of Houston. Results showed a statistically significant preference for the counter. (Author/JD)
The Developing Infant Creates a Curriculum for Statistical Learning.
Smith, Linda B; Jayaraman, Swapnaa; Clerkin, Elizabeth; Yu, Chen
2018-04-01
New efforts are using head cameras and eye-trackers worn by infants to capture everyday visual environments from the point of view of the infant learner. From this vantage point, the training sets for statistical learning develop as the sensorimotor abilities of the infant develop, yielding a series of ordered datasets for visual learning that differ in content and structure between timepoints but are highly selective at each timepoint. These changing environments may constitute a developmentally ordered curriculum that optimizes learning across many domains. Future advances in computational models will be necessary to connect the developmentally changing content and statistics of infant experience to the internal machinery that does the learning. Copyright © 2018 Elsevier Ltd. All rights reserved.
Second nuclear reactor, Point Lepreau, New Brunswick
International Nuclear Information System (INIS)
Connelly, R.; Desjardins, L.
1985-05-01
This is a report of the findings, conclusions and recommendations of the Environmental Assessment Panel appointed by the Ministers of Environment of New Brunswick and Canada to review the proposal to build a second nuclear unit at Point Lepreau, New Brunswick. The Panel's mandate was to assess the environmental and related social impacts of the proposal. The Panel concludes that the project can proceed without significant adverse effects provided certain recommendations are followed. In order to understand the impacts of Lepreau II, it was necessary to review, to the extent possible, the actual effects of Lepreau I before estimating the incremental effects of Lepreau II. In so doing, the Panel made a number of recommendations that should be implemented now. The information gathered and experience gained can be applied to Lepreau II to ensure that potential impacts are reduced to a minimum and that existing concerns associated with Lepreau I are corrected
Describing chaotic attractors: Regular and perpetual points
Dudkowski, Dawid; Prasad, Awadhesh; Kapitaniak, Tomasz
2018-03-01
We study the concepts of regular and perpetual points for describing the behavior of chaotic attractors in dynamical systems. The idea of these points, which have recently been introduced in theoretical investigations, is thoroughly discussed and extended to new types of models. We analyze the correlation between regular and perpetual points, as well as their relation to phase space, showing the potential usefulness of both types of points in the qualitative description of co-existing states. The ability of perpetual points to locate attractors is indicated, along with its potential cause. The location of chaotic trajectories and of the sets of considered points is investigated, and a study of the stability of the systems is presented. A statistical analysis of observing the desired states is performed. We focus on various types of dynamical systems, i.e., chaotic flows with self-excited and hidden attractors, forced mechanical models, and semiconductor superlattices, exhibiting the universality of appearance of the observed patterns and relations.
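For a one-dimensional flow xdot = f(x) the two notions are easy to state: fixed points satisfy f(x) = 0, while perpetual points have zero acceleration, f'(x) f(x) = 0, with nonzero velocity. A minimal sketch (Python; the logistic flow is an illustrative choice, not one of the paper's systems):

```python
def classify_points(f, fprime, candidates, tol=1e-9):
    """For a 1-D flow xdot = f(x): fixed points satisfy f(x) = 0, while
    perpetual points satisfy xddot = f'(x) f(x) = 0 with f(x) != 0."""
    fixed, perpetual = [], []
    for x in candidates:
        if abs(f(x)) < tol:
            fixed.append(x)
        elif abs(fprime(x) * f(x)) < tol:
            perpetual.append(x)
    return fixed, perpetual

f = lambda x: x * (1.0 - x)        # illustrative logistic flow
fprime = lambda x: 1.0 - 2.0 * x
fixed, perpetual = classify_points(f, fprime, [0.0, 0.5, 1.0])
```

Here x = 0 and x = 1 are fixed points, while x = 0.5 is perpetual: the velocity there is maximal (0.25) but the acceleration vanishes, which is the property exploited above for locating attractors.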
Diedrich, Alice; Schlegl, Sandra; Greetfeld, Martin; Fumi, Markus; Voderholzer, Ulrich
2018-03-01
This study examines the statistical and clinical significance of symptom changes during an intensive inpatient treatment program with a strong psychotherapeutic focus for individuals with severe bulimia nervosa. 295 consecutively admitted bulimic patients were administered the Structured Interview for Anorexic and Bulimic Syndromes-Self-Rating (SIAB-S), the Eating Disorder Inventory-2 (EDI-2), the Brief Symptom Inventory (BSI), and the Beck Depression Inventory-II (BDI-II) at treatment intake and discharge. Results indicated statistically significant symptom reductions with large effect sizes regarding severity of binge eating and compensatory behavior (SIAB-S), overall eating disorder symptom severity (EDI-2), overall psychopathology (BSI), and depressive symptom severity (BDI-II) even when controlling for antidepressant medication. The majority of patients showed either reliable (EDI-2: 33.7%, BSI: 34.8%, BDI-II: 18.1%) or even clinically significant symptom changes (EDI-2: 43.2%, BSI: 33.9%, BDI-II: 56.9%). Patients with clinically significant improvement were less distressed at intake and less likely to suffer from a comorbid borderline personality disorder when compared with those who did not improve to a clinically significant extent. Findings indicate that intensive psychotherapeutic inpatient treatment may be effective in about 75% of severely affected bulimic patients. For the remaining non-responding patients, inpatient treatment might be improved through an even stronger focus on the reduction of comorbid borderline personality traits.
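The reliable vs clinically significant distinction above follows the Jacobson-Truax framework; a minimal sketch (Python; names and normative values are hypothetical, and lower scores are assumed to mean improvement, as on the EDI-2 or BDI-II):

```python
import math

def reliable_change_index(pre, post, sd_pre, reliability):
    """Jacobson-Truax RCI: the pre-post change divided by the standard error
    of the difference; |RCI| > 1.96 marks change unlikely (p < .05) to be
    mere measurement error."""
    se_measurement = sd_pre * math.sqrt(1.0 - reliability)
    se_difference = math.sqrt(2.0) * se_measurement
    return (post - pre) / se_difference

def clinically_significant(pre, post, sd_pre, reliability,
                           clinical_mean, clinical_sd,
                           normal_mean, normal_sd):
    """Clinically significant change: reliable improvement AND a final score
    closer to the normative than the clinical population (criterion c cutoff,
    weighted by the two standard deviations)."""
    cutoff = (clinical_sd * normal_mean + normal_sd * clinical_mean) \
             / (clinical_sd + normal_sd)
    reliable = reliable_change_index(pre, post, sd_pre, reliability) < -1.96
    return reliable and post < cutoff
```

A patient must both change by more than measurement error allows and end up within the functional range, which is why the clinically-significant percentages above are not simply the reliable-change percentages.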
Directory of Open Access Journals (Sweden)
Taran Mojtaba
2015-09-01
Bioremediation is the removal of heavy metals such as nickel (Ni) using microorganisms and is considered an important field in biotechnology. Isolation and characterization of microorganisms exhibiting bioremediation activity, and their optimization to treat polluted wastewaters, is a vital and difficult task in remediation technologies. In this study, an investigation was carried out to isolate Ni(II)-remediating microbial strains from soils contaminated with municipal solid waste leachate. Furthermore, Taguchi design of experiments was used to evaluate the influence of concentration, pH, temperature, and time on bioremediation of Ni(II) using the isolated bacteria. This study concluded that Bacillus sp. KL1 is a Ni(II)-resistant strain with Ni(II) bioremediation activity. The highest bioremediation of Ni(II), 55.06%, was observed after 24 h at 30 °C, pH 7, and 100 ppm concentration. Moreover, it was also observed that concentration is the most effective factor in the bioremediation process. In conclusion, we have demonstrated that soils contaminated with garbage leachate harbor Bacillus sp. KL1 bacteria, which can efficiently take up and eliminate Ni(II) from contaminated sites, making it possible to treat heavy-metal-containing wastewaters in industry using this microorganism at optimized conditions.
Poullis, Michael
2014-11-01
EuroSCORE II, despite improving on the original EuroSCORE system, has not solved all the calibration and predictability issues. Recursive, non-linear and mixed recursive and non-linear regression analyses were assessed with regard to sensitivity, specificity and predictability of the original EuroSCORE and EuroSCORE II systems. The original logistic EuroSCORE, EuroSCORE II and recursive, non-linear and mixed recursive and non-linear regression analyses of these risk models were assessed via receiver operating characteristic (ROC) curves and the Hosmer-Lemeshow statistic with regard to the accuracy of predicting in-hospital mortality. Analysis was performed for isolated coronary artery bypass grafts (CABGs) (n = 2913), aortic valve replacement (AVR) (n = 814), mitral valve surgery (n = 340), combined AVR and CABG (n = 517), aortic surgery (n = 350), miscellaneous cases (n = 642), and combinations of the above cases (n = 5576). The original EuroSCORE had an ROC below 0.7 for isolated AVR and combined AVR and CABG. None of the methods described increased the ROC above 0.7. The EuroSCORE II risk model had an ROC below 0.7 for isolated AVR only. Recursive regression, non-linear regression, and mixed recursive and non-linear regression all increased the ROC above 0.7 for isolated AVR. The original EuroSCORE had a Hosmer-Lemeshow P-value above 0.05 for all patients and the subgroups analysed. All of the techniques markedly increased the Hosmer-Lemeshow P-value. The EuroSCORE II risk model had a Hosmer-Lemeshow statistic that was significant for all patients; recursive and non-linear regression failed to improve on the original Hosmer-Lemeshow statistic. The mixed recursive and non-linear regression using the EuroSCORE II risk model was the only model that produced an ROC of 0.7 or above for all patients and procedures and had a Hosmer-Lemeshow statistic that was highly non-significant. The original EuroSCORE and the EuroSCORE II risk models do not have adequate ROC and Hosmer-Lemeshow statistics across all procedure groups.
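The calibration test used throughout this comparison can be sketched as follows: a minimal Hosmer-Lemeshow statistic that bins subjects into deciles of predicted risk and compares observed against expected event counts per bin. The synthetic data below are illustrative, not the study's cohort.

```python
import numpy as np

def hosmer_lemeshow(y_true, y_prob, n_groups=10):
    """Hosmer-Lemeshow chi-squared statistic: sort subjects by predicted
    risk, split into n_groups bins, and accumulate
    (observed - expected)^2 / (n * pbar * (1 - pbar)) per bin.
    Compare the result against chi2 with (n_groups - 2) dof."""
    order = np.argsort(y_prob)
    y_true = np.asarray(y_true)[order]
    y_prob = np.asarray(y_prob)[order]
    chi2 = 0.0
    for bin_idx in np.array_split(np.arange(len(y_prob)), n_groups):
        obs = y_true[bin_idx].sum()          # observed events in bin
        exp = y_prob[bin_idx].sum()          # expected events in bin
        n = len(bin_idx)
        pbar = exp / n
        if 0 < pbar < 1:
            chi2 += (obs - exp) ** 2 / (n * pbar * (1 - pbar))
    return chi2

# Well-calibrated synthetic predictions: outcomes drawn from the
# predicted probabilities themselves, so chi2 should be modest.
rng = np.random.default_rng(0)
p = rng.uniform(0.01, 0.5, 1000)
y = (rng.uniform(size=1000) < p).astype(int)
chi2 = hosmer_lemeshow(y, p)
```

A large chi2 (small P-value) signals poor calibration, which is the sense in which a "significant" Hosmer-Lemeshow result counts against a risk model above.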
International Nuclear Information System (INIS)
Harangozo, M.; Jombik, J.; Schiller, P.; Toelgyessy, J.
1981-01-01
A method for the determination of citric, tartaric and undecylenic acids based on radiometric titration with 0.1 or 0.05 mol l-1 NaOH was developed. As an indicator of the end point, a radioactive kryptonate of glass was used. The experimental technique, results of determinations, as well as other possible applications of the radioactive kryptonate of glass for end-point determination in alkalimetric analyses of officinal pharmaceuticals are discussed. (author)
Energy Technology Data Exchange (ETDEWEB)
Khramov, R.N.; Vorob`ev, V.V.
1994-07-01
The frequency spectra (0-26 Hz) of electrograms (EG) of the preoptic region of the hypothalamus were studied in chronic experiments on nine awake rabbits under the influence of nonthermal millimeter-band (55-75 GHz) electromagnetic fields applied to various acupuncture points: (I) the auricular "heart" point (after F. G. Portnov); (II) the cranial acupoint (TR-20; the "hypothalamus" point after R. Voll); and (III) the "longevity" acupoint (E-36). Irradiation of point I was accompanied by significant suppression of hypothalamic electrical activity at 5 and 16 Hz and enhancement at 7-8, 12, and 26 Hz. The overall effects of irradiation of points I, II, and III were, respectively, 31%, 21%, and 5% (p < 0.05, U-criterion). These results suggest that acupuncture points I and II are more sensitive to millimeter-band radiation than is point III. Individual variability of the effects, and their inversion of sign after stress, were shown in rat experiments in which the acupuncture points were irradiated.
Immobilization of (dd)heteronuclear hexacyanoferrates(II) in a gelatin matrix
International Nuclear Information System (INIS)
Mikhajlov, O.V.
2008-01-01
Data pertinent to the possibility of preparing salts of (dd)heteronuclear hexacyanoferrates(II) with (M1)(II) and (M2)(II) (M1, M2 = Mn, Co, Ni, Cu, Zn, Cd) through contact between (M1)2[Fe(CN)6] immobilized in a gelatin matrix and aqueous solutions of metal chlorides have been systematized and summarized. The decisive role of the gelatin matrix, which performs the function of an organizing system in the formation of (dd)heteronuclear hexacyanoferrates(II) of metals, has been pointed out.
Energy Technology Data Exchange (ETDEWEB)
Jansson, Tommy; Segerpalm, Henrik
2008-06-15
direction and content of the research programme. The minor financiers, on the other hand, consider the programme a means to keep informed about what is happening in wind energy research on a national level. The international experts have measured the performance of the programme against its vision and goals, but also in comparison to similar projects abroad. The experts conclude that the programme is generally of a high scientific level, with a project portfolio that is very relevant in relation to the goals of the programme. The experts recommend that the programme continue with the same goals as now, and that it give priority to projects that benefit international collaboration. The programme's flexibility in allocating resources is seen as positive, and the experts suggest this flexibility can be used to finance industrially promising projects even when these do not obviously fit into existing programme areas. The experts also recommend a longer programme period in order to be able to finance PhD students to a larger extent, and that the criteria for project selection could also formally include scientific quality. Although the administration of the programme works well, the evaluation points to some areas that could be improved. The projects' reference groups are an appreciated tool for knowledge transfer, and the international experts as well as some interviewees think this function of the reference groups could be developed further. Another area for improvement is external communication. According to several interviewees, the programme could benefit from transmitting a clearer vision of what it represents and does. Vindforsk-II is, thus, an appreciated research programme, but the added value of being a financier of the programme is clearly not the same for everyone concerned, and more than one of the financiers attaches conditions to their continued participation. It is not a matter of course that those who finance the programme today will all continue to do so.
ARSENIC CONTAMINATION IN GROUNDWATER: A STATISTICAL MODELING
Palas Roy; Naba Kumar Mondal; Biswajit Das; Kousik Das
2013-01-01
High arsenic in natural groundwater in most of the tubewells of the Purbasthali-Block II area of Burdwan district (W.B., India) has recently come into focus as a serious environmental concern. This paper intends to illustrate statistical modeling of the arsenic-contaminated groundwater in order to identify how the arsenic content interrelates with the other participating groundwater parameters, so that the arsenic contamination level can easily be predicted by analyzing only such parameters. Mul...
Critical point inequalities and scaling limits
International Nuclear Information System (INIS)
Newman, C.M.
1979-01-01
A refined and extended version of the Buckingham-Gunton inequality relating various pairs of critical exponents is shown to be valid for a large class of statistical mechanical models. If this inequality is an equality (in the refined sense) and one of the critical exponents has a non-Gaussian value, then any scaling limit must be non-Gaussian. This result clarifies the relationship between the nontriviality or triviality of the scaling limit for ordinary critical points in four dimensions (or tricritical points in three dimensions) and the existence of logarithmic factors in the asymptotics which define the two critical exponents. (orig.)
12th Workshop on Stochastic Models, Statistics and Their Applications
Rafajłowicz, Ewaryst; Szajowski, Krzysztof
2015-01-01
This volume presents the latest advances and trends in stochastic models and related statistical procedures. Selected peer-reviewed contributions focus on statistical inference, quality control, change-point analysis and detection, empirical processes, time series analysis, survival analysis and reliability, statistics for stochastic processes, big data in technology and the sciences, statistical genetics, experiment design, and stochastic models in engineering. Stochastic models and related statistical procedures play an important part in furthering our understanding of the challenging problems currently arising in areas of application such as the natural sciences, information technology, engineering, image analysis, genetics, energy and finance, to name but a few. This collection arises from the 12th Workshop on Stochastic Models, Statistics and Their Applications, Wroclaw, Poland.
THE USE OF STATISTIC KNOWLEDGE IN ACADEMIC DISCOURSES OF LITERACY
Directory of Open Access Journals (Sweden)
Renata Sperrhake
2012-12-01
The present paper analyzes how discourses of literacy, illiteracy and alphabetic literacy have used statistical knowledge. We carried out a search of digital files of journals specialized in Education and Statistics, as well as of the CAPES Thesis Database. From the selected corpus, it is possible to perceive the following kinds of uses: (1) statistics used as empirical material; (2) statistics used as methodological procedure; (3) reference to statistical knowledge. Besides evidencing how statistical knowledge has been used, the analysis of academic productions has enabled us to perceive two other points: the appearance of levels of alphabetic literacy, and the change in the understanding of the subject's relationship with reading and writing.
Improving the Pattern Reproducibility of Multiple-Point-Based Prior Models Using Frequency Matching
DEFF Research Database (Denmark)
Cordua, Knud Skou; Hansen, Thomas Mejer; Mosegaard, Klaus
2014-01-01
Some multiple-point-based sampling algorithms, such as the snesim algorithm, rely on sequential simulation. The conditional probability distributions that are used for the simulation are based on statistics of multiple-point data events obtained from a training image. During the simulation, data events with zero probability in the training image statistics may occur. This is handled by pruning the set of conditioning data until an event with non-zero probability is found. The resulting probability distribution sampled by such algorithms is a pruned mixture model. The pruning strategy leads to a probability distribution that lacks some of the information provided by the multiple-point statistics from the training image, which reduces the reproducibility of the training image patterns in the outcome realizations. When pruned mixture models are used as prior models for inverse problems, local re...
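The pruning step described above can be sketched as follows. The toy training-image event table, the two-facies encoding, and the function names are all illustrative stand-ins, not the snesim implementation.

```python
from collections import Counter

def conditional_prob(event, center, ti_counts):
    """P(center facies | conditioning event) from training-image counts;
    returns None if the event was never observed in the training image."""
    total = sum(ti_counts.get((event, c), 0) for c in (0, 1))
    if total == 0:
        return None
    return ti_counts.get((event, center), 0) / total

def pruned_prob(event, center, ti_counts):
    """Drop the farthest conditioning datum until the event has non-zero
    probability in the training-image statistics (the pruning step)."""
    ev = tuple(event)
    while ev:
        p = conditional_prob(ev, center, ti_counts)
        if p is not None:
            return p, ev
        ev = ev[:-1]  # prune the last (farthest) conditioning point
    return 0.5, ev    # no conditioning data left: fall back to a flat prior

# Toy counts of data events, keyed by ((offset, facies), ...) tuples
# plus the center-node facies (0 or 1); values are occurrence counts.
ti_counts = Counter({
    ((('N', 1),), 1): 8, ((('N', 1),), 0): 2,
    ((('N', 1), ('E', 0)), 1): 0,  # unseen with the extra datum
})
p, used = pruned_prob((('N', 1), ('E', 0)), 1, ti_counts)
```

Here the two-point event is unseen, so the algorithm falls back to the one-point event and returns 8/10; this is exactly the information loss that reduces pattern reproducibility in the realizations.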
Modeling of Dissipation Element Statistics in Turbulent Non-Premixed Jet Flames
Denker, Dominik; Attili, Antonio; Boschung, Jonas; Hennig, Fabian; Pitsch, Heinz
2017-11-01
The dissipation element (DE) analysis is a method for analyzing and compartmentalizing turbulent scalar fields. DEs can be described by two parameters, namely the Euclidean distance l between their extremal points and the scalar difference in the respective points Δϕ. The joint probability density function (jPDF) of these two parameters P(Δϕ, l) is expected to suffice for a statistical reconstruction of the scalar field. In addition, reacting scalars show a strong correlation with these DE parameters in both premixed and non-premixed flames. Normalized DE statistics show a remarkable invariance towards changes in Reynolds numbers. This feature of DE statistics was exploited in a Boltzmann-type evolution equation based model for the probability density function (PDF) of the distance between the extremal points P(l) in isotropic turbulence. Later, this model was extended for the jPDF P(Δϕ, l) and then adapted for use in free shear flows. The effect of heat release on the scalar scales and DE statistics is investigated and an extended model for non-premixed jet flames is introduced, which accounts for the presence of chemical reactions. This new model is validated against a series of DNS of temporally evolving jet flames. European Research Council Project "Milestone".
Validation of statistical models for creep rupture by parametric analysis
Energy Technology Data Exchange (ETDEWEB)
Bolton, J., E-mail: john.bolton@uwclub.net [65, Fisher Ave., Rugby, Warks CV22 5HW (United Kingdom)
2012-01-15
Statistical analysis is an efficient method for the optimisation of any candidate mathematical model of creep rupture data, and for the comparative ranking of competing models. However, when a series of candidate models has been examined and the best of the series has been identified, there is no statistical criterion to determine whether a yet more accurate model might be devised. Hence there remains some uncertainty that the best of any series examined is sufficiently accurate to be considered reliable as a basis for extrapolation. This paper proposes that models should be validated primarily by parametric graphical comparison to rupture data and rupture gradient data. It proposes that no mathematical model should be considered reliable for extrapolation unless the visible divergence between model and data is so small as to leave no apparent scope for further reduction. This study is based on the data for a 12% Cr alloy steel used in BS PD6605:1998 to exemplify its recommended statistical analysis procedure. The models considered in this paper include a) a relatively simple model, b) the PD6605 recommended model and c) a more accurate model of somewhat greater complexity. Highlights: • The paper discusses the validation of creep rupture models derived from statistical analysis. • It demonstrates that models can be satisfactorily validated by a visual-graphic comparison of models to data. • The method proposed utilises test data both as conventional rupture stress and as rupture stress gradient. • The approach is shown to be more reliable than a well-established and widely used method (BS PD6605).
The Battle of Moscow - Turning Point of World War II
Directory of Open Access Journals (Sweden)
V M Falin
2015-12-01
The article is dedicated to the Battle of Moscow in October-December 1941. The author analyzes the causes of the failure of the German army, which tried to encircle and capture Moscow, the events taking place on the outskirts of Moscow, and the attempts of German troops to encircle the city. The author presents data on the speech by Adolf Hitler in Berlin on October 5, 1941, in which he acknowledged the failure of the Blitzkrieg in the battle for Moscow and its suburbs. The researcher uses the documents of the Wehrmacht High Command, which stated that after the Battle of Moscow, German troops could not, at any later stage of the war, restore the quality and morale of the armed forces with which the Reich had rushed into a campaign for world domination. The author, a prominent public and political figure of the USSR, also relies on personal recollections and interviews with prominent generals of World War II, including I. Konev.
Ergodic theory and dynamical systems from a physical point of view
International Nuclear Information System (INIS)
Sabbagan, M.; Nasertayoob, P.
2008-01-01
Ergodic theory and a large part of dynamical systems are in essence mathematical modeling belonging to statistical physics. This paper is an attempt to present some of the results and principles of ergodic theory and dynamical systems from certain viewpoints of physics, such as thermodynamics and classical mechanics. The significance of the variational principle in statistical physics, the relation between the classical approach and the statistical approach, and the notion of reversibility from the statistical point of view are discussed. (author)
Statistical inference an integrated approach
Migon, Helio S; Louzada, Francisco
2014-01-01
Introduction: Information; The concept of probability; Assessing subjective probabilities; An example; Linear algebra and probability; Notation; Outline of the book. Elements of Inference: Common statistical models; Likelihood-based functions; Bayes theorem; Exchangeability; Sufficiency and exponential family; Parameter elimination. Prior Distribution: Entirely subjective specification; Specification through functional forms; Conjugacy with the exponential family; Non-informative priors; Hierarchical priors. Estimation: Introduction to decision theory; Bayesian point estimation; Classical point estimation; Empirical Bayes estimation; Comparison of estimators; Interval estimation; Estimation in the Normal model. Approximating Methods: The general problem of inference; Optimization techniques; Asymptotic theory; Other analytical approximations; Numerical integration methods; Simulation methods. Hypothesis Testing: Introduction; Classical hypothesis testing; Bayesian hypothesis testing; Hypothesis testing and confidence intervals; Asymptotic tests. Prediction...
A New Perspective on the Relationship Between Cloud Shade and Point Cloudiness
Czech Academy of Sciences Publication Activity Database
Brabec, Marek; Badescu, V.; Paulescu, M.; Dumitrescu, A.
172-173, 15 May (2016), s. 136-146 ISSN 0169-8095 Institutional support: RVO:67985807 Keywords: point cloudiness * cloud shade * statistical analysis * semi-parametric modeling Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.778, year: 2016
Elementary statistical thermodynamics a problems approach
Smith, Norman O
1982-01-01
This book is a sequel to my Chemical Thermodynamics: A Problems Approach published in 1967, which concerned classical thermodynamics almost exclusively. Most books on statistical thermodynamics now available are written either for the superior general chemistry student or for the specialist. The author has felt the need for a text which would bring the intermediate reader to the point where he could not only appreciate the roots of the subject but also have some facility in calculating thermodynamic quantities. Although statistical thermodynamics comprises an essential part of the college training of a chemist, its treatment in general physical chemistry texts is, of necessity, compressed to the point where the less competent student is unable to appreciate or comprehend its logic and beauty, and is reduced to memorizing a series of formulas. It has been my aim to fill this need by writing a logical account of the foundations and applications of the subject at a level which can be grasped by an under...
Charged-particle thermonuclear reaction rates: I. Monte Carlo method and statistical distributions
International Nuclear Information System (INIS)
Longland, R.; Iliadis, C.; Champagne, A.E.; Newton, J.R.; Ugalde, C.; Coc, A.; Fitzgerald, R.
2010-01-01
A method based on Monte Carlo techniques is presented for evaluating thermonuclear reaction rates. We begin by reviewing commonly applied procedures and point out that reaction rates that have been reported up to now in the literature have no rigorous statistical meaning. Subsequently, we associate each nuclear physics quantity entering in the calculation of reaction rates with a specific probability density function, including Gaussian, lognormal and chi-squared distributions. Based on these probability density functions the total reaction rate is randomly sampled many times until the required statistical precision is achieved. This procedure results in a median (Monte Carlo) rate which agrees under certain conditions with the commonly reported recommended 'classical' rate. In addition, we present at each temperature a low rate and a high rate, corresponding to the 0.16 and 0.84 quantiles of the cumulative reaction rate distribution. These quantities are in general different from the statistically meaningless 'minimum' (or 'lower limit') and 'maximum' (or 'upper limit') reaction rates which are commonly reported. Furthermore, we approximate the output reaction rate probability density function by a lognormal distribution and present, at each temperature, the lognormal parameters μ and σ. The values of these quantities will be crucial for future Monte Carlo nucleosynthesis studies. Our new reaction rates, appropriate for bare nuclei in the laboratory, are tabulated in the second paper of this issue (Paper II). The nuclear physics input used to derive our reaction rates is presented in the third paper of this issue (Paper III). In the fourth paper of this issue (Paper IV) we compare our new reaction rates to previous results.
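A minimal sketch of the Monte Carlo procedure: sample each input quantity from its assigned probability density, form the total rate, and report the 0.16 and 0.84 quantiles along with the lognormal parameters mu and sigma. The input quantities, their parameters, and the way they combine below are purely illustrative, not the paper's nuclear-physics inputs.

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 100_000

# Hypothetical inputs: a resonance strength with lognormal uncertainty
# and a nonresonant S-factor with Gaussian uncertainty (toy values).
strength = rng.lognormal(mean=np.log(1.0e-3), sigma=0.2, size=n_samples)
s_factor = rng.normal(loc=5.0, scale=0.5, size=n_samples)

# Toy combination of the two contributions into a total rate sample
# (the weights are arbitrary; a real rate integrates over temperature).
rate = strength * 1.0e6 + np.clip(s_factor, 0.0, None) * 10.0

# Median (recommended) rate and the 0.16 / 0.84 quantile "low"/"high" rates
low, median, high = np.quantile(rate, [0.16, 0.50, 0.84])

# Lognormal approximation of the output rate distribution
mu = np.log(rate).mean()
sigma = np.log(rate).std()
```

The (low, median, high) triple replaces the statistically meaningless "minimum"/"maximum" rates, and (mu, sigma) summarizes the whole distribution for downstream nucleosynthesis sampling.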
Classical model of intermediate statistics
International Nuclear Information System (INIS)
Kaniadakis, G.
1994-01-01
In this work we present a classical kinetic model of intermediate statistics. In the case of Brownian particles we show that the Fermi-Dirac (FD) and Bose-Einstein (BE) distributions can be obtained, just as the Maxwell-Boltzmann (MB) distribution, as steady states of a classical kinetic equation that intrinsically takes into account an exclusion-inclusion principle. In our model the intermediate statistics are obtained as steady states of a system of coupled nonlinear kinetic equations, where the coupling constants are the transmutational potentials η_κκ'. We show that, besides the FD-BE intermediate statistics extensively studied from the quantum point of view, we can also study the MB-FD and MB-BE ones. Moreover, our model allows us to treat the three-state mixing FD-MB-BE intermediate statistics. For boson and fermion mixing in a D-dimensional space, we obtain a family of FD-BE intermediate statistics by varying the transmutational potential η_BF. This family contains, as a particular case when η_BF = 0, the quantum statistics recently proposed by L. Wu, Z. Wu, and J. Sun [Phys. Lett. A 170, 280 (1992)]. When we consider the two-dimensional FD-BE statistics, we derive an analytic expression for the fraction of fermions. When the temperature T→∞, the system is composed of an equal number of bosons and fermions, regardless of the value of η_BF. On the contrary, when T = 0, η_BF becomes important and, according to its value, the system can be completely bosonic or fermionic, or composed of both bosons and fermions.
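For concreteness, the three limiting distributions can be written in the common one-parameter form n(E) = 1/(exp((E-mu)/T) + kappa). This interpolation is an illustrative formula for the endpoints only, not the coupled kinetic-equation model of the paper itself.

```python
import math

def occupation(energy, mu, T, kappa):
    """Mean occupation n(E) = 1 / (exp((E - mu)/T) + kappa).

    kappa = +1 gives Fermi-Dirac, kappa = -1 Bose-Einstein
    (valid for E > mu), and kappa = 0 Maxwell-Boltzmann.
    """
    return 1.0 / (math.exp((energy - mu) / T) + kappa)

# Occupations at E = 1, mu = 0, T = 1 (units where k_B = 1)
fd = occupation(1.0, 0.0, 1.0, +1)  # exclusion suppresses occupation
be = occupation(1.0, 0.0, 1.0, -1)  # inclusion enhances it
mb = occupation(1.0, 0.0, 1.0, 0)   # classical reference
```

The ordering fd < mb < be at fixed (E, mu, T) is the exclusion-inclusion effect that the kinetic model builds in at the level of the transition rates.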
Tenenbaum, Joel
This thesis applies statistical physics concepts and methods to quantitatively analyze complex systems. This thesis is separated into four parts: (i) characteristics of earthquake systems, (ii) memory and volatility in data time series, (iii) the application of part (ii) to world financial markets, and (iv) statistical observations on the evolution of word usage. In Part I, we observe statistical patterns in the occurrence of earthquakes. We select a 14-year earthquake catalog covering the archipelago of Japan. We find that regions traditionally thought of as being too distant from one another for causal contact display remarkably high correlations, and the networks that result have a tendency to link highly connected areas with other highly connected areas. In Part II, we introduce and apply the concept of "volatility asymmetry", the primary use of which is in financial data. We explain the relation between memory and "volatility asymmetry" in terms of an asymmetry parameter lambda. We define a litmus test for determining whether lambda is statistically significant and propose a stochastic model based on this parameter and use the model to further explain empirical data. In Part III, we expand on volatility asymmetry. Importing the concepts of time dependence and universality from physics, we explore the aspects of emerging (or "transition") economies in Eastern Europe as they relate to asymmetry. We find that these emerging markets in some instances behave like developed markets and in other instances do not, and that the distinction is a matter both of country and of time period, crisis periods showing different asymmetry characteristics than "healthy" periods. In Part IV, we take note of a series of findings in econophysics, showing statistical growth similarities between a variety of different areas that all have in common the fact of taking place in areas that are both (i) competing and (ii) dynamic. We show that this same growth distribution can be
Directory of Open Access Journals (Sweden)
Santos T. J.
2014-04-01
One of the principal characteristics of nuclear multifragmentation is the emission of complex fragments of intermediate mass. The statistical multifragmentation model has been used for many years to describe the distribution of these fragments. An extension of the statistical multifragmentation model to include partial widths and lifetimes for emission interprets the fragmentation process as the near-simultaneous limit of a series of sequential binary decays. In this extension, the expression describing intermediate mass fragment emission is almost identical to that of light particle emission. At lower temperatures, similar expressions have been shown to furnish a good description of very light intermediate mass fragment emission. However, this is usually not considered a good approximation to the emission of heavier fragments. These emissions seem to be determined by the characteristics of the system at the saddle-point and its subsequent dynamical evolution rather than by the scission point. Here, we compare the barriers and decay widths of these different formulations of intermediate fragment emission and analyze the extent to which they remain distinguishable at high excitation energy.
Selimkhanov, J; Thompson, W C; Guo, J; Hall, K D; Musante, C J
2017-08-01
The design of well-powered in vivo preclinical studies is a key element in building the knowledge of disease physiology for the purpose of identifying and effectively testing potential antiobesity drug targets. However, as a result of the complexity of the obese phenotype, there is limited understanding of the variability within and between study animals of macroscopic end points such as food intake and body composition. This, combined with limitations inherent in the measurement of certain end points, presents challenges to study design that can have significant consequences for an antiobesity program. Here, we analyze a large, longitudinal study of mouse food intake and body composition during diet perturbation to quantify the variability and interaction of the key metabolic end points. To demonstrate how conclusions can change as a function of study size, we show that a simulated preclinical study properly powered for one end point may lead to false conclusions based on secondary end points. We then propose the guidelines for end point selection and study size estimation under different conditions to facilitate proper power calculation for a more successful in vivo study design.
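The study-size estimation at issue can be sketched with a standard normal-approximation sample-size formula for a two-group comparison of means. The effect sizes below are hypothetical stand-ins for a primary and a secondary end point, not values from the analyzed mouse study.

```python
import math
from statistics import NormalDist

def per_group_n(effect_size, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sample comparison of means
    (normal approximation): n = 2 * ((z_{1-a/2} + z_{power}) / d)^2,
    where d is Cohen's standardized effect size."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2)

# A primary end point with a large effect (d = 0.8) needs far fewer
# animals per group than a secondary end point with a modest one (d = 0.4).
n_primary = per_group_n(0.8)
n_secondary = per_group_n(0.4)
```

The quadratic dependence on 1/d is the reason a study powered for one end point can be badly underpowered for a secondary one, which is exactly the failure mode the paper simulates.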
Dynamical topology and statistical properties of spatiotemporal chaos.
Zhuang, Quntao; Gao, Xun; Ouyang, Qi; Wang, Hongli
2012-12-01
For spatiotemporal chaos described by partial differential equations, there are generally locations where the dynamical variable achieves its local extremum or where the time partial derivative of the variable vanishes instantaneously. To a large extent, the location and movement of these topologically special points determine the qualitative structure of the disordered states. We analyze numerically the statistical properties of the topologically special points in one-dimensional spatiotemporal chaos. The probability distribution functions for the number of points, the lifespan, and the distance covered during their lifetime are obtained from numerical simulations. Mathematically, we establish a probabilistic model to describe the dynamics of these topologically special points. In spite of their different definitions in different spatiotemporal chaotic systems, the dynamics of these special points can be described in a uniform approach.
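A minimal version of the first step, locating the local-extremum points of a one-dimensional field at a fixed time, can be sketched as below; the test field and grid are illustrative, not the paper's PDE solutions.

```python
import numpy as np

def local_extrema(field):
    """Indices where a 1-D field attains a local extremum, detected as
    a sign change of the discrete first derivative."""
    d = np.diff(field)
    return np.where(np.sign(d[:-1]) * np.sign(d[1:]) < 0)[0] + 1

# Illustrative disordered-looking profile on a uniform grid
x = np.linspace(0, 4 * np.pi, 400)
field = np.sin(x) + 0.3 * np.sin(3.1 * x)
idx = local_extrema(field)
n_points = len(idx)
```

Tracking how these indices appear, move, and annihilate from one time step to the next is what yields the lifespan and displacement statistics discussed above.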
Ginting, H.; Näring, G.W.B.; Veld, W.M. van der; Srisayekti, W.; Becker, E.S.
2013-01-01
This study assesses the validity and determines the cut-off point for the Beck Depression Inventory-II (the BDI-II) among Indonesians. The Indonesian version of the BDI-II (the Indo BDI-II) was administered to 720 healthy individuals from the general population, 215 Coronary Heart Disease (CHD)
Training experience at Experimental Breeder Reactor II
International Nuclear Information System (INIS)
Driscoll, J.W.; McCormick, R.P.; McCreery, H.I.
1978-01-01
The EBR-II Training Group develops, maintains, and oversees training programs and activities associated with the EBR-II Project. The group originally spent all its time on EBR-II plant-operations training, but has gradually spread its work into other areas. These other areas of training now include mechanical maintenance, the fuel manufacturing facility, instrumentation and control, fissile fuel handling, and emergency activities. This report describes each of the programs and gives a statistical breakdown of the time spent by the Training Group on each program. The major training programs for the EBR-II Project are presented by multimedia methods at a pace controlled by the student. The Training Group has much experience in the use of audio-visual techniques and equipment, including videotapes, 35 mm slides, Super 8 and 16 mm film, models, and filmstrips. The effectiveness of these techniques is evaluated in this report
Lagrangian statistics in weakly forced two-dimensional turbulence.
Rivera, Michael K; Ecke, Robert E
2016-01-01
Measurements of Lagrangian single-point and multiple-point statistics in a quasi-two-dimensional stratified layer system are reported. The system consists of a layer of salt water over an immiscible layer of Fluorinert and is forced electromagnetically so that mean-squared vorticity is injected at a well-defined spatial scale r_i. Simultaneous cascades develop in which enstrophy flows predominately to small scales whereas energy cascades, on average, to larger scales. Lagrangian correlations and one- and two-point displacements are measured for random initial conditions and for initial positions within topological centers and saddles. Some of the behavior of these quantities can be understood in terms of the trapping characteristics of long-lived centers, the slow motion near strong saddles, and the rapid fluctuations outside of either centers or saddles. We also present statistics of Lagrangian velocity fluctuations using energy spectra in frequency space and structure functions in real space. We compare with complementary Eulerian velocity statistics. We find that simultaneous inverse energy and enstrophy ranges present in spectra are not directly echoed in real-space moments of velocity difference. Nevertheless, the spectral ranges line up well with features of moment ratios, indicating that although the moments are not exhibiting unambiguous scaling, the behavior of the probability distribution functions is changing over short ranges of length scales. Implications for understanding weakly forced 2D turbulence with simultaneous inverse and direct cascades are discussed.
Use of Statistical Information for Damage Assessment of Civil Engineering Structures
DEFF Research Database (Denmark)
Kirkegaard, Poul Henning; Andersen, P.
This paper considers the problem of damage assessment of civil engineering structures using statistical information. The aim of the paper is to review how researchers recently have tried to solve the problem. It is pointed out that the problem consists of not only how to use the statistical...
Statistical Network Analysis for Functional MRI: Mean Networks and Group Comparisons.
Directory of Open Access Journals (Sweden)
Cedric E Ginestet
2014-05-01
Comparing networks in neuroscience is hard, because the topological properties of a given network are necessarily dependent on the number of edges of that network. This problem arises in the analysis of both weighted and unweighted networks. The term density is often used in this context, in order to refer to the mean edge weight of a weighted network, or to the number of edges in an unweighted one. Comparing families of networks is therefore statistically difficult because differences in topology are necessarily associated with differences in density. In this review paper, we consider this problem from two different perspectives, which include (i) the construction of summary networks, such as how to compute and visualize the mean network from a sample of network-valued data points; and (ii) how to test for topological differences, when two families of networks also exhibit significant differences in density. In the first instance, we show that the issue of summarizing a family of networks can be conducted by either adopting a mass-univariate approach, which produces a statistical parametric network (SPN), or by directly computing the mean network, provided that a metric has been specified on the space of all networks with a given number of nodes. In the second part of this review, we then highlight the inherent problems associated with the comparison of topological functions of families of networks that differ in density. In particular, we show that a wide range of topological summaries, such as global efficiency and network modularity, are highly sensitive to differences in density. Moreover, these problems are not restricted to unweighted metrics, as we demonstrate that the same issues remain present when considering the weighted versions of these metrics. We conclude by encouraging caution, when reporting such statistical comparisons, and by emphasizing the importance of constructing summary networks.
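The density sensitivity discussed above can be made concrete with a sketch that computes global efficiency (the mean inverse shortest-path length) for two random graphs differing only in edge density. The graph size and edge probabilities are arbitrary illustrative choices.

```python
import numpy as np

def global_efficiency(adj):
    """Mean of 1/d(i, j) over all ordered node pairs, with shortest-path
    distances d from breadth-first search on an unweighted, undirected
    adjacency matrix; disconnected pairs contribute zero."""
    n = len(adj)
    eff = 0.0
    for src in range(n):
        dist = np.full(n, np.inf)
        dist[src] = 0
        frontier, d = [src], 0
        while frontier:
            d += 1
            nxt = []
            for u in frontier:
                for v in np.flatnonzero(adj[u]):
                    if dist[v] == np.inf:
                        dist[v] = d
                        nxt.append(v)
            frontier = nxt
        eff += sum(1.0 / dist[j] for j in range(n)
                   if j != src and np.isfinite(dist[j]))
    return eff / (n * (n - 1))

rng = np.random.default_rng(1)
n = 30

def random_graph(p):
    """Erdos-Renyi-style undirected graph with edge probability p."""
    a = np.triu((rng.uniform(size=(n, n)) < p).astype(int), 1)
    return a + a.T

sparse, dense = random_graph(0.1), random_graph(0.4)
e_sparse, e_dense = global_efficiency(sparse), global_efficiency(dense)
```

Even with identical generative structure, the denser graph scores markedly higher efficiency, which is why comparing raw efficiency between groups that differ in density is misleading.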
Transportation Statistics Annual Report 1997
Energy Technology Data Exchange (ETDEWEB)
Fenn, M.
1997-01-01
This document is the fourth Transportation Statistics Annual Report (TSAR) prepared by the Bureau of Transportation Statistics (BTS) for the President and Congress. As in previous years, it reports on the state of the U.S. transportation system at two levels. First, in Part I, it provides a statistical and interpretive survey of the system—its physical characteristics, its economic attributes, aspects of its use and performance, and the scale and severity of unintended consequences of transportation, such as fatalities and injuries, oil import dependency, and environmental impacts. Part I also explores the state of transportation statistics, and new needs of the rapidly changing world of transportation. Second, Part II of the report, as in prior years, explores in detail the performance of the U.S. transportation system from the perspective of desired social outcomes or strategic goals. This year, the performance aspect of transportation chosen for thematic treatment is “Mobility and Access,” which complements past TSAR theme sections on “The Economic Performance of Transportation” (1995) and “Transportation and the Environment” (1996). Mobility and access are at the heart of the transportation system’s performance from the user’s perspective. In what ways and to what extent does the geographic freedom provided by transportation enhance personal fulfillment of the nation’s residents and contribute to economic advancement of people and businesses? This broad question underlies many of the topics examined in Part II: What is the current level of personal mobility in the United States, and how does it vary by sex, age, income level, urban or rural location, and over time? What factors explain variations? Has transportation helped improve people’s access to work, shopping, recreational facilities, and medical services, and in what ways and in what locations? How have barriers, such as age, disabilities, or lack of an automobile, affected these
Czech Academy of Sciences Publication Activity Database
Machala, L.; Pospíšil, Jaroslav
40-41, - (2001), s. 155-162 ISSN 0231-9365 Institutional research plan: CEZ:AV0Z1010921 Keywords : biometric verification * biometric identification * human eye's iris * statistical error of type I * statistical error of type II * characteristic iris vector Subject RIV: BH - Optics, Masers, Lasers
Choquet, Élodie; Pueyo, Laurent; Soummer, Rémi; Perrin, Marshall D.; Hagan, J. Brendan; Gofas-Salas, Elena; Rajan, Abhijith; Aguilar, Jonathan
2015-09-01
The ALICE program, for Archival Legacy Investigation of Circumstellar Environment, is currently conducting a virtual survey of about 400 stars by re-analyzing the HST-NICMOS coronagraphic archive with advanced post-processing techniques. We present here the strategy that we adopted to identify detections and potential candidates for follow-up observations, and we give a preliminary overview of our detections. We present a statistical analysis conducted to evaluate the confidence level of these detections and the completeness of our candidate search.
Morphological characteristics of Class II malocclusion
Directory of Open Access Journals (Sweden)
Pavlović J.
2015-01-01
Class II malocclusions are complex anomalies of the skeletal and dental systems. The aim of this study is to use roentgencephalometric analysis to determine more closely the morphological characteristics of this malocclusion. The study included 30 patients aged 18-30, previously clinically diagnosed with Class II malocclusion, before the planned orthodontic treatment. The results of the analysis of lateral cephalometric radiographs were compared with those of 30 patients with Class I malocclusion. Three linear and two angular cranial base dimensions, and nine angular and four linear measures from the facial skeleton, were analyzed. The results show no statistically significant differences in cranial base angle (SNBa) and anterior cranial base length (S-N) between Class II and the Class I control. The angle of maxillary prognathism (SNA) did not differ significantly between Class I and Class II, but the SNB angle was significantly smaller. The length of the maxillary base (A'-SnP) was longer and the length of the mandible (Pg'-MT1/MT) significantly smaller. The gonial angle (ArGo-Me) was smaller, with an open articular angle (GoArSN). The morphological characteristics of Class II malocclusion are a retrognathic and shorter mandible, a normognathic and longer maxilla, and an open articular angle with a vertical tendency of the craniofacial growth pattern.
Zhao, Lingling; Zhong, Shuxian; Fang, Keming; Qian, Zhaosheng; Chen, Jianrong
2012-11-15
A dual-cloud point extraction (d-CPE) procedure has been developed for simultaneous pre-concentration and separation of heavy metal ions (Cd2+, Co2+, Ni2+, Pb2+, Zn2+, and Cu2+) in water samples by inductively coupled plasma optical emission spectrometry (ICP-OES). The procedure is based on forming complexes of the metal ions with 8-hydroxyquinoline (8-HQ) in the as-formed Triton X-114 surfactant-rich phase. Instead of direct injection or analysis, the surfactant-rich phase containing the complexes was treated with nitric acid, and the detected ions were back-extracted into the aqueous phase in a second cloud point extraction stage, and finally determined by ICP-OES. Under the optimum conditions (pH = 7.0, Triton X-114 = 0.05% (w/v), 8-HQ = 2.0×10^-4 mol L^-1, HNO3 = 0.8 mol L^-1), the detection limits for Cd2+, Co2+, Ni2+, Pb2+, Zn2+, and Cu2+ ions were 0.01, 0.04, 0.01, 0.34, 0.05, and 0.04 μg L^-1, respectively. Relative standard deviation (RSD) values for 10 replicates at 100 μg L^-1 were lower than 6.0%. The proposed method was successfully applied to the determination of Cd2+, Co2+, Ni2+, Pb2+, Zn2+, and Cu2+ ions in water samples.
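The abstract reports detection limits and RSDs but not the formulas behind them; the sketch below assumes the conventional 3σ criterion (LOD = 3 × SD of blank replicates / calibration slope) and the usual percent-RSD definition, with hypothetical signal values.

```python
import statistics

def detection_limit(blank_signals, slope):
    """Conventional 3-sigma detection limit: 3 * SD(blank) / calibration slope.
    Assumed criterion; the paper does not state its LOD formula."""
    return 3 * statistics.stdev(blank_signals) / slope

def rsd_percent(replicates):
    """Relative standard deviation in percent."""
    return 100 * statistics.stdev(replicates) / statistics.mean(replicates)

# Hypothetical blank replicate signals (counts) and calibration slope (counts per ug/L)
blanks = [10.2, 10.5, 9.8, 10.1, 10.4, 9.9, 10.3]
slope = 85.0  # assumed, counts per ug L^-1
print(round(detection_limit(blanks, slope), 4))  # LOD in ug/L

# Hypothetical 10 replicate recoveries at 100 ug/L
reps = [98.0, 101.5, 99.2, 100.8, 97.6, 102.3, 99.9, 100.4, 98.8, 101.1]
print(round(rsd_percent(reps), 2))  # should be well under 6 %
```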
A Statistical Project Control Tool for Engineering Managers
Bauch, Garland T.
2001-01-01
This slide presentation reviews the use of a Statistical Project Control Tool (SPCT) for managing engineering projects. A literature review pointed to a definition of project success (i.e., a project is successful when the cost, schedule, technical performance, and quality satisfy the customer). The literature review also pointed to project success factors, traditional project control tools, and performance measures, which are detailed in the report. The essential problem is that with resources becoming more limited and the number of projects increasing, project failures are rising; existing methods are limited, and a systematic method is required. The objective of the work is to provide a new statistical project control tool for project managers. Graphs produced with the SPCT method, plotting the results of three successful projects and three failed projects, are reviewed, with success and failure defined by the owner.
Quantifying spatial and temporal trends in beach-dune volumetric changes using spatial statistics
Eamer, Jordan B. R.; Walker, Ian J.
2013-06-01
Spatial statistics are generally underutilized in coastal geomorphology, despite offering great potential for identifying and quantifying spatial-temporal trends in landscape morphodynamics. In particular, local Moran's I_i provides a statistical framework for detecting clusters of significant change in an attribute (e.g., surface erosion or deposition) and quantifying how this changes over space and time. This study analyzes and interprets spatial-temporal patterns in sediment volume changes in a beach-foredune-transgressive dune complex following removal of invasive marram grass (Ammophila spp.). Results are derived by detecting significant changes in post-removal repeat DEMs derived from topographic surveys and airborne LiDAR. The study site was separated into discrete, linked geomorphic units (beach, foredune, transgressive dune complex) to facilitate sub-landscape scale analysis of volumetric change and sediment budget responses. Difference surfaces derived from a pixel-subtraction algorithm between interval DEMs and the LiDAR baseline DEM were filtered using the local Moran's I_i method and two different spatial weights (1.5 and 5 m) to detect statistically significant change. Moran's I_i results were compared with those derived from a more spatially uniform statistical method that uses a simpler Student's t distribution threshold for change detection. Morphodynamic patterns and volumetric estimates were similar between the uniform geostatistical method and Moran's I_i at a spatial weight of 5 m, while the smaller spatial weight (1.5 m) consistently indicated volumetric changes of lesser magnitude. The larger 5 m spatial weight was most representative of broader site morphodynamics and spatial patterns, while the smaller spatial weight provided volumetric changes consistent with field observations. All methods showed foredune deflation immediately following removal with increased sediment volumes into the spring via deposition at the crest and on lobes in the lee
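The cluster-detection statistic used above, local Moran's I_i = (z_i / m2) Σ_j w_ij z_j with z_i the deviation from the mean and m2 the mean squared deviation, can be sketched in a few lines. The 1-D transect values and binary contiguity weights below are illustrative, not data from the study.

```python
def local_morans_i(values, weights):
    """Local Moran's I_i for each observation.
    values: attribute values (e.g., elevation change per pixel)
    weights: weights[i][j] = spatial weight between i and j (0 outside the band)."""
    n = len(values)
    mean = sum(values) / n
    z = [v - mean for v in values]
    m2 = sum(zi * zi for zi in z) / n
    return [z[i] / m2 * sum(weights[i][j] * z[j] for j in range(n) if j != i)
            for i in range(n)]

# Toy 1-D transect: a cluster of deposition (large positive change) amid noise;
# binary contiguity weights within a 1-cell band stand in for the spatial weight.
vals = [0.1, -0.2, 0.0, 2.1, 2.3, 2.0, -0.1, 0.2]
n = len(vals)
w = [[1 if abs(i - j) == 1 else 0 for j in range(n)] for i in range(n)]
scores = local_morans_i(vals, w)
print([round(s, 2) for s in scores])  # high positive I_i inside the cluster
```

A wider band (more nonzero w_ij) smooths the scores over a larger neighborhood, which mirrors the study's finding that the 5 m weight captured broader patterns than the 1.5 m weight.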
Parametric Level Statistics in Random Matrix Theory: Exact Solution
International Nuclear Information System (INIS)
Kanzieper, E.
1999-01-01
During recent years, the theory of non-Gaussian random matrix ensembles has experienced sound progress motivated by new ideas in quantum chromodynamics (QCD) and mesoscopic physics. Invariant non-Gaussian random matrix models appear to describe universal features of the low-energy part of the spectrum of the Dirac operator in QCD, and electron level statistics in normal conducting-superconducting hybrid structures. They also serve as a basis for constructing toy models of the universal spectral statistics expected at the edge of the metal-insulator transition. While conventional spectral statistics has received a detailed study in the context of RMT, much less is known about parametric level statistics in non-Gaussian random matrix models. In this communication we report an exact solution to the problem of parametric level statistics in unitary invariant, U(N), non-Gaussian ensembles of N x N Hermitian random matrices with either soft or strong level confinement. The solution is formulated within the framework of the orthogonal polynomial technique and is shown to depend on both the unfolded two-point scalar kernel and the level confinement through a double integral transformation which, in turn, provides a constructive tool for the description of parametric level correlations in non-Gaussian RMT. In the case of soft level confinement, the formalism developed is potentially applicable to a study of parametric level statistics in an important class of random matrix models with finite level compressibility, expected to describe a disorder-induced metal-insulator transition. In random matrix ensembles with strong level confinement, the solution presented takes a particularly simple form in the thermodynamic limit: in this case, a new intriguing connection relation between the parametric level statistics and the scalar two-point kernel of an unperturbed ensemble is demonstrated to emerge. Extension of the results obtained to higher-order parametric level statistics is
Energy Technology Data Exchange (ETDEWEB)
Ahlburg, Patrick; Eyring, Andreas; Filimonov, Viacheslav; Krueger, Hans; Mari, Laura; Marinas, Carlos; Pohl, David-Leon; Wermes, Norbert; Dingfelder, Jochen [University of Bonn (Germany)
2016-07-01
Before the upgraded vertex detector for the Belle II experiment at the SuperKEKB collider in Japan is installed, a dedicated detector system for machine commissioning (BEAST II) will be employed. One of its main objectives is to measure and characterize the different background types in order to ensure a safe environment before the installation of the actual silicon detector systems close to the interaction point. FANGS, a detector system at BEAST II based on ATLAS-IBL front-end electronics and planar silicon sensors, is currently being developed for this purpose. The unique feature of this detector system is the high energy resolution achieved by using an external FPGA clock to sample the time-over-threshold signal, while keeping the excellent timing properties. The complete detector system is presented in this talk.
Energy Technology Data Exchange (ETDEWEB)
Ishizuka, Toshiaki, E-mail: tishizu@ndmc.ac.jp [Department of Pharmacology, National Defense Medical College, Tokorozawa, Saitama 359-8513 (Japan); Goshima, Hazuki; Ozawa, Ayako; Watanabe, Yasuhiro [Department of Pharmacology, National Defense Medical College, Tokorozawa, Saitama 359-8513 (Japan)
2012-03-30
Highlights: • Treatment with angiotensin II enhanced LIF-induced DNA synthesis of mouse iPS cells. • Angiotensin II may enhance the DNA synthesis via induction of superoxide. • Treatment with angiotensin II significantly increased JAK/STAT3 phosphorylation. • Angiotensin II enhanced differentiation into mesodermal progenitor cells. • Angiotensin II may enhance the differentiation via activation of p38 MAPK. -- Abstract: Previous studies suggest that angiotensin receptor stimulation may enhance not only proliferation but also differentiation of undifferentiated stem/progenitor cells. Therefore, in the present study, we determined the involvement of the angiotensin receptor in the proliferation and differentiation of mouse induced pluripotent stem (iPS) cells. Stimulation with angiotensin II (Ang II) significantly increased DNA synthesis in mouse iPS cells cultured in a medium with leukemia inhibitory factor (LIF). Pretreatment of the cells with either candesartan (a selective Ang II type 1 receptor [AT1R] antagonist) or Tempol (a cell-permeable superoxide scavenger) significantly inhibited Ang II-induced DNA synthesis. Treatment with Ang II significantly increased JAK/STAT3 phosphorylation. Pretreatment with candesartan significantly inhibited Ang II-induced JAK/STAT3 phosphorylation. In contrast, induction of mouse iPS cell differentiation into Flk-1-positive mesodermal progenitor cells was performed in type IV collagen (Col IV)-coated dishes in a differentiation medium without LIF. When Col IV-exposed iPS cells were treated with Ang II for 5 days, the expression of Flk-1 was significantly increased compared with that in the cells treated with the vehicle alone. Pretreatment of the cells with both candesartan and SB203580 (a p38 MAPK inhibitor) significantly inhibited the Ang II-induced increase in Flk-1 expression
Introduction to modern theoretical physics. Volume II. Quantum theory and statistical physics
International Nuclear Information System (INIS)
Harris, E.G.
1975-01-01
The topics discussed include the history and principles, some solvable problems, and symmetry in quantum mechanics, interference phenomena, approximation methods, some applications of nonrelativistic quantum mechanics, relativistic wave equations, quantum theory of radiation, second quantization, elementary particles and their interactions, thermodynamics, equilibrium statistical mechanics and its applications, the kinetic theory of gases, and collective phenomena
International Nuclear Information System (INIS)
Lauzier, Pascal Thériault; Chen Guanghong
2013-01-01
Purpose: The ionizing radiation imparted to patients during computed tomography exams is raising concerns. This paper studies the performance of a scheme called dose reduction using prior image constrained compressed sensing (DR-PICCS). The purpose of this study is to characterize the effects of a statistical model of x-ray detection in the DR-PICCS framework and its impact on spatial resolution. Methods: Both numerical simulations with known ground truth and an in vivo animal dataset were used in this study. In the numerical simulations, a phantom was simulated with Poisson noise and with varying levels of eccentricity. Both the conventional filtered backprojection (FBP) and the PICCS algorithms were used to reconstruct images. In PICCS reconstructions, the prior image was generated using two different denoising methods: a simple Gaussian blur and a more advanced diffusion filter. Due to the lack of shift-invariance in nonlinear image reconstruction such as the one studied in this paper, the concept of local spatial resolution was used to study the sharpness of a reconstructed image. Specifically, a directional metric of image sharpness, the so-called pseudo-point spread function (pseudo-PSF), was employed to investigate local spatial resolution. Results: In the numerical studies, the pseudo-PSF was reduced from twice the voxel width in the prior image down to less than 1.1 times the voxel width in DR-PICCS reconstructions when the statistical model was not included. At the same noise level, when statistical weighting was used, the pseudo-PSF width in DR-PICCS reconstructed images varied between 1.5 and 0.75 times the voxel width depending on the direction along which it was measured. However, this anisotropy was largely eliminated when the prior image was generated using diffusion filtering; the pseudo-PSF width was reduced to below one voxel width in that case. In the in vivo study, a fourfold improvement in CNR was achieved while qualitatively maintaining sharpness
International Nuclear Information System (INIS)
Mroziewicz, B.
1986-01-01
The most important requirements for the spectral properties of photodetectors are reviewed, with particular attention to fiber optics applications. Data on a number of materials are collected and presented. Pros and cons are pointed out for each type of photodetector: photoconductor, p-i-n photodiode and APD. A review is given of the relevant papers presented in the poster session 'Technology II' of the Symposium
Directory of Open Access Journals (Sweden)
Byron Lapo
2018-03-01
The present work describes the study of mercury Hg(II) and lead Pb(II) removal, in single and binary component systems, with easily prepared chitosan-iron(III) bio-composite beads. Scanning electron microscopy and energy-dispersive X-ray (SEM-EDX) analysis, Fourier transform infrared spectroscopy (FTIR), thermogravimetric analysis (TGA) and point of zero charge (pHpzc) analysis were carried out. The experimental set covered a pH study, single and competitive equilibrium, kinetics, chloride and sulfate effects, as well as sorption–desorption cycles. In single systems, the Langmuir nonlinear model fitted the experimental data better than the Freundlich and Sips equations. The sorbent material has more affinity for Hg(II) than for Pb(II) ions; the maximum sorption capacities were 1.8 mmol·g−1 and 0.56 mmol·g−1 for Hg(II) and Pb(II), respectively. The binary system data were adjusted with the competitive Langmuir isotherm model. The presence of sulfate ions in the multicomponent [Hg(II)-Pb(II)] system had a lesser impact on the sorption efficiency than did chloride ions; however, the presence of chloride ions improves the selectivity towards Hg(II) ions. The bio-based material showed good recovery performance for metal ions over three sorption–desorption cycles.
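The Langmuir fitting step can be illustrated with a hedged sketch. The paper fits the nonlinear form q = qmax·b·C/(1 + b·C); this example uses the common linearized form C/q = 1/(qmax·b) + C/qmax with ordinary least squares, on synthetic noiseless data (the qmax value is chosen to echo the reported Hg(II) capacity; the b value is invented for illustration).

```python
def langmuir(C, qmax, b):
    """Langmuir isotherm: sorbed amount q at equilibrium concentration C."""
    return qmax * b * C / (1 + b * C)

def fit_langmuir_linearized(C, q):
    """Linearized Langmuir fit: C/q = 1/(qmax*b) + C/qmax.
    Ordinary least squares of y = C/q against x = C; returns (qmax, b)."""
    x = list(C)
    y = [c / qi for c, qi in zip(C, q)]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    qmax = 1.0 / slope
    b = slope / intercept
    return qmax, b

# Synthetic equilibrium data from known parameters (qmax = 1.8 mmol/g, b = 4.0 L/mmol assumed)
Ce = [0.05, 0.1, 0.25, 0.5, 1.0, 2.0]   # mmol/L
qe = [langmuir(c, 1.8, 4.0) for c in Ce]  # mmol/g
qmax_hat, b_hat = fit_langmuir_linearized(Ce, qe)
print(round(qmax_hat, 3), round(b_hat, 3))  # recovers 1.8 and 4.0
```

On noiseless data the linearization is exact; with real measurements the paper's nonlinear fit is generally preferred because linearization distorts the error structure.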
Vortex pinning by point defect in superconductors
International Nuclear Information System (INIS)
Liao Hongyin; Zhou Shiping; Du Haochen
2003-01-01
We apply the periodic time-dependent Ginzburg-Landau model to study vortex distribution in type-II superconductors with a point-like defect and a square pinning array. A defect site will pin vortices, and a periodic pinning array with the right geometric parameters, which can take any form designed in advance, shapes the vortex pattern as the external magnetic field varies. The maximum length over which an attractive interaction between a pinning centre and a vortex extends is estimated to be about 6.0ξ. We also derive spatial distribution expressions for the order parameter, vector potential, magnetic field and supercurrent induced by a point defect. Theoretical results and numerical simulations are compared with each other and found to be consistent
A Discussion of the Statistical Investigation Process in the Australian Curriculum
McQuade, Vivienne
2013-01-01
Statistics and statistical literacy can be found in the Learning Areas of Mathematics, Geography, Science, History and the upcoming Business and Economics, as well as in the General Capability of Numeracy and all three Cross-curriculum priorities. The Australian Curriculum affords many exciting and varied entry points for the teaching of…
NSLS-II commissioning and operation
Energy Technology Data Exchange (ETDEWEB)
Wang, G., E-mail: gwang@bnl.gov; Shaftan, T.; Bassi, G.; Bengtsson, J.; Blednykh, A.; Blum, E.; Cheng, W.; Choi, J.; Davidsaver, M.; Doom, L.; Fliller, R.; Ganetis, G.; Guo, W.; Hidaka, Y.; Kramer, S.; Li, Y.; Podobedov, B.; Qian, K.; Rose, J.; Seletskiy, S. [Brookhaven National Laboratory, Upton, NY 11973 (United States); and others
2016-07-27
The National Synchrotron Light Source II at Brookhaven National Lab is a third-generation synchrotron radiation facility that was commissioned in 2014. The facility is based on a 3 GeV electron storage ring, which will circulate 500 mA of beam current at 1 nm rad horizontal emittance. The storage ring is 792 meters in circumference and will accommodate more than 60 beamlines in the final build-out. The beamline sources range from insertion devices located in straight sections to bending magnets and three-pole wigglers configured in multiple branches. The NSLS-II storage ring commissioning was successfully completed in July 2014 and the facility delivered first user light on October 23, 2014. Currently the storage ring has reached 300 mA beam current and achieved 1 nm rad horizontal emittance with 3 sets of damping wigglers. At this point six NSLS-II project beamlines are routinely taking photons with beam current at 150 mA. This paper reviews the NSLS-II accelerator design and commissioning experience.
The effectiveness of the Herbst appliance for patients with Class II malocclusion: a meta-analysis
Yang, Xin; Zhu, Yafen; Long, Hu; Zhou, Yang; Jian, Fan; Ye, Niansong; Gao, Meiya
2016-01-01
Summary Objective: To systematically review the literature on the effects of the Herbst appliance in patients with Class II malocclusion. Method: We performed a comprehensive literature survey of PubMed, Web of Science, Embase, CENTRAL, SIGLE, and ClinicalTrial.gov up to December 2014. The selection criteria were: randomized controlled trials or clinical controlled trials; use of any kind of Herbst appliance to correct Class II division 1 malocclusions; skeletal and/or dental changes evaluated through lateral cephalograms. The exclusion criteria were: syndromic patients; individual case reports and series of cases; surgical interventions. Article screening, data extraction, assessment of risk of bias, and evaluation of evidence quality through GRADE were conducted independently by two well-trained orthodontic doctors. Consensus was reached via group discussion of all authors when the two gave inconsistent information. After that, sensitivity analysis and subgroup analysis were performed to evaluate the robustness of the meta-analysis. Results: Twelve clinical controlled trials met the above-mentioned criteria and were included in this analysis. Eleven measures taken during both the active treatment and long-term effect periods, comprising four angular ones (i.e., SNA, SNB, ANB, mandibular plane angle) and seven linear ones (i.e., Co-Go, Co-Gn, overjet, overbite, molar relationship, A point-OLp, Pg-OLp), were statistically pooled for the active treatment period. Meta-analysis and sensitivity analysis demonstrated that all these measures showed consistent results except for SNA, ANB, and overbite. Subgroup analysis showed significant changes in SNA, overbite, and Pg-OLp. Publication bias was detected in SNB, mandibular plane angle, and A point-OLp. Conclusion: The Herbst appliance is effective for patients with Class II malocclusion in the active treatment period. Especially, there are obvious changes on dental
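The pooling step of such a meta-analysis can be sketched with a standard inverse-variance fixed-effect model. The effect sizes and standard errors below are hypothetical, not the trial data from this review.

```python
import math

def pooled_fixed_effect(effects, ses):
    """Inverse-variance fixed-effect pooling: returns (pooled estimate, pooled SE).
    Each study is weighted by 1/SE^2, so precise studies dominate."""
    w = [1.0 / se ** 2 for se in ses]
    est = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    return est, se

# Hypothetical overjet reductions (mm) from three trials with their standard errors
effects = [-4.2, -3.8, -4.6]
ses = [0.5, 0.4, 0.6]
est, se = pooled_fixed_effect(effects, ses)
ci = (est - 1.96 * se, est + 1.96 * se)
print(round(est, 2), tuple(round(c, 2) for c in ci))
```

A sensitivity analysis, as performed in the review, amounts to repeating this pooling with each study removed in turn and checking whether the pooled estimate stays stable.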
Belle II silicon vertex detector
Energy Technology Data Exchange (ETDEWEB)
Adamczyk, K. [H. Niewodniczanski Institute of Nuclear Physics, Krakow 31-342 (Poland); Aihara, H. [Department of Physics, University of Tokyo, Tokyo 113-0033 (Japan); Angelini, C. [Dipartimento di Fisica, Università di Pisa, I-56127 Pisa (Italy); INFN Sezione di Pisa, I-56127 Pisa (Italy); Aziz, T.; Babu, V. [Tata Institute of Fundamental Research, Mumbai 400005 (India); Bacher, S. [H. Niewodniczanski Institute of Nuclear Physics, Krakow 31-342 (Poland); Bahinipati, S. [Indian Institute of Technology Bhubaneswar, Satya Nagar (India); Barberio, E.; Baroncelli, Ti.; Baroncelli, To. [School of Physics, University of Melbourne, Melbourne, Victoria 3010 (Australia); Basith, A.K. [Indian Institute of Technology Madras, Chennai 600036 (India); Batignani, G. [Dipartimento di Fisica, Università di Pisa, I-56127 Pisa (Italy); INFN Sezione di Pisa, I-56127 Pisa (Italy); Bauer, A. [Institute of High Energy Physics, Austrian Academy of Sciences, 1050 Vienna (Austria); Behera, P.K. [Indian Institute of Technology Madras, Chennai 600036 (India); Bergauer, T. [Institute of High Energy Physics, Austrian Academy of Sciences, 1050 Vienna (Austria); Bettarini, S. [Dipartimento di Fisica, Università di Pisa, I-56127 Pisa (Italy); INFN Sezione di Pisa, I-56127 Pisa (Italy); Bhuyan, B. [Indian Institute of Technology Guwahati, Assam 781039 (India); Bilka, T. [Faculty of Mathematics and Physics, Charles University, 121 16 Prague (Czech Republic); Bosi, F. [INFN Sezione di Pisa, I-56127 Pisa (Italy); Bosisio, L. [Dipartimento di Fisica, Università di Trieste, I-34127 Trieste (Italy); INFN Sezione di Trieste, I-34127 Trieste (Italy); and others
2016-09-21
The Belle II experiment at the SuperKEKB collider in Japan is designed to indirectly probe new physics using approximately 50 times the data recorded by its predecessor. An accurate determination of the decay-point position of subatomic particles such as beauty and charm hadrons as well as a precise measurement of low-momentum charged particles will play a key role in this pursuit. These will be accomplished by an inner tracking device comprising two layers of pixelated silicon detector and four layers of silicon vertex detector based on double-sided microstrip sensors. We describe herein the design, prototyping and construction efforts of the Belle-II silicon vertex detector.
Zero-point energies in the two-center shell model. II
International Nuclear Information System (INIS)
Reinhard, P.-G.
1978-01-01
The zero-point energy (ZPE) contained in the potential-energy surface of a two-center shell model (TCSM) is evaluated. In extension of previous work, the author uses here the full TCSM with l.s force, smoothing and asymmetry. The results show a critical dependence on the height of the potential barrier between the centers. The ZPE turns out to be non-negligible along the fission path for 236 U, and even more so for lighter systems. It is negligible for surface quadrupole motion and it is just on the fringe of being negligible for motion along the asymmetry coordinate. (Auth.)
Functional summary statistics for the Johnson-Mehl model
DEFF Research Database (Denmark)
Møller, Jesper; Ghorbani, Mohammad
The Johnson-Mehl germination-growth model is a spatio-temporal point process model which, among other things, has been used for the description of neurotransmitter datasets. However, for such datasets, parametric Johnson-Mehl models fitted by maximum likelihood have not yet been evaluated by means of functional summary statistics. This paper therefore introduces four functional summary statistics adapted to the Johnson-Mehl model, two of them based on second-order properties and the other two on the nuclei-boundary distances for the associated Johnson-Mehl tessellation. The functional summary statistics' theoretical properties are investigated, non-parametric estimators are suggested, and their usefulness for model checking is examined in a simulation study. The functional summary statistics are also used for checking fitted parametric Johnson-Mehl models for a neurotransmitter dataset.
Equilibrium statistical mechanics of lattice models
Lavis, David A
2015-01-01
Most interesting and difficult problems in equilibrium statistical mechanics concern models which exhibit phase transitions. For graduate students and more experienced researchers this book provides an invaluable reference source of approximate and exact solutions for a comprehensive range of such models. Part I contains background material on classical thermodynamics and statistical mechanics, together with a classification and survey of lattice models. The geometry of phase transitions is described and scaling theory is used to introduce critical exponents and scaling laws. An introduction is given to finite-size scaling, conformal invariance and Schramm-Loewner evolution. Part II contains accounts of classical mean-field methods. The parallels between Landau expansions and catastrophe theory are discussed and Ginzburg-Landau theory is introduced. The extension of mean-field theory to higher orders is explored using the Kikuchi-Hijmans-De Boer hierarchy of approximations. In Part III the use of alge...
Statistical identification of effective input variables
International Nuclear Information System (INIS)
Vaurio, J.K.
1982-09-01
A statistical sensitivity analysis procedure has been developed for ranking the input data of large computer codes in order of sensitivity-importance. The method is economical for large codes with many input variables, since it uses a relatively small number of computer runs. No prior judgemental elimination of input variables is needed. The screening method is based on stagewise correlation and extensive regression analysis of output values calculated with selected input value combinations. The regression process deals with multivariate nonlinear functions, and statistical tests are also available for identifying input variables that contribute to threshold effects, i.e., discontinuities in the output variables. A computer code, SCREEN, has been developed for implementing the screening techniques. The efficiency has been demonstrated by several examples, and the procedure has been applied to a fast reactor safety analysis code (Venus-II). However, the methods and the coding are general and not limited to such applications
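The first, correlation-based stage of such screening can be sketched as ranking inputs by the absolute Pearson correlation of each input with the code output over a modest number of runs. The toy "code" below is a simple linear function with noise, standing in for an expensive simulation; it is not Venus-II or the SCREEN code itself.

```python
import math
import random

def rank_inputs_by_correlation(X, y):
    """Rank input variables by |Pearson correlation| with the output.
    X: list of runs, each a list of input values; y: one output value per run.
    Returns (variable indices sorted most-to-least important, scores)."""
    n_vars = len(X[0])
    def pearson(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
        sa = math.sqrt(sum((ai - ma) ** 2 for ai in a))
        sb = math.sqrt(sum((bi - mb) ** 2 for bi in b))
        return cov / (sa * sb)
    scores = [abs(pearson([row[k] for row in X], y)) for k in range(n_vars)]
    order = sorted(range(n_vars), key=lambda k: -scores[k])
    return order, scores

# Toy "code": output depends strongly on input 0, weakly on input 1, not at all on input 2.
random.seed(1)
X = [[random.uniform(0, 1) for _ in range(3)] for _ in range(50)]
y = [10 * row[0] + 0.5 * row[1] + random.gauss(0, 0.1) for row in X]
order, scores = rank_inputs_by_correlation(X, y)
print(order[0])  # input 0 ranked most influential
```

A regression stage, as in the paper, would then fit the output against the top-ranked inputs and test the residuals for threshold effects.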
Statistical reporting inconsistencies in experimental philosophy.
Colombo, Matteo; Duev, Georgi; Nuijten, Michèle B; Sprenger, Jan
2018-01-01
Experimental philosophy (x-phi) is a young field of research at the intersection of philosophy and psychology. It aims to make progress on philosophical questions by using experimental methods traditionally associated with the psychological and behavioral sciences, such as null hypothesis significance testing (NHST). Motivated by recent discussions about a methodological crisis in the behavioral sciences, questions have been raised about the methodological standards of x-phi. Here, we focus on one aspect of this question, namely the rate of inconsistencies in statistical reporting. Previous research has examined the extent to which published articles in psychology and other behavioral sciences present statistical inconsistencies in reporting the results of NHST. In this study, we used the R package statcheck to detect statistical inconsistencies in x-phi, and compared rates of inconsistencies in psychology and philosophy. We found that rates of inconsistencies in x-phi are lower than in the psychological and behavioral sciences. From the point of view of statistical reporting consistency, x-phi seems to do no worse, and perhaps even better, than psychological science.
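A statcheck-style consistency check recomputes the p-value from a reported test statistic and compares it with the reported p after rounding. statcheck itself parses APA-style t, F, χ², r and z results from article text; this stdlib-only sketch handles just the two-sided z case as an illustration.

```python
import math

def p_from_z(z):
    """Two-sided p-value for a reported z statistic (standard normal tail via erfc)."""
    return math.erfc(abs(z) / math.sqrt(2))

def check_reported(z, reported_p, decimals=3):
    """statcheck-style check: does the reported (rounded) p match the p
    recomputed from the test statistic, at the same rounding precision?"""
    return round(p_from_z(z), decimals) == round(reported_p, decimals)

print(check_reported(1.96, 0.050))  # consistent report
print(check_reported(1.96, 0.030))  # inconsistent report
```

In statcheck's terminology, an inconsistency that flips the result across the significance threshold (e.g. reporting p = .03 for z = 1.96) would count as a gross error rather than a mere rounding discrepancy.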
The Development of On-Line Statistics Program for Radiation Oncology
International Nuclear Information System (INIS)
Kim, Yoon Jong; Lee, Dong Hoon; Ji, Young Hoon; Lee, Dong Han; Jo, Chul Ku; Kim, Mi Sook; Ru, Sung Rul; Hong, Seung Hong
2001-01-01
Purpose: To develop an on-line statistics program that records radiation oncology information and shares it over the internet, so as to supply basic reference data for administrative plans to improve radiation oncology. Materials and methods: Radiation oncology statistics had previously been collected on paper forms from about 52 hospitals. Now, the data can be entered through internet web browsers. The statistics program used the Windows NT 4.0 operating system, Internet Information Server 4.0 (IIS 4.0) as the web server, and a Microsoft Access database (MDB). We used Structured Query Language (SQL), Visual Basic, VBScript and JavaScript to display the statistics by year and by hospital. Results: This program shows the present status of manpower, research, therapy machines, techniques, brachytherapy, clinical statistics, radiation safety management, institutions, quality assurance and radioisotopes in radiation oncology departments. The database consists of 38 input and 6 output windows. Statistical output windows can be added continuously according to user needs. Conclusion: We have developed a statistics program that processes all of the data in departments of radiation oncology for reference purposes. Users can easily input the data through internet web browsers and share the information.
Synthesis and characterisation of Cu(II), Ni(II), Mn(II), Zn(II) and VO(II ...
Indian Academy of Sciences (India)
Unknown
Synthesis and characterisation of Cu(II), Ni(II), Mn(II), Zn(II) and VO(II) Schiff base complexes derived from o-phenylenediamine and acetoacetanilide. N RAMAN*, Y PITCHAIKANI RAJA and A KULANDAISAMY. Department of Chemistry, VHNSN College, Virudhunagar 626 001, India e-mail: ra_man@123india.com.
SHAPE FROM TEXTURE USING LOCALLY SCALED POINT PROCESSES
Directory of Open Access Journals (Sweden)
Eva-Maria Didden
2015-09-01
Shape from texture refers to the extraction of 3D information from 2D images with irregular texture. This paper introduces a statistical framework to learn shape from texture where convex texture elements in a 2D image are represented through a point process. In a first step, the 2D image is preprocessed to generate a probability map corresponding to an estimate of the unnormalized intensity of the latent point process underlying the texture elements. The latent point process is subsequently inferred from the probability map in a non-parametric, model free manner. Finally, the 3D information is extracted from the point pattern by applying a locally scaled point process model where the local scaling function represents the deformation caused by the projection of a 3D surface onto a 2D image.
Ewald Electrostatics for Mixtures of Point and Continuous Line Charges.
Antila, Hanne S; Tassel, Paul R Van; Sammalkorpi, Maria
2015-10-15
Many charged macro- or supramolecular systems, such as DNA, are approximately rod-shaped and, to the lowest order, may be treated as continuous line charges. However, the standard method used to calculate electrostatics in molecular simulation, the Ewald summation, is designed to treat systems of point charges. We extend the Ewald concept to a hybrid system containing both point charges and continuous line charges. We find the calculated force between a point charge and (i) a continuous line charge and (ii) a discrete line charge consisting of uniformly spaced point charges to be numerically equivalent when the separation greatly exceeds the discretization length. At shorter separations, discretization induces deviations in the force and energy, and point charge-point charge correlation effects. Because significant computational savings are also possible, the continuous line charge Ewald method presented here offers the possibility of accurate and efficient electrostatic calculations.
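The large-separation equivalence reported above can be checked without any Ewald machinery by a direct, non-periodic Coulomb sum; the sketch below compares the force from a long, finely discretized line of point charges with the closed-form infinite-line result F = 2λ/d (in k = 1 units). The charge density, spacing and distance are illustrative values, not the paper's.

```python
import numpy as np

lam = 1.0           # line charge density (charge per unit length), illustrative
d = 5.0             # perpendicular distance of the unit test charge from the line
a = 0.1             # discretization spacing, with a << d
half_len = 2000.0   # half-length of the "infinite" line

# Discrete line: point charges q = lam * a at positions z = n * a.
z = np.arange(-half_len, half_len + a, a)
q = lam * a
# Perpendicular component of the Coulomb force (k = 1 units) on a unit charge.
F_discrete = np.sum(q * d / (d**2 + z**2) ** 1.5)

# Continuous infinite-line result.
F_continuous = 2.0 * lam / d

print(F_discrete, F_continuous)
```

With the separation d far exceeding the spacing a, the two forces agree to well within 0.1%, mirroring the numerical equivalence the abstract reports; shrinking d toward a reintroduces discretization deviations.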
Analysis of tree stand horizontal structure using random point field methods
Directory of Open Access Journals (Sweden)
O. P. Sekretenko
2015-06-01
This paper uses the model approach to analyze the horizontal structure of forest stands. The main types of models of random point fields and statistical procedures that can be used to analyze spatial patterns of trees of uneven and even-aged stands are described. We show how modern methods of spatial statistics can be used to address one of the objectives of forestry - to clarify the laws of natural thinning of a forest stand and the corresponding changes in its spatial structure over time. Studying natural forest thinning, we describe the consecutive stages of modeling: selection of the appropriate parametric model, parameter estimation and generation of point patterns in accordance with the selected model, the selection of statistical functions to describe the horizontal structure of forest stands and testing of statistical hypotheses. We show the possibilities of a specialized software package, spatstat, which is designed to meet the challenges of spatial statistics and provides software support for modern methods of analysis of spatial data. We show that a model of stand thinning that does not consider inter-tree interaction can project the size distribution of the trees properly, but the spatial pattern of the modeled stand is not quite consistent with observed data. Using data from three even-aged pine forest stands of 25, 55, and 90 years old, we demonstrate that spatial point process models are useful for combining measurements in forest stands of different ages to study natural forest stand thinning.
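spatstat is an R package; as a language-neutral illustration of the modeling idea, the following numpy sketch simulates a Poisson point pattern and applies a toy hard-core "natural thinning" rule (one of each pair of too-close trees dies), after which nearest-neighbour distances shift upward. This is a minimal stand-in for the analyses described, not the authors' fitted models; the intensity and hard-core radius are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Homogeneous Poisson point pattern ("young stand") on a unit square.
n = 300
pts = rng.uniform(0.0, 1.0, size=(n, 2))

def nn_dist(p):
    """Nearest-neighbour distance for each point."""
    diff = p[:, None, :] - p[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(dist, np.inf)
    return dist.min(axis=1)

# Toy hard-core thinning: repeatedly remove one point of every pair closer
# than r_min, mimicking competition-driven mortality between close trees.
r_min = 0.04
alive = pts.copy()
while True:
    d = nn_dist(alive)
    i = int(np.argmin(d))
    if d[i] >= r_min:
        break
    alive = np.delete(alive, i, axis=0)

print(len(alive), nn_dist(pts).mean(), nn_dist(alive).mean())
```

The surviving pattern is more regular than the Poisson original, which is exactly the kind of structural change over time that summary functions in spatstat are designed to detect.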
PowerCube: Integrated Power, Propulsion, and Pointing for CubeSats, Phase II
National Aeronautics and Space Administration — The PowerCube is a 1U CubeSat module that provides integrated propulsion, power, and precision pointing to enable the low-cost CubeSat platform to be used to conduct...
Applied statistics a handbook of BMDP analyses
Snell, E J
1987-01-01
This handbook is a realization of a long term goal of BMDP Statistical Software. As the software supporting statistical analysis has grown in breadth and depth to the point where it can serve many of the needs of accomplished statisticians it can also serve as an essential support to those needing to expand their knowledge of statistical applications. Statisticians should not be handicapped by heavy computation or by the lack of needed options. When Applied Statistics: Principles and Examples by Cox and Snell appeared we at BMDP were impressed with the scope of the applications discussed and felt that many statisticians eager to expand their capabilities in handling such problems could profit from having the solutions carried further, to get them started and guided to a more advanced level in problem solving. Who would be better to undertake that task than the authors of Applied Statistics? A year or two later discussions with David Cox and Joyce Snell at Imperial College indicated that a wedding of the proble...
Directory of Open Access Journals (Sweden)
C. SPÎNU
2008-04-01
Iron(II), cobalt(II), nickel(II), copper(II), zinc(II) and cadmium(II) complexes of the type ML2Cl2, where M is a metal and L is the Schiff base N-(2-thienylmethylene)methanamine (TNAM) formed by the condensation of 2-thiophenecarboxaldehyde and methylamine, were prepared and characterized by elemental analysis as well as magnetic and spectroscopic measurements. The elemental analyses suggest the stoichiometry to be 1:2 (metal:ligand). Magnetic susceptibility data coupled with electronic, ESR and Mössbauer spectra suggest a distorted octahedral structure for the Fe(II), Co(II) and Ni(II) complexes, a square-planar geometry for the Cu(II) compound and a tetrahedral geometry for the Zn(II) and Cd(II) complexes. The infrared and NMR spectra of the complexes agree with co-ordination to the central metal atom through nitrogen and sulphur atoms. Conductance measurements suggest the non-electrolytic nature of the complexes, except for the Cu(II), Zn(II) and Cd(II) complexes, which are 1:2 electrolytes. The Schiff base and its metal chelates were screened for their biological activity against Escherichia coli, Staphylococcus aureus and Pseudomonas aeruginosa and the metal chelates were found to possess better antibacterial activity than that of the uncomplexed Schiff base.
Generalized fixed point theorems for compatible mappings with some types in fuzzy metric spaces
International Nuclear Information System (INIS)
Cho, Yeol Je; Sedghi, Shaban; Shobe, Nabi
2009-01-01
In this paper, we give some new definitions of compatible mappings of types (I) and (II) in fuzzy metric spaces and prove some common fixed point theorems for four mappings under the condition of compatible mappings of types (I) and (II) in complete fuzzy metric spaces. Our results extend, generalize and improve the corresponding results given by many authors.
Reducing bias in the analysis of counting statistics data
International Nuclear Information System (INIS)
Hammersley, A.P.; Antoniadis, A.
1997-01-01
In the analysis of counting statistics data it is common practice to estimate the variance of the measured data points as the data points themselves. This practice introduces a bias into the results of further analysis which may be significant, and under certain circumstances lead to false conclusions. In the case of normal weighted least squares fitting this bias is quantified and methods to avoid it are proposed. (orig.)
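The bias described can be reproduced in a few lines: fitting a constant to Poisson counts with weights 1/y (i.e., variance estimated as the measured data) yields the harmonic mean, which sits systematically below the true rate, while the ordinary mean does not. The rate and sample sizes below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

mu = 100.0                      # true count rate
n_repeat, n_points = 2000, 50   # repeated experiments, points per experiment

biased, unbiased = [], []
for _ in range(n_repeat):
    y = rng.poisson(mu, n_points).astype(float)
    y = np.maximum(y, 1.0)      # guard against zero counts (negligible at this rate)
    # Common practice: weight each point by 1/variance with the variance
    # estimated as the measured count itself (w = 1/y). Fitting a constant
    # then gives the harmonic mean, which underestimates mu by roughly 1 count.
    w = 1.0 / y
    biased.append(np.sum(w * y) / np.sum(w))
    # Ordinary (unweighted) mean of the counts is unbiased here.
    unbiased.append(y.mean())

print(np.mean(biased), np.mean(unbiased))
```

The ~1% shift looks small, but as the abstract notes it can be significant in further analysis, e.g. when many such fits are combined.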
Statistical aspects of tumor registries, Hiroshima and Nagasaki
Energy Technology Data Exchange (ETDEWEB)
Ishida, M
1961-02-24
Statistical considerations are presented on the tumor registries established for the purpose of studying radiation induced carcinoma in Hiroshima and Nagasaki by observing tumors developing in the survivors of these cities. In addition to describing the background and purpose of the tumor registries, the report consists of two parts: (1) accuracy of reported tumor cases and (2) statistical aspects of the incidence of tumors based both on a current population and on a fixed sample. Under the heading background, discussion includes the difficulties in attaining complete registration; the various problems associated with the tumor registries; and the special characteristics of tumor registries in Hiroshima and Nagasaki. Bayes' a posteriori probability formula was applied to the Type I and Type II errors in the autopsy data of Hiroshima ABCC. (Type I, diagnosis of what is not cancer as cancer; Type II, diagnosis of what is cancer as noncancer.) Finally, the report discusses the difficulties in estimating a current population of survivors; the advantages and disadvantages of analyses based on a fixed sample and on an estimated current population; the comparison of incidence rates based on these populations using the 20 months' data of the tumor registry in Hiroshima; and the sample size required for studying radiation induced carcinoma. 10 references, 1 figure, 8 tables.
Self-Similar Spin Images for Point Cloud Matching
Pulido, Daniel
based on the concept of self-similarity to aid in the scale and feature matching steps. An open problem in fusion is how best to extract features from two point clouds and then perform feature-based matching. The proposed approach for this matching step is the use of local self-similarity as an invariant measure to match features. In particular, the proposed approach is to combine the concept of local self-similarity with a well-known feature descriptor, Spin Images, and thereby define "Self-Similar Spin Images". This approach is then extended to the case of matching two point clouds in very different coordinate systems (e.g., a geo-referenced Lidar point cloud and stereo-image derived point cloud without geo-referencing). The use of Self-Similar Spin Images is again applied to address this problem by introducing a "Self-Similar Keyscale" that matches the spatial scales of two point clouds. Another open problem is how best to detect changes in content between two point clouds. A method is proposed to find changes between two point clouds by analyzing the order statistics of the nearest neighbors between the two clouds, and thereby define the "Nearest Neighbor Order Statistic" method. Note that the well-known Hausdorff distance is a special case as being just the maximum order statistic. Therefore, by studying the entire histogram of these nearest neighbors it is expected to yield a more robust method to detect points that are present in one cloud but not the other. This approach is applied at multiple resolutions. Therefore, changes detected at the coarsest level will yield large missing targets and at finer levels will yield smaller targets.
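A minimal sketch of the nearest-neighbor order-statistic idea on synthetic clouds (the geometry and the detection threshold are invented for the example): sorting each point's nearest-neighbour distance into the other cloud gives the full set of order statistics, whose maximum is the directed Hausdorff distance, and whose tail values flag content present in one cloud but not the other.

```python
import numpy as np

rng = np.random.default_rng(3)

# Cloud A, and cloud B = A plus a small distant cluster of "change" points.
A = rng.uniform(0.0, 10.0, size=(400, 3))
change = rng.normal([20.0, 20.0, 20.0], 0.1, size=(20, 3))
B = np.vstack([A, change])

# For every point of B, distance to its nearest neighbour in A.
d = np.sqrt(((B[:, None, :] - A[None, :, :]) ** 2).sum(-1)).min(axis=1)

nn = np.sort(d)         # the full set of nearest-neighbour order statistics
hausdorff = nn[-1]      # directed Hausdorff distance = maximum order statistic
# Points far in the tail of the histogram flag content in B missing from A.
changed = int(np.sum(d > 5.0))   # threshold chosen for this toy example

print(hausdorff, changed)
```

Using the whole histogram rather than only the maximum makes the detector robust: a single outlier inflates the Hausdorff distance, but a genuine missing target populates an entire tail.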
Spectroscopy of gluonic states at LAMPF II
International Nuclear Information System (INIS)
Chanowitz, M.S.
1983-08-01
The properties of QCD which imply the existence of gluonic states are reviewed. The problem of discovering the spectrum of gluonic states is discussed in general and illustrated with examples from current data. Higher statistics fixed target experiments, such as could be performed at LAMPF II, are essential for further progress
Reproducing a Prospective Clinical Study as a Computational Retrospective Study in MIMIC-II.
Kury, Fabrício S P; Huser, Vojtech; Cimino, James J
2015-01-01
In this paper we sought to reproduce, as a computational retrospective study in an EHR database (MIMIC-II), a recent large prospective clinical study: the 2013 publication, by the Japanese Association for Acute Medicine (JAAM), about disseminated intravascular coagulation, in the journal Critical Care (PMID: 23787004). We designed in SQL and Java a set of electronic phenotypes that reproduced the study's data sampling, and used R to perform the same statistical inference procedures. All produced source code is available online at https://github.com/fabkury/paamia2015. Our program identified 2,257 eligible patients in MIMIC-II, and the results remarkably agreed with the prospective study. A minority of the needed data elements was not found in MIMIC-II, and statistically significant inferences were possible in the majority of the cases.
How Do Users Map Points Between Dissimilar Shapes?
Hecher, Michael
2017-07-25
Finding similar points in globally or locally similar shapes has been studied extensively through the use of various point descriptors or shape-matching methods. However, little work exists on finding similar points in dissimilar shapes. In this paper, we present the results of a study where users were given two dissimilar two-dimensional shapes and asked to map a given point in the first shape to the point in the second shape they consider most similar. We find that user mappings in this study correlate strongly with simple geometric relationships between points and shapes. To predict the probability distribution of user mappings between any pair of simple two-dimensional shapes, two distinct statistical models are defined using these relationships. We perform a thorough validation of the accuracy of these predictions and compare our models qualitatively and quantitatively to well-known shape-matching methods. Using our predictive models, we propose an approach to map objects or procedural content between different shapes in different design scenarios.
Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations
Energy Technology Data Exchange (ETDEWEB)
Kleijnen, J.P.C.; Helton, J.C.
1999-04-01
The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (1) linear relationships with correlation coefficients, (2) monotonic relationships with rank correlation coefficients, (3) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (4) trends in variability as defined by variances and interquartile ranges, and (5) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from analysis include: (1) Type I errors are unavoidable, (2) Type II errors can occur when inappropriate analysis procedures are used, (3) physical explanations should always be sought for why statistical procedures identify variables as being important, and (4) the identification of important variables tends to be stable for independent Latin hypercube samples.
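The first three detection stages listed above can be sketched with plain numpy on a synthetic monotonic-but-nonlinear scatterplot (the data-generating function is invented for the example): the rank correlation exceeds the linear correlation, as expected for a monotonic pattern, and bin means expose the trend in central tendency.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic Monte Carlo "scatterplot": monotonic but strongly nonlinear.
x = rng.uniform(0.0, 1.0, 1000)
y = np.exp(3.0 * x) + rng.normal(0.0, 0.1, 1000)

def pearson(a, b):
    return np.corrcoef(a, b)[0, 1]

def rank(a):
    return np.argsort(np.argsort(a)).astype(float)

# (1) linear relationship: correlation coefficient
r = pearson(x, y)
# (2) monotonic relationship: rank correlation coefficient (Spearman)
rho = pearson(rank(x), rank(y))
# (3) trend in central tendency: means of y over five bins of x
order = np.argsort(x)
bin_means = [y[idx].mean() for idx in np.array_split(order, 5)]
monotone_means = all(m1 < m2 for m1, m2 in zip(bin_means, bin_means[1:]))

print(round(r, 3), round(rho, 3), monotone_means)
```

Stages (4) and (5) would proceed analogously with bin variances and a chi-square statistic on bin occupancy; the point, as in the paper, is that each stage catches patterns the previous one misses.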
Photogrammetric computer vision statistics, geometry, orientation and reconstruction
Förstner, Wolfgang
2016-01-01
This textbook offers a statistical view on the geometry of multiple view analysis, required for camera calibration and orientation and for geometric scene reconstruction based on geometric image features. The authors have backgrounds in geodesy and also long experience with development and research in computer vision, and this is the first book to present a joint approach from the converging fields of photogrammetry and computer vision. Part I of the book provides an introduction to estimation theory, covering aspects such as Bayesian estimation, variance components, and sequential estimation, with a focus on the statistically sound diagnostics of estimation results essential in vision metrology. Part II provides tools for 2D and 3D geometric reasoning using projective geometry. This includes oriented projective geometry and tools for statistically optimal estimation and test of geometric entities and transformations and their relations, tools that are useful also in the context of uncertain reasoning in po...
Combined mode I-mode II fracture of 12-mol%-ceria-doped tetragonal zirconia polycrystalline ceramic
International Nuclear Information System (INIS)
Tikare, V.; Choi, S.R.
1997-01-01
The mode I, mode II, and combined mode I-mode II fracture behavior of ceria-doped tetragonal zirconia polycrystalline (Ce-TZP) ceramic was studied. The single-edge-precracked-beam (SEPB) samples were fractured using the asymmetric four-point-bend geometry. The ratio of mode I to mode II loading was varied by varying the degree of asymmetry in the four-point-bend geometry. The minimum strain energy density theory best described the mixed-mode fracture behavior of Ce-TZP, with the mode I fracture toughness K_IC = 8.2 ± 0.6 MPa·m^(1/2) and the mode II fracture toughness K_IIC = 8.6 ± 1.3 MPa·m^(1/2).
Ibáñez, Sergio J.; García, Javier; Feu, Sebastian; Lorenzo, Alberto; Sampaio, Jaime
2009-01-01
The aim of the present study was to identify the game-related statistics that discriminated basketball winning and losing teams in each of the three consecutive games played in a condensed tournament format. The data were obtained from the Spanish Basketball Federation and included game-related statistics from the Under-20 league (2005-2006 and 2006-2007 seasons). A total of 223 games were analyzed with the following game-related statistics: two and three-point field goal (made and missed), free-throws (made and missed), offensive and defensive rebounds, assists, steals, turnovers, blocks (made and received), fouls committed, ball possessions and offensive rating. Results showed that winning teams in this competition had better values in all game-related statistics, with the exception of three point field goals made, free-throws missed and turnovers (p ≥ 0.05). The main effect of game number was only identified in turnovers, with a statistically significant decrease between the second and third game. No interaction was found in the analysed variables. A discriminant analysis identified the two-point field goals made, the defensive rebounds and the assists as discriminators between winning and losing teams in all three games. In addition to these, only the three-point field goals made contributed to discriminating teams in game three, suggesting a moderate effect of fatigue. Coaches may benefit from being aware of this variation in game determinant related statistics and, also, from using offensive and defensive strategies in the third game, allowing them to exploit or conceal three-point field-goal performance. Key points: Overall team performances along the three consecutive games were very similar, not confirming an accumulated fatigue effect. The results from the three-point field goals in the third game suggested that winning teams were able to shoot better from longer distances and this could be the result of exhibiting higher conditioning status and
Cosmology constraints from shear peak statistics in Dark Energy Survey Science Verification data
International Nuclear Information System (INIS)
Kacprzak, T.; Kirk, D.; Friedrich, O.; Amara, A.; Refregier, A.
2016-01-01
Shear peak statistics has gained a lot of attention recently as a practical alternative to the two-point statistics for constraining cosmological parameters. We perform a shear peak statistics analysis of the Dark Energy Survey (DES) Science Verification (SV) data, using weak gravitational lensing measurements from a 139 deg² field. We measure the abundance of peaks identified in aperture mass maps, as a function of their signal-to-noise ratio, in the signal-to-noise range 0 < S/N < 4. Peaks with S/N > 4 would require significant corrections, which is why we do not include them in our analysis. We compare our results to the cosmological constraints from the two-point analysis on the SV field and find them to be in good agreement in both the central value and its uncertainty. Lastly, we discuss prospects for future peak statistics analysis with upcoming DES data.
Healing and relaxation in flows of helium II. Part II. First, second, and fourth sound
International Nuclear Information System (INIS)
Hills, R.N.; Roberts, P.H.
1978-01-01
In Part I of this series, a theory of helium II incorporating the effects of quantum healing and relaxation was developed. In this paper, the propagation of first, second, and fourth sound is discussed. Particular attention is paid to sound propagation in the vicinity of the lambda point where the effects of relaxation and quantum healing become important
Statistical ensembles and molecular dynamics studies of anisotropic solids. II
International Nuclear Information System (INIS)
Ray, J.R.; Rahman, A.
1985-01-01
We have recently discussed how the Parrinello--Rahman theory can be brought into accord with the theory of the elastic and thermodynamic behavior of anisotropic media. This involves the isoenthalpic--isotension ensemble of statistical mechanics. Nosé has developed a canonical ensemble form of molecular dynamics. We combine Nosé's ideas with the Parrinello--Rahman theory to obtain a canonical form of molecular dynamics appropriate to the study of anisotropic media subjected to arbitrary external stress. We employ this isothermal--isotension ensemble in a study of an fcc → close-packed structural phase transformation in a Lennard-Jones solid subjected to uniaxial compression. Our interpretation of the Nosé theory does not involve a scaling of the time variable. This latter fact leads to simplifications when studying the time dependence of quantities.
Kreditní rizika z pohledu Basel II
Čabrada, Jiří
2007-01-01
The thesis "Credit risk from Basel II point of view" deals with the new capital concept, with its main focus on credit risk. Particular emphasis is laid on the chief issue of the Basel II concept, i.e. internal models. The thesis describes in some detail the usage of Basel parameters - LGD particularly - in various day-to-day business processes of credit institutions. An individual part of the thesis is devoted to credit risk mitigants and their impacts on the amount of capital requirements. The a...
He, Ping
2012-01-01
The long-standing puzzle surrounding the statistical mechanics of self-gravitating systems has not yet been solved successfully. We formulate a systematic theoretical framework of entropy-based statistical mechanics for spherically symmetric collisionless self-gravitating systems. We use an approach that is very different from that of the conventional statistical mechanics of short-range interaction systems. We demonstrate that the equilibrium states of self-gravitating systems consist of both mechanical and statistical equilibria, with the former characterized by a series of velocity-moment equations and the latter by statistical equilibrium equations, which should be derived from the entropy principle. The velocity-moment equations of all orders are derived from the steady-state collisionless Boltzmann equation. We point out that the ergodicity is invalid for the whole self-gravitating system, but it can be re-established locally. Based on the local ergodicity, using Fermi-Dirac-like statistics, with the non-degenerate condition and the spatial independence of the local microstates, we rederive the Boltzmann-Gibbs entropy. This is consistent with the validity of the collisionless Boltzmann equation, and should be the correct entropy form for collisionless self-gravitating systems. Apart from the usual constraints of mass and energy conservation, we demonstrate that the series of moment or virialization equations must be included as additional constraints on the entropy functional when performing the variational calculus; this is an extension to the original prescription by White & Narayan. Any possible velocity distribution can be produced by the statistical-mechanical approach that we have developed with the extended Boltzmann-Gibbs/White-Narayan statistics. Finally, we discuss the questions of negative specific heat and ensemble inequivalence for self-gravitating systems.
Universal Postquench Prethermalization at a Quantum Critical Point
Gagel, Pia; Orth, Peter P.; Schmalian, Jörg
2014-11-01
We consider an open system near a quantum critical point that is suddenly moved towards the critical point. The bath-dominated diffusive nonequilibrium dynamics after the quench is shown to follow scaling behavior, governed by a critical exponent that emerges in addition to the known equilibrium critical exponents. We determine this exponent and show that it describes universal prethermalized coarsening dynamics of the order parameter in an intermediate time regime. Implications of this quantum critical prethermalization are: (i) a power law rise of order and correlations after an initial collapse of the equilibrium state and (ii) a crossover to thermalization that occurs arbitrarily late for sufficiently shallow quenches.
Common pitfalls in statistical analysis: The perils of multiple testing
Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc
2016-01-01
Multiple testing refers to situations where a dataset is subjected to statistical testing multiple times - either at multiple time-points or through multiple subgroups or for multiple end-points. This amplifies the probability of a false-positive finding. In this article, we look at the consequences of multiple testing and explore various methods to deal with this issue. PMID:27141478
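The amplification mentioned above is simple arithmetic: with m independent tests at level alpha, the chance of at least one false positive is 1 - (1 - alpha)^m, and the Bonferroni correction (testing each hypothesis at alpha/m) restores it to at most alpha. A minimal worked example:

```python
# Family-wise error rate (FWER) under m independent tests at level alpha.
alpha, m = 0.05, 20

# Probability of at least one false positive grows quickly with m ...
fwer_uncorrected = 1.0 - (1.0 - alpha) ** m
# ... but testing each hypothesis at alpha/m (Bonferroni) caps it near alpha.
fwer_bonferroni = 1.0 - (1.0 - alpha / m) ** m

print(round(fwer_uncorrected, 3), round(fwer_bonferroni, 3))
```

With 20 tests, the uncorrected chance of a spurious "significant" finding is already about 64%, which is why subgroup and endpoint multiplicity need explicit correction.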
EMPIRE-II statistical model code for nuclear reaction calculations
Energy Technology Data Exchange (ETDEWEB)
Herman, M [International Atomic Energy Agency, Vienna (Austria)
2001-12-15
EMPIRE II is a nuclear reaction code, comprising various nuclear models, and designed for calculations in a broad range of energies and incident particles. A projectile can be any nucleon or Heavy Ion. The energy range starts just above the resonance region, in the case of a neutron projectile, and extends up to a few hundred MeV for Heavy Ion induced reactions. The code accounts for the major nuclear reaction mechanisms, such as optical model (SCATB), Multistep Direct (ORION + TRISTAN), NVWY Multistep Compound, and the full featured Hauser-Feshbach model. Heavy Ion fusion cross section can be calculated within the simplified coupled channels approach (CCFUS). A comprehensive library of input parameters covers nuclear masses, optical model parameters, ground state deformations, discrete levels and decay schemes, level densities, fission barriers (BARFIT), moments of inertia (MOMFIT), and γ-ray strength functions. Effects of the dynamic deformation of a fast rotating nucleus can be taken into account in the calculations. The results can be converted into the ENDF-VI format using the accompanying code EMPEND. The package contains the full EXFOR library of experimental data. Relevant EXFOR entries are automatically retrieved during the calculations. Plots comparing experimental results with the calculated ones can be produced using X4TOC4 and PLOTC4 codes linked to the rest of the system through bash-shell (UNIX) scripts. The graphic user interface written in Tcl/Tk is provided. (author)
Shell model in large spaces and statistical spectroscopy
International Nuclear Information System (INIS)
Kota, V.K.B.
1996-01-01
For many nuclear structure problems of current interest it is essential to deal with shell model in large spaces. For this, three different approaches are now in use and two of them are: (i) the conventional shell model diagonalization approach but taking into account new advances in computer technology; (ii) the shell model Monte Carlo method. A brief overview of these two methods is given. Large space shell model studies raise fundamental questions regarding the information content of the shell model spectrum of complex nuclei. This led to the third approach- the statistical spectroscopy methods. The principles of statistical spectroscopy have their basis in nuclear quantum chaos and they are described (which are substantiated by large scale shell model calculations) in some detail. (author)
[Cephalometric analysis in individuals with Class II/2 malocclusions].
Rak, D
1990-06-01
Various class II/2 orthodontic anomalies, classified into several experimental groups, and eugnathic occlusion serving as controls, were studied by roentgencephalometry. The objective of the study was to detect possible distinctions in the quantitative values of the chosen variables and to select those which discriminate the group of class II/2 orthodontic anomalies most significantly. Attempts were made to ascertain whether or not there were sex-related differences. The teleroentgenograms of 241 examinees, aged 10 to 18 years, of both sexes, were analyzed. The experimental group consisted of 61 examinees with class II/2 orthodontic anomalies. The control group consisted of 180 examinees with eugnathic occlusion. Latero-lateral skull roentgenograms were taken according to the rules of roentgencephalometry. Using acetate paper, the drawings of profile teleroentgenograms were elaborated and the reference points and lines were entered. A total of 38 variables were analyzed, of which 10 were linear, 19 angular, 8 were obtained by mathematical calculations, and the age variable was also analyzed. For statistical analyses, an electronic computer was used. The results are presented in tables and graphs. The results obtained have shown: that, when compared to the findings in the control group, the subjects in the experimental groups manifested significant changes in the following craniofacial characteristics: retroposition and retroinclination of the upper incisors; increased difference of the position of the apical bases of the jaws; marked convexity of the osseous profile; mandibular retrognathism and an increased proportion of the maxillary compared to the mandibular base; that, with regard to the sex of the examinees, only linear variables of significantly discriminating character were selected. Thus it could be concluded that there were no significant sex differences among the morphological characteristics of the viscerocranium.
International Nuclear Information System (INIS)
Singh, Balwan; Misra, Harihar
1986-01-01
Metal complexes of thiosemicarbazides have been known for their pharmacological applications. Significant antitubercular, fungicidal and antiviral activities have been reported for thiosemicarbazides and their derivatives. The present study describes the synthesis and characterisation of complexes of Co(II), Cu(II), Zn(II), Cd(II) and UO(II) with the thiosemicarbazone obtained by condensing thiophene-2-aldehyde with thiosemicarbazide. 17 refs., 2 tables. (author)
International Nuclear Information System (INIS)
Kole, J S; Beekman, F J
2006-01-01
Statistical reconstruction methods offer possibilities to improve image quality as compared with analytical methods, but current reconstruction times prohibit routine application in clinical and micro-CT. In particular, for cone-beam x-ray CT, the use of graphics hardware has been proposed to accelerate the forward- and back-projection operations, in order to reduce reconstruction times. In the past, wide application of this texture-mapping hardware approach was hampered by limited intrinsic accuracy. Recently, however, floating-point precision has become available in the latest generation of commodity graphics cards. In this paper, we utilize this feature to construct a graphics-hardware-accelerated version of the ordered subset convex reconstruction algorithm. The aims of this paper are (i) to study the impact of graphics hardware acceleration on the accuracy of statistically reconstructed images and (ii) to measure the speed increase one can obtain by using graphics hardware acceleration. We compare the unaccelerated algorithm with the graphics-hardware-accelerated version, and for the latter we consider two different interpolation techniques. A simulation study of a micro-CT scanner with a mathematical phantom shows that, at almost preserved reconstructed image accuracy, speed-ups of a factor of 40 to 222 can be achieved compared with the unaccelerated algorithm, depending on the phantom and detector sizes. Reconstruction from physical phantom data reconfirms the usability of the accelerated algorithm for practical cases.
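The ordered-subsets idea behind such algorithms — updating the image with one block of projections at a time, so that each pass over the data applies several updates — can be sketched on a toy linear system. This is an illustrative SART-style sketch, not the paper's ordered subset convex algorithm or its GPU implementation; the system size, subset count and relaxation factor are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy consistent system standing in for the projection model: 60
# "measurements" of a 16-pixel image (illustrative only -- not a
# cone-beam geometry).
x_true = rng.random(16)
A = rng.random((60, 16))
b = A @ x_true

def os_sart(A, b, n_subsets=6, n_iters=50, relax=0.9):
    """Ordered-subsets iteration: one image update per block of rows,
    so each full pass over the data applies n_subsets updates."""
    x = np.zeros(A.shape[1])
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iters):
        for idx in subsets:
            As, bs = A[idx], b[idx]
            # SART-style normalized back-projection of the block residual
            x += relax * (As.T @ ((bs - As @ x) / As.sum(axis=1))) / As.sum(axis=0)
    return x

x_rec = os_sart(A, b)
```

The speed advantage reported in the abstract comes from running the two inner matrix products (projection and back-projection) on graphics hardware; the subset structure itself is what reduces the number of full passes needed.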
International Nuclear Information System (INIS)
Tang, F.; Harvey, K.; Bruner, M.; Kent, B.; Antonucci, E.
1982-01-01
Transition region and coronal observations of bright points by instruments aboard the Solar Maximum Mission, together with high-resolution photospheric magnetograph observations on September 11, 1980, are presented. A total of 31 bipolar ephemeral regions were observed from birth in the photosphere during 9.3 hours of combined magnetograph observations from three observatories. Two of the three ephemeral regions present in the field of view of the Ultraviolet Spectrometer-Polarimeter were observed in the C IV 1548 line. The unobserved ephemeral region was determined to be the shortest-lived (2.5 hr) and lowest in magnetic flux density (13 G) of the three regions. The Flat Crystal Spectrometer observed only low-level signals in the O VIII 18.969 A line, which were not statistically significant enough to be positively identified with any of the 16 ephemeral regions detected in the photosphere. In addition, the data indicate that at any given time there was no one-to-one correspondence between observable bright points and photospheric ephemeral regions, and that more ephemeral regions were observed than their counterparts in the transition region and the corona.
Swamp plots for dynamic aperture studies of PEP-II lattices
International Nuclear Information System (INIS)
Yan, Y.T.; Irwin, J.; Cai, Y.; Chen, T.; Ritson, D.
1995-01-01
With a newly developed algorithm using resonance basis Lie generators and their evaluation with action-angle Poisson bracket maps (nPB tracking), the authors have been able to perform fast tracking for dynamic aperture studies of PEP-II lattices, as well as to incorporate lattice nonlinearities in beam-beam studies. They have been able to better understand the relationship between dynamic apertures and the tune-shift and resonance coefficients in the generators of the one-turn maps. To obtain swamp plots (dynamic aperture vs. working point) of the PEP-II lattices, they first compute a one-turn resonance basis map for a nominal working point and then perform nPB tracking, switching the working point while holding fixed all other terms in the map. Results have been spot-checked by comparison with element-by-element tracking.
Gómez, Miguel-Ángel; Lorenzo, Alberto; Ortega, Enrique; Sampaio, Jaime; Ibáñez, Sergio-José
2009-01-01
The aim of the present study was to identify the game-related statistics that discriminate between starters and nonstarters in women's basketball, in relation to winning or losing games and to best or worst teams. The sample comprised all 216 regular-season games from the 2005 Women's National Basketball Association (WNBA) season. The game-related statistics included were 2- and 3-point field-goals (both successful and unsuccessful), free-throws (both successful and unsuccessful), defensive and offensive rebounds, assists, blocks, fouls, steals, turnovers and minutes played. Results from multivariate analysis showed that when best teams won, the discriminant game-related statistics were successful 2-point field-goals (SC = 0.47), successful free-throws (SC = 0.44), fouls (SC = -0.41), assists (SC = 0.37), and defensive rebounds (SC = 0.37). When the worst teams won, the discriminant game-related statistics were successful 2-point field-goals (SC = 0.37), successful free-throws (SC = 0.45), assists (SC = 0.58), and steals (SC = 0.35). The results showed that successful 2-point field-goals, successful free-throws and assists were the most powerful variables discriminating between starters and nonstarters. These specific characteristics point out the importance of starters' shooting and passing ability during competition. Key points: The players' game-related statistical profile varied according to team status, game outcome and team quality in women's basketball. The results help to point out the differences in player performance in women's basketball compared with men's basketball. The results underline the contribution of starters and nonstarters to team performance in different game contexts. Successful 2-point field-goals, successful free-throws and assists discriminated between starters and nonstarters in all the analyses. PMID:24149538
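The structure coefficients (SC) quoted above are correlations between each game statistic and the discriminant function. A minimal sketch of that computation, with invented per-game numbers (the group means, spreads and sample sizes are assumptions, not WNBA data):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical per-game numbers for 100 starter and 100 nonstarter
# player-games; columns = [successful 2-pt FG, successful FT, assists].
starters    = rng.normal([5.0, 3.0, 3.0], 1.0, size=(100, 3))
nonstarters = rng.normal([3.5, 2.0, 1.5], 1.0, size=(100, 3))
X = np.vstack([starters, nonstarters])
y = np.r_[np.ones(100), np.zeros(100)]

# Fisher discriminant direction: w = Sw^{-1} (m1 - m0)
m1, m0 = starters.mean(axis=0), nonstarters.mean(axis=0)
Sw = np.cov(starters, rowvar=False) + np.cov(nonstarters, rowvar=False)
w = np.linalg.solve(Sw, m1 - m0)
scores = X @ w

# Structure coefficients: correlation of each raw variable with the
# discriminant scores (the SC values reported in such analyses)
sc = np.array([np.corrcoef(X[:, j], scores)[0, 1] for j in range(3)])
```

Variables with |SC| near 1 carry most of the separation between the groups, which is how the abstract identifies 2-point field-goals, free-throws and assists as the dominant discriminators.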
Statistical construction of a Japanese male liver phantom for internal radionuclide dosimetry
International Nuclear Information System (INIS)
Mofrad, F. B.; Zoroofi, R. A.; Tehrani-Fard, A. A.; Akhlaghpoor, S.; Hori, M.; Chen, Y. W.; Sato, Y.
2010-01-01
A computational framework based on statistical shape modelling is presented for the construction of race-specific organ models for internal radionuclide dosimetry and other nuclear-medicine applications. The approach was applied to the construction of a Japanese liver phantom, using the liver of the digital Zubal phantom as the template and 35 liver computed tomography (CT) scans of male Japanese individuals as a training set. The first step was automated object-space registration (to align all the liver surfaces in one orientation) of each manually contoured, CT-derived liver surface with the template Zubal liver phantom, using a coherent-point-drift maximum-likelihood alignment algorithm. Six landmark points, corresponding to the intersections of the contours of the maximum-area sagittal, transaxial and coronal liver sections, were employed to perform this task. To find correspondence points on the livers (2000 points for each liver), each liver surface was transformed into a mesh and mapped onto the parameter space of a sphere (parameterization), yielding spherical harmonic (SPHARM) shape descriptors. The resulting spherical transforms were then registered by minimising the root-mean-square distance among the SPHARM coefficients. A mean shape (i.e. liver) and its dispersion (i.e. covariance matrix) were next calculated and analysed by principal components. Leave-one-out tests using 5-35 principal components (or modes) demonstrated the fidelity of the foregoing statistical analysis. Finally, a voxelisation algorithm and a point-based registration were used, respectively, to convert the SPHARM surfaces into voxelised form and to adjust them to the Zubal phantom data. The proposed technique used to create the race-specific statistical phantom maintains anatomic realism and provides the statistical parameters for application to radionuclide dosimetry. (authors)
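Once corresponded surface points are available, the mean-shape-plus-principal-components step can be sketched with NumPy. The point sets below are synthetic stand-ins (random clouds, not liver surfaces); only the mean/SVD machinery mirrors the description above:

```python
import numpy as np

rng = np.random.default_rng(2)
n_shapes, n_points = 35, 2000

# Synthetic stand-in for 35 corresponded surfaces: a common base point
# cloud, randomly scaled and jittered, flattened to 6000-vectors.
base = rng.normal(size=(n_points, 3))
shapes = np.stack([
    base * (1.0 + 0.05 * rng.normal()) + 0.02 * rng.normal(size=(n_points, 3))
    for _ in range(n_shapes)
]).reshape(n_shapes, -1)

mean_shape = shapes.mean(axis=0)
dev = shapes - mean_shape
# Principal modes via SVD of the deviations (avoids forming the full
# 6000 x 6000 covariance matrix explicitly)
U, s, Vt = np.linalg.svd(dev, full_matrices=False)
modes = Vt                          # rows: principal modes of variation
variance = s**2 / (n_shapes - 1)    # variance explained by each mode

# Instantiate a new shape: mean plus a weighted sum of the first k modes
k = 5
new_shape = mean_shape + (0.5 * np.sqrt(variance[:k])) @ modes[:k]
```

Leave-one-out testing, as in the abstract, would refit the mean and modes on 34 shapes and measure how well the held-out shape is reconstructed from the leading modes.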
Statistical and extra-statistical considerations in differential item functioning analyses
Directory of Open Access Journals (Sweden)
G. K. Huysamen
2004-10-01
This article briefly describes the main procedures for performing differential item functioning (DIF) analyses and points out some of the statistical and extra-statistical implications of these methods. Research findings on the sources of DIF, including those associated with translated tests, are reviewed. As DIF analyses are oblivious of correlations between a test and relevant criteria, the elimination of differentially functioning items does not necessarily improve predictive validity or reduce any predictive bias. The implications of the results of past DIF research for test development in the multilingual and multicultural South African society are considered.
Reliability and extended-life potential of EBR-II
International Nuclear Information System (INIS)
King, R.W.
1985-01-01
Although the long-life potential of liquid-metal-cooled reactors (LMRs) has been only partially demonstrated, many factors point to the potential for exceptionally long life. EBR-II has the opportunity to become the first LMR to achieve an operational lifetime of 30 years or more. In 1984 a study of the extended-life potential of EBR-II identified the factors that contribute to the continued successful operation of EBR-II as a power reactor and experimental facility. Also identified were factors that could cause disruptions in the useful life of the facility. Although no factors were found that would inherently limit the life of EBR-II, measures were identified that could help ensure continued plant availability. These measures include the implementation of more effective surveillance, diagnostic, and control systems to complement the inherent safety and reliability features of EBR-II. An operating lifetime of well beyond 30 years is certainly feasible.
Multiplicative point process as a model of trading activity
Gontis, V.; Kaulakys, B.
2004-11-01
Signals consisting of a sequence of pulses suggest that an inherent origin of the 1/f noise is a Brownian fluctuation of the average interevent time between subsequent pulses of the pulse sequence. In this paper, we generalize the model of interevent time to reproduce a variety of self-affine time series exhibiting power spectral density S(f) scaling as a power of the frequency f. Furthermore, we analyze the relation between the power-law correlations and the origin of the power-law probability distribution of the signal intensity. We introduce a stochastic multiplicative model for the time intervals between point events and analyze the statistical properties of the signal analytically and numerically. Such a model system exhibits power-law spectral density S(f) ∼ 1/f^β for various values of β, including β = 1/2, 1 and 3/2. Explicit expressions for the power spectra in the low-frequency limit and for the distribution density of the interevent time are obtained. The counting statistics of the events is analyzed analytically and numerically as well. The specific interest of our analysis is related to the financial markets, where long-range correlations of price fluctuations largely depend on the number of transactions. We analyze the spectral density and counting statistics of the number of transactions. The model reproduces the spectral properties of real markets and explains the mechanism of the power-law distribution of trading activity. The study provides evidence that the statistical properties of the financial markets are contained in the statistics of the time interval between trades. A multiplicative point process serves as a consistent model generating these statistics.
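A generic multiplicative interevent-time iteration of the kind described — not the authors' calibrated parameterization; the noise amplitude, bounds and window size here are illustrative — can be simulated and its counting statistics collected as follows:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_intervals(n, gamma=0.0, sigma=0.05, tau_min=1e-3, tau_max=1.0):
    """Multiplicative random walk for the interevent time, kept inside
    [tau_min, tau_max] by clipping (illustrative parameter values)."""
    tau = np.empty(n)
    t = 0.1
    for k in range(n):
        t = t + gamma * t + sigma * t * rng.normal()
        t = min(max(t, tau_min), tau_max)
        tau[k] = t
    return tau

tau = simulate_intervals(20000)
events = np.cumsum(tau)                 # event (e.g. trade) times
# Counting statistics: number of events per unit-time window, the
# quantity whose spectrum is compared with 1/f^beta in the paper
edges = np.arange(0.0, events[-1], 1.0)
counts, _ = np.histogram(events, bins=edges)
```

Because the interevent time diffuses multiplicatively, the count series alternates between bursts of activity (small τ) and quiet stretches (large τ); a periodogram of `counts` would be the natural next step for comparing the low-frequency behaviour with the predicted power laws.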
Gaya, Umar Ibrahim; Otene, Emmanuel; Abdullah, Abdul Halim
2015-01-01
Non-uniformly sized activated carbons were derived from doum palm shell, a new precursor, by carbonization in air and activation using KOH, NaOH and ZnCl2. The activated carbon fibres were characterised by X-ray diffraction, N2 adsorption–desorption, scanning electron microscopy, particle size analysis and evaluated for Cd(II) and Pb(II) removal. The 40–50 nm size, less graphitic, mesoporous NaOH activated carbon yielded high adsorption efficiency, pointing largely to the influence surface ar...
THE PITTSBURGH SLOAN DIGITAL SKY SURVEY Mg II QUASAR ABSORPTION-LINE SURVEY CATALOG
International Nuclear Information System (INIS)
Quider, Anna M.; Nestor, Daniel B.; Turnshek, David A.; Rao, Sandhya M.; Weyant, Anja N.; Monier, Eric M.; Busche, Joseph R.
2011-01-01
We present a catalog of intervening Mg II quasar absorption-line systems in the redshift interval 0.36 ≤ z ≤ 2.28. The catalog was built from Sloan Digital Sky Survey Data Release Four (SDSS DR4) quasar spectra. Currently, the catalog contains ∼17,000 measured Mg II doublets. We also present data on the ∼44,600 quasar spectra which were searched to construct the catalog, including redshift and magnitude information, continuum-normalized spectra, and corresponding arrays of redshift-dependent minimum rest equivalent widths detectable at our confidence threshold. The catalog is available online. A careful second search of 500 random spectra indicated that, for every 100 spectra searched, approximately one significant Mg II system was accidentally rejected. Current plans to expand the catalog beyond DR4 quasars are discussed. Many Mg II absorbers are known to be associated with galaxies. Therefore, the combination of large size and well understood statistics makes this catalog ideal for precision studies of the low-ionization and neutral gas regions associated with galaxies at low to moderate redshift. An analysis of the statistics of Mg II absorbers using this catalog will be presented in a subsequent paper.
Generalized statistics and the rishon hypothesis
Energy Technology Data Exchange (ETDEWEB)
Jarvis, P.D. (Tasmania Univ., Sandy Bay (Australia). Dept. of Physics); Green, H.S. (Adelaide Univ. (Australia). Dept. of Mathematical Physics)
1983-01-01
It is pointed out that the proposal of Harari and others, that leptons and quarks should be regarded as composites, consisting of rishons or quips, can be formulated as a field theory in terms of two fundamental spinor fields which satisfy a new generalization of quantum statistics. The requirement of macroscopic causality determines which of the many combinations of rishons may be observed as isolated particles.
Statistical tests for the Gaussian nature of primordial fluctuations through CBR experiments
International Nuclear Information System (INIS)
Luo, X.
1994-01-01
Information about the physical processes that generate the primordial fluctuations in the early Universe can be gained by testing the Gaussian nature of the fluctuations through cosmic microwave background radiation (CBR) temperature anisotropy experiments. One of the crucial aspects of density perturbations that are produced by the standard inflation scenario is that they are Gaussian, whereas seeds produced by topological defects left over from an early cosmic phase transition tend to be non-Gaussian. To carry out this test, sophisticated statistical tools are required. In this paper, we will discuss several such statistical tools, including multivariate skewness and kurtosis, Euler-Poincaré characteristics, the three-point temperature correlation function, and Hotelling's T² statistic defined through bispectral estimates of a one-dimensional data set. The effect of noise present in the current data is discussed in detail and the COBE 53 GHz data set is analyzed. Our analysis shows that, on the large angular scales to which COBE is sensitive, the statistics are probably Gaussian. On small angular scales, the importance of Hotelling's T² statistic is stressed, and the minimum sample size required to test Gaussianity is estimated. Although the current data set available from various experiments at half-degree scales is still too small, improvement of the data set by roughly a factor of 2 will be enough to test the Gaussianity statistically. On the arcminute scale, we analyze the recent RING data through bispectral analysis, and the result indicates possible deviation from Gaussianity. Effects of point sources are also discussed. It is pointed out that the Gaussianity problem can be resolved in the near future by ground-based or balloon-borne experiments.
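The simplest of the listed tools, sample skewness and kurtosis, can be turned into a Monte Carlo test by comparing the observed statistic against a null distribution built from Gaussian realizations. The map size and the chi-squared alternative below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

def skew_kurt(x):
    """Sample skewness and excess kurtosis of a 1-D data set."""
    z = (x - x.mean()) / x.std()
    return (z**3).mean(), (z**4).mean() - 3.0

# Simulated "maps": a Gaussian one and a non-Gaussian (chi-squared) one
gauss_map = rng.normal(size=4096)
nongauss_map = rng.chisquare(df=2, size=4096)

# Monte Carlo null distribution of the statistics under Gaussianity
null = np.array([skew_kurt(rng.normal(size=4096)) for _ in range(500)])
s_obs, k_obs = skew_kurt(nongauss_map)
p_skew = (np.abs(null[:, 0]) >= abs(s_obs)).mean()  # two-sided MC p-value
```

For real CBR data the null realizations would also carry the instrument noise and beam, which is why the abstract stresses the noise treatment and the minimum sample size.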
Quantitative analysis and IBM SPSS statistics a guide for business and finance
Aljandali, Abdulkader
2016-01-01
This guide is for practicing statisticians and data scientists who use IBM SPSS for statistical analysis of big data in business and finance. This is the first of a two-part guide to SPSS for Windows, introducing data entry into SPSS, along with elementary statistical and graphical methods for summarizing and presenting data. Part I also covers the rudiments of hypothesis testing and business forecasting while Part II will present multivariate statistical methods, more advanced forecasting methods, and multivariate methods. IBM SPSS Statistics offers a powerful set of statistical and information analysis systems that run on a wide variety of personal computers. The software is built around routines that have been developed, tested, and widely used for more than 20 years. As such, IBM SPSS Statistics is extensively used in industry, commerce, banking, local and national governments, and education. A small subset of the package's users includes the major clearing banks, the BBC, British Gas, British Airway...
Evaluation of TRIGA Mark II reactor in Turkey
International Nuclear Information System (INIS)
Bilge, Ali Nezihi
1990-01-01
There are two research reactors in Turkey; one of them is the university Triga Mark II reactor, which has been in service since 1979 for both education and industrial-application purposes. The main aim of this paper is to evaluate the spectrum of the services provided by the Turkish Triga Mark II reactor. In this work, the statistical distribution of graduate works and applications using the Triga Mark II reactor is examined and evaluated. In addition, the technical and scientific uses of the above-mentioned reactor are investigated. It has already been shown that the uses and benefits of this reactor are by no means exhausted. Given sufficient work and service, NDT and industrial applications can also be carried out economically. (orig.)
Fractional statistics and the butterfly effect
International Nuclear Information System (INIS)
Gu, Yingfei; Qi, Xiao-Liang
2016-01-01
Fractional statistics and quantum chaos are both phenomena associated with the non-local storage of quantum information. In this article, we point out a connection between the butterfly effect in (1+1)-dimensional rational conformal field theories and fractional statistics in (2+1)-dimensional topologically ordered states. This connection comes from the characterization of the butterfly effect by the out-of-time-order-correlator proposed recently. We show that the late-time behavior of such correlators is determined by universal properties of the rational conformal field theory such as the modular S-matrix and conformal spins. Using the bulk-boundary correspondence between rational conformal field theories and (2+1)-dimensional topologically ordered states, we show that the late time behavior of out-of-time-order-correlators is intrinsically connected with fractional statistics in the topological order. We also propose a quantitative measure of chaos in a rational conformal field theory, which turns out to be determined by the topological entanglement entropy of the corresponding topological order.
Fractional statistics and the butterfly effect
Energy Technology Data Exchange (ETDEWEB)
Gu, Yingfei; Qi, Xiao-Liang [Department of Physics, Stanford University,Stanford, CA 94305 (United States)
2016-08-23
Pressurized helium II-cooled magnet test facility
International Nuclear Information System (INIS)
Warren, R.P.; Lambertson, G.R.; Gilbert, W.S.; Meuser, R.B.; Caspi, S.; Schafer, R.V.
1980-06-01
A facility for testing superconducting magnets in a pressurized bath of helium II has been constructed and operated. The cryostat accepts magnets up to 0.32 m in diameter and 1.32 m in length, with currents up to 3000 A. In initial tests, the volume of helium II surrounding the superconducting magnet was 90 liters. The minimum temperature reached was 1.7 K, at which point the pumping system was throttled to maintain a steady temperature. Helium II reservoir temperatures were easily controlled as long as the temperature upstream of the JT valve remained above Tλ; at lower temperatures control became difficult. Positive control of the temperature difference between the liquid and the cold sink by means of an internal heat source appears necessary to avoid this problem. The epoxy-sealed vessel closures, with which we have had considerable experience in normal-helium/vacuum service, also worked well in the helium II/vacuum environment.
International Nuclear Information System (INIS)
Bhat, M.R.; Ozer, O.
1982-01-01
1 - Description of problem or function: RESEND generates infinitely dilute, unbroadened point cross sections in the ENDF format by combining ENDF File 3 background cross sections with points calculated from ENDF File 2 resonance parameter data. ADLER calculates total, capture, and fission cross sections from the corresponding Adler-Adler parameters in the ENDF/B File 2 Version II data and also Doppler-broadens cross sections. 2 - Method of solution: RESEND calculations are done in two steps by two separate sections of the program. The first section does the resonance calculation and stores the results on a scratch file. The second section combines the data from the scratch file with background cross sections and prints the results. ADLER uses the Adler-Adler formalism. 3 - Restrictions on the complexity of the problem: RESEND expects its input to be a standard mode BCD ENDF file (Version II/III). Since the output is also a standard mode BCD ENDF file, the program is limited by the six significant figure accuracy inherent in the ENDF formats. (If the cross section has been calculated at two points so close in energy that only their least significant figures differ, that interval is assumed to have converged, even if other convergence criteria may not be satisfied.) In the unresolved range the cross sections have been averaged over a Porter-Thomas distribution. In some regions the calculated resonance cross sections may be negative. In such cases the standard convergence criterion would cause an unnecessarily large number of points to be produced in the region where the cross section becomes zero. For this reason an additional input convergence criterion (AVERR) may be used. If the absolute value of the cross section at both ends of an interval is determined to be less than AVERR then the interval is assumed to have converged. There are no limitations on the total number of points generated. The present ENDF (Version II/III) formats restrict the total number of
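The convergence logic described — subdivide an energy interval until linear interpolation reproduces the cross section, with the extra AVERR escape for near-zero regions — can be sketched as recursive bisection. The resonance shape and tolerance values below are illustrative, not ENDF processing:

```python
def adaptive_points(sigma, e_lo, e_hi, rel_tol=1e-3, averr=1e-4, depth=30):
    """Bisect [e_lo, e_hi] until linear interpolation reproduces sigma at
    the midpoint to rel_tol, or |sigma| < averr at both ends (the AVERR
    escape described above), or the depth limit is hit.  Returns grid
    points, excluding the right endpoint of the outermost interval."""
    e_mid = 0.5 * (e_lo + e_hi)
    s_lo, s_mid, s_hi = sigma(e_lo), sigma(e_mid), sigma(e_hi)
    interp = 0.5 * (s_lo + s_hi)
    converged = (abs(s_lo) < averr and abs(s_hi) < averr) \
        or abs(interp - s_mid) <= rel_tol * abs(s_mid)
    if depth == 0 or converged:
        return [(e_lo, s_lo)]
    return (adaptive_points(sigma, e_lo, e_mid, rel_tol, averr, depth - 1)
            + adaptive_points(sigma, e_mid, e_hi, rel_tol, averr, depth - 1))

# Illustrative single resonance (Lorentzian shape, invented parameters)
def bw(e):
    return 1.0 / ((e - 1.0) ** 2 + 0.01 ** 2)

pts = adaptive_points(bw, 0.5, 1.5) + [(1.5, bw(1.5))]
```

The grid automatically becomes dense across the narrow resonance and sparse in the smooth wings, which is the behaviour the convergence criteria in the description above are designed to produce.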
Synthesis and characterisation of Cu(II), Ni(II), Mn(II), Zn(II) and VO(II) Schiff base complexes derived from o-phenylenediamine and acetoacetanilide
Indian Academy of Sciences (India)
Synthesis and characterisation of Cu(II), Ni(II), Mn(II), Zn(II) and VO(II) Schiff base complexes derived from o-phenylenediamine and acetoacetanilide. N Raman, Y Pitchaikani Raja, A Kulandaisamy. Journal of Chemical Sciences, Inorganic, Volume 113, Issue 3, June 2001, pp 183-189 ...
Compilation of accident statistics in PSE
International Nuclear Information System (INIS)
Jobst, C.
1983-04-01
The objective of the investigations on transportation carried out within the framework of the 'Project - Studies on Safety in Waste Management (PSE II)' is the determination of the risk of accidents in the transportation of radioactive materials by rail. The fault tree analysis is used for the determination of risks in the transportation system. This method offers a possibility for the determination of frequency and consequences of accidents which could lead to an unintended release of radionuclides. The study presented compiles all data obtained from the accident statistics of the Federal German Railways. (orig./RB)
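For independent basic events, fault-tree quantification of the kind mentioned reduces to combining probabilities through AND/OR gates. A minimal sketch with invented probabilities (illustrative numbers, not PSE II data):

```python
# Hypothetical basic-event probabilities for a rail-transport release
# fault tree: release requires an accident AND a containment failure,
# where containment failure is cask failure OR fire damage.
p_accident  = 1e-4   # per transport, assumed
p_cask_fail = 1e-2   # assumed
p_fire      = 5e-3   # assumed

def p_or(*ps):
    """OR gate for independent inputs: 1 - prod(1 - p_i)."""
    q = 1.0
    for p in ps:
        q *= (1.0 - p)
    return 1.0 - q

def p_and(*ps):
    """AND gate for independent inputs: prod(p_i)."""
    q = 1.0
    for p in ps:
        q *= p
    return q

p_release = p_and(p_accident, p_or(p_cask_fail, p_fire))
```

Combining gate probabilities with accident frequencies from the railway statistics then yields the release frequency that the risk study is after.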
Zero-point energy in early quantum theory
International Nuclear Information System (INIS)
Milonni, P.W.; Shih, M.-L.
1991-01-01
In modern physics the vacuum is not a tranquil void but a quantum state with fluctuations having observable consequences. The present concept of the vacuum has its roots in the zero-point energy of harmonic oscillators and the electromagnetic field, and arose before the development of the formalism of quantum mechanics. This article discusses these roots in the blackbody research of Planck and Einstein in 1912-1913, and the relation to Bose-Einstein statistics and the first indication of wave-particle duality uncovered by Einstein's fluctuation formula. Also considered are the Einstein-Stern theory of specific heats, which invoked zero-point energy in a way which turned out to be incorrect, and the experimental implications of zero-point energy recognized by Mulliken and Debye in vibrational spectroscopy and x-ray diffraction.
Energy Technology Data Exchange (ETDEWEB)
Santos, T.J.; Carlson, B.V., E-mail: nztiago@gmail.com [Instituto Tecnologia de Aeronautica (ITA), Sao Jose dos Campos, SP (Brazil)
2014-07-01
One of the principal characteristics of nuclear multifragmentation is the emission of complex fragments of intermediate mass. The statistical multifragmentation model has been used for many years to describe the distribution of these fragments. An extension of the statistical multifragmentation model to include partial widths and lifetimes for emission interprets the fragmentation process as the near-simultaneous limit of a series of sequential binary decays. In this extension, the expression describing intermediate-mass-fragment emission is almost identical to that for light-particle emission. At lower temperatures, similar expressions have been shown to furnish a good description of very light intermediate-mass-fragment emission. However, this is usually not considered a good approximation for the emission of heavier fragments. Such emissions seem to be determined by the characteristics of the system at the saddle point and its subsequent dynamical evolution rather than by the scission point. Here, we compare the barriers and decay widths of these different formulations of intermediate-fragment emission and analyze the extent to which they remain distinguishable at high excitation energy. (author)
The critical point of quantum chromodynamics through lattice and ...
Indian Academy of Sciences (India)
The Padé approximants are the rational functions P^L_M(z) = ... Deviations from a smooth behaviour near the critical point are visible in these extrapolations ... we see that there is evidence, albeit statistically not very significant, that the kurtosis changes.
1861-1981: Statistics teaching in Italian universities
Directory of Open Access Journals (Sweden)
Donata Marasini
2013-05-01
This paper aims to outline the development of Statistics from 1861 to 1981 with respect to its contents. It pays particular attention to some statistical topics which have been covered by basic introductory courses in Italian universities since the beginning of Italian unification. The review takes as its starting point the well-known book "Filosofia della Statistica" by Melchiorre Gioja. This volume was published 35 years before Italian unification, but it already contains the fundamental topics of exploratory and inductive Statistics. These topics give the opportunity to mention the Italian statisticians who are considered the pioneers of the discipline. In particular, attention is focused on four statisticians: Corrado Gini, well known for his modern insights; Marcello Boldrini, a man of high culture, also in the epistemological field; Bruno de Finetti, founder of the subjectivist school and of Bayesian reasoning; and Giuseppe Pompilj, a precursor in random variables and sampling theory. The paper browses the indexes of three well-known Italian handbooks which, although published after the period 1861-1981, deal with topics covered in basic courses on exploratory statistics, statistical inference and sampling theory from finite populations.
Changing statistics of storms in the North Atlantic?
International Nuclear Information System (INIS)
Storch, H. von; Guddal, J.; Iden, K.A.; Jonsson, T.; Perlwitz, J.; Reistad, M.; Ronde, J. de; Schmidt, H.; Zorita, E.
1993-01-01
Problems in the present discussion about increasing storminess in the North Atlantic area are discussed. Observational data so far available do not indicate a change in the storm statistics. Output from climate models points to an intensified storm track in the North Atlantic, but because of the limited skill of present-day climate models in simulating high-frequency variability and regional details, any such 'forecast' has to be considered with caution. A downscaling procedure which relates large-scale time-mean aspects of the state of the atmosphere and ocean to the local statistics of storms is proposed to reconstruct past variations of high-frequency variability in the atmosphere (storminess) and in the sea state (wave statistics). First results are presented. (orig.)
Liao, Hua-Fang; Cheng, Ling-Yee; Hsieh, Wu-Shiun; Yang, Ming-Chin
2010-03-01
A cutoff point in a test with sound validity that reflects professional preferences can help in making accurate clinical decisions. This study aimed to choose between two cutoff strategies for a developmental screening checklist (referred to as Taipei II). Cutoff point A was set as one or more items failed, and cutoff point B as two or more items failed or one or more marked items failed. The choice was based on the total expected utilities derived from professional preferences and on overall diagnostic indices. A self-administered questionnaire was developed to collect estimated utilities from professionals involved in early childhood intervention (n = 81) regarding the four screening outcomes (probabilities of true positive, false positive, true negative, or false negative) and their costs. The total expected utilities were calculated from the probabilities of the four screening outcomes and the utility values. The diagnostic odds ratio was higher for strategy B (695 and 209, respectively) than for strategy A (184 and 150, respectively) when using the Taipei II on children under 3 years of age and on those aged 3 and over. Strategy B also had a higher median total expected utility score than strategy A (0.78 vs. 0.72, for children aged under 3 as well as 3 and over). If only one cutoff point can be chosen, the authors suggest that clinicians choose cutoff point B when using the Taipei II for screening. However, the two cutoff points of Taipei II, a combination of strategies A and B, can also be used clinically. 2010 Formosan Medical Association & Elsevier. Published by Elsevier B.V. All rights reserved.
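The decision rule described — pick the strategy with the larger total expected utility, computed from the four outcome probabilities and elicited utilities — can be sketched directly. All sensitivities, specificities, prevalence and utility values below are invented for illustration, not the study's:

```python
# Expected-utility comparison of two screening cutoff strategies.
# All numbers here are assumptions, not Taipei II results.
PREVALENCE = 0.10  # assumed prevalence of developmental delay

def expected_utility(sens, spec, u_tp, u_fp, u_tn, u_fn, prev=PREVALENCE):
    """Total expected utility = sum over the four screening outcomes of
    (outcome probability) * (elicited utility of that outcome)."""
    p_tp = prev * sens
    p_fn = prev * (1.0 - sens)
    p_tn = (1.0 - prev) * spec
    p_fp = (1.0 - prev) * (1.0 - spec)
    return p_tp * u_tp + p_fp * u_fp + p_tn * u_tn + p_fn * u_fn

def diagnostic_odds_ratio(sens, spec):
    """DOR = (sens/(1-sens)) / ((1-spec)/spec)."""
    return (sens / (1.0 - sens)) / ((1.0 - spec) / spec)

# Strategy A: looser cutoff -> higher sensitivity, lower specificity.
# Strategy B: stricter cutoff -> fewer false positives.
eu_a = expected_utility(0.95, 0.80, u_tp=1.0, u_fp=0.6, u_tn=1.0, u_fn=0.0)
eu_b = expected_utility(0.90, 0.95, u_tp=1.0, u_fp=0.6, u_tn=1.0, u_fn=0.0)
dor_b = diagnostic_odds_ratio(0.90, 0.95)
```

With these invented inputs the stricter cutoff wins on expected utility, mirroring the study's preference for strategy B; with different utility weights (e.g. a heavy penalty on false negatives) the comparison could flip, which is exactly why the elicited preferences matter.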
Catalogue of oscillator strengths for Ti II lines
International Nuclear Information System (INIS)
Savanov, I.S.; Huovelin, J.; Tuominen, I.
1990-01-01
We have revised the published values of oscillator strengths for ionized titanium. The zero point of the gf-values has been established using lifetime measurements of excited atomic states. The adopted oscillator strengths for 419 Ti II lines are compiled. Using the adopted gf-values, Biemont's analysis of titanium in the solar atmosphere determined from the Ti II lines, and the HOLMU model, we obtained the abundance log A(Ti) = 4.96 ± 0.05.
International Nuclear Information System (INIS)
1983-01-01
Volume II A contains appendices for: stacked profiles; geologic histograms; geochemical histograms; speed and altitude histograms; geologic statistical tables; geochemical statistical tables; magnetic and ancillary profiles; and test line data
The points for attention in retrospective personal dose estimate
International Nuclear Information System (INIS)
Wang Wuan
1994-01-01
The points to which attention should be paid in retrospective personal dose estimation are discussed: the representativeness of the dose data, the truthfulness of the operation history, the accuracy of the man-hour statistics, and the rationality of the parameter selection.
SASSYS-1 computer code verification with EBR-II test data
International Nuclear Information System (INIS)
Warinner, D.K.; Dunn, F.E.
1985-01-01
The EBR-II natural circulation experiment, XX08 Test 8A, is simulated with the SASSYS-1 computer code and the results for the latter are compared with published data taken during the transient at selected points in the core. The SASSYS-1 results provide transient temperature and flow responses for all points of interest simultaneously during one run, once such basic parameters as pipe sizes, initial core flows, and elevations are specified. The SASSYS-1 simulation results for the EBR-II experiment XX08 Test 8A, conducted in March 1979, are within the published plant data uncertainties and, thereby, serve as a partial verification/validation of the SASSYS-1 code
A comparison of Landsat point and rectangular field training sets for land-use classification
Tom, C. H.; Miller, L. D.
1984-01-01
Rectangular training fields of homogeneous spectroreflectance are commonly used in supervised pattern recognition efforts. Trial image classification with manually selected training sets gives irregular and misleading results due to statistical bias. A self-verifying, grid-sampled training point approach is proposed as a more statistically valid feature extraction technique. A systematic pixel sampling network of every ninth row and ninth column efficiently replaced the full image scene with smaller statistical vectors which preserved the necessary characteristics for classification. The composite second- and third-order average classification accuracy of 50.1 percent for 331,776 pixels in the full image substantially agreed with the 51 percent value predicted by the grid-sampled, 4,100-point training set.
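The grid-sampled training-point idea, taking every ninth row and ninth column of the scene, can be sketched in a few lines. The array below is a synthetic classified scene, not the Landsat data:

```python
import numpy as np

# Illustrative sketch: sample every ninth row and ninth column of a
# classified scene, as in the grid-sampled training-point approach.
rng = np.random.default_rng(0)
scene = rng.integers(0, 5, size=(90, 90))   # fake label image, 5 classes
grid_sample = scene[8::9, 8::9]             # every ninth row and column
reduction = grid_sample.size / scene.size   # fraction of pixels retained
```

The systematic grid keeps 1/81 of the pixels while sampling every part of the scene, which is what lets the smaller statistical vectors preserve the characteristics needed for classification.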
Gaber, Mohamed; El-Ghamry, Hoda; Atlam, Faten; Fathalla, Shaimaa
2015-02-25
Ni(II), Pd(II) and Pt(II) complexes of 5-mercapto-1,2,4-triazole-3-imine-2'-hydroxynaphthaline have been isolated and characterized by elemental analysis, IR, (1)H NMR, EI-mass, UV-vis, molar conductance, magnetic moment measurements and thermogravimetric analysis. The molar conductance values indicated that the complexes are non-electrolytes. The magnetic moment values of the complexes displayed diamagnetic behavior for the Pd(II) and Pt(II) complexes and a tetrahedral geometrical structure for the Ni(II) complex. From the bioinorganic applications point of view, the interaction of the ligand and its metal complexes with CT-DNA was investigated using absorption and viscosity titration techniques. The Schiff-base ligand and its metal complexes have also been screened for their antimicrobial and antitumor activities. Also, a theoretical investigation of the molecular and electronic structures of the studied ligand and its metal complexes has been carried out. Molecular orbital calculations were performed using DFT (density functional theory) at the B3LYP level with standard 6-31G(d,p) and LANL2DZ basis sets to obtain results reliable with respect to the experimental values. The calculations were performed to obtain the optimized molecular geometry, charge density distribution, extent of distortion from regular geometry, the highest occupied molecular orbital (HOMO), the lowest unoccupied molecular orbital (LUMO), Mulliken atomic charges, reactivity index (ΔE), dipole moment (D), global hardness (η), softness (σ), electrophilicity index (ω), chemical potential and Mulliken electronegativity (χ).
Effect of incisor inclination changes on cephalometric points a and b
International Nuclear Information System (INIS)
Hassan, S.; Shaikh, A.; Fida, M.
2015-01-01
The positions of cephalometric points A and B are liable to be affected by alveolar remodelling caused by orthodontic tooth movement during incisor retraction. This study was conducted to evaluate the change in the positions of cephalometric points A and B in the sagittal and vertical dimensions due to changes in incisor inclination. Methods: A total sample of 31 subjects was recruited into the study. The inclusion criteria were extraction of premolars in the upper and lower arches, completion of growth, and completed orthodontic treatment. The exclusion criteria were craniofacial anomalies and a history of previous orthodontic treatment. By superimposition of pre- and post-treatment tracings, various linear and angular parameters were measured. Various tests and multiple linear regression analysis were performed to determine changes in the outcome variables; p-values <0.05 were considered statistically significant. Results: A one-sample t-test showed that only the change in the position of point A was statistically significant: 1.61 mm (p<0.01) in the sagittal direction and 1.49 mm (p<0.01) in the vertical direction. Multiple linear regression analysis showed that if the upper incisor is retroclined by 10°, point A moves superiorly by 0.6 mm. Conclusions: The total change in the position of point A is in a downward and forward direction. Change in upper incisor inclination causes a change in the position of point A only in the vertical direction. (author)
Hadronic equation of state in the statistical bootstrap model and linear graph theory
International Nuclear Information System (INIS)
Fre, P.; Page, R.
1976-01-01
Taking a statistical mechanical point of view, the statistical bootstrap model is discussed, and from a critical analysis of the bootstrap volume concept a physical hypothesis is reached which leads immediately to the hadronic equation of state provided by the bootstrap integral equation. In this context the connection between the statistical bootstrap and the linear graph theory approach to interacting gases is also analyzed.
Statistical monitoring of linear antenna arrays
Harrou, Fouzi
2016-11-03
The paper concerns the problem of monitoring linear antenna arrays using the generalized likelihood ratio (GLR) test. When an abnormal event (fault) affects an array of antenna elements, the radiation pattern changes, and significant deviation from the desired design performance specifications can result. In this paper, the detection of faults is addressed from a statistical point of view as a fault-detection problem. Specifically, a statistical method based on the GLR principle is used to detect potential faults in linear arrays. To assess the strength of the GLR-based monitoring scheme, three case studies involving different types of faults were performed. Simulation results clearly show the effectiveness of the GLR-based fault-detection method for monitoring the performance of linear antenna arrays.
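The GLR principle can be sketched for the simplest case: detecting an unknown mean shift in Gaussian residuals (e.g. deviations of a measured pattern from its nominal design). The residual model, noise level, and threshold below are illustrative assumptions, not the paper's array model:

```python
import numpy as np

def glr_mean_shift(residuals, sigma):
    """GLR statistic for an unknown mean shift in i.i.d. Gaussian
    residuals with known sigma: 2*log(Lambda) = n * xbar^2 / sigma^2,
    chi-square with 1 dof under the fault-free hypothesis."""
    r = np.asarray(residuals, dtype=float)
    return r.size * r.mean() ** 2 / sigma ** 2

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, size=200)   # fault-free residuals
faulty = rng.normal(1.5, 1.0, size=200)    # residuals with a mean fault
threshold = 10.83                          # ~0.999 quantile of chi2(1)
alarm_faulty = glr_mean_shift(faulty, 1.0) > threshold
```

The statistic compares the best-fitting faulty hypothesis against the fault-free one; exceeding the chi-square threshold raises an alarm.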
Jong, Tony; Parry, David L
2004-07-01
The adsorption of Pb(II), Cu(II), Cd(II), Zn(II), Ni(II), Fe(II) and As(V) onto bacterially produced metal sulfide (BPMS) material was investigated using a batch equilibrium method. It was found that the sulfide material had adsorptive properties comparable with those of other adsorbents with respect to the specific uptake of a range of metals and the levels to which dissolved metal concentrations in solution can be reduced. The percentage of adsorption increased with increasing pH and adsorbent dose, but decreased with increasing initial dissolved metal concentration. The pH of the solution was the most important parameter controlling adsorption of Cd(II), Cu(II), Fe(II), Ni(II), Pb(II), Zn(II), and As(V) by BPMS. The adsorption data were successfully modeled using the Langmuir adsorption isotherm. Desorption experiments showed that the reversibility of adsorption was low, suggesting high-affinity adsorption governed by chemisorption. The mechanism of adsorption for the divalent metals was thought to be the formation of strong, inner-sphere complexes involving surface hydroxyl groups. However, the mechanism for the adsorption of As(V) by BPMS appears to be distinct from that of surface hydroxyl exchange. These results have important implications for the management of metal sulfide sludge produced by bacterial sulfate reduction.
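The Langmuir modelling step can be sketched as follows. The equilibrium data below are synthetic, generated from assumed parameters rather than the BPMS measurements; the classic linearization c/q = c/q_max + 1/(b*q_max) is then fitted by least squares:

```python
import numpy as np

def langmuir(c, q_max, b):
    """Langmuir isotherm: adsorbed amount q vs equilibrium concentration c."""
    return q_max * b * c / (1.0 + b * c)

# Synthetic equilibrium data from assumed parameters (not the BPMS values)
c = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
q = langmuir(c, q_max=40.0, b=0.3)

# Linearized form: c/q = c/q_max + 1/(b*q_max), fitted with least squares
slope, intercept = np.polyfit(c, c / q, 1)
q_max_fit = 1.0 / slope        # maximum adsorption capacity
b_fit = slope / intercept      # affinity constant
```

On real data the fitted q_max and b quantify the capacity and affinity of the adsorbent, which is how the study compares BPMS with other materials.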
Vapor Pressure Data Analysis and Statistics
2016-12-01
[Abstract garbled in extraction.] The A (or a) value is directly related to vapor pressure and will be greater for high vapor pressure materials. In the fitting statistics, n is the number of data points, Yi is the natural logarithm of the i-th experimental vapor pressure value, and Xi is the... (Report ECBC-TR-1422, Ann Brozena, Research and Technology Directorate)
Impulse breakdown of small air gap in electric field Part II: Statistical ...
African Journals Online (AJOL)
The patterns of shot distribution and maximum coverage at impulse breakdown voltage for positive point electrodes (needle and cone electrodes) in small air gaps in non-uniform electric fields were investigated. During the breakdown test, a sheet of paper was placed on the plate electrode (-ve), and each breakdown shot ...
Quality of reporting in oncology phase II trials: A 5-year assessment through systematic review.
Langrand-Escure, Julien; Rivoirard, Romain; Oriol, Mathieu; Tinquaut, Fabien; Rancoule, Chloé; Chauvin, Frank; Magné, Nicolas; Bourmaud, Aurélie
2017-01-01
Phase II clinical trials are a cornerstone of the development of experimental treatments: they work as a "filter" before confirmation in phase III trials. Surprisingly, the attrition ratio in phase III trials in oncology is significantly higher than in any other medical specialty, which suggests that phase II trials in oncology fail to achieve their goal. The present study aims at estimating the quality of reporting in published oncology phase II clinical trials. A literature review was conducted among all phase II and phase II/III clinical trials published during a 5-year period (2010-2015). All articles electronically published by three randomly selected oncology journals with impact factors >4 were included: Journal of Clinical Oncology, Annals of Oncology and British Journal of Cancer. Quality of reporting was assessed using the Key Methodological Score. 557 articles were included: 315 trials were single-arm studies (56.6%), 193 (34.6%) were randomized, and 49 (8.8%) were non-randomized multiple-arm studies. The Key Methodological Score was equal to 0 (lowest level), 1, 2 and 3 (highest level) for 22 (3.9%), 119 (21.4%), 270 (48.5%) and 146 (26.2%) articles, respectively. The primary end point is almost systematically reported (90.5%), while the sample size calculation is missing in 66% of the articles. Three variables were independently associated with reporting of a high standard: presence of a statistical design (p<0.001), multicenter trial (p = 0.012), and per-protocol analysis (p<0.001). Screening was mainly performed by a sole author, and the Key Methodological Score was based on only 3 items, making grey zones difficult to interpret. This literature review highlights the existence of gaps in the quality of reporting. It therefore raises the question of the suitability of the methodology as well as the quality of these trials, reporting being incomplete in the corresponding articles.
Koparan, Timur; Güven, Bülent
2015-07-01
The aim of this study is to determine the effect of a project-based learning approach on 8th-grade secondary-school students' statistical literacy levels for data representation. To achieve this goal, a test consisting of 12 open-ended questions was developed in accordance with the views of experts. Seventy 8th-grade secondary-school students, 35 in the experimental group and 35 in the control group, took this test twice, once before and once after the intervention. All raw scores were converted into linear measures using the Winsteps 3.72 modelling program, which performs Rasch analysis; t-tests and an ANCOVA were then carried out on the linear measures. Based on the findings, it was concluded that the project-based learning approach increases students' level of statistical literacy for data representation. Students' levels of statistical literacy before and after the intervention were shown through the obtained person-item maps.
Energy Technology Data Exchange (ETDEWEB)
Baniassadi, Majid; Mortazavi, Behzad; Hamedani, Amani; Garmestani, Hamid; Ahzi, Said; Fathi-Torbaghan, Madjid; Ruch, David; Khaleel, Mohammad A.
2012-01-31
In this study, a previously developed reconstruction methodology is extended to three-dimensional reconstruction of a three-phase microstructure, based on two-point correlation functions and two-point cluster functions. The reconstruction process has been implemented based on hybrid stochastic methodology for simulating the virtual microstructure. While different phases of the heterogeneous medium are represented by different cells, growth of these cells is controlled by optimizing parameters such as rotation, shrinkage, translation, distribution and growth rates of the cells. Based on the reconstructed microstructure, finite element method (FEM) was used to compute the effective elastic modulus and effective thermal conductivity. A statistical approach, based on two-point correlation functions, was also used to directly estimate the effective properties of the developed microstructures. Good agreement between the predicted results from FEM analysis and statistical methods was found confirming the efficiency of the statistical methods for prediction of thermo-mechanical properties of three-phase composites.
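The two-point correlation function underlying both the reconstruction and the property estimation can be computed for a synthetic two-phase slice via an FFT autocorrelation. This is a periodic-boundary sketch of the statistical descriptor, not the authors' hybrid stochastic reconstruction code:

```python
import numpy as np

def two_point_correlation(indicator):
    """Periodic two-point correlation S2 of a phase-indicator array:
    probability that two points separated by a given lag both fall in
    the phase, computed as a normalized FFT autocorrelation."""
    f = np.fft.fftn(indicator)
    return np.fft.ifftn(f * np.conj(f)).real / indicator.size

rng = np.random.default_rng(2)
micro = (rng.random((64, 64)) < 0.3).astype(float)  # synthetic two-phase slice
s2 = two_point_correlation(micro)
phi = micro.mean()   # volume fraction; S2 at zero lag equals phi
```

For a multi-phase medium, one such function per phase pair (plus the two-point cluster functions mentioned above) constrains the reconstruction.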
Status Report on PEP-II Performance
International Nuclear Information System (INIS)
Matter, Regina S.
2002-01-01
PEP-II [1-9] is an e+e- collider with asymmetric energies (3.1 and 9 GeV, respectively) in a 2200 m tunnel at the Stanford Linear Accelerator Center. The collider produces B mesons to study a particle physics effect called CP violation as well as other physics topics. PEP-II was completed in 1998 with the first luminosity generated in July of that year. The installation of the BaBar Detector was finished in May 1999. The overall layout of PEP-II is shown in Figure 1 and the interaction region of PEP-II in Figure 2. The accelerator parameters and achievements of the High Energy Ring (HER) are listed in Table 1 and those for the Low Energy Ring (LER) in Table 2. The two beams collide at a single point in the IR2 hall where the BaBar detector is located. Beam parameters at the best luminosity are shown in Table 3 and PEP-II milestones in Table 4. In August 1999 PEP-II passed the world's record for luminosity, which was 8.1x10^32 cm^-2 s^-1. The present luminosity in PEP-II is 2.15x10^33 cm^-2 s^-1, which is 72% of the design. In June 2000 PEP-II delivered an integrated luminosity of 150 pb^-1 in one day, which is above the design integrated luminosity per day of 135 pb^-1. Over the past year PEP-II has delivered over 12 fb^-1 to BaBar, which has logged over 11 fb^-1. The present plan is to collide until the end of October 2000, followed by a three-month installation period.
Renyi statistics in equilibrium statistical mechanics
International Nuclear Information System (INIS)
Parvan, A.S.; Biro, T.S.
2010-01-01
The Renyi statistics in the canonical and microcanonical ensembles is examined both in general and in particular for the ideal gas. In the microcanonical ensemble the Renyi statistics is equivalent to the Boltzmann-Gibbs statistics. Using exact analytical results for the ideal gas, it is shown that in the canonical ensemble, in the thermodynamic limit, the Renyi statistics is also equivalent to the Boltzmann-Gibbs statistics. Furthermore, it satisfies the requirements of equilibrium thermodynamics, i.e. the thermodynamic potential of the statistical ensemble is a homogeneous function of first degree in its extensive variables of state. We conclude that the Renyi statistics arrives at the same thermodynamical relations as those stemming from the Boltzmann-Gibbs statistics in this limit.
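The limiting equivalence can be illustrated numerically: for a discrete probability distribution, the Renyi entropy of order q approaches the Boltzmann-Gibbs (Shannon) form as q approaches 1. This is a generic sketch, not tied to the ensembles treated in the paper:

```python
import numpy as np

def renyi_entropy(p, q):
    """Renyi entropy of order q, S_q = log(sum p_i^q) / (1 - q);
    reduces to the Shannon/Boltzmann-Gibbs entropy -sum p_i log p_i
    in the limit q -> 1."""
    p = np.asarray(p, dtype=float)
    if abs(q - 1.0) < 1e-12:
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** q)) / (1.0 - q)

p = np.array([0.5, 0.25, 0.125, 0.125])
s_shannon = renyi_entropy(p, 1.0)        # equals 1.75 * ln 2 for this p
s_close = renyi_entropy(p, 1.0 + 1e-6)   # approaches the Shannon value
```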
Hagen, Brad; Awosoga, Oluwagbohunmi A; Kellett, Peter; Damgaard, Marie
2013-04-23
This article describes the results of a qualitative research study evaluating nursing students' experiences of a mandatory course in applied statistics, and the perceived effectiveness of teaching methods implemented during the course. Fifteen nursing students in the third year of a four-year baccalaureate program in nursing participated in focus groups before and after taking the mandatory course in statistics. The interviews were transcribed and analyzed using content analysis to reveal four major themes: (i) "one of those courses you throw out?," (ii) "numbers and terrifying equations," (iii) "first aid for statistics casualties," and (iv) "re-thinking curriculum." Overall, the data revealed that although nursing students initially enter statistics courses with considerable skepticism, fear, and anxiety, there are a number of concrete actions statistics instructors can take to reduce student fear and increase the perceived relevance of courses in statistics.
The Critical Point Entanglement and Chaos in the Dicke Model
Directory of Open Access Journals (Sweden)
Lina Bao
2015-07-01
Ground state properties and level statistics of the Dicke model for a finite number of atoms are investigated based on a progressive diagonalization scheme (PDS). Particle number statistics, the entanglement measure and the Shannon information entropy at the resonance point in cases with a finite number of atoms as functions of the coupling parameter are calculated. It is shown that the entanglement measure defined in terms of the normalized von Neumann entropy of the reduced density matrix of the atoms reaches its maximum value at the critical point of the quantum phase transition where the system is most chaotic. Noticeable change in the Shannon information entropy near or at the critical point of the quantum phase transition is also observed. In addition, the quantum phase transition may be observed not only in the ground state mean photon number and the ground state atomic inversion as shown previously, but also in fluctuations of these two quantities in the ground state, especially in the atomic inversion fluctuation.
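The entanglement measure mentioned above, the von Neumann entropy of a reduced density matrix, can be sketched for a toy two-qubit state via the Schmidt (singular value) decomposition. The states below are illustrative, not the Dicke-model ground state:

```python
import numpy as np

def entanglement_entropy(psi, dim_a, dim_b):
    """Von Neumann entropy (base 2, so normalized to [0, 1] for a qubit)
    of the reduced density matrix of subsystem A for a pure bipartite
    state psi, computed from the Schmidt values."""
    m = np.asarray(psi).reshape(dim_a, dim_b)
    s = np.linalg.svd(m, compute_uv=False)
    lam = s ** 2                 # eigenvalues of the reduced density matrix
    lam = lam[lam > 1e-15]       # drop zero Schmidt weights
    return -np.sum(lam * np.log2(lam))

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)
product = np.array([1.0, 0.0, 0.0, 0.0])            # |00>, unentangled
```

The maximally entangled Bell state gives entropy 1, the product state 0, mirroring how the Dicke-model measure peaks at the critical coupling.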
Rational Points on Curves of Genus 2: Experiments and Speculations
Stoll, Michael
2009-01-01
I will present results of computations providing statistics on rational points on (small) curves of genus 2 and use them to present several conjectures. Some of them are based on heuristic considerations, others are not.
Statistical properties of the nuclear shell-model Hamiltonian
International Nuclear Information System (INIS)
Dias, H.; Hussein, M.S.; Oliveira, N.A. de
1986-01-01
The statistical properties of a realistic nuclear shell-model Hamiltonian are investigated in sd-shell nuclei. The probability distribution of the basis-vector amplitudes is calculated and compared with the Porter-Thomas distribution. The relevance of the results to the calculation of the giant resonance mixing parameter is pointed out. (Author) [pt]
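The Porter-Thomas comparison can be illustrated with a toy ensemble: squared Gaussian amplitudes follow a chi-square distribution with one degree of freedom, which after normalization has mean 1 and variance 2. This is a generic sketch, not the shell-model calculation:

```python
import numpy as np

# Porter-Thomas sketch: intensities y = x^2 of Gaussian amplitudes x
# are chi-square distributed with one degree of freedom.
rng = np.random.default_rng(3)
amps = rng.normal(0.0, 1.0, size=200_000)  # toy "basis-vector amplitudes"
intensities = amps ** 2
mean_y = intensities.mean()  # should be close to 1
var_y = intensities.var()    # should be close to 2
```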
Incentivizing with Bonus in a College Statistics Course
Ingalls, Victoria
2018-01-01
Many studies have argued the negative effects of external rewards on internal motivation while others assert that external motivation does not necessarily undermine intrinsic motivation. At a private university, students were given the option to earn bonus points for achieving mastery in the online homework systems associated with Statistics and…
Statistical mechanics of lattice Boson field theory
International Nuclear Information System (INIS)
1976-01-01
A lattice approximation to Euclidean, boson quantum field theory is expressed in terms of the thermodynamic properties of a classical statistical mechanical system near its critical point in a sufficiently general way to permit the inclusion of an anomalous dimension of the vacuum. Using the thermodynamic properties of the Ising model, one can begin to construct nontrivial (containing scattering) field theories in 2, 3 and 4 dimensions. It is argued that, depending on the choice of the bare coupling constant, there are three types of behavior to be expected: the perturbation theory region, the renormalization group fixed point region, and the Ising model region
Statistical mechanics of lattice boson field theory
International Nuclear Information System (INIS)
Baker, G.A. Jr.
1977-01-01
A lattice approximation to Euclidean, boson quantum field theory is expressed in terms of the thermodynamic properties of a classical statistical mechanical system near its critical point in a sufficiently general way to permit the inclusion of an anomalous dimension of the vacuum. Using the thermodynamic properties of the Ising model, one can begin to construct nontrivial (containing scattering) field theories in 2, 3, and 4 dimensions. It is argued that, depending on the choice of the bare coupling constant, there are three types of behavior to be expected: the perturbation theory region, the renormalization group fixed point region, and the Ising model region. 24 references
New England observed and predicted median July stream/river temperature points
U.S. Environmental Protection Agency — The shapefile contains points with associated observed and predicted median July stream/river temperatures in New England based on a spatial statistical network...
New England observed and predicted median August stream/river temperature points
U.S. Environmental Protection Agency — The shapefile contains points with associated observed and predicted median August stream/river temperatures in New England based on a spatial statistical network...
Creatine Kinase Activity in Patients with Diabetes Mellitus Type I and Type II
Directory of Open Access Journals (Sweden)
Adlija Jevrić-Čaušević
2006-08-01
Diabetes mellitus can be looked upon as an array of diseases, all of which exhibit common symptoms. While the pathogenesis of IDDM (insulin-dependent diabetes mellitus) is well understood, the same is not true for diabetes mellitus type II. In the latter case, the relative contribution of the two factors (insulin resistance or decreased insulin secretion) varies individually, insulin resistance being highly increased in peripheral tissues that are strictly dependent on insulin for glucose uptake. Moreover, in patients with diabetes mellitus type II, an imbalance in the regulation of glucose metabolism as well as lipid metabolism has been noted in skeletal muscles. It is natural to assume that in this type of diabetes these changes are reflected in the total activity of the enzyme creatine kinase. This experimental work was performed on a group of 80 regular patients of Sarajevo General Hospital. Forty of those patients were classified as patients with diabetes type I and forty as patients with diabetes type II. Each group of patients was carefully chosen and consisted of equal numbers of males and females. The same was applied for adequate controls. The concentration of glucose was determined for each patient with the GOD method, while the activity of creatine kinase was determined with a CK-NAC activated kit. Statistical analysis of the results was performed with SPSS software for Windows. The obtained results point to pronounced differences in enzyme activity between the two populations examined. Changes in enzyme activity are more pronounced in patients with diabetes type II. A positive correlation between the concentration of glucose and the serum activity of the enzyme is seen in both categories of diabetic patients, which is not the case for the patients in the control group. At the same time, a correlation between age and type of diabetes does exist. This is not followed at the level of enzyme activity or concentration of glucose.
Inclusive neutral current ep cross sections with HERA II and two-dimensional unfolding
International Nuclear Information System (INIS)
Fischer, David-Johannes
2011-06-01
In this thesis, the inclusive neutral current ep → eX cross section at small e- scattering angles has been measured using the electromagnetic SpaCal calorimeter in the backward region of the H1 detector. This calorimeter, constructed of lead and scintillating fiber, was designed to measure the scattered electron with high resolution in both energy and polar angle. The analysis comprises the kinematic range 0.06 < y_e < 0.6 for the inelasticity and 14 GeV^2 < Q_e^2 < 110 GeV^2 for the squared momentum exchange. The data sample consists of positron-proton collisions of the years 2006 and 2007, adding up to an integrated luminosity of approximately 141 pb^-1. Due to the high luminosity of the HERA II run phase, the accuracy is no longer limited by the data statistics but rather by the detector resolution and systematics. Migration becomes increasingly influential, an effect which leads to distortions of the measured distribution as well as to statistical correlations between adjacent data points. At this stage, the correction of detector effects as well as the precise determination of statistical correlations become important features of a rigorous error treatment. In this analysis two-dimensional unfolding has been applied. This is a novel approach to H1 inclusive cross section measurements, which are usually based on a bin-by-bin efficiency correction (bin-by-bin method). With unfolding, the detector effect on the measurements is modelled by a linear transformation ("response matrix") which is used to correct any distortion of the data. The inclusion of off-diagonal elements results in a coherent assessment of the statistical uncertainties and correlations, and the model dependence can be optimally evaluated. In this context, the bin-by-bin method can be viewed as an approximation based on a diagonal response matrix. In a scenario of limited detector resolution, the unfolded data distributions will typically exhibit strong fluctuations and correlations between the data points. This issue can be addressed by smoothing.
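The contrast between full unfolding and the bin-by-bin approximation can be sketched with a toy response matrix; the spectrum and migration values below are made up, not H1 quantities:

```python
import numpy as np

# Toy unfolding sketch: a response matrix with off-diagonal migration
# distorts a "true" spectrum. Unfolding inverts the full matrix, while
# the bin-by-bin method keeps only the diagonal efficiency correction.
true = np.array([100.0, 80.0, 60.0, 40.0])
response = np.array([
    [0.8, 0.1, 0.0, 0.0],
    [0.2, 0.8, 0.1, 0.0],
    [0.0, 0.1, 0.8, 0.1],
    [0.0, 0.0, 0.1, 0.9],
])  # column j: where events from true bin j end up after detector effects
measured = response @ true
unfolded = np.linalg.solve(response, measured)   # recovers the true spectrum
bin_by_bin = measured / np.diag(response)        # diagonal-only approximation
```

With noise-free input the full inversion reproduces the true spectrum exactly, while the bin-by-bin correction is biased wherever migration is sizeable; with real, noisy data the inversion additionally propagates the statistical correlations, which is the point made in the thesis.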
Directory of Open Access Journals (Sweden)
Monika Tyagi
2014-01-01
Complexes of Mn(II), Co(II), Ni(II), Pd(II) and Pt(II) were synthesized with the macrocyclic ligand 2,3,9,10-tetraketo-1,4,8,11-tetraazacyclotetradecane. The ligand was prepared by the [2 + 2] condensation of diethyl oxalate and 1,3-diaminopropane and characterized by elemental analysis, mass, IR and 1H NMR spectral studies. All the complexes were characterized by elemental analysis, molar conductance, magnetic susceptibility measurements, and IR, electronic and electron paramagnetic resonance spectral studies. The molar conductance measurements of the Mn(II), Co(II) and Ni(II) complexes in DMF correspond to a non-electrolyte nature, whereas the Pd(II) and Pt(II) complexes are 1:2 electrolytes. On the basis of the spectral studies, an octahedral geometry has been assigned to the Mn(II), Co(II) and Ni(II) complexes, whereas a square planar geometry is assigned to Pd(II) and Pt(II). In vitro, the ligand and its metal complexes were evaluated against plant pathogenic fungi (Fusarium odum, Aspergillus niger and Rhizoctonia bataticola), and some compounds were found to be more active than commercially available fungicides such as chlorothalonil.
Statistic methods for searching inundated radioactive entities
International Nuclear Information System (INIS)
Dubasov, Yu.V.; Krivokhatskij, A.S.; Khramov, N.N.
1993-01-01
The problem of searching for flooded radioactive objects in a given area is considered. Various models for plotting the search route are discussed. It is shown that a spiral route through random points from the centre of the examined area is the most efficient one. The conclusion is made that, when searching for flooded radioactive objects, it is advisable to use multidimensional statistical methods of classification.
Directory of Open Access Journals (Sweden)
Korobeynikov S.M.
2017-08-01
In this paper, we consider problems related to measuring and analyzing the characteristics of partial discharges, which are the main instrument for diagnosing oil-filled high-voltage electrical equipment. Experiments on the recording of partial discharges in transformer oil have been carried out in the "point-plane" electrode system at alternating current. The instantaneous voltage and the apparent charge were measured as functions of the root-mean-square voltage and the phase angle of the partial discharges. This paper aims at a statistical analysis of the obtained experimental results, in particular the construction of a parametric probabilistic model of the dependence of the partial discharge inception voltage distribution on the value of the root-mean-square voltage. Some discharges differ from the usual discharges that occur in liquid dielectric materials in a sharply inhomogeneous electrode system; it has been suggested that these are discharges in gas bubbles that appear when partial discharges in the liquid emerge. This assumption is confirmed by the fact that the number of such discharges increases with increasing root-mean-square voltage, and it is the main novelty of this paper. This corresponds to the nature of the occurrence of such discharges. After rejecting the observations corresponding to discharges in gas bubbles, a parametric probabilistic model was constructed. The obtained model makes it possible to determine the probability of partial discharge occurrence in a liquid at a given value of the instantaneous voltage, depending on the root-mean-square voltage.
Gaber, Mohamed; El-Ghamry, Hoda; Atlam, Faten; Fathalla, Shaimaa
2015-02-01
Ni(II), Pd(II) and Pt(II) complexes of 5-mercapto-1,2,4-triazole-3-imine-2‧-hydroxynaphthaline have been isolated and characterized by elemental analysis, IR, 1H NMR, EI-mass, UV-vis, molar conductance, magnetic moment measurements and thermogravimetric analysis. The molar conductance values indicated that the complexes are non-electrolytes. The magnetic moment values displayed diamagnetic behavior for the Pd(II) and Pt(II) complexes and a tetrahedral geometry for the Ni(II) complex. From the bioinorganic point of view, the interaction of the ligand and its metal complexes with CT-DNA was investigated using absorption and viscosity titration techniques. The Schiff-base ligand and its metal complexes have also been screened for their antimicrobial and antitumor activities. In addition, a theoretical investigation of the molecular and electronic structures of the studied ligand and its metal complexes has been carried out. Molecular orbital calculations were performed using DFT (density functional theory) at the B3LYP level with the standard 6-31G(d,p) and LANL2DZ basis sets to obtain results consistent with the experimental values. The calculations were performed to obtain the optimized molecular geometry, charge density distribution, extent of distortion from regular geometry, the highest occupied molecular orbital (HOMO), the lowest unoccupied molecular orbital (LUMO), Mulliken atomic charges, reactivity index (ΔE), dipole moment (D), global hardness (η), softness (σ), electrophilicity index (ω), chemical potential and Mulliken electronegativity (χ).
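The global descriptors listed at the end of the abstract follow from the frontier-orbital energies via the standard Koopmans-type working formulas. A small sketch with hypothetical HOMO/LUMO energies (not values from the paper):

```python
def reactivity_descriptors(e_homo, e_lumo):
    """Global reactivity indices (eV) from frontier-orbital energies."""
    gap = e_lumo - e_homo            # reactivity index, Delta E
    mu = (e_homo + e_lumo) / 2.0     # chemical potential
    return {
        "gap": gap,
        "chemical_potential": mu,
        "electronegativity": -mu,            # Mulliken electronegativity, chi
        "hardness": gap / 2.0,               # global hardness, eta
        "softness": 2.0 / gap,               # softness, sigma = 1/eta
        "electrophilicity": mu ** 2 / gap,   # omega = mu^2 / (2*eta)
    }

d = reactivity_descriptors(-5.8, -2.1)  # illustrative energies in eV
```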
Advanced statistics: linear regression, part II: multiple linear regression.
Marill, Keith A
2004-01-01
The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
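As a concrete illustration of the technique described above (not the article's own example), a multiple regression with two simulated predictors can be fit by solving the normal equations directly; the variable names and data are hypothetical:

```python
import random

# Simulated clinical-style data: outcome depends on two predictors
random.seed(0)
n = 200
x1 = [random.gauss(50, 10) for _ in range(n)]   # e.g. age (hypothetical)
x2 = [random.gauss(120, 15) for _ in range(n)]  # e.g. systolic BP (hypothetical)
y = [2.0 + 0.5 * a + 0.1 * b + random.gauss(0, 1) for a, b in zip(x1, x2)]

# Normal equations (X'X) beta = X'y for the design matrix [1, x1, x2]
X = [[1.0, a, b] for a, b in zip(x1, x2)]
XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]

def solve(A, v):
    """Gaussian elimination with partial pivoting."""
    A = [row[:] + [v[i]] for i, row in enumerate(A)]
    m = len(A)
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m + 1):
                A[r][c] -= f * A[col][c]
    beta = [0.0] * m
    for r in range(m - 1, -1, -1):
        beta[r] = (A[r][m] - sum(A[r][c] * beta[c] for c in range(r + 1, m))) / A[r][r]
    return beta

beta = solve(XtX, Xty)  # [intercept, coefficient of x1, coefficient of x2]
```

With enough data and modest noise, the recovered coefficients land close to the true values used in the simulation.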
On tests of randomness for spatial point patterns
International Nuclear Information System (INIS)
Doguwa, S.I.
1990-11-01
New tests of randomness for spatial point patterns are introduced. These test statistics are then compared in a power study with the existing alternatives. The results of the power study suggest that one of the proposed tests is extremely powerful against both aggregated and regular alternatives. (author). 9 refs, 7 figs, 3 tabs
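The abstract does not define the proposed statistics; as background, a basic Monte Carlo test of complete spatial randomness (CSR) against aggregated or regular alternatives can be built on the mean nearest-neighbour distance. A sketch on the unit square with toy data:

```python
import math
import random

def mean_nn_distance(pts):
    """Mean nearest-neighbour distance of a point pattern."""
    return sum(min(math.dist(p, q) for j, q in enumerate(pts) if j != i)
               for i, p in enumerate(pts)) / len(pts)

def csr_test(pts, sims=199, seed=1):
    """Two-sided Monte Carlo p-value for CSR on the unit square:
    small mean NN distance suggests aggregation, large suggests regularity."""
    rng = random.Random(seed)
    n, obs = len(pts), mean_nn_distance(pts)
    null = [mean_nn_distance([(rng.random(), rng.random()) for _ in range(n)])
            for _ in range(sims)]
    lo = sum(d <= obs for d in null)
    hi = sum(d >= obs for d in null)
    return min(1.0, 2.0 * (min(lo, hi) + 1) / (sims + 1))

# A clustered toy pattern (points piled into one corner) should be rejected
rng = random.Random(4)
clustered = [(rng.random() * 0.15, rng.random() * 0.15) for _ in range(30)]
```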
Quenching points of dimeric single-molecule magnets: Exchange interaction effects
International Nuclear Information System (INIS)
Florez, J.M.; Nunez, Alvaro S.; Vargas, P.
2010-01-01
We study the quenched energy-splitting (ΔE) of a single-molecule magnet (SMM) composed of two exchange-coupled giant spins. An assessment of two nontrivial characteristics of this quenching is presented: (i) the quenching points of a strongly exchange-coupled dimer differ from those of the corresponding giant-spin-modeled SMM, and the difference can be well described using the Solari-Kochetov extra phase; (ii) the magnetic-field values at the quenching points depend on the exchange coupling as ΔE passes from monomeric to dimeric behavior. The physics behind these exchange-modified points, and their relation to the ΔE-oscillations obtained experimentally by the Landau-Zener method and to the diabolical plane of a SMM, is discussed.
Luminosity upgrade possibilities for the PEP-II B-Factory
Sullivan, M
2003-01-01
PEP-II is an asymmetric e+e− collider being constructed in the SLAC PEP tunnel by SLAC, LBNL, and LLNL. The two beams have energies of 3.1 GeV and 9.0 GeV and are made to collide at a single interaction point. PEP-II has a 2200 m circumference. The nominal parameters for PEP-II are listed in Table 1. The High Energy Ring (HER) of PEP-II started commissioning in 1997. The Low Energy Ring (LER) will be commissioned in the summer of 1998. The BaBar detector is to be installed starting January 1999. Studies for increasing the luminosity in PEP-II beyond the design are underway. A brief summary of the possibilities is presented here. Improvements to the integrated luminosity will be implemented gradually. Major luminosity improvements will likely come in two phased upgrades. Several of these possibilities are summarized in Table 1.
Big Data as a Source for Official Statistics
Directory of Open Access Journals (Sweden)
Daas Piet J.H.
2015-06-01
Full Text Available More and more data are being produced by an increasing number of electronic devices physically surrounding us and on the internet. The large amount of data and the high frequency at which they are produced have resulted in the introduction of the term ‘Big Data’. Because these data reflect many different aspects of our daily lives and because of their abundance and availability, Big Data sources are very interesting from an official statistics point of view. This article discusses the exploration of both opportunities and challenges for official statistics associated with the application of Big Data. Experiences gained with analyses of large amounts of Dutch traffic loop detection records and Dutch social media messages are described to illustrate the topics characteristic of the statistical analysis and use of Big Data.
Angiotensin II increases CTGF expression via MAPKs/TGF-β1/TRAF6 pathway in atrial fibroblasts
Energy Technology Data Exchange (ETDEWEB)
Gu, Jun [Department of Cardiology, Shanghai Chest Hospital, Shanghai Jiaotong University School of medicine, Shanghai (China); Liu, Xu, E-mail: xkliuxu@yahoo.cn [Department of Cardiology, Shanghai Chest Hospital, Shanghai Jiaotong University School of medicine, Shanghai (China); Wang, Quan-xing, E-mail: shmywqx@126.com [National Key Laboratory of Medical Immunology, Second Military Medical University, Shanghai (China); Tan, Hong-wei [Department of Cardiology, Shanghai Chest Hospital, Shanghai Jiaotong University School of medicine, Shanghai (China); Guo, Meng [National Key Laboratory of Medical Immunology, Second Military Medical University, Shanghai (China); Jiang, Wei-feng; Zhou, Li [Department of Cardiology, Shanghai Chest Hospital, Shanghai Jiaotong University School of medicine, Shanghai (China)
2012-10-01
The activation of the transforming growth factor-β1 (TGF-β1)/Smad signaling pathway and increased expression of connective tissue growth factor (CTGF) induced by angiotensin II (AngII) have been proposed as a mechanism for atrial fibrosis. However, whether TGF-β1/non-Smad signaling pathways are involved in AngII-induced fibrogenetic factor expression remained unknown. Recently, tumor necrosis factor receptor associated factor 6 (TRAF6)/TGF-β-associated kinase 1 (TAK1) has been shown to be crucial for the activation of TGF-β1/non-Smad signaling pathways. In the present study, we explored the role of the TGF-β1/TRAF6 pathway in AngII-induced CTGF expression in cultured adult atrial fibroblasts. AngII (1 μM) provoked the activation of p38 mitogen-activated protein kinase (p38 MAPK), extracellular signal-regulated kinase 1/2 (ERK1/2) and c-Jun NH(2)-terminal kinase (JNK). AngII (1 μM) also promoted TGF-β1, TRAF6 and CTGF expression and TAK1 phosphorylation, which were suppressed by an angiotensin type I receptor antagonist (losartan) as well as a p38 MAPK inhibitor (SB202190), an ERK1/2 inhibitor (PD98059) and a JNK inhibitor (SP600125). Meanwhile, both TGF-β1 antibody and TRAF6 siRNA decreased the stimulatory effect of AngII on TRAF6, CTGF expression and TAK1 phosphorylation, and also attenuated AngII-induced atrial fibroblast proliferation. In summary, the MAPKs/TGF-β1/TRAF6 pathway is an important signaling pathway in AngII-induced CTGF expression, and inhibition of TRAF6 may therefore represent a new target for reversing AngII-induced atrial fibrosis. -- Highlights: • MAPKs/TGF-β1/TRAF6 participates in AngII-induced CTGF expression in atrial fibroblasts. • TGF-β1/TRAF6 participates in AngII-induced atrial fibroblast proliferation. • TRAF6 may represent a new target for reversing AngII-induced atrial fibrosis.
International Nuclear Information System (INIS)
1983-01-01
Volume II A contains appendices for: stacked profiles; geologic histograms; geochemical histograms; speed and altitude histograms; geologic statistical tables; geochemical statistical tables; magnetic and ancillary profiles; and test line data
Zero-point length from string fluctuations
International Nuclear Information System (INIS)
Fontanini, Michele; Spallucci, Euro; Padmanabhan, T.
2006-01-01
One of the leading candidates for quantum gravity, viz. string theory, has the following features incorporated in it: (i) the full spacetime is higher-dimensional, with (possibly) compact extra dimensions; (ii) there is a natural minimal length below which the concept of continuum spacetime needs to be modified by some deeper concept. On the other hand, the existence of a minimal length (zero-point length) in four-dimensional spacetime, with obvious implications as a UV regulator, has often been conjectured as a natural aftermath of any correct quantum theory of gravity. We show that one can incorporate these apparently unrelated pieces of information (zero-point length, extra dimensions, string T-duality) in a consistent framework. This is done in terms of a modified Kaluza-Klein theory that interpolates between (high-energy) string theory and (low-energy) quantum field theory. In this model, the zero-point length in four dimensions is a 'virtual memory' of the length scale of the compact extra dimensions. Such a scale turns out to be determined by T-duality inherited from the underlying fundamental string theory. From a low-energy perspective, short-distance infinities are cut off by a minimal length which is proportional to the square root of the string slope, i.e., √α′. Thus, we bridge the gap between the string-theory domain and the low-energy arena of point-particle quantum field theory.
Perencanaan Kegiatan Maintenance Dengan Metode Reliability Centered Maintenance (RCM) II
Rachmad Hidayat; Nachnul Ansori; Ali Imron
2010-01-01
Maintenance Activity Planning by the Reliability Centered Maintenance (RCM) II Method. This research discusses maintenance activity using the RCM II method to determine failure-function risk in a screw compressor. A calculation is given for the optimum maintenance time interval, considering both the maintenance cost and the reparation cost. From the research results with RPN, it is pointed out that the critical components that need main priority in maintenance on the screw compressor are bust logistic on timeworn...
Cross-cultural examination of measurement invariance of the Beck Depression Inventory-II.
Dere, Jessica; Watters, Carolyn A; Yu, Stephanie Chee-Min; Bagby, R Michael; Ryder, Andrew G; Harkness, Kate L
2015-03-01
Given substantial rates of major depressive disorder among college and university students, as well as the growing cultural diversity on many campuses, establishing the cross-cultural validity of relevant assessment tools is important. In the current investigation, we examined the Beck Depression Inventory-Second Edition (BDI-II; Beck, Steer, & Brown, 1996) among Chinese-heritage (n = 933) and European-heritage (n = 933) undergraduates in North America. The investigation integrated 3 distinct lines of inquiry: (a) the literature on cultural variation in depressive symptom reporting between people of Chinese and Western heritage; (b) recent developments regarding the factor structure of the BDI-II; and (c) the application of advanced statistical techniques to the issue of cross-cultural measurement invariance. A bifactor model was found to represent the optimal factor structure of the BDI-II. Multigroup confirmatory factor analysis showed that the BDI-II had strong measurement invariance across both culture and gender. In group comparisons with latent and observed variables, Chinese-heritage students scored higher than European-heritage students on cognitive symptoms of depression. This finding deviates from the commonly held view that those of Chinese heritage somatize depression. These findings hold implications for the study and use of the BDI-II, highlight the value of advanced statistical techniques such as multigroup confirmatory factor analysis, and offer methodological lessons for cross-cultural psychopathology research more broadly.
Testing for changes using permutations of U-statistics
Czech Academy of Sciences Publication Activity Database
Horvath, L.; Hušková, Marie
2005-01-01
Roč. 2005, č. 128 (2005), s. 351-371 ISSN 0378-3758 R&D Projects: GA ČR GA201/00/0769 Institutional research plan: CEZ:AV0Z10750506 Keywords : U-statistics * permutations * change-point * weighted approximation * Brownian bridge Subject RIV: BD - Theory of Information Impact factor: 0.481, year: 2005
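The record above gives only keywords; as a sketch of the general idea, a permutation test for a single change in mean can be built on a CUSUM-type statistic (a simpler analogue of the paper's U-statistics, not its exact construction):

```python
import random

def cusum_stat(x):
    """Maximum absolute cumulative sum of deviations from the overall mean."""
    mean = sum(x) / len(x)
    s, best = 0.0, 0.0
    for xi in x[:-1]:
        s += xi - mean
        best = max(best, abs(s))
    return best

def perm_changepoint_test(x, perms=299, seed=0):
    """Permutation p-value for 'no change in mean' against a single change:
    shuffling destroys any change-point structure, giving the null law."""
    rng = random.Random(seed)
    obs = cusum_stat(x)
    y = list(x)
    ge = 0
    for _ in range(perms):
        rng.shuffle(y)
        if cusum_stat(y) >= obs:
            ge += 1
    return (ge + 1) / (perms + 1)

# Toy series: mean shifts from 0 to 2 halfway through
rng = random.Random(7)
data = [rng.gauss(0, 1) for _ in range(40)] + [rng.gauss(2, 1) for _ in range(40)]
```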
Energy Technology Data Exchange (ETDEWEB)
Li Ruijun; He Qun; Hu Zheng; Zhang Shengrui [Department of Chemistry, Lanzhou University, Lanzhou 730000 (China); Key Laboratory of Nonferrous Metal Chemistry and Resources Utilization of Gansu Province, Lanzhou 730000 (China); Zhang Lijun [Faculty of Science and Engineer, Curtin University, Perth, WA 6845 (Australia); Chang Xijun, E-mail: lirj2010@lzu.edu.cn [Department of Chemistry, Lanzhou University, Lanzhou 730000 (China); Key Laboratory of Nonferrous Metal Chemistry and Resources Utilization of Gansu Province, Lanzhou 730000 (China)
2012-02-03
Graphical abstract: Murexide-functionalized halloysite nanotubes have been developed to separate and concentrate trace Pd(II) from aqueous samples. Parameters that affected the sorption and elution efficiency were studied in column mode, and the new adsorbent presented high selectivity and adsorption capacity for the solid-phase extraction of trace Pd(II). Highlights: • Murexide-modified halloysite nanotubes as adsorbent are reported for the first time. • The adsorbent has a unique selectivity for Pd(II) at pH 1.0. • The adsorbent has a high adsorption capacity for Pd(II). • The precision and accuracy of the method are satisfactory. - Abstract: The high efficiency of murexide-modified halloysite nanotubes as a new solid-phase extraction adsorbent for the preconcentration and separation of Pd(II) in solution samples is reported for the first time. The new adsorbent was characterized by Fourier-transform infrared spectra, X-ray diffraction, scanning electron microscopy, transmission electron microscopy and N₂ adsorption-desorption isotherms. Effective preconcentration conditions for the analyte were examined using column procedures prior to detection by inductively coupled plasma-optical emission spectrometry (ICP-OES). The effects of pH, the amount of adsorbent, the sample flow rate and volume, the elution conditions and the interfering ions were optimized in detail. Under the optimized conditions, Pd(II) could be retained on the column at pH 1.0 and quantitatively eluted by 2.5 mL of 0.01 mol L⁻¹ HCl-3% thiourea solution at a flow rate of 2.0 mL min⁻¹. The analysis time was 5 min. An enrichment factor of 120 was accomplished. Common interfering ions did not interfere with either separation or determination. The maximum adsorption capacity of the adsorbent under optimum conditions was found to be 42.86 mg g⁻¹ for Pd(II). The detection limit (3σ) of
Managing Macroeconomic Risks by Using Statistical Simulation
Directory of Open Access Journals (Sweden)
Merkaš Zvonko
2017-06-01
Full Text Available The paper analyzes the possibilities of using statistical simulation in the measurement of macroeconomic risks. At the level of the whole world, macroeconomic risks have increased significantly due to excessive imbalances. Using analytical statistical methods and Monte Carlo simulation, the authors interpret the collected data sets, compare and analyze them in order to mitigate potential risks. The empirical part of the study is a qualitative case study that uses statistical methods and Monte Carlo simulation for managing macroeconomic risks, which is the central theme of this work. Application of statistical simulation is necessary because the system for which the model must be specified is too complex for an analytical approach. The objective of the paper is to point out the need to consider significant macroeconomic risks, particularly the number of unemployed in the society, the movement of gross domestic product and the country's credit rating. Data previously processed by statistical methods are used, through statistical simulation, to analyze the existing model of managing macroeconomic risks and to suggest elements for the development of a management model that keeps the probability and consequences of recent macroeconomic risks as low as possible. The stochastic characteristics of the system, defined by random variables as input values specified by probability distributions, require a large number of iterations over which the model output is recorded and the mathematical expectations are calculated. The paper expounds the basic procedures and techniques of discrete statistical simulation applied to systems that can be characterized by a number of events representing sets of circumstances that cause a change in the system's state, and the possibility of its application in the assessment of macroeconomic risks. The method has no
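The kind of Monte Carlo exercise described above can be sketched in a few lines: draw yearly growth from an assumed distribution, repeat many times, and read risk measures off the simulated distribution. All parameters below are illustrative, not calibrated to the paper's data:

```python
import random

def simulate_gdp_paths(n_sims=20000, years=5, mu=0.02, sigma=0.025, seed=42):
    """Monte Carlo sketch: distribution of cumulative GDP growth and the
    probability of at least one recession year (negative growth)."""
    rng = random.Random(seed)
    recession, cumgrowth = 0, []
    for _ in range(n_sims):
        level, hit = 1.0, False
        for _ in range(years):
            g = rng.gauss(mu, sigma)  # assumed yearly growth distribution
            level *= (1.0 + g)
            hit = hit or g < 0.0
        recession += hit
        cumgrowth.append(level - 1.0)
    cumgrowth.sort()
    return {
        "p_recession": recession / n_sims,
        "median_growth": cumgrowth[n_sims // 2],
        "var_5pct": cumgrowth[int(0.05 * n_sims)],  # 5% worst outcome
    }

result = simulate_gdp_paths()
```

The same loop extends directly to unemployment counts or credit-rating transitions by swapping in the corresponding input distributions.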
Conference: Statistical Physics and Biological Information
International Nuclear Information System (INIS)
Gross, David J.; Hwa, Terence
2001-01-01
In the spring of 2001, the Institute for Theoretical Physics ran a 6 month scientific program on Statistical Physics and Biological Information. This program was organized by Walter Fitch (UC Irvine), Terence Hwa (UC San Diego), Luca Peliti (University Federico II, Naples), Gary Stormo (Washington University School of Medicine) and Chao Tang (NEC). Overall scientific supervision was provided by David Gross, Director, ITP. The ITP has an online conference/program proceeding which consists of audio and transparencies of almost all of the talks held during this program. Over 100 talks are available on the site at http://online.kitp.ucsb.edu/online/infobio01/
Directory of Open Access Journals (Sweden)
Agans Robert P.
2014-06-01
Full Text Available To receive federal homeless funds, communities are required to produce statistically reliable, unduplicated counts or estimates of homeless persons in sheltered and unsheltered locations during a one-night period (within the last ten days of January), called a point-in-time (PIT) count. In Los Angeles, a general population telephone survey was implemented to estimate the number of unsheltered homeless adults who are hidden from view during the PIT count. Two estimation approaches were investigated: (i) the number of homeless persons identified as living on private property, which employed a conventional household weight for the estimated total (Horvitz-Thompson approach); and (ii) the number of homeless persons identified as living on a neighbor's property, which employed an additional adjustment derived from the size of the neighborhood network to estimate the total (multiplicity-based approach). This article compares the results of these two methods and discusses the implications therein.
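The two estimators above can be sketched side by side. The Horvitz-Thompson total weights each report by the inverse of the household's selection probability; the multiplicity estimator additionally divides each neighbor-reported sighting by the number of households that could have reported it. The toy data are hypothetical, not from the Los Angeles survey:

```python
def horvitz_thompson_total(counts, weights):
    """HT estimate: respondents report homeless persons on their *own*
    property; weight = 1 / (household selection probability)."""
    return sum(c * w for c, w in zip(counts, weights))

def multiplicity_total(counts, weights, network_sizes):
    """Multiplicity estimate: respondents report persons on a *neighbor's*
    property; dividing by the network size (number of households that could
    report the same sighting) corrects the duplication."""
    return sum(wi * ci / ki for ci, wi, ki in zip(counts, weights, network_sizes))

# Hypothetical respondents: own-property counts, survey weights,
# neighbor-property counts, and neighborhood-network sizes
own = [0, 1, 0, 2]
w = [500.0, 500.0, 800.0, 800.0]
nbr = [1, 0, 3, 0]
k = [5, 4, 8, 6]
```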
[Low grade renal trauma (Part II): diagnostic validity of ultrasonography].
Grill, R; Báca, V; Otcenásek, M; Zátura, F
2010-04-01
The aim of the study was to verify whether ultrasonography can be considered a reliable method for the diagnosis of low-grade renal trauma. The group investigated included patients with grade I or grade II blunt renal trauma, as classified by the AAST grading system, in whom ultrasonography alone or in conjunction with computed tomography was used as the primary diagnostic method. B-mode ultrasound with a transabdominal probe working at frequencies of 2.5 to 5.0 MHz was used. Every finding of post-traumatic changes in the renal tissues, i.e., post-contusion hypotonic infiltration of the renal parenchyma or subcapsular haematoma, was included. The results were statistically evaluated by the chi-square test with the level of significance set at 5%, using Epi Info Version 6 CZ software. The group comprised 112 patients (43 women, 69 men) aged between 17 and 82 years (average, 38 years). It was possible to diagnose grade I or grade II renal injury by ultrasonography in only 60 (54%) of them. The statistical significance of ultrasonography as the only imaging method for the diagnosis of low-grade renal injury was not confirmed (p=0.543). Low-grade renal trauma is a problem from the diagnostic point of view. It usually does not require revision surgery and, if found during surgery for a more serious injury of another organ, it usually does not receive attention. The macroscopic presentation of grade I and grade II renal injury is therefore poorly understood, nor are the microscopic findings known, because traumatised kidneys are not usually removed during revision surgery, and their injuries are not recorded at autopsy of patients who died of multiple trauma either. The results of this study demonstrated that the validity of ultrasonography for the diagnosis of low-grade renal injury is not significant, because this examination can reveal only some of the renal injuries, such as perirenal haematoma. An injury to the renal parenchyma is also indicated by
Removal of Cu(II) and Zn(II) from water with natural adsorbents from cassava agroindustry residues
Directory of Open Access Journals (Sweden)
Daniel Schwantes
2015-07-01
Full Text Available The current study employs solid residues from the cassava (Manihot esculenta Crantz) processing industry (bark, bagasse and bark + bagasse) as natural adsorbents for the removal of the metal ions Cu(II) and Zn(II) from contaminated water. The first stage comprised surface morphological characterization (SEM), determination of functional groups (IR), point of zero charge, and the composition of minerals naturally present in the biomass. Further, tests were carried out to evaluate the sorption process through kinetic, equilibrium and thermodynamic studies. The adsorbents showed a surface with favorable adsorption characteristics, with adsorption sites possibly derived from lignin, cellulose and hemicellulose. The dynamic equilibrium time for adsorption was 60 min. Results followed the pseudo-second-order, Langmuir and Dubinin-Radushkevich models, suggesting chemisorption in a monolayer. The thermodynamic parameters suggested that the biosorption of Cu and Zn was endothermic, spontaneous or independent according to conditions. Results showed that the studied materials are potential biosorbents for the decontamination of water contaminated by Cu(II) and Zn(II). The above practice thus complements the final stages of the cassava production chain with a new disposal route for solid residues from cassava agroindustry activity.
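The pseudo-second-order model mentioned above is usually fitted through its linearized form, t/q_t = 1/(k2·qe²) + t/qe, so an ordinary least-squares line on (t, t/q_t) yields the equilibrium capacity qe and rate constant k2. A sketch with illustrative kinetic data (not the paper's measurements):

```python
# Hypothetical kinetic data: contact time (min) and uptake q_t (mg/g)
t = [5, 10, 20, 30, 45, 60, 90, 120]
qt = [2.1, 3.4, 4.9, 5.8, 6.5, 6.9, 7.3, 7.5]

# Linearization: t/q_t = 1/(k2*qe^2) + t/qe  -> slope = 1/qe
x = t
y = [ti / qi for ti, qi in zip(t, qt)]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
intercept = my - slope * mx

qe = 1.0 / slope              # equilibrium adsorption capacity (mg/g)
k2 = slope ** 2 / intercept   # rate constant (g mg^-1 min^-1)
```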
Angiotensin II for the Treatment of Vasodilatory Shock.
Khanna, Ashish; English, Shane W; Wang, Xueyuan S; Ham, Kealy; Tumlin, James; Szerlip, Harold; Busse, Laurence W; Altaweel, Laith; Albertson, Timothy E; Mackey, Caleb; McCurdy, Michael T; Boldt, David W; Chock, Stefan; Young, Paul J; Krell, Kenneth; Wunderink, Richard G; Ostermann, Marlies; Murugan, Raghavan; Gong, Michelle N; Panwar, Rakshit; Hästbacka, Johanna; Favory, Raphael; Venkatesh, Balasubramanian; Thompson, B Taylor; Bellomo, Rinaldo; Jensen, Jeffrey; Kroll, Stew; Chawla, Lakhmir S; Tidmarsh, George F; Deane, Adam M
2017-08-03
Vasodilatory shock that does not respond to high-dose vasopressors is associated with high mortality. We investigated the effectiveness of angiotensin II for the treatment of patients with this condition. We randomly assigned patients with vasodilatory shock who were receiving more than 0.2 μg of norepinephrine per kilogram of body weight per minute or the equivalent dose of another vasopressor to receive infusions of either angiotensin II or placebo. The primary end point was a response with respect to mean arterial pressure at hour 3 after the start of infusion, with response defined as an increase from baseline of at least 10 mm Hg or an increase to at least 75 mm Hg, without an increase in the dose of background vasopressors. A total of 344 patients were assigned to one of the two regimens; 321 received a study intervention (163 received angiotensin II, and 158 received placebo) and were included in the analysis. The primary end point was reached by more patients in the angiotensin II group (114 of 163 patients, 69.9%) than in the placebo group (37 of 158 patients, 23.4%) (odds ratio, 7.95; 95% confidence interval [CI], 4.76 to 13.3; P<0.001). The mean improvement in the cardiovascular Sequential Organ Failure Assessment (SOFA) score (scores range from 0 to 4, with higher scores indicating more severe dysfunction) was greater in the angiotensin II group than in the placebo group (-1.75 vs. -1.28, P=0.01). Serious adverse events were reported in 60.7% of the patients in the angiotensin II group and in 67.1% in the placebo group. Death by day 28 occurred in 75 of 163 patients (46%) in the angiotensin II group and in 85 of 158 patients (54%) in the placebo group (hazard ratio, 0.78; 95% CI, 0.57 to 1.07; P=0.12). Angiotensin II effectively increased blood pressure in patients with vasodilatory shock that did not respond to high doses of conventional vasopressors. (Funded by La Jolla Pharmaceutical Company; ATHOS-3 ClinicalTrials.gov number, NCT02338843 .).
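The primary-endpoint counts above can be turned into a crude (unadjusted) odds ratio with a Wald confidence interval; note this differs slightly from the 7.95 reported, which presumably comes from the trial's adjusted model:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
    a/b = responders/non-responders on treatment, c/d on placebo."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Counts from the abstract: 114/163 responders vs 37/158
or_, lo, hi = odds_ratio_ci(114, 163 - 114, 37, 158 - 37)
```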
Spatial Statistics of Poultry Production in Anambra State of Nigeria
African Journals Online (AJOL)
user
case study. Spatial statistics toolbox in ArcGIS was used to generate point density map which reveal the regional .... Global Positioning System (GPS) .... report generated is shown in Figure . .... for the analysis of crime incident locations. Ned.
Experimental uncertainty estimation and statistics for data having interval uncertainty.
Energy Technology Data Exchange (ETDEWEB)
Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)
2007-05-01
This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
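Some of the statistics discussed above are cheap to bound for interval data: for monotone statistics such as the mean and median, exact bounds come from applying the statistic to the lower and upper endpoints separately (variance bounds, by contrast, are NP-hard in general, as the report's computability discussion reflects). A minimal sketch:

```python
import statistics

def interval_mean(intervals):
    """Exact bounds on the sample mean when each datum is an interval (lo, hi)."""
    n = len(intervals)
    return (sum(lo for lo, _ in intervals) / n,
            sum(hi for _, hi in intervals) / n)

def interval_median(intervals):
    """Exact bounds on the sample median: the median is monotone in each
    datum, so the bounds are the medians of the endpoint lists."""
    return (statistics.median([lo for lo, _ in intervals]),
            statistics.median([hi for _, hi in intervals]))

# Hypothetical measurements with epistemic (interval) uncertainty
data = [(1.0, 1.2), (0.9, 1.5), (1.1, 1.3), (0.8, 1.0)]
```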
Nonextensive statistical mechanics and high energy physics
Directory of Open Access Journals (Sweden)
Tsallis Constantino
2014-04-01
Full Text Available The use of the celebrated Boltzmann-Gibbs entropy and statistical mechanics is justified for ergodic-like systems. In contrast, complex systems typically require more powerful theories. We provide a brief introduction to nonadditive entropies (characterized by indices like q, which, in the q → 1 limit, recover the standard Boltzmann-Gibbs entropy) and the associated nonextensive statistical mechanics. We then present some recent applications to systems such as high-energy collisions, black holes and others. In addition, we clarify and illustrate the neat distinction that exists between Lévy distributions and q-exponential ones, a point which occasionally causes confusion in the literature, particularly the LHC literature.
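The q-exponential mentioned above is e_q(x) = [1 + (1 − q)x]^{1/(1−q)} wherever the bracket is positive (and 0 otherwise), reducing to the ordinary exponential as q → 1. A small sketch:

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential; recovers exp(x) in the q -> 1 limit.
    For q > 1 and x < 0 it decays as a power law (heavier tail than exp)."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0
```

The heavier power-law tail for q > 1 is what makes q-exponentials useful for transverse-momentum spectra, and also what invites the confusion with Lévy distributions that the talk addresses.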
Processing Terrain Point Cloud Data
DeVore, Ronald
2013-01-10
Terrain point cloud data are typically acquired through some form of Light Detection And Ranging sensing. They form a rich resource that is important in a variety of applications including navigation, line of sight, and terrain visualization. Processing terrain data has not received the attention given to other forms of surface reconstruction or to image processing. The goal of terrain data processing is to convert the point cloud into a succinct representation system that is amenable to the various application demands. The present paper presents a platform for terrain processing built on the following principles: (i) measuring distortion in the Hausdorff metric, which we argue is a good match for the application demands, and (ii) a multiscale representation based on tree approximation using local polynomial fitting. The basic elements held in the nodes of the tree can be efficiently encoded, transmitted, visualized, and utilized for the various target applications. Several challenges emerge because of the variable resolution of the data, missing data, occlusions, and noise. Techniques for identifying and handling these challenges are developed. © 2013 Society for Industrial and Applied Mathematics.
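To make principle (i) concrete, the Hausdorff distance between two point sets is the largest distance from any point of one set to its nearest neighbour in the other. A brute-force sketch (a real terrain pipeline would use spatial trees for efficiency):

```python
def hausdorff(A, B):
    """Symmetric Hausdorff distance between two finite point sets
    (Euclidean, brute force: O(|A| * |B|))."""
    def d(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    def one_sided(X, Y):
        # Largest nearest-neighbour distance from X into Y.
        return max(min(d(x, y) for y in Y) for x in X)
    return max(one_sided(A, B), one_sided(B, A))

A = [(0, 0), (1, 0)]
B = [(0, 0), (1, 1)]
print(hausdorff(A, B))  # 1.0
```

A candidate simplification of the cloud is acceptable whenever its Hausdorff distance to the original data stays below the distortion budget.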
PRECISION POINTING OF IBEX-Lo OBSERVATIONS
International Nuclear Information System (INIS)
Hłond, M.; Bzowski, M.; Möbius, E.; Kucharek, H.; Heirtzler, D.; Schwadron, N. A.; O'Neill, M. E.; Clark, G.; Crew, G. B.; Fuselier, S.; McComas, D. J.
2012-01-01
Post-launch boresight of the IBEX-Lo instrument on board the Interstellar Boundary Explorer (IBEX) is determined based on IBEX-Lo Star Sensor observations. Accurate information on the boresight of the neutral gas camera is essential for precise determination of interstellar gas flow parameters. Utilizing spin-phase information from the spacecraft attitude control system (ACS), positions of stars observed by the Star Sensor during two years of IBEX measurements were analyzed and compared with positions obtained from a star catalog. No statistically significant differences were observed beyond those expected from the pre-launch uncertainty in the Star Sensor mounting. Based on the star observations and their positions in the spacecraft reference system, pointing of the IBEX satellite spin axis was determined and compared with the pointing obtained from the ACS. Again, no statistically significant deviations were observed. We conclude that no systematic correction for boresight geometry is needed in the analysis of IBEX-Lo observations to determine neutral interstellar gas flow properties. A stack-up of uncertainties in attitude knowledge shows that the instantaneous IBEX-Lo pointing is determined to within ∼0.°01 in both spin angle and elevation using either the Star Sensor or the ACS. Further, the Star Sensor can be used to independently determine the spacecraft spin axis. Thus, Star Sensor data can be used reliably to correct the spin phase when the Star Tracker (used by the ACS) is disabled by bright objects in its field of view. The Star Sensor can also determine the spin axis during most orbits and thus provides redundancy for the Star Tracker.
Multiple Monte Carlo Testing with Applications in Spatial Point Processes
DEFF Research Database (Denmark)
Mrkvička, Tomáš; Myllymäki, Mari; Hahn, Ute
with a function as the test statistic, 3) several Monte Carlo tests with functions as test statistics. The rank test has correct (global) type I error in each case and it is accompanied with a p-value and with a graphical interpretation which shows which subtest or which distances of the used test function(s) lead to the rejection at the prescribed significance level of the test. Examples of null hypotheses from point process and random set statistics are used to demonstrate the strength of the rank envelope test. The examples include a goodness-of-fit test with several test functions, goodness-of-fit test...
International Nuclear Information System (INIS)
Gurkan, R.; Altunay, N.
2013-01-01
A new cloud point extraction (CPE) method for the preconcentration and speciation analysis of trace iron in natural waters prior to determination by flame atomic absorption spectrometry (FAAS) was developed in the present study. In this method, Fe(II) sensitively and selectively reacts with Calcon carboxylic acid (CCA) in the presence of cetylpyridinium chloride (CPC), yielding a hydrophobic complex at pH 10.5 which is then entrapped in the surfactant-rich phase. Total Fe was accurately and reliably determined after the reduction of Fe(III) to Fe(II) with sulfite. The amount of Fe(III) in samples was determined from the difference between total Fe and Fe(II). CPC was used not only as an auxiliary ligand in CPE, but also as a sensitivity enhancement agent in FAAS. The nonionic surfactant polyethylene glycol tert-octylphenyl ether (Triton X-114) was used as an extracting agent. The analytical variables affecting CPE efficiency were investigated in detail. Preconcentration and enhancement factors of 50 and 82, respectively, were obtained for the preconcentration of Fe(II) from a 50 mL solution. Under the optimized conditions, the detection limit of Fe(II) in the linear range of 0.2-60 μg L⁻¹ was 0.06 μg L⁻¹. The relative standard deviation was 2.7% (20 μg L⁻¹, N: 5), and recoveries for Fe(II) were in the range of 99.0-102.0% for all water samples including certified reference materials (CRMs). In order to verify its accuracy, two CRMs were analyzed and the results obtained were statistically in good agreement with the certified values. (author)
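Figures of merit like those quoted above follow directly from the calibration data. A sketch with hypothetical numbers (the common 3σ convention for the detection limit; these are not the authors' raw data):

```python
def detection_limit(blank_sd, slope, k=3.0):
    """k-sigma detection limit: k times the standard deviation of the
    blank signal divided by the calibration slope (k = 3 for the LOD)."""
    return k * blank_sd / slope

def enhancement_factor(slope_with, slope_without):
    """Sensitivity enhancement: ratio of the calibration slopes obtained
    with and without the preconcentration step."""
    return slope_with / slope_without

# Hypothetical calibration values:
lod = detection_limit(blank_sd=0.002, slope=0.1)          # in concentration units
ef = enhancement_factor(slope_with=8.2, slope_without=0.1)
```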
A random point process model for the score in sport matches
Czech Academy of Sciences Publication Activity Database
Volf, Petr
2009-01-01
Roč. 20, č. 2 (2009), s. 121-131 ISSN 1471-678X R&D Projects: GA AV ČR(CZ) IAA101120604 Institutional research plan: CEZ:AV0Z10750506 Keywords : sport statistics * scoring intensity * Cox’s regression model Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2009/SI/volf-a random point process model for the score in sport matches.pdf
Understanding Statistics - Cancer Statistics
Annual reports of U.S. cancer statistics including new cases, deaths, trends, survival, prevalence, lifetime risk, and progress toward Healthy People targets, plus statistical summaries for a number of common cancer types.
Nagmoti, Jyoti Mahantesh
2017-01-01
PowerPoint (PPT™) presentation has become an integral part of day-to-day teaching in medicine. Most often, PPT™ is used in its default mode, which in fact is known to cause boredom and ineffective learning. Research has shown improved short-term memory by applying multimedia principles for designing and delivering lectures. However, such evidence in medical education is scarce. Therefore, we attempted to evaluate the effect of multimedia principles on enhanced learning of parasitology. Second-year medical students received a series of lectures, half of the lectures using traditionally designed PPT™ and the rest using slides designed by Mayer's multimedia principles. Students answered pre- and post-tests at the end of each lecture (test I) and an essay test after six months (test II), which assessed their short- and long-term knowledge retention, respectively. Students' feedback on the quality and content of the lectures was collected. A statistically significant difference was found between post-test scores of traditional and modified lectures (P = 0.019), indicating improved short-term memory after modified lectures. Similarly, students scored better in test II on the content learnt through modified lectures, indicating enhanced comprehension and improved long-term memory. Students favoured learning through multimedia-designed PPT™ and suggested their continued use. It is time to depart from default PPT™ and adopt multimedia principles to enhance comprehension and improve short- and long-term knowledge retention. Further, medical educators may be trained and encouraged to apply multimedia principles for designing and delivering effective lectures.
Visualization of the variability of 3D statistical shape models by animation.
Lamecker, Hans; Seebass, Martin; Lange, Thomas; Hege, Hans-Christian; Deuflhard, Peter
2004-01-01
Models of the 3D shape of anatomical objects and knowledge of their statistical variability are of great benefit in many computer-assisted medical applications like image analysis, therapy or surgery planning. Statistical shape models have successfully been applied to automate the task of image segmentation. The generation of 3D statistical shape models requires the identification of corresponding points on two shapes. This remains a difficult problem, especially for shapes of complicated topology. In order to interpret and validate variations encoded in a statistical shape model, visual inspection is of great importance. This work describes the generation and interpretation of statistical shape models of the liver and the pelvic bone.
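Once point correspondence is established, building the statistical model itself is standard: each shape becomes a vector of stacked point coordinates, and the principal components of their covariance are the modes of variation. A toy sketch using power iteration for the dominant mode (illustrative only, not the authors' pipeline):

```python
def mean_shape(shapes):
    """Coordinate-wise mean of shapes given as flat coordinate vectors."""
    n = len(shapes)
    k = len(shapes[0])
    return [sum(s[i] for s in shapes) / n for i in range(k)]

def first_mode(shapes, iters=200):
    """Mean shape and dominant eigenvector of the shape covariance,
    found by power iteration on the centered data."""
    mu = mean_shape(shapes)
    X = [[s[i] - mu[i] for i in range(len(mu))] for s in shapes]
    v = [1.0] * len(mu)
    for _ in range(iters):
        # w = (X^T X) v without forming the covariance matrix explicitly.
        proj = [sum(x[i] * v[i] for i in range(len(v))) for x in X]
        w = [sum(p * x[i] for p, x in zip(proj, X)) for i in range(len(v))]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return mu, v
```

New plausible shapes are generated as the mean plus small multiples of the leading modes, which is what an animated visualization of the model's variability sweeps through.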
Directory of Open Access Journals (Sweden)
Federici, Stefano
2009-06-01
instruments have been translated into 11 languages and administered to a total of 88,844 subjects. Finally, the WHODAS II is prevalently used in the medical field, with major emphasis in the specialities of psychiatry, general medicine, and rehabilitation. All studies point out that the WHODAS II is an effective and reliable instrument for assessing disability, individual functioning, and participation levels. Furthermore, they often suggest administering the WHODAS II along with quality of life measures. Finally, the studies of the psychometric properties of the instrument agree in considering the WHODAS II a reliable and valid tool for disability assessment.
Santavirta, Nina; Santavirta, Torsten
2014-03-01
This paper combined data collected from wartime government records with survey data including background characteristics, such as factors that affected eligibility, to examine the adult depression outcomes of individuals who were evacuated from Finland to temporary foster care in Sweden during World War II. Using wartime government records and survey data for a random sample of 723 exposed individuals and 1321 matched unexposed individuals, the authors conducted least squares adjusted means comparison to examine the association between evacuation and adult depression (Beck Depression Inventory). The random sample was representative for the whole population of evacuees who returned to their biological families after World War II. The authors found no statistically significant difference in depressive symptoms during late adulthood between the two groups; for example, the exposed group had a 0.41 percentage points lower average Beck Depression Inventory score than the unexposed group (p = 0.907). This study provides no support for the hypothesis that family disruption during early childhood, caused by the onset of sudden shocks, elevates depressive symptoms during late adulthood. Copyright © 2013 John Wiley & Sons, Ltd.
Effect of Cu(II), Cd(II) and Zn(II) on Pb(II) biosorption by algae Gelidium-derived materials.
Vilar, Vítor J P; Botelho, Cidália M S; Boaventura, Rui A R
2008-06-15
Biosorption of Pb(II), Cu(II), Cd(II) and Zn(II) from binary metal solutions onto the algae Gelidium sesquipedale, an algal industrial waste and a waste-based composite material was investigated at pH 5.3, in a batch system. Binary Pb(II)/Cu(II), Pb(II)/Cd(II) and Pb(II)/Zn(II) solutions have been tested. For the same equilibrium concentrations of both metal ions (1 mmol l⁻¹), approximately 66, 85 and 86% of the total uptake capacity of the biosorbents is taken by lead ions in the systems Pb(II)/Cu(II), Pb(II)/Cd(II) and Pb(II)/Zn(II), respectively. Two-metal results were fitted to a discrete and a continuous model, showing the inhibition of the primary metal biosorption by the co-cation. The model parameters suggest that Cd(II) and Zn(II) have the same decreasing effect on the Pb(II) uptake capacity. The uptake of Pb(II) was highly sensitive to the presence of Cu(II). From the discrete model it was possible to obtain the Langmuir affinity constant for Pb(II) biosorption. The presence of the co-cations decreases the apparent affinity of Pb(II). The experimental results were successfully fitted by the continuous model, at different pH values, for each biosorbent. The following sequence for the equilibrium affinity constants was found: Pb>Cu>Cd approximately Zn.
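A discrete competitive model of this kind is commonly an extended Langmuir isotherm, in which all metals compete for the same binding sites and each co-cation inflates the shared denominator. A sketch with hypothetical affinity constants (not the fitted values from the paper):

```python
def competitive_langmuir(qmax, K, C):
    """Extended (multicomponent) Langmuir isotherm: uptake q_i of each
    species when all species compete for the same sites,
    q_i = qmax * K_i * C_i / (1 + sum_j K_j * C_j)."""
    denom = 1.0 + sum(Ki * Ci for Ki, Ci in zip(K, C))
    return [qmax * Ki * Ci / denom for Ki, Ci in zip(K, C)]

# Hypothetical affinities: Pb binds more strongly than Cd.
q_Pb, q_Cd = competitive_langmuir(qmax=0.4, K=[10.0, 2.0], C=[1.0, 1.0])
```

At equal equilibrium concentrations the uptake ratio reduces to the ratio of affinity constants, which is why the more strongly bound lead claims most of the total capacity in each binary system.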
A Studentized Permutation Test for the Comparison of Spatial Point Patterns
DEFF Research Database (Denmark)
Hahn, Ute
of empirical K-functions are compared by a permutation test using a studentized test statistic. The proposed test performs convincingly in terms of empirical level and power in a simulation study, even for point patterns where the K-function estimates on neighboring subsamples are not strictly exchangeable. It also shows improved behavior compared to a test suggested by Diggle et al. (1991, 2000) for the comparison of groups of independently replicated point patterns. In an application to two point patterns from pathology that represent capillary positions in sections of healthy and tumorous tissue, our...
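The studentization idea can be illustrated on scalar summaries (the paper applies it to vectors of K-function values, with a correspondingly adapted variance estimate). A minimal two-sample sketch:

```python
import random

def studentized_permutation_test(x, y, n_perm=999, seed=1):
    """Two-sample permutation test with a studentized statistic:
    |difference of means| over its estimated standard error."""
    def stat(a, b):
        ma = sum(a) / len(a)
        mb = sum(b) / len(b)
        va = sum((v - ma) ** 2 for v in a) / (len(a) - 1)
        vb = sum((v - mb) ** 2 for v in b) / (len(b) - 1)
        return abs(ma - mb) / (va / len(a) + vb / len(b)) ** 0.5
    rng = random.Random(seed)
    observed = stat(x, y)
    pooled = list(x) + list(y)
    count = 1  # include the observed arrangement
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if stat(pooled[:len(x)], pooled[len(x):]) >= observed:
            count += 1
    return count / (n_perm + 1)
```

Dividing by the standard error is what keeps the test close to its nominal level even when the two groups are not strictly exchangeable, since the statistic is roughly pivotal.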
Statistical Process Control: Going to the Limit for Quality.
Training, 1987
1987-01-01
Defines the concept of statistical process control, a quality control method used especially in manufacturing. Generally, concept users set specific standard levels that must be met. Makes the point that although employees work directly with the method, management is responsible for its success within the plant. (CH)
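The "specific standard levels" of statistical process control are typically control limits placed a fixed number of standard deviations around the process mean. A minimal sketch with hypothetical measurements:

```python
def shewhart_limits(samples, k=3.0):
    """Lower control limit, center line, and upper control limit for a
    Shewhart-style individuals chart: mean +/- k sample standard deviations."""
    n = len(samples)
    mean = sum(samples) / n
    sd = (sum((v - mean) ** 2 for v in samples) / (n - 1)) ** 0.5
    return mean - k * sd, mean, mean + k * sd

def out_of_control(samples, lcl, ucl):
    """Indices of measurements falling outside the control limits."""
    return [i for i, v in enumerate(samples) if not (lcl <= v <= ucl)]

# Limits set from an in-control baseline, then applied to new output:
lcl, cl, ucl = shewhart_limits([10.1, 9.9, 10.0, 10.2, 9.8])
flagged = out_of_control([10.0, 12.5], lcl, ucl)
```

Operators watch the chart for out-of-limit points; management's role, as the article notes, is to act on the process when such signals appear.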
Point defects and atomic transport in crystals
International Nuclear Information System (INIS)
Lidiard, A.B.
1981-02-01
There are two principal aspects to the theory of atomic transport in crystals as caused by the action of point defects, namely (1) the calculation of relevant properties of the point defects (energies and other thermodynamic characteristics of the different possible defects, activation energies and other mobility parameters) and (2) the statistical mechanics of assemblies of defects, both equilibrium and non-equilibrium assemblies. In the five lectures given here both these aspects are touched on. The first two lectures are concerned with the calculation of relevant point defect properties, particularly in ionic crystals. The first lecture is more general; the second is concerned particularly with some recent calculations of the free volumes of formation of defects in various ionic solids, which solve a rather long-standing problem in this area. The remaining three lectures are concerned with the kinetic theory of defects, mainly in relaxation, drift and diffusion situations.
First international 26Al interlaboratory comparison - Part II
International Nuclear Information System (INIS)
Merchel, Silke; Bremser, Wolfram
2005-01-01
After finishing Part I of the first international 26Al interlaboratory comparison with accelerator mass spectrometry (AMS) laboratories [S. Merchel, W. Bremser, Nucl. Instr. and Meth. B 223-224 (2004) 393], the evaluation of Part II with radionuclide counting laboratories took place. The evaluation of the results of the seven participating laboratories on four meteorite samples shows a good overall agreement between laboratories, i.e. it does not reveal any statistically significant differences if results are compared sample-by-sample. However, a certain interlaboratory bias is observed with a more detailed statistical analysis including some multivariate approaches.
Hearing loss in enlarged vestibular aqueduct and incomplete partition type II.
Ahadizadeh, Emily; Ascha, Mustafa; Manzoor, Nauman; Gupta, Amit; Semaan, Maroun; Megerian, Cliff; Otteson, Todd
The purpose of this work is to identify the role of incomplete partition type II in hearing loss among patients with enlarged vestibular aqueduct (EVA). EVA is a common congenital inner ear malformation among children with hearing loss, and vestibular aqueduct morphology in this population has been shown to correlate with hearing loss. However, the impact of incomplete partition between cochlear turns on hearing loss has not been examined, despite meaningful implications for EVA pathophysiology. In a retrospective review, radiology reports for patients who had computed tomography (CT) scans with diagnoses of hearing loss at a tertiary medical center between January 2000 and June 2016 were screened for EVA. CT scans of the internal auditory canal (IAC) for those patients with EVA were examined for evidence of incomplete partition type II (IP-II), midpoint width and operculum width were measured a second time, and patients meeting the Cincinnati criteria for EVA were selected for analysis. Statistical analyses including chi-square, Wilcoxon rank-sum, and t-tests were used to identify differences in outcomes and clinical predictors, as appropriate for the distribution of the data. Linear mixed models of hearing test results for all available tests were constructed, both univariable and adjusting for vestibular aqueduct morphometric features, with ear-specific intercepts and slopes over time. There were no statistically significant differences in any hearing test results or vestibular aqueduct midpoint and operculum widths. Linear mixed models, both univariable and those adjusting for midpoint and operculum widths, did not indicate a statistically significant effect of incomplete partition type II on hearing test results. Hearing loss due to enlarged vestibular aqueduct does not appear to be affected by the presence of incomplete partition type II. Our results suggest that the pathophysiological processes underlying hearing loss in enlarged vestibular aqueduct may not be a result of
David, Gergely; Freund, Patrick; Mohammadi, Siawoosh
2017-09-01
Diffusion tensor imaging (DTI) is a promising approach for investigating the white matter microstructure of the spinal cord. However, it suffers from severe susceptibility, physiological, and instrumental artifacts present in the cord. Retrospective correction techniques are popular approaches to reduce these artifacts, because they are widely applicable and do not increase scan time. In this paper, we present a novel outlier rejection approach (reliability masking) which is designed to supplement existing correction approaches by excluding irreversibly corrupted and thus unreliable data points from the DTI index maps. Then, we investigate how chains of retrospective correction techniques including (i) registration, (ii) registration and robust fitting, and (iii) registration, robust fitting, and reliability masking affect the statistical power of a previously reported finding of lower fractional anisotropy values in the posterior column and lateral corticospinal tracts in cervical spondylotic myelopathy (CSM) patients. While established post-processing steps had a small effect on the statistical power of the clinical finding (slice-wise registration: -0.5%, robust fitting: +0.6%), adding reliability masking to the post-processing chain increased it by 4.7%. Interestingly, reliability masking and registration affected the t-score metric differently: while the gain in statistical power due to reliability masking was mainly driven by decreased variability in both groups, registration slightly increased variability. In conclusion, reliability masking is particularly attractive for neuroscience and clinical research studies, as it increases statistical power by reducing group variability and thus provides a cost-efficient alternative to increasing the group size. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
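The core of an outlier-rejection mask of this kind is a robust threshold on residuals. A simplified, generic sketch (the actual method operates on model-fit residuals of diffusion-weighted signals, not on raw values as here):

```python
import statistics

def reliability_mask(values, n_sigma=3.0):
    """Flag values whose deviation from the median exceeds n_sigma robust
    standard deviations, estimated from the median absolute deviation (MAD).
    Returns True for reliable points, False for rejected ones."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    robust_sd = 1.4826 * mad  # MAD -> sigma scaling for Gaussian data
    return [abs(v - med) <= n_sigma * robust_sd for v in values]
```

Using the median and MAD rather than the mean and standard deviation keeps the threshold itself from being dragged by the very outliers it is meant to reject.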
Energy Technology Data Exchange (ETDEWEB)
Kamp, Derek van der [University of Victoria, Pacific Climate Impacts Consortium, Victoria, BC (Canada); University of Victoria, School of Earth and Ocean Sciences, Victoria, BC (Canada); Curry, Charles L. [Environment Canada University of Victoria, Canadian Centre for Climate Modelling and Analysis, Victoria, BC (Canada); University of Victoria, School of Earth and Ocean Sciences, Victoria, BC (Canada); Monahan, Adam H. [University of Victoria, School of Earth and Ocean Sciences, Victoria, BC (Canada)
2012-04-15
A regression-based downscaling technique was applied to monthly mean surface wind observations from stations throughout western Canada as well as from buoys in the Northeast Pacific Ocean over the period 1979-2006. A predictor set was developed from principal component analysis of the three wind components at 500 hPa and mean sea-level pressure taken from the NCEP Reanalysis II. Building on the results of a companion paper, Curry et al. (Clim Dyn 2011), the downscaling was applied to both wind speed and wind components, in an effort to evaluate the utility of each type of predictand. Cross-validated prediction skill varied strongly with season, with autumn and summer displaying the highest and lowest skill, respectively. In most cases wind components were predicted with better skill than wind speeds. The predictive ability of wind components was found to be strongly related to their orientation. Wind components with the best predictions were often oriented along topographically significant features such as constricted valleys, mountain ranges or ocean channels. This influence of directionality on predictive ability is most prominent during autumn and winter at inland sites with complex topography. Stations in regions with relatively flat terrain (where topographic steering is minimal) exhibit inter-station consistencies including region-wide seasonal shifts in the direction of the best predicted wind component. The conclusion that wind components can be skillfully predicted only over a limited range of directions at most stations limits the scope of statistically downscaled wind speed predictions. It seems likely that such limitations apply to other regions of complex terrain as well. (orig.)
Archform comparisons between skeletal class II and III malocclusions.
Directory of Open Access Journals (Sweden)
Wei Zou
The purpose of this cross-sectional research was to explore the relationship of the mandibular dental and basal bone archforms between severe Skeletal Class II (SC2) and Skeletal Class III (SC3) malocclusions. We also compared intercanine and intermolar widths in these two malocclusion types. Thirty-three virtual pretreatment mandibular models (Skeletal Class III group) and thirty-five pretreatment models (Skeletal Class II group) were created with a laser scanning system. FA points (the midpoint of the facial axis of the clinical crown) and WALA points (the most prominent point on the soft-tissue ridge) were employed to produce dental and basal bone archforms, respectively. Scatter diagrams of the samples were processed by nonlinear regression analysis via SPSS 17.0. The mandibular dental and basal bone intercanine and intermolar widths were significantly greater in the Skeletal Class III group compared to the Skeletal Class II group. In both groups, a moderate correlation existed between dental and basal bone arch widths in the canine region, and a high correlation existed between dental and basal bone arch widths in the molar region. The coefficient of correlation of the Skeletal Class III group was greater than that of the Skeletal Class II group. Fourth-degree, even-order power functions were used as best-fit functions to fit the scatter plots. The radius of curvature was larger in Skeletal Class III malocclusions compared to Skeletal Class II malocclusions (rWALA3>rWALA2>rFA3>rFA2). In conclusion, mandibular dental and basal intercanine and intermolar widths were significantly different between the two groups. Compared with Skeletal Class II subjects, the mandibular archform was flatter for Skeletal Class III subjects.
Logarithmic transformed statistical models in calibration
International Nuclear Information System (INIS)
Zeis, C.D.
1975-01-01
A general type of statistical model used for the calibration of instruments having the property that the standard deviations of the observed values increase as a function of the mean value is described. The application to the Helix Counter at the Rocky Flats Plant is treated primarily from a theoretical point of view. The Helix Counter measures the amount of plutonium in certain types of chemicals. The method described can also be used for other calibrations. (U.S.)
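A common instance of such a model assumes the standard deviation is proportional to the mean, so a logarithmic transform stabilizes the variance and ordinary least squares applies on the log scale. A sketch with hypothetical calibration data:

```python
import math

def fit_log_calibration(x, y):
    """Least-squares line on log-transformed data, log y = a + b log x.
    Appropriate when the scatter of y grows roughly in proportion to its
    mean, so that log y has approximately constant variance."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx = sum(lx) / n
    my = sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = my - b * mx
    return a, b  # predict with y = exp(a) * x**b

# Hypothetical, noise-free calibration points lying exactly on y = 3x:
x = [1.0, 2.0, 4.0, 8.0]
y = [3.0, 6.0, 12.0, 24.0]
a, b = fit_log_calibration(x, y)
```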
Initiating statistical maintenance optimization
International Nuclear Information System (INIS)
Doyle, E. Kevin; Tuomi, Vesa; Rowley, Ian
2007-01-01
Since the 1980s maintenance optimization has been centered around various formulations of Reliability Centered Maintenance (RCM). Several such optimization techniques have been implemented at the Bruce Nuclear Station. Further cost refinement of the Station's preventive maintenance strategy includes evaluation of statistical optimization techniques. A review of successful pilot efforts in this direction is provided, as well as initial work with graphical analysis. The present situation regarding data sourcing, the principal impediment to the use of stochastic methods in previous years, is discussed. The use of Crow/AMSAA (Army Materiel Systems Analysis Activity) plots is demonstrated from the point of view of justifying expenditures in optimization efforts. (author)
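A Crow/AMSAA plot treats cumulative failures as a power law in cumulative operating time, N(t) = λt^β, which is a straight line on log-log axes; β < 1 indicates reliability growth and β > 1 deterioration. A least-squares sketch with hypothetical failure times:

```python
import math

def crow_amsaa_fit(failure_times):
    """Slope (beta) and scale (lambda) of a Crow/AMSAA power-law plot,
    N(t) = lambda * t**beta, fitted by least squares on the log-log
    cumulative plot. failure_times: cumulative times of successive failures."""
    logs_t = [math.log(t) for t in failure_times]
    logs_n = [math.log(i + 1) for i in range(len(failure_times))]
    n = len(failure_times)
    mt = sum(logs_t) / n
    mn = sum(logs_n) / n
    beta = (sum((a - mt) * (b - mn) for a, b in zip(logs_t, logs_n))
            / sum((a - mt) ** 2 for a in logs_t))
    lam = math.exp(mn - beta * mt)
    return beta, lam
```

A fitted β well below 1 is the kind of graphical evidence that justifies an optimization expenditure: failures are arriving ever more slowly as the program proceeds.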
Hypothesis testing of a change point during cognitive decline among Alzheimer's disease patients.
Ji, Ming; Xiong, Chengjie; Grundman, Michael
2003-10-01
In this paper, we present a statistical hypothesis test for detecting a change point over the course of cognitive decline among Alzheimer's disease patients. The model under the null hypothesis assumes a constant rate of cognitive decline over time, and the model under the alternative hypothesis is a general bilinear model with an unknown change point. When the change point is unknown, however, the null distribution of the test statistic is not analytically tractable and has to be simulated by parametric bootstrap. When the alternative hypothesis that a change point exists is accepted, we propose an estimate of its location based on Akaike's Information Criterion. We applied our method to a data set from the Neuropsychological Database Initiative by implementing our hypothesis testing method to analyze Mini Mental Status Exam (MMSE) scores based on a random-slope and random-intercept model with a bilinear fixed effect. Our result shows that, despite a large amount of missing data, accelerated decline did occur in MMSE scores among AD patients. Our finding supports the clinical belief in the existence of a change point during cognitive decline among AD patients and suggests the use of change point models for the longitudinal modeling of cognitive decline in AD research.
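The alternative-model fit can be sketched as a grid search over candidate change points, comparing the residual error of a single line with that of two lines fitted on either side of the split (the paper's bilinear mixed-effects model is richer, and its null distribution is obtained by parametric bootstrap):

```python
def sse_line(t, y):
    """Residual sum of squares of the least-squares line through (t, y)."""
    n = len(t)
    mt = sum(t) / n
    my = sum(y) / n
    stt = sum((a - mt) ** 2 for a in t)
    b = sum((a - mt) * (c - my) for a, c in zip(t, y)) / stt if stt else 0.0
    a0 = my - b * mt
    return sum((yi - (a0 + b * ti)) ** 2 for ti, yi in zip(t, y))

def best_change_point(t, y, min_pts=3):
    """Grid search: the candidate split minimizing the total SSE of two
    separate lines fitted before and after it."""
    best = None
    for k in range(min_pts, len(t) - min_pts + 1):
        sse = sse_line(t[:k], y[:k]) + sse_line(t[k:], y[k:])
        if best is None or sse < best[1]:
            best = (t[k], sse)
    return best  # (change point location, two-phase SSE)
```

The bootstrap step would refit this statistic on many data sets simulated under the single-line null to obtain the reference distribution for the observed SSE reduction.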
Statistics of an advected passive scalar
International Nuclear Information System (INIS)
Kimura, Y.; Kraichnan, R.H.
1993-01-01
An elementary argument shows that non-Gaussian fluctuations in the temperature at a point in space are induced by random advection of a passive temperature field that has a nonlinear mean gradient, whether or not there is molecular diffusion. This is corroborated by exact analysis for the nondiffusive case and by direct numerical simulation for diffusive cases. Eulerian mapping closure gives results close to the simulation data. Non-Gaussian fluctuations of temperature at a point also are induced by a more subtle mechanism that requires both advection and molecular diffusion and is effective even when the statistics are strictly homogeneous. It operates through selectively strong dissipation of regions where intense temperature gradients have been induced by advective straining. This phenomenon is demonstrated by simulations and explored by means of an idealized analytical model.
Energy Technology Data Exchange (ETDEWEB)
Wiley, H. S.
2011-02-28
There comes a time in every field of science when things suddenly change. While it might not be immediately apparent that things are different, a tipping point has occurred. Biology is now at such a point. The reason is the introduction of high-throughput genomics-based technologies. I am not talking about the consequences of the sequencing of the human genome (and every other genome within reach). The change is due to new technologies that generate an enormous amount of data about the molecular composition of cells. These include proteomics, transcriptional profiling by sequencing, and the ability to globally measure microRNAs and post-translational modifications of proteins. These mountains of digital data can be mapped to a common frame of reference: the organism’s genome. With the new high-throughput technologies, we can generate tens of thousands of data points from each sample. Data are now measured in terabytes and the time necessary to analyze data can now require years. Obviously, we can’t wait to interpret the data fully before the next experiment. In fact, we might never be able to even look at all of it, much less understand it. This volume of data requires sophisticated computational and statistical methods for its analysis and is forcing biologists to approach data interpretation as a collaborative venture.
Emancipation through interaction--how eugenics and statistics converged and diverged.
Louçã, Francisco
2009-01-01
The paper discusses the scope and influence of eugenics in defining the scientific programme of statistics and the impact of the evolution of biology on social scientists. It argues that eugenics was instrumental in providing a bridge between sciences, and therefore created both the impulse and the institutions necessary for the birth of modern statistics in its applications first to biology and then to the social sciences. Looking at the question from the point of view of the history of statistics and the social sciences, and mostly concentrating on evidence from the British debates, the paper discusses how these disciplines became emancipated from eugenics precisely because of the inspiration of biology. It also relates how social scientists were fascinated and perplexed by the innovations taking place in statistical theory and practice.
Paechter, Manuela; Macher, Daniel; Martskvishvili, Khatuna; Wimmer, Sigrid; Papousek, Ilona
2017-01-01
In many social science majors, e.g., psychology, students report high levels of statistics anxiety. However, these majors are often chosen by students who are less prone to mathematics and who might have experienced difficulties and unpleasant feelings in their mathematics courses at school. The present study investigates whether statistics anxiety is a genuine form of anxiety that impairs students' achievements or whether learners mainly transfer previous experiences in mathematics and their anxiety in mathematics to statistics. The relationship between mathematics anxiety and statistics anxiety, their relationship to learning behaviors and to performance in a statistics examination were investigated in a sample of 225 undergraduate psychology students (164 women, 61 men). Data were recorded at three points in time: At the beginning of term students' mathematics anxiety, general proneness to anxiety, school grades, and demographic data were assessed; 2 weeks before the end of term, they completed questionnaires on statistics anxiety and their learning behaviors. At the end of term, examination scores were recorded. Mathematics anxiety and statistics anxiety correlated highly but the comparison of different structural equation models showed that they had genuine and even antagonistic contributions to learning behaviors and performance in the examination. Surprisingly, mathematics anxiety was positively related to performance. It might be that students realized over the course of their first term that knowledge and skills in higher secondary education mathematics are not sufficient to be successful in statistics. Part of mathematics anxiety may then have strengthened positive extrinsic effort motivation by the intention to avoid failure and may have led to higher effort for the exam preparation. However, via statistics anxiety mathematics anxiety also had a negative contribution to performance. Statistics anxiety led to higher procrastination in the structural
Paechter, Manuela; Macher, Daniel; Martskvishvili, Khatuna; Wimmer, Sigrid; Papousek, Ilona
Statistical techniques for sampling and monitoring natural resources
Hans T. Schreuder; Richard Ernst; Hugo Ramirez-Maldonado
2004-01-01
We present the statistical theory of inventory and monitoring from a probabilistic point of view. We start with the basics and show the interrelationships between designs and estimators, illustrating the methods with a small artificial population as well as with a mapped realistic population. For such applications, useful open-source software is given in Appendix 4.
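The design–estimator pairing this abstract refers to can be illustrated with a minimal sketch (the population values below are hypothetical, not taken from the book): under simple random sampling without replacement, every unit has inclusion probability n/N, and the Horvitz–Thompson estimator sums y/π over the sampled units to estimate the population total.

```python
import random

random.seed(1)

# Hypothetical population of per-plot measurements (illustrative values only)
population = [random.uniform(10, 50) for _ in range(1000)]
N = len(population)
true_total = sum(population)

n = 100        # sample size
pi = n / N     # inclusion probability under simple random sampling

# Average the Horvitz-Thompson total estimate over repeated samples to
# show it is (approximately) design-unbiased for the population total.
estimates = []
for _ in range(500):
    sample = random.sample(population, n)  # sampling without replacement
    estimates.append(sum(y / pi for y in sample))

mean_est = sum(estimates) / len(estimates)
print(f"true total: {true_total:.0f}, mean HT estimate: {mean_est:.0f}")
```

Averaged over many repeated samples, the estimator's mean tracks the true total closely, which is the design-based unbiasedness property that links a sampling design to its matching estimator.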