Information-theoretic methods for estimating complicated probability distributions
Zong, Zhi
2006-01-01
Mixing various disciplines frequently produces something profound and far-reaching. Cybernetics is an often-quoted example. The mix of information theory, statistics and computing technology has proved very useful, leading to the recent development of information-theory-based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is a fundamental task in quite a few fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur...
Information-theoretic security proof for quantum-key-distribution protocols
Renner, Renato; Gisin, Nicolas; Kraus, Barbara
2005-01-01
We present a technique for proving the security of quantum-key-distribution (QKD) protocols. It is based on direct information-theoretic arguments and thus also applies if no equivalent entanglement purification scheme can be found. Using this technique, we investigate a general class of QKD protocols with one-way classical post-processing. We show that, in order to analyze the full security of these protocols, it suffices to consider collective attacks. Indeed, we give new lower and upper bounds on the secret-key rate which only involve entropies of two-qubit density operators and which are thus easy to compute. As an illustration of our results, we analyze the Bennett-Brassard 1984, the six-state, and the Bennett 1992 protocols with one-way error correction and privacy amplification. Surprisingly, the performance of these protocols is increased if one of the parties adds noise to the measurement data before the error correction. In particular, this additional noise makes the protocols more robust against noise in the quantum channel.
Gonzalez, Elias; Kish, Laszlo B; Balog, Robert S; Enjeti, Prasad
2013-01-01
We introduce a protocol with a reconfigurable filter system to create non-overlapping single loops in the smart power grid for the realization of the Kirchhoff-Law-Johnson-(like)-Noise secure key distribution system. The protocol is valid for one-dimensional radial networks (chain-like power line) which are typical of the electricity distribution network between the utility and the customer. The speed of the protocol (the number of steps needed) versus grid size is analyzed. When properly generalized, such a system has the potential to achieve unconditionally secure key distribution over the smart power grid of arbitrary geometrical dimensions.
Bernstein, R.B.
1976-01-01
An information-theoretic approach to the analysis of rotational excitation cross sections was developed by Levine, Bernstein, Johnson, Procaccia, and coworkers and applied to state-to-state cross sections available from numerical computations of reactive and nonreactive scattering (for example, by Wyatt and Kuppermann and their coworkers and by Pack and Pattengill and others). The rotational surprisals are approximately linear in the energy transferred, thereby accounting for the so-called "exponential gap law" for rotational relaxation discovered experimentally by Polanyi, Woodall, and Ding. For the "linear surprisal" case the unique relation between the surprisal parameter θ_R and the first moment of the rotational energy distribution provides a link between the pattern of the rotational state distribution and those features of the potential surface which govern the average energy transfer.
2008-02-01
Information Theoretic Procedures. Frank Mufalli, Rakesh Nagi, Jim Llinas, Sumita Mishra. SUNY at Buffalo / CUBRC, 4455 Genesee Street, Buffalo, NY.
Information theoretic preattentive saliency
Loog, Marco
2011-01-01
Employing an information theoretic operational definition of bottom-up attention from the field of computational visual perception, a very general expression for saliency is provided. As opposed to many of the current approaches to determining a saliency map, there is no need for an explicit data ... of which features, image information is described. We illustrate our result by determining a few specific saliency maps based on particular choices of features. One of them makes the link with the mapping underlying well-known Harris interest points, which is a result recently obtained in isolation...
Guisan Antoine
2009-04-01
Background: Multiple logistic regression is precluded from many practical applications in ecology that aim to predict the geographic distributions of species because it requires absence data, which are rarely available or are unreliable. In order to use multiple logistic regression, many studies have simulated "pseudo-absences" through a number of strategies, but it is unknown how the choice of strategy influences models and their geographic predictions of species. In this paper we evaluate the effect of several prevailing pseudo-absence strategies on the predictions of the geographic distribution of a virtual species whose "true" distribution and relationship to three environmental predictors was predefined. We evaluated the effect of using (a) real absences, (b) pseudo-absences selected randomly from the background, and (c) two-step approaches: pseudo-absences selected from low suitability areas predicted by either Ecological Niche Factor Analysis (ENFA) or BIOCLIM. We compared how the choice of pseudo-absence strategy affected model fit, predictive power, and information-theoretic model selection results. Results: Models built with true absences had the best predictive power, best discriminatory power, and the "true" model (the one that contained the correct predictors) was supported by the data according to AIC, as expected. Models based on random pseudo-absences had among the lowest fit, but yielded the second highest AUC value (0.97), and the "true" model was also supported by the data. Models based on two-step approaches had intermediate fit, the lowest predictive power, and the "true" model was not supported by the data. Conclusion: If ecologists wish to build parsimonious GLM models that will allow them to make robust predictions, a reasonable approach is to use a large number of randomly selected pseudo-absences, and perform model selection based on an information theoretic approach. However, the resulting models can be expected to have...
Information theoretic quantification of diagnostic uncertainty.
Westover, M Brandon; Eiseman, Nathaniel A; Cash, Sydney S; Bianchi, Matt T
2012-01-01
Diagnostic test interpretation remains a challenge in clinical practice. Most physicians receive training in the use of Bayes' rule, which specifies how the sensitivity and specificity of a test for a given disease combine with the pre-test probability to quantify the change in disease probability incurred by a new test result. However, multiple studies demonstrate physicians' deficiencies in probabilistic reasoning, especially with unexpected test results. Information theory, a branch of probability theory dealing explicitly with the quantification of uncertainty, has been proposed as an alternative framework for diagnostic test interpretation, but is even less familiar to physicians. We have previously addressed one key challenge in the practical application of Bayes' theorem: the handling of uncertainty in the critical first step of estimating the pre-test probability of disease. This essay aims to present the essential concepts of information theory to physicians in an accessible manner, and to extend previous work regarding uncertainty in pre-test probability estimation by placing this type of uncertainty within a principled information theoretic framework. We address several obstacles hindering physicians' application of information theoretic concepts to diagnostic test interpretation. These include issues of terminology (mathematical meanings of certain information theoretic terms differ from clinical or common parlance) as well as the underlying mathematical assumptions. Finally, we illustrate how, in information theoretic terms, one can understand the effect on diagnostic uncertainty of considering ranges instead of simple point estimates of pre-test probability.
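The two steps the abstract describes (a Bayes-rule update followed by an entropy-based quantification of the remaining uncertainty) can be sketched in a few lines. The sensitivity, specificity, and pre-test probability below are hypothetical illustrative values, not figures from the paper:

```python
import math

def binary_entropy(p):
    """Shannon entropy (bits) of a yes/no disease hypothesis with P(disease) = p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def post_test_probability(pretest, sensitivity, specificity, positive=True):
    """Bayes' rule for a dichotomous test result."""
    if positive:
        p_result_d = sensitivity        # P(T+ | disease)
        p_result_nd = 1 - specificity   # P(T+ | no disease)
    else:
        p_result_d = 1 - sensitivity    # P(T- | disease)
        p_result_nd = specificity       # P(T- | no disease)
    num = p_result_d * pretest
    return num / (num + p_result_nd * (1 - pretest))

# hypothetical test: sensitivity 0.90, specificity 0.80, pre-test probability 0.30
pre = 0.30
post = post_test_probability(pre, sensitivity=0.90, specificity=0.80, positive=True)
print(f"post-test probability: {post:.3f}")
print(f"uncertainty before: {binary_entropy(pre):.3f} bits, after: {binary_entropy(post):.3f} bits")
```

Note that a test result can *increase* entropy when it moves the probability toward 0.5, which is one way the information theoretic view differs from intuition.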
Information theoretic description of networks
Wilhelm, Thomas; Hollunder, Jens
2007-11-01
We present a new information theoretic approach for network characterizations. It is developed to describe the general type of networks with n nodes and L directed and weighted links, i.e., it also works for the simpler undirected and unweighted networks. The new information theoretic measures for network characterizations are based on a transmitter-receiver analogy of effluxes and influxes. Based on these measures, we classify networks as either complex or non-complex and as either democracy or dictatorship networks. Directed networks, in particular, are furthermore classified as either information spreading or information collecting networks. The complexity classification is based on the information theoretic network complexity measure medium articulation (MA). It is proven that special networks with a medium number of links (L ~ n^1.5) show the theoretical maximum complexity MA = (log n)^2/2. A network is complex if its MA is larger than the average MA of appropriately randomized networks: MA > MA_r. A network is of the democracy type if its redundancy R < R_r; otherwise it is a dictatorship network. In democracy networks all nodes are, on average, of similar importance, whereas in dictatorship networks some nodes play distinguished roles in network functioning. In other words, democracy networks are characterized by cycling of information (or mass, or energy), while in dictatorship networks there is a straight through-flow from sources to sinks. The classification of directed networks into information spreading and information collecting networks is based on the conditional entropies of the considered networks (H(A|B) = uncertainty of the sender node if the receiver node is known, H(B|A) = uncertainty of the receiver node if the sender node is known): if H(A|B) > H(B|A), it is an information collecting network; otherwise it is an information spreading network. Finally, different real networks (directed and undirected, weighted and unweighted) are classified according to our general scheme.
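The spreading/collecting classification from this abstract can be illustrated with a minimal sketch. The flux matrix below is a hypothetical toy network, not an example from the paper; it treats the normalized link weights as a joint distribution over (sender, receiver) pairs:

```python
import numpy as np

def entropies(T):
    """Entropies of a normalized flux matrix T[i, j] = P(sender = i, receiver = j)."""
    T = np.asarray(T, dtype=float)
    T = T / T.sum()
    pA = T.sum(axis=1)  # marginal distribution of sender nodes
    pB = T.sum(axis=0)  # marginal distribution of receiver nodes

    def H(p):
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    H_AB = H(T.ravel())
    H_A, H_B = H(pA), H(pB)
    return {"H(A|B)": H_AB - H_B, "H(B|A)": H_AB - H_A, "I(A;B)": H_A + H_B - H_AB}

# a star network where node 0 sends equally to nodes 1..3
T = np.array([[0, 1, 1, 1],
              [0, 0, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 0]], dtype=float)
e = entropies(T)
kind = "collecting" if e["H(A|B)"] > e["H(B|A)"] else "spreading"
print(kind, e)
```

For this hub, the sender is certain given any receiver (H(A|B) = 0) while the receiver is maximally uncertain given the sender, so the rule classifies it as information spreading, as expected.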
Hash functions and information theoretic security
Bagheri, Nasoor; Knudsen, Lars Ramkilde; Naderi, Majid
2009-01-01
Information theoretic security is an important security notion in cryptography as it provides a true lower bound for attack complexities. However, in practice attacks often have a higher cost than the information theoretic bound. In this paper we study the relationship between information theoretic...
Robust recognition via information theoretic learning
He, Ran; Yuan, Xiaotong; Wang, Liang
2014-01-01
This Springer Brief represents a comprehensive review of information theoretic methods for robust recognition. A variety of information theoretic methods have been proffered in the past decade, in a large variety of computer vision applications; this work brings them together and attempts to impart the theory, optimization and usage of information entropy. The authors resort to a new information theoretic concept, correntropy, as a robust measure and apply it to solve robust face recognition and object recognition problems. For computational efficiency, the brief introduces the additive and multip...
Information-Theoretic Inference of Common Ancestors
Bastian Steudel
2015-04-01
A directed acyclic graph (DAG) partially represents the conditional independence structure among observations of a system if the local Markov condition holds, that is, if every variable is independent of its non-descendants given its parents. In general, there is a whole class of DAGs that represents a given set of conditional independence relations. We are interested in properties of this class that can be derived from observations of a subsystem only. To this end, we prove an information-theoretic inequality that allows for the inference of common ancestors of observed parts in any DAG representing some unknown larger system. More explicitly, we show that a large amount of dependence in terms of mutual information among the observations implies the existence of a common ancestor that distributes this information. Within the causal interpretation of DAGs, our result can be seen as a quantitative extension of Reichenbach’s principle of common cause to more than two variables. Our conclusions are valid also for non-probabilistic observations, such as binary strings, since we state the proof for an axiomatized notion of “mutual information” that includes the stochastic as well as the algorithmic version.
Information-Theoretic Perspectives on Geophysical Models
Nearing, Grey
2016-04-01
To test any hypothesis about any dynamic system, it is necessary to build a model that places that hypothesis into the context of everything else that we know about the system: initial and boundary conditions and interactions between various governing processes (Hempel and Oppenheim, 1948, Cartwright, 1983). No hypothesis can be tested in isolation, and no hypothesis can be tested without a model (for a geoscience-related discussion see Clark et al., 2011). Science is (currently) fundamentally reductionist in the sense that we seek some small set of governing principles that can explain all phenomena in the universe, and such laws are ontological in the sense that they describe the object under investigation (Davies, 1990 gives several competing perspectives on this claim). However, since we cannot build perfect models of complex systems, any model that does not also contain an epistemological component (i.e., a statement, like a probability distribution, that refers directly to the quality of the information from the model) is falsified immediately (in the sense of Popper, 2002) given only a small number of observations. Models necessarily contain both ontological and epistemological components, and what this means is that the purpose of any robust scientific method is to measure the amount and quality of information provided by models. I believe that any viable philosophy of science must be reducible to this statement. The first step toward a unified theory of scientific models (and therefore a complete philosophy of science) is a quantitative language that applies to both ontological and epistemological questions. Information theory is one such language: Cox's (1946) theorem (see Van Horn, 2003) tells us that probability theory is the (only) calculus that is consistent with Classical Logic (Jaynes, 2003; chapter 1), and information theory is simply the integration of convex transforms of probability ratios (integration reduces density functions to scalar...
System identification with information theoretic criteria
A.A. Stoorvogel; J.H. van Schuppen (Jan)
1995-01-01
Attention is focused in this paper on the approximation problem of system identification with information theoretic criteria. For a class of problems it is shown that the criterion of mutual information rate is identical to the criterion of exponential-of-quadratic cost and to...
Information Theoretic-Learning Auto-Encoder
Santana, Eder; Emigh, Matthew; Principe, Jose C
2016-01-01
We propose Information Theoretic-Learning (ITL) divergence measures for variational regularization of neural networks. We also explore ITL-regularized autoencoders as an alternative to variational autoencoding Bayes, adversarial autoencoders and generative adversarial networks for randomly generating sample data without explicitly defining a partition function. This paper also formalizes generative moment matching networks under the ITL framework.
Role of information theoretic uncertainty relations in quantum theory
Jizba, Petr, E-mail: p.jizba@fjfi.cvut.cz [FNSPE, Czech Technical University in Prague, Břehová 7, 115 19 Praha 1 (Czech Republic); ITP, Freie Universität Berlin, Arnimallee 14, D-14195 Berlin (Germany); Dunningham, Jacob A., E-mail: J.Dunningham@sussex.ac.uk [Department of Physics and Astronomy, University of Sussex, Falmer, Brighton, BN1 9QH (United Kingdom); Joo, Jaewoo, E-mail: j.joo@surrey.ac.uk [Advanced Technology Institute and Department of Physics, University of Surrey, Guildford, GU2 7XH (United Kingdom)
2015-04-15
Uncertainty relations based on information theory for both discrete and continuous distribution functions are briefly reviewed. We extend these results to account for (differential) Rényi entropy and its related entropy power. This allows us to find a new class of information-theoretic uncertainty relations (ITURs). The potency of such uncertainty relations in quantum mechanics is illustrated with a simple two-energy-level model where they outperform both the usual Robertson–Schrödinger uncertainty relation and Shannon entropy based uncertainty relation. In the continuous case the ensuing entropy power uncertainty relations are discussed in the context of heavy tailed wave functions and Schrödinger cat states. Again, improvement over both the Robertson–Schrödinger uncertainty principle and Shannon ITUR is demonstrated in these cases. Further salient issues such as the proof of a generalized entropy power inequality and a geometric picture of information-theoretic uncertainty relations are also discussed.
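For concreteness, one standard relation of the kind reviewed here is the Maassen-Uffink bound and its Rényi-entropy generalization for a pair of discrete observables. This is a sketch of the generic form, not necessarily the exact inequality derived in the paper:

```latex
% Rényi-entropy uncertainty relation for observables A, B with
% eigenbases {|a_i>}, {|b_j>}; the Rényi indices are conjugate,
% satisfying 1/\alpha + 1/\beta = 2.
H_{\alpha}(A) + H_{\beta}(B) \;\geq\; -2\log c,
\qquad c = \max_{i,j}\,\bigl|\langle a_i \mid b_j \rangle\bigr| .
```

Setting α = β = 1 recovers the Shannon-entropy uncertainty relation that the abstract compares against.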
Information-theoretic lengths of Jacobi polynomials
Guerrero, A; Dehesa, J S [Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, Granada (Spain); Sanchez-Moreno, P, E-mail: agmartinez@ugr.e, E-mail: pablos@ugr.e, E-mail: dehesa@ugr.e [Instituto ' Carlos I' de Fisica Teorica y Computacional, Universidad de Granada, Granada (Spain)
2010-07-30
The information-theoretic lengths of the Jacobi polynomials P_n^(α,β)(x), which are information-theoretic measures (Rényi, Shannon and Fisher) of their associated Rakhmanov probability density, are investigated. They quantify the spreading of the polynomials along the orthogonality interval [-1, 1] in a complementary but different way to the root-mean-square or standard deviation because, contrary to this measure, they do not refer to any specific point of the interval. The explicit expressions of the Fisher length are given. The Rényi lengths are found by the use of the combinatorial multivariable Bell polynomials in terms of the polynomial degree n and the parameters (α, β). The Shannon length, which cannot be exactly calculated because of its logarithmic functional form, is bounded from below by using sharp upper bounds to general densities on [-1, +1] given in terms of various expectation values; moreover, its asymptotics is also pointed out. Finally, several computational issues relative to these three quantities are carefully analyzed.
Distributed Energy Resources Test Facility
Federal Laboratory Consortium — NREL's Distributed Energy Resources Test Facility (DERTF) is a working laboratory for interconnection and systems integration testing. This state-of-the-art facility...
An Information Theoretic Characterisation of Auditory Encoding
Overath, Tobias; Cusack, Rhodri; Kumar, Sukhbinder; von Kriegstein, Katharina; Warren, Jason D; Grube, Manon; Carlyon, Robert P; Griffiths, Timothy D
2007-01-01
The entropy metric derived from information theory provides a means to quantify the amount of information transmitted in acoustic streams like speech or music. By systematically varying the entropy of pitch sequences, we sought brain areas where neural activity and energetic demands increase as a function of entropy. Such a relationship is predicted to occur in an efficient encoding mechanism that uses less computational resource when less information is present in the signal: we specifically tested the hypothesis that such a relationship is present in the planum temporale (PT). In two convergent functional MRI studies, we demonstrated this relationship in PT for encoding, while furthermore showing that a distributed fronto-parietal network for retrieval of acoustic information is independent of entropy. The results establish PT as an efficient neural engine that demands less computational resource to encode redundant signals than those with high information content. PMID:17958472
Information-Theoretical Analysis of EEG Microstate Sequences in Python
Frederic von Wegner
2018-06-01
We present an open-source Python package to compute information-theoretical quantities for electroencephalographic data. Electroencephalography (EEG) measures the electrical potential generated by the cerebral cortex, and the set of spatial patterns projected by the brain's electrical potential on the scalp surface can be clustered into a set of representative maps called EEG microstates. Microstate time series are obtained by competitively fitting the microstate maps back into the EEG data set, i.e., by substituting the EEG data at a given time with the label of the microstate that has the highest similarity with the actual EEG topography. As microstate sequences consist of non-metric random variables, e.g., the letters A–D, we recently introduced information-theoretical measures to quantify these time series. In wakeful resting state EEG recordings, we found new characteristics of microstate sequences such as periodicities related to EEG frequency bands. The algorithms used are here provided as an open-source package and their use is explained in a tutorial style. The package is self-contained and the programming style is procedural, focusing on code intelligibility and easy portability. Using a sample EEG file, we demonstrate how to perform EEG microstate segmentation using the modified K-means approach, and how to compute and visualize the recently introduced information-theoretical tests and quantities. The time-lagged mutual information function is derived as a discrete symbolic alternative to the autocorrelation function for metric time series and confidence intervals are computed from Markov chain surrogate data. The software package provides an open-source extension to the existing implementations of the microstate transform and is specifically designed to analyze resting state EEG recordings.
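The kind of quantities the abstract describes (entropy and time-lagged mutual information of a symbolic label sequence) can be estimated from plug-in frequencies. This is a generic sketch on a hypothetical periodic label sequence, not code from the package itself:

```python
import math
from collections import Counter

def shannon_entropy(seq):
    """Shannon entropy (bits) of a symbolic sequence, e.g. microstate labels."""
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in Counter(seq).values())

def lagged_mutual_information(seq, lag):
    """I(X_t ; X_{t+lag}) estimated from symbol-pair frequencies."""
    pairs = list(zip(seq, seq[lag:]))
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), c in joint.items():
        p_xy = c / n
        mi += p_xy * math.log2(p_xy / ((px[x] / n) * (py[y] / n)))
    return mi

seq = list("ABCD" * 50)  # hypothetical, perfectly periodic label sequence
print(shannon_entropy(seq))                   # 2 bits: four equiprobable labels
print(lagged_mutual_information(seq, lag=1))  # near-maximal: next label is determined
```

For real microstate sequences the lagged mutual information decays with lag, which is what makes it a symbolic analogue of the autocorrelation function.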
Information theoretic bounds for compressed sensing in SAR imaging
Jingxiong, Zhang; Ke, Yang; Jianzhong, Guo
2014-01-01
Compressed sensing (CS) is a new framework for sampling and reconstructing sparse signals from measurements significantly fewer than those prescribed by Nyquist rate in the Shannon sampling theorem. This new strategy, applied in various application areas including synthetic aperture radar (SAR), relies on two principles: sparsity, which is related to the signals of interest, and incoherence, which refers to the sensing modality. An important question in CS-based SAR system design concerns sampling rate necessary and sufficient for exact or approximate recovery of sparse signals. In the literature, bounds of measurements (or sampling rate) in CS have been proposed from the perspective of information theory. However, these information-theoretic bounds need to be reviewed and, if necessary, validated for CS-based SAR imaging, as there are various assumptions made in the derivations of lower and upper bounds on sub-Nyquist sampling rates, which may not hold true in CS-based SAR imaging. In this paper, information-theoretic bounds of sampling rate will be analyzed. For this, the SAR measurement system is modeled as an information channel, with channel capacity and rate-distortion characteristics evaluated to enable the determination of sampling rates required for recovery of sparse scenes. Experiments based on simulated data will be undertaken to test the theoretic bounds against empirical results about sampling rates required to achieve certain detection error probabilities.
Information-theoretic approach to uncertainty importance
Park, C.K.; Bari, R.A.
1985-01-01
A method is presented for importance analysis in probabilistic risk assessments (PRA) for which the results of interest are characterized by full uncertainty distributions and not just point estimates. The method is based on information theory in which entropy is a measure of uncertainty of a probability density function. We define the relative uncertainty importance between two events as the ratio of the two exponents of the entropies. For the log-normal and log-uniform distributions the importance measure is comprised of the median (central tendency) and of the logarithm of the error factor (uncertainty). Thus, if accident sequences are ranked this way, and the error factors are not all equal, then a different rank order would result than if the sequences were ranked by the central tendency measure alone. As an illustration, the relative importance of internal events and in-plant fires was computed on the basis of existing PRA results.
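For the log-normal case mentioned in the abstract, the exponent of the differential entropy factors into the median and the error factor, which can be sketched directly. The medians and error factors below are hypothetical, and the 1.645 quantile factor assumes the error factor is defined as the 95th percentile divided by the median:

```python
import math

def lognormal_entropy_exponent(median, error_factor):
    """exp(h) for a log-normal with given median and 95% error factor.

    Differential entropy (nats): h = ln(median) + ln(sigma * sqrt(2*pi*e)),
    with sigma = ln(EF) / 1.645, where EF = 95th percentile / median.
    """
    sigma = math.log(error_factor) / 1.645
    return median * sigma * math.sqrt(2 * math.pi * math.e)

# hypothetical accident sequences: identical median frequency, different error factors
imp = lognormal_entropy_exponent(1e-5, 10) / lognormal_entropy_exponent(1e-5, 3)
print(f"relative uncertainty importance: {imp:.2f}")
```

With equal medians the ratio reduces to ln(10)/ln(3), showing how sequences with identical central tendency can still differ in uncertainty importance.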
Exploring super-gaussianity towards robust information-theoretical time delay estimation
Petsatodis, Theodoros; Talantzis, Fotios; Boukis, Christos
2013-01-01
...the effect upon TDE when modeling the source signal with different speech-based distributions. An information theoretical TDE method indirectly encapsulating higher order statistics (HOS) formed the basis of this work. The underlying assumption of a Gaussian distributed source has been replaced...
Information-Theoretic Approaches for Evaluating Complex Adaptive Social Simulation Systems
Omitaomu, Olufemi A [ORNL; Ganguly, Auroop R [ORNL; Jiao, Yu [ORNL
2009-01-01
In this paper, we propose information-theoretic approaches for comparing and evaluating complex agent-based models. In information theoretic terms, entropy and mutual information are two measures of system complexity. We used entropy as a measure of the regularity of the number of agents in a social class, and mutual information as a measure of information shared by two social classes. Using our approaches, we compared two analogous agent-based (AB) models developed for a regional-scale social-simulation system. The first AB model, called ABM-1, is a complex AB model built with 10,000 agents on a desktop environment and used aggregate data; the second AB model, ABM-2, was built with 31 million agents on a high-performance computing framework located at Oak Ridge National Laboratory, and fine-resolution data from the LandScan Global Population Database. The initializations were slightly different, with ABM-1 using samples from a probability distribution and ABM-2 using polling data from Gallup for a deterministic initialization. The geographical and temporal domain was present-day Afghanistan, and the end result was the number of agents with one of three behavioral modes (pro-insurgent, neutral, and pro-government) corresponding to the population mindshare. The theories embedded in each model were identical, and the test simulations focused on a test of three leadership theories (legitimacy, coercion, and representative) and two social mobilization theories (social influence and repression). The theories are tied together using the Cobb-Douglas utility function. Based on our results, the hypothesis that performance measures can be developed to compare and contrast AB models appears to be supported. Furthermore, we observed significant bias in the two models. Even so, further tests and investigations are required not only with a wider class of theories and AB models, but also with additional observed or simulated data and more comprehensive performance measures.
Information Theoretic Tools for Parameter Fitting in Coarse Grained Models
Kalligiannaki, Evangelia; Harmandaris, Vagelis; Katsoulakis, Markos A.; Plechac, Petr
2015-01-01
We study the application of information theoretic tools for model reduction in the case of systems driven by stochastic dynamics out of equilibrium. The model/dimension reduction is considered by proposing parametrized coarse grained dynamics...
Information theoretic learning Renyi's entropy and Kernel perspectives
Principe, Jose C
2010-01-01
This book presents the first cohesive treatment of Information Theoretic Learning (ITL) algorithms to adapt linear or nonlinear learning machines in both supervised and unsupervised paradigms. ITL is a framework where the conventional concepts of second order statistics (covariance, L2 distances, correlation functions) are substituted by scalars and functions with information theoretic underpinnings, respectively entropy, mutual information and correntropy. ITL quantifies the stochastic structure of the data beyond second order statistics for improved performance without using full-blown Bayesi...
Nonlocal correlations as an information-theoretic resource
Barrett, Jonathan; Massar, Serge; Pironio, Stefano; Linden, Noah; Popescu, Sandu; Roberts, David
2005-01-01
It is well known that measurements performed on spatially separated entangled quantum systems can give rise to correlations that are nonlocal, in the sense that a Bell inequality is violated. They cannot, however, be used for superluminal signaling. It is also known that it is possible to write down sets of 'superquantum' correlations that are more nonlocal than is allowed by quantum mechanics, yet are still nonsignaling. Viewed as an information-theoretic resource, superquantum correlations are very powerful at reducing the amount of communication needed for distributed computational tasks. An intriguing question is why quantum mechanics does not allow these more powerful correlations. We aim to shed light on the range of quantum possibilities by placing them within a wider context. With this in mind, we investigate the set of correlations that are constrained only by the no-signaling principle. These correlations form a polytope, which contains the quantum correlations as a (proper) subset. We determine the vertices of the no-signaling polytope in the case that two observers each choose from two possible measurements with d outcomes. We then consider how interconversions between different sorts of correlations may be achieved. Finally, we consider some multipartite examples.
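The 'superquantum' correlations mentioned here can be made concrete with the standard Popescu-Rohrlich (PR) box, a no-signaling distribution that reaches the algebraic maximum of the CHSH expression. This is a generic sketch of that textbook example, not code from the paper:

```python
import itertools

def pr_box(a, b, x, y):
    """Popescu-Rohrlich box: outputs satisfy a XOR b = x AND y, uniformly at random."""
    return 0.5 if (a ^ b) == (x & y) else 0.0

def chsh(box):
    """CHSH value S = sum over settings (x, y) of (-1)^(x*y) * E(x, y)."""
    S = 0.0
    for x, y in itertools.product((0, 1), repeat=2):
        # correlator E(x, y) = sum over outcomes of (-1)^(a XOR b) * P(a, b | x, y)
        E = sum((-1) ** (a ^ b) * box(a, b, x, y)
                for a, b in itertools.product((0, 1), repeat=2))
        S += (-1) ** (x * y) * E
    return S

print(chsh(pr_box))  # 4.0: above the quantum (Tsirelson) bound of 2*sqrt(2) ~ 2.83
```

The PR box is a vertex of the no-signaling polytope for two settings and two outcomes, yet it lies outside the quantum set, which is exactly the gap the abstract investigates.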
Robust and distributed hypothesis testing
Gül, Gökhan
2017-01-01
This book generalizes and extends the available theory in robust and decentralized hypothesis testing. In particular, it presents a robust test for modeling errors which is independent of the assumptions that a sufficiently large number of samples is available and that the distance measure is the KL-divergence. Here, the distance can be chosen from a much more general model, which includes the KL-divergence as a very special case. This is then extended by various means. A minimax robust test that is robust against both outliers and modeling errors is presented. Minimax robustness properties of the given tests are also explicitly proven for fixed sample size and sequential probability ratio tests. The theory of robust detection is extended to robust estimation, and the theory of robust distributed detection is extended to classes of distributions that are not necessarily stochastically bounded. It is shown that the quantization functions for the decision rules can also be chosen as non-monotone. Finally, the boo...
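To make the "much more general" distance model concrete, here is a small illustrative sketch (my own, not from the book) of the f-divergence family, which contains the KL-divergence as the special case f(t) = t log t:

```python
import math

def f_divergence(p, q, f):
    """Generic f-divergence D_f(P||Q) = sum_i q_i * f(p_i / q_i) over discrete distributions."""
    return sum(qi * f(pi / qi) for pi, qi in zip(p, q))

kl = lambda t: t * math.log(t) if t > 0 else 0.0   # generator for KL-divergence
tv = lambda t: 0.5 * abs(t - 1)                    # generator for total variation

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(f_divergence(p, q, kl))  # KL(P||Q) ≈ 0.0253 nats
print(f_divergence(p, q, tv))  # total-variation distance = 0.1
```

Swapping the generator f swaps the distance while leaving the test machinery unchanged, which is the sense in which KL is "a very special case".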
Information-theoretic temporal Bell inequality and quantum computation
Morikoshi, Fumiaki
2006-01-01
An information-theoretic temporal Bell inequality is formulated to contrast classical and quantum computations. Any classical algorithm satisfies the inequality, while quantum ones can violate it. Therefore, the violation of the inequality is an immediate consequence of the quantumness in the computation. Furthermore, this approach suggests a notion of temporal nonlocality in quantum computation
Biometric security from an information-theoretical perspective
Ignatenko, T.; Willems, F.M.J.
2012-01-01
In this review, biometric systems are studied from an information-theoretical point of view. In the first part, biometric authentication systems are studied. The objective of these systems is, by observing correlated enrollment and authentication biometric sequences, to generate or convey as large as...
Physics Without Physics. The Power of Information-theoretical Principles
D'Ariano, Giacomo Mauro
2017-01-01
David Finkelstein was very fond of the new information-theoretic paradigm of physics advocated by John Archibald Wheeler and Richard Feynman. Only recently, however, the paradigm has concretely shown its full power, with the derivation of quantum theory (Chiribella et al., Phys. Rev. A 84:012311, 2011; D'Ariano et al., 2017) and of free quantum field theory (D'Ariano and Perinotti, Phys. Rev. A 90:062106, 2014; Bisio et al., Phys. Rev. A 88:032301, 2013; Bisio et al., Ann. Phys. 354:244, 2015; Bisio et al., Ann. Phys. 368:177, 2016) from informational principles. The paradigm has opened for the first time the possibility of avoiding physical primitives in the axioms of the physical theory, allowing a re-foundation of the whole physics over logically solid grounds. In addition to such methodological value, the new information-theoretic derivation of quantum field theory is particularly interesting for establishing a theoretical framework for quantum gravity, with the idea of obtaining gravity itself as emergent from the quantum information processing, as also suggested by the role played by information in the holographic principle (Susskind, J. Math. Phys. 36:6377, 1995; Bousso, Rev. Mod. Phys. 74:825, 2002). In this paper I review how free quantum field theory is derived without using mechanical primitives, including space-time, special relativity, Hamiltonians, and quantization rules. The theory is simply provided by the simplest quantum algorithm encompassing a countable set of quantum systems whose network of interactions satisfies the three following simple principles: homogeneity, locality, and isotropy. The inherent discrete nature of the informational derivation leads to an extension of quantum field theory in terms of a quantum cellular automata and quantum walks. A simple heuristic argument sets the scale to the Planck one, and the currently observed regime where discreteness is not visible is the so-called "relativistic regime" of small wavevectors, which
Model selection and inference a practical information-theoretic approach
Burnham, Kenneth P
1998-01-01
This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information-theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions; these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information-theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...
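The AIC machinery described above is compact enough to sketch directly. The following toy comparison (illustrative data, not from the book) penalizes the maximized log-likelihood with model dimension and prefers the model closer in Kullback-Leibler terms:

```python
import math

def gauss_loglik(data, mu, var):
    """Gaussian log-likelihood of a sample given mean mu and variance var."""
    n = len(data)
    return -0.5 * n * math.log(2 * math.pi * var) - sum((x - mu) ** 2 for x in data) / (2 * var)

def aic(loglik, k):
    # AIC = -2 log L + 2k, with k the number of estimated parameters; lower is better.
    return -2.0 * loglik + 2.0 * k

data = [2.1, 1.9, 2.4, 2.0, 1.6, 2.3]
mu_hat = sum(data) / len(data)
var_hat = sum((x - mu_hat) ** 2 for x in data) / len(data)

aic_full = aic(gauss_loglik(data, mu_hat, var_hat), k=2)  # mean and variance fitted
aic_null = aic(gauss_loglik(data, 0.0, var_hat), k=1)     # mean fixed at zero
print(aic_full < aic_null)  # True: the fitted-mean model wins despite its extra parameter
```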
Information-Theoretic Inference of Large Transcriptional Regulatory Networks
Meyer Patrick
2007-01-01
The paper presents MRNET, an original method for inferring genetic networks from microarray data. The method is based on maximum relevance/minimum redundancy (MRMR), an effective information-theoretic technique for feature selection in supervised learning. The MRMR principle consists of selecting, among the least redundant variables, the ones that have the highest mutual information with the target. MRNET extends this feature selection principle to networks in order to infer gene-dependence relationships from microarray data. The paper assesses MRNET by benchmarking it against RELNET, CLR, and ARACNE, three state-of-the-art information-theoretic methods for large (up to several thousands of genes) network inference. Experimental results on thirty synthetically generated microarray datasets show that MRNET is competitive with these methods.
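The greedy MRMR selection step can be sketched in a few lines. This is a generic plug-in version on discrete data (my own illustration, not the MRNET code); `f1`, `f2`, `f3` and the target are hypothetical toy variables:

```python
from collections import Counter
import math

def entropy(xs):
    n = len(xs)
    return -sum(c / n * math.log2(c / n) for c in Counter(xs).values())

def mutual_info(xs, ys):
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def mrmr(features, target, k):
    """Greedy MRMR: repeatedly pick the feature maximizing relevance I(f; target)
    minus its mean redundancy (mutual information) with the already-selected set."""
    selected = []
    remaining = dict(features)
    while remaining and len(selected) < k:
        def score(name):
            rel = mutual_info(remaining[name], target)
            red = (sum(mutual_info(remaining[name], features[s]) for s in selected)
                   / len(selected)) if selected else 0.0
            return rel - red
        best = max(remaining, key=score)
        selected.append(best)
        del remaining[best]
    return selected

target = [0, 0, 1, 1, 0, 1, 0, 1]
feats = {'f1': [0, 0, 1, 1, 0, 1, 0, 0],   # predicts target except the last sample
         'f2': [0, 0, 1, 1, 0, 1, 0, 0],   # exact duplicate of f1 (pure redundancy)
         'f3': [0, 0, 0, 0, 0, 0, 0, 1]}   # flags the sample f1 misses
print(mrmr(feats, target, k=2))  # ['f1', 'f3']: the redundant f2 is skipped
```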
Inform: Efficient Information-Theoretic Analysis of Collective Behaviors
Douglas G. Moore
2018-06-01
The study of collective behavior has traditionally relied on a variety of different methodological tools, ranging from more theoretical methods such as population or game-theoretic models to empirical ones like Monte Carlo or multi-agent simulations. An approach that is increasingly being explored is the use of information theory as a methodological framework to study the flow of information and the statistical properties of collectives of interacting agents. While a few general-purpose toolkits exist, most of the existing software for information-theoretic analysis of collective systems is limited in scope. We introduce Inform, an open-source framework for efficient information-theoretic analysis that exploits the computational power of a C library while simplifying its use through a variety of wrappers for common higher-level scripting languages. We focus on two such wrappers here: PyInform (Python) and rinform (R). Inform and its wrappers are cross-platform and general-purpose. They include classical information-theoretic measures, measures of information dynamics and information-based methods to study the statistical behavior of collective systems, and expose a lower-level API that allows users to construct measures of their own. We describe the architecture of the Inform framework, study its computational efficiency and use it to analyze three different case studies of collective behavior: biochemical information storage in regenerating planaria, nest-site selection in the ant Temnothorax rugatulus, and collective decision making in multi-agent simulations.
Information theoretic analysis of canny edge detection in visual communication
Jiang, Bo; Rahman, Zia-ur
2011-06-01
In general edge detection evaluation, edge detectors are examined, analyzed, and compared either visually or with a metric for a specific application. This analysis is usually independent of the characteristics of the image-gathering, transmission and display processes that do impact the quality of the acquired image and thus, the resulting edge image. We propose a new information-theoretic analysis of edge detection that unites the different components of the visual communication channel and assesses edge detection algorithms in an integrated manner based on Shannon's information theory. The edge detection algorithm here is considered to achieve high performance only if the information rate from the scene to the edge approaches the maximum possible. Thus, by holding the initial conditions of the visual communication system constant, different edge detection algorithms can be evaluated. This analysis is normally limited to linear shift-invariant filters, so in order to examine the Canny edge operator in our proposed system, we need to estimate its "power spectral density" (PSD). Since the Canny operator is non-linear and shift-variant, we perform the estimation for a set of different system environment conditions using simulations. In our paper we will first introduce the PSD of the Canny operator for a range of system parameters. Then, using the estimated PSD, we will assess the Canny operator using information-theoretic analysis. The information-theoretic metric is also used to compare the performance of the Canny operator with other edge-detection operators. This also provides a simple tool for selecting appropriate edge-detection algorithms based on system parameters, and for adjusting their parameters to maximize information throughput.
Information-theoretic signatures of biodiversity in the barcoding gene.
Barbosa, Valmir C
2018-08-14
Analyzing the information content of DNA, though holding the promise to help quantify how the processes of evolution have led to information gain throughout the ages, has remained an elusive goal. Paradoxically, one of the main reasons for this has been precisely the great diversity of life on the planet: if on the one hand this diversity is a rich source of data for information-content analysis, on the other hand there is so much variation as to make the task unmanageable. During the past decade or so, however, succinct fragments of the COI mitochondrial gene, which is present in all animal phyla and in a few others, have been shown to be useful for species identification through DNA barcoding. A few million such fragments are now publicly available through the BOLD systems initiative, thus providing an unprecedented opportunity for relatively comprehensive information-theoretic analyses of DNA to be attempted. Here we show how a generalized form of total correlation can yield distinctive information-theoretic descriptors of the phyla represented in those fragments. In order to illustrate the potential of this analysis to provide new insight into the evolution of species, we performed principal component analysis on standardized versions of the said descriptors for 23 phyla. Surprisingly, we found that, though based solely on the species represented in the data, the first principal component correlates strongly with the natural logarithm of the number of all known living species for those phyla. The new descriptors thus constitute clear information-theoretic signatures of the processes whereby evolution has given rise to current biodiversity, which suggests their potential usefulness in further related studies. Copyright © 2018 Elsevier Ltd. All rights reserved.
Information Theoretic Tools for Parameter Fitting in Coarse Grained Models
Kalligiannaki, Evangelia
2015-01-07
We study the application of information-theoretic tools for model reduction in the case of systems driven by stochastic dynamics out of equilibrium. The model/dimension reduction is carried out by proposing parametrized coarse-grained dynamics and finding the optimal parameter set for which the relative entropy rate with respect to the atomistic dynamics is minimized. The minimization problem leads to a generalization of the force-matching methods to nonequilibrium systems. A multiplicative noise example reveals the importance of the diffusion coefficient in the optimization problem.
Chenguang Shi
2014-01-01
Widely distributed radar network architectures can provide significant performance improvement for target detection and localization. For a fixed radar network, the achievable target detection performance may go beyond a predetermined threshold with full transmitted power allocation, which is extremely vulnerable in modern electronic warfare. In this paper, we study the problem of low probability of intercept (LPI) design for radar networks and propose two novel LPI optimization schemes based on information-theoretic criteria. For a predefined threshold of target detection, the Schleher intercept factor is minimized by optimizing transmission power allocation among netted radars in the network. Due to the lack of an analytical closed-form expression for the receiver operating characteristic (ROC), we employ two information-theoretic criteria, namely Bhattacharyya distance and J-divergence, as the metrics for target detection performance. The resulting nonconvex and nonlinear LPI optimization problems associated with different information-theoretic criteria are cast under a unified framework, and a nonlinear-programming-based genetic algorithm (NPGA) is used to tackle the optimization problems in the framework. Numerical simulations demonstrate that our proposed LPI strategies are effective in enhancing the LPI performance for radar networks.
Information-theoretic decomposition of embodied and situated systems.
Da Rold, Federico
2018-07-01
The embodied and situated view of cognition stresses the importance of real-time and nonlinear bodily interaction with the environment for developing concepts and structuring knowledge. In this article, populations of robots controlled by an artificial neural network learn a wall-following task through artificial evolution. At the end of the evolutionary process, time series are recorded from perceptual and motor neurons of selected robots. Information-theoretic measures are estimated on pairings of variables to unveil nonlinear interactions that structure the agent-environment system. Specifically, the mutual information is utilized to quantify the degree of dependence and the transfer entropy to detect the direction of the information flow. Furthermore, the system is analyzed with the local form of such measures, thus capturing the underlying dynamics of information. Results show that different measures are interdependent and complementary in uncovering aspects of the robots' interaction with the environment, as well as characteristics of the functional neural structure. Therefore, the set of information-theoretic measures provides a decomposition of the system, capturing the intricacy of nonlinear relationships that characterize robots' behavior and neural dynamics. Copyright © 2018 Elsevier Ltd. All rights reserved.
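The two measures paired in this analysis, mutual information (degree of dependence) and transfer entropy (direction of flow), can be estimated from discrete time series with plug-in entropies. A minimal sketch with toy series and history length 1 (my own construction, not the article's robot data):

```python
from collections import Counter
import math

def h(*seqs):
    """Plug-in joint Shannon entropy (bits) of aligned discrete sequences."""
    n = len(seqs[0])
    return -sum(c / n * math.log2(c / n) for c in Counter(zip(*seqs)).values())

def mutual_info(x, y):
    return h(x) + h(y) - h(x, y)

def transfer_entropy(src, dst):
    # TE(src -> dst) = I(dst_{t+1}; src_t | dst_t), with history length 1.
    d1, d0, s0 = dst[1:], dst[:-1], src[:-1]
    return h(d1, d0) + h(d0, s0) - h(d0) - h(d1, d0, s0)

src = [0, 1, 1, 0, 1, 0, 0, 1]
dst = [0] + src[:-1]            # dst copies src with a one-step delay

print(mutual_info(src, dst))    # symmetric: quantifies dependence, not direction
te_fwd = transfer_entropy(src, dst)
te_bwd = transfer_entropy(dst, src)
print(te_fwd > te_bwd)          # True: the information flows from src to dst
```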
Information-Theoretic Bounded Rationality and ε-Optimality
Daniel A. Braun
2014-08-01
Bounded rationality concerns the study of decision makers with limited information processing resources. Previously, the free energy difference functional has been suggested to model bounded rational decision making, as it provides a natural trade-off between an energy or utility function that is to be optimized and information processing costs that are measured by entropic search costs. The main question of this article is how the information-theoretic free energy model relates to simple ε-optimality models of bounded rational decision making, where the decision maker is satisfied with any action in an ε-neighborhood of the optimal utility. We find that the stochastic policies that optimize the free energy trade-off comply with the notion of ε-optimality. Moreover, this optimality criterion even holds when the environment is adversarial. We conclude that the study of bounded rationality based on ε-optimality criteria that abstract away from the particulars of the information processing constraints is compatible with the information-theoretic free energy model of bounded rationality.
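The free-energy-optimal stochastic policies discussed above take a Boltzmann form. This small sketch (uniform prior and toy utilities assumed, my own illustration) shows how raising the inverse temperature β, i.e. lowering the information-processing cost weight, drives the policy into ever smaller ε-neighborhoods of the optimal utility:

```python
import math

def boltzmann_policy(utilities, beta):
    """Free-energy-optimal policy under a uniform prior: p(a) proportional to exp(beta * U(a)).
    beta trades expected utility against entropic information-processing cost."""
    w = [math.exp(beta * u) for u in utilities]
    z = sum(w)
    return [wi / z for wi in w]

U = [1.0, 0.9, 0.2]                 # utilities of three actions; optimum is 1.0
for beta in (0.0, 5.0, 50.0):
    p = boltzmann_policy(U, beta)
    expected_u = sum(pi * ui for pi, ui in zip(p, U))
    print(beta, round(expected_u, 3))
# beta = 0 yields the uniform policy; as beta grows, expected utility
# approaches the optimum, so the policy is epsilon-optimal for shrinking epsilon.
```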
Information-Theoretic Properties of Auditory Sequences Dynamically Influence Expectation and Memory.
Agres, Kat; Abdallah, Samer; Pearce, Marcus
2018-01-01
A basic function of cognition is to detect regularities in sensory input to facilitate the prediction and recognition of future events. It has been proposed that these implicit expectations arise from an internal predictive coding model, based on knowledge acquired through processes such as statistical learning, but it is unclear how different types of statistical information affect listeners' memory for auditory stimuli. We used a combination of behavioral and computational methods to investigate memory for non-linguistic auditory sequences. Participants repeatedly heard tone sequences varying systematically in their information-theoretic properties. Expectedness ratings of tones were collected during three listening sessions, and a recognition memory test was given after each session. Information-theoretic measures of sequential predictability significantly influenced listeners' expectedness ratings, and variations in these properties had a significant impact on memory performance. Predictable sequences yielded increasingly better memory performance with increasing exposure. Computational simulations using a probabilistic model of auditory expectation suggest that listeners dynamically formed a new, and increasingly accurate, implicit cognitive model of the information-theoretic structure of the sequences throughout the experimental session. Copyright © 2017 Cognitive Science Society, Inc.
One-dimensional barcode reading: an information theoretic approach
Houni, Karim; Sawaya, Wadih; Delignon, Yves
2008-03-01
In the convergence context of identification technology and information-data transmission, the barcode found its place as the simplest and the most pervasive solution for new uses, especially within mobile commerce, bringing youth to this long-lived technology. From a communication theory point of view, a barcode is a singular coding based on a graphical representation of the information to be transmitted. We present an information theoretic approach for 1D image-based barcode reading analysis. With a barcode facing the camera, distortions and acquisition are modeled as a communication channel. The performance of the system is evaluated by means of the average mutual information quantity. On the basis of this theoretical criterion for a reliable transmission, we introduce two new measures: the theoretical depth of field and the theoretical resolution. Simulations illustrate the gain of this approach.
Information theoretical assessment of visual communication with wavelet coding
Rahman, Zia-ur
1995-06-01
A visual communication channel can be characterized by the efficiency with which it conveys information, and the quality of the images restored from the transmitted data. Efficient data representation requires the use of the constraints of the visual communication channel. Our information-theoretic analysis combines the design of the wavelet compression algorithm with the design of the visual communication channel. Shannon's communication theory, Wiener's restoration filter, and the critical design factors of image gathering and display are combined to provide metrics for measuring the efficiency of data transmission, and for quantitatively assessing the visual quality of the restored image. These metrics are: a) the mutual information η between the radiance field and the restored image, and b) the efficiency of the channel, which can be roughly measured as the ratio η/H, where H is the average number of bits used to transmit the data. Huck, et al. (Journal of Visual Communication and Image Representation, Vol. 4, No. 2, 1993) have shown that channels designed to maximize η also maximize ... Our assessment provides a framework for designing channels which provide the highest possible visual quality for a given amount of data under the critical design limitations of the image gathering and display devices. Results show that a trade-off exists between the maximum realizable information of the channel and its efficiency: an increase in one leads to a decrease in the other. The final selection of which of these quantities to maximize is, of course, application dependent.
Information theoretic analysis of edge detection in visual communication
Jiang, Bo; Rahman, Zia-ur
2010-08-01
Generally, the designs of digital image processing algorithms and image gathering devices remain separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the artifacts introduced into the process by the image gathering process. However, experiments show that the image gathering process profoundly impacts the performance of digital image processing and the quality of the resulting images. Huck et al. proposed a definitive theoretical analysis of visual communication channels, where the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. In this paper, we perform an end-to-end information-theoretic system analysis to assess edge detection methods. We evaluate the performance of the different algorithms as a function of the characteristics of the scene and the parameters, such as sampling, additive noise, etc., that define the image gathering system. The edge detection algorithm is regarded as having high performance only if the information rate from the scene to the edge approaches the maximum possible. This goal can be achieved only by jointly optimizing all processes. People generally use subjective judgment to compare different edge detection methods; there is no common tool that can be used to evaluate the performance of the different algorithms and to guide the selection of the best algorithm for a given system or scene. Our information-theoretic assessment becomes this new tool, which allows us to compare the different edge detection operators in a common environment.
Universality in an information-theoretic motivated nonlinear Schrodinger equation
Parwani, R; Tabia, G
2007-01-01
Using perturbative methods, we analyse a nonlinear generalization of Schrodinger's equation that had previously been obtained through information-theoretic arguments. We obtain analytical expressions for the leading correction, in terms of the nonlinearity scale, to the energy eigenvalues of the linear Schrodinger equation in the presence of an external potential and observe some generic features. In one space dimension these are (i) for nodeless ground states, the energy shifts are subleading in the nonlinearity parameter compared to the shifts for the excited states; (ii) the shifts for the excited states are due predominantly to contribution from the nodes of the unperturbed wavefunctions, and (iii) the energy shifts for excited states are positive for small values of a regulating parameter and negative at large values, vanishing at a universal critical value that is not manifest in the equation. Some of these features hold true for higher dimensional problems. We also study two exactly solved nonlinear Schrodinger equations so as to contrast our observations. Finally, we comment on the possible significance of our results if the nonlinearity is physically realized
Information-theoretic limitations on approximate quantum cloning and broadcasting
Lemm, Marius; Wilde, Mark M.
2017-07-01
We prove quantitative limitations on any approximate simultaneous cloning or broadcasting of mixed states. The results are based on information-theoretic (entropic) considerations and generalize the well-known no-cloning and no-broadcasting theorems. We also observe and exploit the fact that the universal cloning machine on the symmetric subspace of n qudits and symmetrized partial trace channels are dual to each other. This duality manifests itself both in the algebraic sense of adjointness of quantum channels and in the operational sense that a universal cloning machine can be used as an approximate recovery channel for a symmetrized partial trace channel and vice versa. The duality extends to give control of the performance of generalized universal quantum cloning machines (UQCMs) on subspaces more general than the symmetric subspace. This gives a way to quantify the usefulness of a priori information in the context of cloning. For example, we can control the performance of an antisymmetric analog of the UQCM in recovering from the loss of n - k fermionic particles.
A comparison of SAR ATR performance with information theoretic predictions
Blacknell, David
2003-09-01
Performance assessment of automatic target detection and recognition algorithms for SAR systems (or indeed any other sensors) is essential if the military utility of the system / algorithm mix is to be quantified. This is a relatively straightforward task if extensive trials data from an existing system is used. However, a crucial requirement is to assess the potential performance of novel systems as a guide to procurement decisions. This task is no longer straightforward since a hypothetical system cannot provide experimental trials data. QinetiQ has previously developed a theoretical technique for classification algorithm performance assessment based on information theory. The purpose of the study presented here has been to validate this approach. To this end, experimental SAR imagery of targets has been collected using the QinetiQ Enhanced Surveillance Radar to allow algorithm performance assessments as a number of parameters are varied. In particular, performance comparisons can be made for (i) resolutions up to 0.1m, (ii) single channel versus polarimetric (iii) targets in the open versus targets in scrubland and (iv) use versus non-use of camouflage. The change in performance as these parameters are varied has been quantified from the experimental imagery whilst the information theoretic approach has been used to predict the expected variation of performance with parameter value. A comparison of these measured and predicted assessments has revealed the strengths and weaknesses of the theoretical technique as will be discussed in the paper.
An Information-Theoretic-Cluster Visualization for Self-Organizing Maps.
Brito da Silva, Leonardo Enzo; Wunsch, Donald C
2018-06-01
Improved data visualization will be a significant tool to enhance cluster analysis. In this paper, an information-theoretic-based method for cluster visualization using self-organizing maps (SOMs) is presented. The information-theoretic visualization (IT-vis) has the same structure as the unified distance matrix, but instead of depicting Euclidean distances between adjacent neurons, it displays the similarity between the distributions associated with adjacent neurons. Each SOM neuron has an associated subset of the data set whose cardinality controls the granularity of the IT-vis and with which the first- and second-order statistics are computed and used to estimate their probability density functions. These are used to calculate the similarity measure, based on Renyi's quadratic cross entropy and cross information potential (CIP). The introduced visualizations combine the low computational cost and kernel estimation properties of the representative CIP and the data structure representation of a single-linkage-based grouping algorithm to generate an enhanced SOM-based visualization. The visual quality of the IT-vis is assessed by comparing it with other visualization methods for several real-world and synthetic benchmark data sets. Thus, this paper also contains a significant literature survey. The experiments demonstrate the IT-vis cluster revealing capabilities, in which cluster boundaries are sharply captured. Additionally, the information-theoretic visualizations are used to perform clustering of the SOM. Compared with other methods, IT-vis of large SOMs yielded the best results in this paper, for which the quality of the final partitions was evaluated using external validity indices.
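The similarity measure underlying IT-vis, Renyi's quadratic cross entropy via the cross information potential (CIP), has a compact Parzen-window form. A one-dimensional illustrative sketch with a Gaussian kernel and toy samples (my own, not the paper's SOM code):

```python
import math

def gauss(x, y, sigma):
    """Gaussian kernel evaluated at the distance between two scalar samples."""
    return math.exp(-((x - y) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def cross_information_potential(xs, ys, sigma=1.0):
    """CIP: mean pairwise kernel between two sample sets, a Parzen estimate of
    the overlap integral of their densities. Renyi's quadratic cross entropy is -log(CIP)."""
    return sum(gauss(x, y, sigma) for x in xs for y in ys) / (len(xs) * len(ys))

near = [0.0, 0.1, -0.1]   # samples associated with one neuron
far = [5.0, 5.1, 4.9]     # samples associated with a distant neuron

cip_same = cross_information_potential(near, near)
cip_diff = cross_information_potential(near, far)
print(cip_same > cip_diff)  # True: overlapping distributions have larger CIP,
                            # hence smaller cross entropy, hence higher similarity
```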
Auxiliary Heat Exchanger Flow Distribution Test
Kaufman, J.S.; Bressler, M.M.
1983-01-01
The Auxiliary Heat Exchanger Flow Distribution Test was the first part of a test program to develop a water-cooled (tube-side), compact heat exchanger for removing heat from the circulating gas in a high-temperature gas-cooled reactor (HTGR). Measurements of velocity and pressure were made with various shell side inlet and outlet configurations. A flow configuration was developed which provides acceptable velocity distribution throughout the heat exchanger without adding excessive pressure drop
Information Theoretic Characterization of Physical Theories with Projective State Space
Zaopo, Marco
2015-08-01
Probabilistic theories are a natural framework in which to investigate the foundations of quantum theory and possible alternative or deeper theories. In a generic probabilistic theory, states of a physical system are represented as vectors of outcome probabilities and state spaces are convex cones. In this picture the physics of a given theory is related to the geometric shape of the cone of states. In quantum theory, for instance, the shape of the cone of states corresponds to a projective space over complex numbers. In this paper we investigate geometric constraints on the state space of a generic theory imposed by the following information-theoretic requirements: every state of a system that is not completely mixed is perfectly distinguishable from some other state in a single-shot measurement; the information capacity of physical systems is conserved under taking mixtures of states. These assumptions guarantee that a generic physical system satisfies a natural principle asserting that the more a state of the system is mixed, the less information can be stored in the system using that state as a logical value. We show that all theories satisfying the above assumptions are such that the shape of their cones of states is that of a projective space over a generic field of numbers. Remarkably, these theories constitute generalizations of quantum theory where the superposition principle holds with coefficients pertaining to a generic field of numbers in place of the complex numbers. If the field of numbers is trivial and contains only one element, we obtain classical theory. This result shows that the superposition principle is quite common among probabilistic theories, while its absence gives evidence of either classical theory or an implausible theory.
Distinguishing prognostic and predictive biomarkers: An information theoretic approach.
Sechidis, Konstantinos; Papangelou, Konstantinos; Metcalfe, Paul D; Svensson, David; Weatherall, James; Brown, Gavin
2018-05-02
The identification of biomarkers to support decision-making is central to personalised medicine, in both clinical and research scenarios. The challenge can be seen in two halves: identifying predictive markers, which guide the development/use of tailored therapies; and identifying prognostic markers, which guide other aspects of care and clinical trial planning, i.e. prognostic markers can be considered as covariates for stratification. Mistakenly assuming a biomarker to be predictive, when it is in fact largely prognostic (and vice-versa) is highly undesirable, and can result in financial, ethical and personal consequences. We present a framework for data-driven ranking of biomarkers on their prognostic/predictive strength, using a novel information theoretic method. This approach provides a natural algebra to discuss and quantify the individual predictive and prognostic strength, in a self-consistent mathematical framework. Our contribution is a novel procedure, INFO+, which naturally distinguishes the prognostic vs predictive role of each biomarker and handles higher order interactions. In a comprehensive empirical evaluation INFO+ outperforms more complex methods, most notably when noise factors dominate, and biomarkers are likely to be falsely identified as predictive, when in fact they are just strongly prognostic. Furthermore, we show that our methods can be 1-3 orders of magnitude faster than competitors, making it useful for biomarker discovery in 'big data' scenarios. Finally, we apply our methods to identify predictive biomarkers on two real clinical trials, and introduce a new graphical representation that provides greater insight into the prognostic and predictive strength of each biomarker. R implementations of the suggested methods are available at https://github.com/sechidis. konstantinos.sechidis@manchester.ac.uk. Supplementary data are available at Bioinformatics online.
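The prognostic/predictive split can be made concrete with plug-in information estimates. The sketch below illustrates the underlying idea only (prognostic strength as I(X;Y), predictive strength as the treatment interaction I(X;Y|T) − I(X;Y)); it is not the paper's actual INFO+ procedure, and the function names are ours:

```python
import numpy as np
from collections import Counter

def mutual_information(a, b):
    """Plug-in estimate of I(a; b) in bits for discrete sequences."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    return sum(c / n * np.log2(c * n / (pa[x] * pb[y]))
               for (x, y), c in pab.items())

def conditional_mi(a, b, t):
    """Plug-in estimate of I(a; b | t) in bits."""
    n = len(t)
    total = 0.0
    for tv, cnt in Counter(t).items():
        idx = [i for i in range(n) if t[i] == tv]
        total += cnt / n * mutual_information([a[i] for i in idx],
                                              [b[i] for i in idx])
    return total

# A purely predictive toy biomarker: outcome y = x XOR treatment t, so x
# alone says nothing about y, but x given t determines y completely.
x = [0, 0, 1, 1]
t = [0, 1, 0, 1]
y = [xi ^ ti for xi, ti in zip(x, t)]
prognostic = mutual_information(x, y)               # 0 bits here
predictive = conditional_mi(x, y, t) - prognostic   # 1 bit here
```

On this toy example the marker would be ranked purely predictive, which is exactly the case the abstract warns is easy to confuse with a strongly prognostic one.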
Several foundational and information theoretic implications of Bell’s theorem
Kar, Guruprasad; Banik, Manik
2016-08-01
In 1935, Albert Einstein and two colleagues, Boris Podolsky and Nathan Rosen (EPR), developed a thought experiment to demonstrate what they felt was a lack of completeness in quantum mechanics (QM). EPR also postulated the existence of a more fundamental theory in which the physical reality of any system would be completely described by the variables/states of that fundamental theory. This variable is commonly called a hidden variable, and the theory is called a hidden variable theory (HVT). In 1964, John Bell proposed an empirically verifiable criterion to test for the existence of these HVTs. He derived an inequality that must be satisfied by any theory fulfilling the conditions of locality and reality, and showed that QM, as it violates this inequality, is incompatible with any local-realistic theory. It was later shown that Bell's inequality (BI) can be derived from different sets of assumptions, and it also finds applications in useful information-theoretic protocols. In this review, we discuss various foundational as well as information-theoretic implications of BI. We also discuss some restricted features of quantum nonlocality and elaborate on the roles of the uncertainty principle and the complementarity principle in explaining them.
Information-theoretic characterization of dynamic energy systems
Bevis, Troy Lawson
The latter half of the 20th century saw tremendous growth in nearly every aspect of civilization. From the internet to transportation, the various infrastructures relied upon by society have become exponentially more complex. Energy systems are no exception, and today the power grid is one of the largest infrastructures in the history of the world. The growing infrastructure has led to an increase not only in the amount of energy produced, but also in the expectations placed on the energy systems themselves. The need for a power grid that is reliable, secure, and efficient is apparent, and there have been several initiatives to provide such a system. These rising expectations have led to growth in the renewable energy sources being integrated into the grid, a change that increases efficiency and disperses generation throughout the system. Although this change in the grid infrastructure is beneficial, it poses grand challenges in system-level control and operation. As the number of sources increases and becomes geographically distributed, the control systems are no longer local to the system. This means that communication networks must be enhanced to support multiple devices that must communicate reliably. A common solution for these new systems is to use wide area networks for the communication network, as opposed to point-to-point communication. Although a wide area network will support a large number of devices, it generally comes with a compromise in the form of latency in the communication system, so the device controller now has latency injected into the feedback loop. Also, renewable energy sources are largely non-dispatchable generation; that is, they are never guaranteed to be online and supplying the demanded energy. As renewable generation is typically modeled as a stochastic process, it would be useful to include this behavior in the control system algorithms. The combination of communication latency and stochastic
Numerical Estimation of Information Theoretic Measures for Large Data Sets
2013-01-30
Xiong, T P; Yan, L L; Zhou, F; Rehan, K; Liang, D F; Chen, L; Yang, W L; Ma, Z H; Feng, M; Vedral, V
2018-01-05
Most nonequilibrium processes in thermodynamics are quantified only by inequalities; however, the Jarzynski relation presents a remarkably simple and general equality relating nonequilibrium quantities with the equilibrium free energy, and this equality holds in both the classical and quantum regimes. We report a single-spin test and confirmation of the Jarzynski relation in the quantum regime using a single ultracold ^{40}Ca^{+} ion trapped in a harmonic potential, based on a general information-theoretic equality for a temporal evolution of the system sandwiched between two projective measurements. By considering both initially pure and mixed states, respectively, we verify, in an exact and fundamental fashion, the nonequilibrium quantum thermodynamics relevant to the mutual information and Jarzynski equality.
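For reference, the equality being tested can be written compactly. In the two-projective-measurement scheme, work is defined from the energy eigenvalues observed before and after the driving protocol (standard notation, not necessarily the authors'):

```latex
\left\langle e^{-\beta W} \right\rangle
  = \sum_{n,m} p^{0}_{n}\, p^{\tau}_{m|n}\,
    e^{-\beta\left(E^{\tau}_{m}-E^{0}_{n}\right)}
  = e^{-\beta \Delta F},
```

where \(p^{0}_{n}\) is the probability of measuring initial energy \(E^{0}_{n}\), \(p^{\tau}_{m|n}\) is the transition probability to final energy \(E^{\tau}_{m}\), \(\beta\) is the inverse temperature, and \(\Delta F\) is the equilibrium free-energy difference.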
Information Theoretical Analysis of Identification based on Active Content Fingerprinting
Farhadzadeh, Farzad; Willems, Frans M. J.; Voloshinovskiy, Sviatoslav
2014-01-01
Content fingerprinting and digital watermarking are techniques that are used for content protection and distribution monitoring. Over the past few years, both techniques have been well studied and their shortcomings understood. Recently, a new content fingerprinting scheme called "active content fingerprinting" was introduced to overcome these shortcomings. Active content fingerprinting aims to modify content so as to extract more robust fingerprints than conventional content fingerprinting...
Multipath interference test method for distributed amplifiers
Okada, Takahiro; Aida, Kazuo
2005-12-01
A method for testing distributed amplifiers is presented; the multipath interference (MPI) is detected as a beat spectrum between the multipath signal and the direct signal using a binary frequency-shift-keying (FSK) test signal. The lightwave source is composed of a DFB-LD directly modulated by a pulse stream passing through an equalizer, and emits an FSK signal with a frequency deviation of about 430 MHz at a repetition rate of 80-100 kHz. The receiver consists of a photodiode and an electrical spectrum analyzer (ESA). The baseband power-spectrum peak appearing at the FSK frequency deviation can be converted to an amount of MPI using a calibration chart. The test method improves the minimum detectable MPI to as low as -70 dB, compared with the -50 dB of the conventional test method. The detailed design and performance of the proposed method are discussed, including the MPI simulator for the calibration procedure, computer simulations evaluating the error caused by the FSK repetition rate and the length of the fiber under test, and experiments on single-mode fibers and a distributed Raman amplifier.
Parametric sensitivity analysis for stochastic molecular systems using information theoretic metrics
Tsourtis, Anastasios, E-mail: tsourtis@uoc.gr [Department of Mathematics and Applied Mathematics, University of Crete, Crete (Greece); Pantazis, Yannis, E-mail: pantazis@math.umass.edu; Katsoulakis, Markos A., E-mail: markos@math.umass.edu [Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003 (United States); Harmandaris, Vagelis, E-mail: harman@uoc.gr [Department of Mathematics and Applied Mathematics, University of Crete, and Institute of Applied and Computational Mathematics (IACM), Foundation for Research and Technology Hellas (FORTH), GR-70013 Heraklion, Crete (Greece)
2015-07-07
In this paper, we present a parametric sensitivity analysis (SA) methodology for continuous-time, continuous-space Markov processes represented by stochastic differential equations. In particular, we focus on stochastic molecular dynamics as described by the Langevin equation. The utilized SA method is based on the computation of the information-theoretic (and thermodynamic) quantity of relative entropy rate (RER) and the associated Fisher information matrix (FIM) between path distributions, and it is an extension of the work proposed by Y. Pantazis and M. A. Katsoulakis [J. Chem. Phys. 138, 054115 (2013)]. A major advantage of the pathwise SA method is that both RER and pathwise FIM depend only on averages of the force field; therefore, they are tractable and computable as ergodic averages from a single run of the molecular dynamics simulation, both in equilibrium and in non-equilibrium steady-state regimes. We validate the performance of the extended SA method on two different stochastic molecular systems, a standard Lennard-Jones fluid and an all-atom methane liquid, and compare the obtained parameter sensitivities with parameter sensitivities on three popular and well-studied observable functions, namely, the radial distribution function, the mean squared displacement, and the pressure. Results show that the RER-based sensitivities are highly correlated with the observable-based sensitivities.
Jenkinson, Garrett; Abante, Jordi; Feinberg, Andrew P; Goutsias, John
2018-03-07
DNA methylation is a stable form of epigenetic memory used by cells to control gene expression. Whole genome bisulfite sequencing (WGBS) has emerged as a gold-standard experimental technique for studying DNA methylation by producing high resolution genome-wide methylation profiles. Statistical modeling and analysis is employed to computationally extract and quantify information from these profiles in an effort to identify regions of the genome that demonstrate crucial or aberrant epigenetic behavior. However, the performance of most currently available methods for methylation analysis is hampered by their inability to directly account for statistical dependencies between neighboring methylation sites, thus ignoring significant information available in WGBS reads. We present a powerful information-theoretic approach for genome-wide modeling and analysis of WGBS data based on the 1D Ising model of statistical physics. This approach takes into account correlations in methylation by utilizing a joint probability model that encapsulates all information available in WGBS methylation reads and produces accurate results even when applied on single WGBS samples with low coverage. Using the Shannon entropy, our approach provides a rigorous quantification of methylation stochasticity in individual WGBS samples genome-wide. Furthermore, it utilizes the Jensen-Shannon distance to evaluate differences in methylation distributions between a test and a reference sample. Differential performance assessment using simulated and real human lung normal/cancer data demonstrate a clear superiority of our approach over DSS, a recently proposed method for WGBS data analysis. Critically, these results demonstrate that marginal methods become statistically invalid when correlations are present in the data. This contribution demonstrates clear benefits and the necessity of modeling joint probability distributions of methylation using the 1D Ising model of statistical physics and of
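The two quantities the approach relies on, Shannon entropy and the Jensen-Shannon distance, are easy to state concretely. A minimal sketch (ours, not the authors' Ising-model implementation) over discrete methylation-level distributions:

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                       # 0 * log(0) contributes nothing
    return -np.sum(p * np.log2(p))

def jensen_shannon_distance(p, q):
    """Jensen-Shannon distance: square root of the JS divergence (base-2),
    which lies in [0, 1] and is symmetric in p and q."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    jsd = shannon_entropy(m) - 0.5 * (shannon_entropy(p) + shannon_entropy(q))
    return np.sqrt(max(jsd, 0.0))      # clamp tiny negative rounding

# Toy methylation-level distributions over 4 states for a test vs. reference
# sample; identical distributions give 0, disjoint supports give 1.
d = jensen_shannon_distance([0.1, 0.2, 0.3, 0.4], [0.4, 0.3, 0.2, 0.1])
```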
An Information-Theoretic Approach to PMU Placement in Electric Power Systems
Li, Qiao; Cui, Tao; Weng, Yang; Negi, Rohit; Franchetti, Franz; Ilic, Marija D.
2012-01-01
This paper presents an information-theoretic approach to address the phasor measurement unit (PMU) placement problem in electric power systems. Different from the conventional 'topological observability' based approaches, this paper advocates a much more refined, information-theoretic criterion, namely the mutual information (MI) between the PMU measurements and the power system states. The proposed MI criterion can not only include the full system observability as a special case, but also ca...
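Under a linear-Gaussian measurement model y_S = H_S x + noise, the MI criterion has a closed form, 0.5 logdet(I + H_S Σ_x H_Sᵀ / σ²), which lends itself to greedy placement. The sketch below is a generic greedy MI maximizer under those assumptions, not the paper's exact algorithm; all names are ours:

```python
import numpy as np

def greedy_pmu_placement(H, Sigma_x, noise_var, k):
    """Greedily pick k measurement rows of H that maximize the Gaussian
    mutual information I(x; y_S) = 0.5 * logdet(I + H_S Sigma_x H_S^T / s2)."""
    def mi(rows):
        Hs = H[rows]
        M = np.eye(len(rows)) + Hs @ Sigma_x @ Hs.T / noise_var
        return 0.5 * np.linalg.slogdet(M)[1]

    chosen, remaining = [], list(range(H.shape[0]))
    for _ in range(k):
        best = max(remaining, key=lambda r: mi(chosen + [r]))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy system: three candidate measurements with gains 3, 1, 2 on three
# independent unit-variance states; greedy picks the strongest rows first.
chosen = greedy_pmu_placement(np.diag([3.0, 1.0, 2.0]), np.eye(3), 1.0, 2)
```

Greedy selection is a standard heuristic here because the logdet MI objective is submodular in the chosen set, which gives the usual (1 − 1/e) approximation guarantee.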
Craciun, Stefan; Brockmeier, Austin J; George, Alan D; Lam, Herman; Príncipe, José C
2011-01-01
Methods for decoding movements from neural spike counts using adaptive filters often rely on minimizing the mean-squared error. However, for non-Gaussian distribution of errors, this approach is not optimal for performance. Therefore, rather than using probabilistic modeling, we propose an alternate non-parametric approach. In order to extract more structure from the input signal (neuronal spike counts) we propose using minimum error entropy (MEE), an information-theoretic approach that minimizes the error entropy as part of an iterative cost function. However, the disadvantage of using MEE as the cost function for adaptive filters is the increase in computational complexity. In this paper we present a comparison between the decoding performance of the analytic Wiener filter and a linear filter trained with MEE, which is then mapped to a parallel architecture in reconfigurable hardware tailored to the computational needs of the MEE filter. We observe considerable speedup from the hardware design. The adaptation of filter weights for the multiple-input, multiple-output linear filters, necessary in motor decoding, is a highly parallelizable algorithm. It can be decomposed into many independent computational blocks with a parallel architecture readily mapped to a field-programmable gate array (FPGA) and scales to large numbers of neurons. By pipelining and parallelizing independent computations in the algorithm, the proposed parallel architecture has sublinear increases in execution time with respect to both window size and filter order.
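The MEE cost can be made concrete: with Renyi's quadratic entropy and a Parzen window, minimizing error entropy is equivalent to maximizing the information potential IP(e) = (1/N²) Σᵢⱼ G_σ(eᵢ − eⱼ). Below is a minimal batch gradient-ascent sketch for a linear filter (our illustration; the paper's FPGA pipeline parallelizes pairwise computations of exactly this kind). The O(N²) pairwise loop is the computational cost the abstract refers to:

```python
import numpy as np

def mee_update(w, X, d, sigma=1.0, lr=0.5):
    """One batch MEE step: ascend the information potential
    IP(e) = (1/N^2) * sum_ij exp(-(e_i - e_j)^2 / (2 sigma^2)),
    equivalent to descending Renyi's quadratic error entropy."""
    e = d - X @ w                            # filter errors
    de = e[:, None] - e[None, :]             # pairwise error differences
    G = np.exp(-de**2 / (2.0 * sigma**2))    # Gaussian kernel on differences
    dX = X[:, None, :] - X[None, :, :]       # pairwise input differences
    # gradient of IP w.r.t. w, up to a 1/sigma^2 factor absorbed into lr
    grad = np.einsum('ij,ij,ijk->k', G, de, dX) / len(e) ** 2
    return w + lr * grad

# Toy single-input filter learning y = 2*x: MEE drives the errors toward a
# constant, and with varying inputs the constant-error solution is w = 2.
X = np.linspace(-1.0, 1.0, 20)[:, None]
d = 2.0 * X[:, 0]
w = np.zeros(1)
for _ in range(300):
    w = mee_update(w, X, d)
```

Note that MEE is blind to a constant bias in the error (entropy is shift-invariant), so in practice a bias term is fixed separately, e.g. by centering the errors.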
Information density converges in dialogue: Towards an information-theoretic model.
Xu, Yang; Reitter, David
2018-01-01
The principle of entropy rate constancy (ERC) states that language users distribute information such that words tend to be equally predictable given previous contexts. We examine the applicability of this principle to spoken dialogue, as previous findings primarily rest on written text. The study takes into account the joint-activity nature of dialogue and topic shift mechanisms that differ from monologue. It examines how the information contributions from the two dialogue partners interactively evolve as the discourse develops. The increase of local sentence-level information density (predicted by ERC) is shown to apply to dialogue overall. However, when the different roles of interlocutors in introducing new topics are identified, their contributions in information content display a new converging pattern. We draw explanations for this pattern from multiple perspectives: first, casting dialogue as an information exchange system would mean that the pattern results from the two interlocutors maintaining their own contexts rather than sharing one. Second, we present some empirical evidence that a model of Interactive Alignment may include information density to explain the effect. Third, we argue that building common ground is a process analogous to information convergence. Thus, we put forward an information-theoretic view of dialogue, under which some existing theories of human dialogue may eventually be unified. Copyright © 2017 Elsevier B.V. All rights reserved.
Model-free information-theoretic approach to infer leadership in pairs of zebrafish.
Butail, Sachit; Mwaffo, Violet; Porfiri, Maurizio
2016-04-01
Collective behavior affords several advantages to fish in avoiding predators, foraging, mating, and swimming. Although fish schools have been traditionally considered egalitarian superorganisms, a number of empirical observations suggest the emergence of leadership in gregarious groups. Detecting and classifying leader-follower relationships is central to elucidate the behavioral and physiological causes of leadership and understand its consequences. Here, we demonstrate an information-theoretic approach to infer leadership from positional data of fish swimming. In this framework, we measure social interactions between fish pairs through the mathematical construct of transfer entropy, which quantifies the predictive power of a time series to anticipate another, possibly coupled, time series. We focus on the zebrafish model organism, which is rapidly emerging as a species of choice in preclinical research for its genetic similarity to humans and reduced neurobiological complexity with respect to mammals. To overcome experimental confounds and generate test data sets on which we can thoroughly assess our approach, we adapt and calibrate a data-driven stochastic model of zebrafish motion for the simulation of a coupled dynamical system of zebrafish pairs. In this synthetic data set, the extent and direction of the coupling between the fish are systematically varied across a wide parameter range to demonstrate the accuracy and reliability of transfer entropy in inferring leadership. Our approach is expected to aid in the analysis of collective behavior, providing a data-driven perspective to understand social interactions.
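Transfer entropy from Y to X with history length 1 is TE_{Y→X} = Σ p(x_{t+1}, x_t, y_t) log[p(x_{t+1}|x_t, y_t) / p(x_{t+1}|x_t)]. A minimal plug-in estimator for already-discretized trajectories follows; this is a generic sketch, not the authors' calibrated zebrafish pipeline:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in TE_{Y->X} in bits, history length 1, for discrete sequences."""
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))   # (x_{t+1}, x_t, y_t)
    pair_xy = Counter(zip(x[:-1], y[:-1]))          # (x_t, y_t)
    pair_xx = Counter(zip(x[1:], x[:-1]))           # (x_{t+1}, x_t)
    single = Counter(x[:-1])                        # x_t
    n = len(x) - 1
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_full = c / pair_xy[(x0, y0)]              # p(x_{t+1} | x_t, y_t)
        p_self = pair_xx[(x1, x0)] / single[x0]     # p(x_{t+1} | x_t)
        te += (c / n) * np.log2(p_full / p_self)
    return te

# Leader-follower toy pair: the "follower" x copies the "leader" y one
# step late, so information flows strongly from y to x but not back.
rng = np.random.default_rng(0)
y = list(rng.integers(0, 2, 2000))
x = [0] + y[:-1]
te_leader_to_follower = transfer_entropy(x, y)   # close to 1 bit
te_follower_to_leader = transfer_entropy(y, x)   # close to 0 bits
```

The asymmetry of the two estimates is what identifies the leader; on short noisy trajectories the plug-in estimator is biased upward, which is why significance testing against surrogate data is common.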
10 CFR 431.198 - Enforcement testing for distribution transformers.
2010-01-01
ENERGY, COMMERCIAL AND INDUSTRIAL EQUIPMENT, Distribution Transformers, Compliance and Enforcement. § 431.198 Enforcement testing for distribution transformers. (a) Test notice. Upon receiving information in writing ...
Simulation-Based Testing of Distributed Systems
Rutherford, Matthew J; Carzaniga, Antonio; Wolf, Alexander L
2006-01-01
.... Typically written using an imperative programming language, these simulations capture basic algorithmic functionality at the same time as they focus attention on properties critical to distribution...
TTCN-3 for distributed testing embedded systems
Blom, S.C.C.; Deiß, T.; Ioustinova, N.; Kontio, A.; Pol, van de J.C.; Rennoch, A.; Sidorova, N.; Virbitskaite, I.; Voronkov, A.
2007-01-01
TTCN-3 is a standardized language for specifying and executing test suites that is particularly popular for testing embedded systems. Prior to testing embedded software in a target environment, the software is usually tested in the host environment. Executing in the host environment often
Quantum information theoretical analysis of various constructions for quantum secret sharing
Rietjens, K.P.T.; Schoenmakers, B.; Tuyls, P.T.
2005-01-01
Recently, an information theoretical model for quantum secret sharing (QSS) schemes was introduced. By using this model, we prove that pure state quantum threshold schemes (QTS) can be constructed from quantum MDS codes and vice versa. In particular, we consider stabilizer codes and give a
Esquivel, R.O.; Flores-Gallegos, N.; Iuga, C.; Carrera, E.M.; Angulo, J.C.; Antolin, J.
2010-01-01
The information-theoretic description of the course of two elementary chemical reactions allows a phenomenological account of the chemical course of the hydrogenic abstraction and the S_N2 identity reactions by use of Shannon entropic measures in position and momentum spaces. The analyses reveal their synchronous/asynchronous mechanistic behavior.
On the information-theoretic approach to Gödel's incompleteness theorem
D'Abramo, Germano
2002-01-01
In this paper we briefly review and analyze three published proofs of Chaitin's theorem, the celebrated information-theoretic version of Gödel's incompleteness theorem. We then discuss our main concern about a key step common to all these proofs.
Distributed temperature sensor testing in liquid sodium
Gerardi, Craig, E-mail: cgerardi@anl.gov; Bremer, Nathan; Lisowski, Darius; Lomperski, Stephen
2017-02-15
Highlights: • Distributed temperature sensors measured high-resolution liquid-sodium temperatures. • DTSs worked well up to 400 °C. • A single DTS simultaneously detected sodium level and temperature. - Abstract: Rayleigh-backscatter-based distributed fiber optic sensors were immersed in sodium to obtain high-resolution liquid-sodium temperature measurements. Distributed temperature sensors (DTSs) functioned well up to 400 °C in a liquid sodium environment. The DTSs measured sodium column temperature and the temperature of a complex geometrical pattern that leveraged the flexibility of fiber optics. A single Ø 360 μm OD sensor registered dozens of temperatures along a length of over one meter at 100 Hz. We also demonstrated the capability to use a single DTS to simultaneously detect thermal interfaces (e.g. sodium level) and measure temperature.
Goodness-of-fit tests for a heavy tailed distribution
A.J. Koning (Alex); L. Peng (Liang)
2005-01-01
For testing whether a distribution function is heavy tailed, we study the Kolmogorov test, Berk-Jones test, score test and their integrated versions. A comparison is conducted via Bahadur efficiency and simulations. The score test and the integrated score test show the best
Test report light duty utility arm power distribution system (PDS)
Clark, D.A.
1996-01-01
The Light Duty Utility Arm (LDUA) Power Distribution System has completed vendor and post-delivery acceptance testing. The Power Distribution System has been found to be acceptable and is now ready for integration with the overall LDUA system
Goodness-of-fit tests for the Gompertz distribution
Lenart, Adam; Missov, Trifon
The Gompertz distribution is often fitted to lifespan data; however, testing whether the fit satisfies theoretical criteria has been neglected. Here five goodness-of-fit measures are discussed: the Anderson-Darling statistic, the Kullback-Leibler discrimination information, the correlation coefficient test, a test for the mean of the sample hazard, and a nested test against the generalized extreme value distributions. Along with an application to laboratory rat data, critical values calculated from the empirical distribution of the test statistics are also presented.
Fang, Song; Ishii, Hideaki
2017-01-01
This book investigates the performance limitation issues in networked feedback systems. The fact that networked feedback systems consist of control and communication devices and systems calls for the integration of control theory and information theory. The primary contributions of this book lie in two aspects: the newly-proposed information-theoretic measures and the newly-discovered control performance limitations. We first propose a number of information notions to facilitate the analysis. Using those notions, classes of performance limitations of networked feedback systems, as well as state estimation systems, are then investigated. In general, the book presents a unique, cohesive treatment of performance limitation issues of networked feedback systems via an information-theoretic approach. This book is believed to be the first to treat the aforementioned subjects systematically and in a unified manner, offering a unique perspective differing from existing books.
Expectancy-Violation and Information-Theoretic Models of Melodic Complexity
Tuomas Eerola
2016-07-01
The present study assesses two types of models for melodic complexity: one based on expectancy violations and the other related to an information-theoretic account of redundancy in music. Seven datasets spanning artificial sequences, folk and pop songs were used to refine and assess the models. The refinement eliminated unnecessary components from both types of models. The final analysis pitted three variants of the two model types against each other and could explain 46-74% of the variance in the ratings across the datasets. The most parsimonious models were identified with an information-theoretic criterion. This suggested that the simplified expectancy-violation models were the most efficient for these sets of data. However, the differences between all optimized models were subtle in terms of both performance and simplicity.
Dagiuklas Tasos
2011-01-01
This paper presents a Wireless Information-Theoretic Security (WITS) scheme, which has recently been introduced as a robust physical-layer security solution, especially for infrastructureless networks. An autonomic network of moving users was implemented via 802.11n nodes of an ad hoc network for an outdoor topology with obstacles. Obstructed-Line-of-Sight (OLOS) and Non-Line-of-Sight (NLOS) propagation scenarios were examined. Low-speed user movement was considered, so that Doppler spread could be discarded. A transmitter and a legitimate receiver exchanged information in the presence of a moving eavesdropper. Average Signal-to-Noise Ratio (SNR) values were acquired for both the main and the wiretap channel, and the Probability of Nonzero Secrecy Capacity was calculated from the theoretical formula. Experimental results validate the theoretical findings, stressing the importance of user location and mobility schemes for the robustness of Wireless Information-Theoretic Security, and call for further theoretical analysis.
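For quasi-static Rayleigh fading, the probability of a nonzero secrecy capacity has a simple closed form in the average SNRs of the main and wiretap channels, P(Cs > 0) = g_m / (g_m + g_w) (the Barros-Rodrigues result; we assume this is the theoretical formula the abstract refers to):

```python
def prob_nonzero_secrecy_capacity(snr_main_db, snr_wiretap_db):
    """P(Cs > 0) = g_m / (g_m + g_w) for quasi-static Rayleigh fading,
    where g_m, g_w are the average linear SNRs of the main and wiretap
    channels, given here in dB."""
    g_m = 10.0 ** (snr_main_db / 10.0)
    g_w = 10.0 ** (snr_wiretap_db / 10.0)
    return g_m / (g_m + g_w)

# Equal average SNRs give a coin flip; a 10 dB main-channel advantage
# pushes the probability to 10/11.
p_equal = prob_nonzero_secrecy_capacity(15.0, 15.0)
p_adv = prob_nonzero_secrecy_capacity(20.0, 10.0)
```

Note that secrecy can be nonzero even when the eavesdropper has the better average SNR, since fading occasionally favors the main channel; the formula captures exactly that probability.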
Adaptive information-theoretic bounded rational decision-making with parametric priors
Grau-Moya, Jordi; Braun, Daniel A.
2015-01-01
Deviations from rational decision-making due to limited computational resources have been studied in the field of bounded rationality, originally proposed by Herbert Simon. There have been a number of different approaches to model bounded rationality ranging from optimality principles to heuristics. Here we take an information-theoretic approach to bounded rationality, where information-processing costs are measured by the relative entropy between a posterior decision strategy and a given fix...
Statistical test for the distribution of galaxies on plates
Garcia Lambas, D.
1985-01-01
A statistical test for the distribution of galaxies on plates is presented. We apply the test to synthetic astronomical plates obtained by means of numerical simulation (Garcia Lambas and Sersic 1983) with three different models for the 3-dimensional distribution; comparison with an observational plate suggests the presence of filamentary structure. (author)
21 CFR 211.165 - Testing and release for distribution.
2010-04-01
FOOD AND DRUGS (CONTINUED), DRUGS: GENERAL, CURRENT GOOD MANUFACTURING PRACTICE FOR FINISHED PHARMACEUTICALS, Laboratory Controls. § 211.165 Testing and release for distribution. (a) For each batch of drug product, there shall be ...
Homogeneity and scale testing of generalized gamma distribution
Stehlik, Milan
2008-01-01
The aim of this paper is to derive the exact distributions of the likelihood ratio tests of the homogeneity and scale hypotheses when the observations are generalized gamma distributed. The special cases of exponential, Rayleigh, Weibull or gamma distributed observations are discussed exclusively. A photoemulsion experiment analysis and a scale test with missing time-to-failure observations are presented to illustrate the applications of the methods discussed
Dunlop, David Livingston
The purpose of this study was to use an information theoretic memory model to quantitatively investigate classification sorting and recall behaviors of various groups of students. The model provided theorems for the determination of information theoretic measures from which inferences concerning mental processing were made. The basic procedure…
Distributed Rocket Engine Testing Health Monitoring System, Phase I
National Aeronautics and Space Administration — The on-ground and Distributed Rocket Engine Testing Health Monitoring System (DiRETHMS) provides a system architecture and software tools for performing diagnostics...
Distributed Rocket Engine Testing Health Monitoring System, Phase II
National Aeronautics and Space Administration — Leveraging the Phase I achievements of the Distributed Rocket Engine Testing Health Monitoring System (DiRETHMS) including its software toolsets and system building...
Statistical tests for frequency distribution of mean gravity anomalies
ES Obe
1980-03-01
STATISTICAL TESTS FOR FREQUENCY DISTRIBUTION OF MEAN GRAVITY ANOMALIES. ... Kaula [1,2] discussed the method of applying statistical techniques in the ... mathematical foundation of physical ...
Numerical distribution functions of fractional unit root and cointegration tests
MacKinnon, James G.; Nielsen, Morten Ørregaard
We calculate numerically the asymptotic distribution functions of likelihood ratio tests for fractional unit roots and cointegration rank. Because these distributions depend on a real-valued parameter, b, which must be estimated, simple tabulation is not feasible. Partly due to the presence...
Application of a truncated normal failure distribution in reliability testing
Groves, C., Jr.
1968-01-01
A statistical truncated normal distribution function is applied as the time-to-failure distribution function in equipment reliability estimation. The age-dependent characteristics of the truncated function provide a basis for formulating a system of high-reliability testing that effectively merges statistical, engineering, and cost considerations.
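A sketch of the basic computation (ours, for illustration): with a normal time-to-failure law truncated at zero so that negative lifetimes are excluded, the reliability at time t is the truncated survival function.

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def reliability(t, mu, sigma):
    """R(t) = P(T > t | T > 0) for T ~ Normal(mu, sigma) truncated at 0."""
    survive_t = 1.0 - normal_cdf((t - mu) / sigma)
    survive_0 = 1.0 - normal_cdf((0.0 - mu) / sigma)
    return survive_t / survive_0

# Example: mean life 1000 h, sigma 200 h. Reliability at the mean is very
# close to 0.5 because the zero-truncation correction is negligible when
# the mean sits five sigmas above zero.
r_mid = reliability(1000.0, 1000.0, 200.0)
```

The age dependence mentioned in the abstract shows up in the hazard of this distribution, which rises with age, unlike the constant hazard of the exponential law often assumed in reliability testing.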
Statistical Tests for Frequency Distribution of Mean Gravity Anomalies
The hypothesis that a very large number of 1° x 1° mean gravity anomalies are normally distributed has been rejected at the 5% significance level, based on the χ² and the unit normal deviate tests. However, the 5° equal-area mean anomalies derived from the 1° x 1° data have been found to be normally distributed at the same ...
A brief overview of the distribution test grids with a distributed generation inclusion case study
Stanisavljević Aleksandar M.
2018-01-01
Full Text Available The paper presents an overview of the electric distribution test grids issued by different technical institutions. They are used for testing different scenarios in the operation of a grid for research, benchmarking, comparison and other purposes. Their types, main characteristics, features and application possibilities are shown. Recently, these grids have been modified to include distributed generation. An example of modification and application of the IEEE 13-bus grid for testing the effects of faults, without and with a distributed generator connected to the grid, is presented. [Project of the Serbian Ministry of Education, Science and Technological Development, Grant no. III 042004: Smart Electricity Distribution Grids Based on Distribution Management System and Distributed Generation]
Ross S Williamson
2015-04-01
Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron's probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as "single-spike information" to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex.
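The equivalence claimed above, empirical single-spike information as a normalized Poisson log-likelihood, can be sketched for binned spike counts. The rates, counts, and helper name here are illustrative assumptions, not the authors' code:

```python
import math

def info_per_spike(rates, counts, dt):
    """Normalized Poisson log-likelihood (bits per spike) of an
    inhomogeneous rate model relative to a constant-rate model --
    the quantity the abstract relates to single-spike information."""
    n = sum(counts)
    t_total = len(counts) * dt
    rbar = n / t_total  # mean firing rate of the null model
    ll_model = sum(c * math.log(r * dt) - r * dt for r, c in zip(rates, counts))
    ll_null = sum(c * math.log(rbar * dt) - rbar * dt for c in counts)
    return (ll_model - ll_null) / (n * math.log(2.0))

# A rate model that matches where the spikes actually fell is informative.
rates  = [20.0, 5.0, 20.0, 5.0]   # predicted rates (Hz), assumed values
counts = [2, 0, 2, 0]             # observed spike counts per bin
print(round(info_per_spike(rates, counts, dt=0.1), 4))  # → 0.6393
```

When the model's rates equal the mean rate everywhere, the quantity is zero, mirroring the fact that an uninformative filter carries no single-spike information.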
Darmon, David
2018-03-01
In the absence of mechanistic or phenomenological models of real-world systems, data-driven models become necessary. The discovery of various embedding theorems in the 1980s and 1990s motivated a powerful set of tools for analyzing deterministic dynamical systems via delay-coordinate embeddings of observations of their component states. However, in many branches of science, the condition of operational determinism is not satisfied, and stochastic models must be brought to bear. For such stochastic models, the tool set developed for delay-coordinate embedding is no longer appropriate, and a new toolkit must be developed. We present an information-theoretic criterion, the negative log-predictive likelihood, for selecting the embedding dimension for a predictively optimal data-driven model of a stochastic dynamical system. We develop a nonparametric estimator for the negative log-predictive likelihood and compare its performance to a recently proposed criterion based on active information storage. Finally, we show how the output of the model selection procedure can be used to compare candidate predictors for a stochastic system to an information-theoretic lower bound.
SAIL: Summation-bAsed Incremental Learning for Information-Theoretic Text Clustering.
Cao, Jie; Wu, Zhiang; Wu, Junjie; Xiong, Hui
2013-04-01
Information-theoretic clustering aims to exploit information-theoretic measures as the clustering criteria. A common practice on this topic is the so-called Info-Kmeans, which performs K-means clustering with KL-divergence as the proximity function. While expert efforts on Info-Kmeans have shown promising results, a remaining challenge is to deal with high-dimensional sparse data such as text corpora. Indeed, it is possible that the centroids contain many zero-value features for high-dimensional text vectors, which leads to infinite KL-divergence values and creates a dilemma in assigning objects to centroids during the iteration process of Info-Kmeans. To meet this challenge, in this paper, we propose a Summation-bAsed Incremental Learning (SAIL) algorithm for Info-Kmeans clustering. Specifically, by using an equivalent objective function, SAIL replaces the computation of KL-divergence by the incremental computation of Shannon entropy. This can avoid the zero-feature dilemma caused by the use of KL-divergence. To improve the clustering quality, we further introduce the variable neighborhood search scheme and propose the V-SAIL algorithm, which is then accelerated by a multithreaded scheme in PV-SAIL. Our experimental results on various real-world text collections have shown that, with SAIL as a booster, the clustering performance of Info-Kmeans can be significantly improved. Also, V-SAIL and PV-SAIL indeed help improve the clustering quality at a lower cost of computation.
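The zero-feature dilemma described above can be illustrated in a few lines. This is a simplified sketch of the underlying issue, not the SAIL algorithm itself; the example vectors are assumed:

```python
import math

def kl_div(p, q):
    """KL-divergence; infinite whenever p_i > 0 but q_i == 0."""
    total = 0.0
    for pi, qi in zip(p, q):
        if pi > 0.0:
            if qi == 0.0:
                return math.inf
            total += pi * math.log(pi / qi)
    return total

def entropy(p):
    """Shannon entropy; the 0*log(0) = 0 convention keeps it finite."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0.0)

doc      = [0.5, 0.5, 0.0]   # a sparse text vector
centroid = [0.5, 0.0, 0.5]   # a centroid with a zero-value feature

print(kl_div(doc, centroid))          # inf: the assignment dilemma
merged = [(d + c) / 2.0 for d, c in zip(doc, centroid)]
print(round(entropy(merged), 4))      # → 1.0397, always finite
```

Entropy of any merged mass distribution stays finite, which is why recasting the objective in terms of Shannon entropy, as SAIL does, sidesteps the infinite KL values.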
Model-Driven Test Generation of Distributed Systems
Easwaran, Arvind; Hall, Brendan; Schweiker, Kevin
2012-01-01
This report describes a novel test generation technique for distributed systems. Utilizing formal models and formal verification tools, specifically the Symbolic Analysis Laboratory (SAL) tool-suite from SRI, we present techniques to generate concurrent test vectors for distributed systems. These are initially explored within an informal test validation context and later extended to achieve full MC/DC coverage of the TTEthernet protocol operating within a system-centric context.
Materials Science Research Rack-1 Fire Suppressant Distribution Test Report
Wieland, P. O.
2002-01-01
Fire suppressant distribution testing was performed on the Materials Science Research Rack-1 (MSRR-1), a furnace facility payload that will be installed in the U.S. Lab module of the International Space Station. Unlike racks that were tested previously, the MSRR-1 uses the Active Rack Isolation System (ARIS) to reduce vibration on experiments, so the effects of ARIS on fire suppressant distribution were unknown. Two tests were performed to map the distribution of CO2 fire suppressant throughout a mockup of the MSRR-1 designed to have the same component volumes and flowpath restrictions as the flight rack. For the first test, the average maximum CO2 concentration for the rack was 60 percent, achieved within 45 s of discharge initiation, meeting the requirement to reach 50 percent throughout the rack within 1 min. For the second test, one of the experiment mockups was removed to provide a worst-case configuration, and the average maximum CO2 concentration for the rack was 58 percent. Comparing the results of this testing with results from previous testing leads to several general conclusions that can be used to evaluate future racks. The MSRR-1 will meet the requirements for fire suppressant distribution. Primary factors that affect the ability to meet the CO2 distribution requirements are the free air volume in the rack and the total area and distribution of openings in the rack shell. The length of the suppressant flowpath and degree of tortuousness has little correlation with CO2 concentration. The total area of holes in the rack shell could be significantly increased. The free air volume could be significantly increased. To ensure the highest maximum CO2 concentration, the PFE nozzle should be inserted to the stop on the nozzle.
Accelerated life testing design using geometric process for Pareto distribution
Mustafa Kamal; Shazia Zarrin; Arif Ul Islam
2013-01-01
In this paper the geometric process is used for the analysis of accelerated life testing under constant stress for the Pareto distribution. Assuming that the lifetimes under increasing stress levels form a geometric process, estimates of the parameters are obtained by using the maximum likelihood method for complete data. In addition, asymptotic interval estimates of the parameters of the distribution using the Fisher information matrix are also obtained. The statistical properties of the parameters ...
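The geometric-process assumption can be sketched as follows: if the lifetime at the first stress level is Pareto distributed and the process ratio is a > 1, the lifetime at level k is distributed as X_1 / a^(k-1). The parameter values and function names below are illustrative:

```python
import random

def pareto_sample(alpha, scale, u):
    """Inverse-CDF draw from a Pareto(alpha, scale) distribution;
    u is a uniform(0, 1) variate."""
    return scale * (1.0 - u) ** (-1.0 / alpha)

def geometric_process_lifetime(alpha, scale, ratio, level, rng):
    """Lifetime at stress level k (1-based): X_k = X_1 / ratio**(k - 1),
    the geometric-process assumption from the abstract."""
    x1 = pareto_sample(alpha, scale, rng.random())
    return x1 / ratio ** (level - 1)

# Simulated lifetimes at three increasing stress levels.
rng = random.Random(42)
lifetimes = [geometric_process_lifetime(2.0, 100.0, 1.5, k, rng) for k in (1, 2, 3)]
```

Since the inverse CDF is deterministic, pareto_sample(2.0, 100.0, 0.75) equals exactly 100 * 0.25^(-1/2) = 200, which gives a quick sanity check of the draw.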
Log-concave Probability Distributions: Theory and Statistical Testing
An, Mark Yuing
1996-01-01
This paper studies the broad class of log-concave probability distributions that arise in the economics of uncertainty and information. For univariate, continuous, and log-concave random variables we prove useful properties without imposing differentiability of the density functions. Discrete and multivariate distributions are also discussed. We propose simple non-parametric testing procedures for log-concavity. The test statistics are constructed to test one of the two implications of log-concavity: increasing hazard rates and the new-is-better-than-used (NBU) property. The tests for increasing hazard rates are based on normalized spacings of the sample order statistics. The tests for the NBU property fall into the category of Hoeffding's U-statistics...
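The normalized spacings underlying the increasing-hazard-rate test can be sketched directly. This computes only the spacings, not the paper's full test statistic:

```python
def normalized_spacings(sample):
    """D_i = (n - i + 1) * (X_(i) - X_(i-1)), with X_(0) = 0.
    For an exponential (constant-hazard) sample these are i.i.d.
    exponential; an increasing hazard rate shrinks the later spacings."""
    xs = sorted(sample)
    n = len(xs)
    prev = 0.0
    out = []
    for i, x in enumerate(xs, start=1):
        out.append((n - i + 1) * (x - prev))
        prev = x
    return out

d = normalized_spacings([0.8, 0.3, 1.9, 1.2])
print([round(v, 2) for v in d])   # → [1.2, 1.5, 0.8, 0.7]
print(round(sum(d), 2))           # identity: equals the sample sum, 4.2
```

The identity that the spacings sum to the sample total is a useful invariant for checking any implementation of this transform.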
Three-dimensionality of space and the quantum bit: an information-theoretic approach
Müller, Markus P; Masanes, Lluís
2013-01-01
It is sometimes pointed out as a curiosity that the state space of quantum two-level systems, i.e. the qubit, and actual physical space are both three-dimensional and Euclidean. In this paper, we suggest an information-theoretic analysis of this relationship, by proving a particular mathematical result: suppose that physics takes place in d spatial dimensions, and that some events happen probabilistically (not assuming quantum theory in any way). Furthermore, suppose there are systems that carry ‘minimal amounts of direction information’, interacting via some continuous reversible time evolution. We prove that this uniquely determines spatial dimension d = 3 and quantum theory on two qubits (including entanglement and unitary time evolution), and that it allows observers to infer local spatial geometry from probability measurements. (paper)
Open source tools for the information theoretic analysis of neural data
Robin A. A Ince
2010-05-01
The recent and rapid development of open-source software tools for the analysis of neurophysiological datasets consisting of multiple simultaneous recordings of spikes, field potentials and other neural signals holds the promise for a significant advance in the standardization, transparency, quality, reproducibility and variety of techniques used to analyze neurophysiological data and integrate the information obtained at different spatial and temporal scales. In this Review we focus on recent advances in open source toolboxes for the information theoretic analysis of neural responses. We also present examples of their use to investigate the role of spike timing precision, correlations across neurons and field potential fluctuations in the encoding of sensory information. These information toolboxes, available both in Matlab and Python programming environments, hold the potential to enlarge the domain of application of information theory to neuroscience and to lead to new discoveries about how neurons encode and transmit information.
Some Observations on the Concepts of Information-Theoretic Entropy and Randomness
Jonathan D.H. Smith
2001-02-01
Certain aspects of the history, derivation, and physical application of the information-theoretic entropy concept are discussed. Pre-dating Shannon, the concept is traced back to Pauli. A derivation from first principles is given, without use of approximations. The concept depends on the underlying degree of randomness. In physical applications, this translates to dependence on the experimental apparatus available. An example illustrates how this dependence affects Prigogine's proposal for the use of the Second Law of Thermodynamics as a selection principle for the breaking of time symmetry. The dependence also serves to yield a resolution of the so-called "Gibbs Paradox." Extension of the concept from the discrete to the continuous case is discussed. The usual extension is shown to be dimensionally incorrect. Correction introduces a reference density, leading to the concept of Kullback entropy. Practical relativistic considerations suggest a possible proper reference density.
Open source tools for the information theoretic analysis of neural data.
Ince, Robin A A; Mazzoni, Alberto; Petersen, Rasmus S; Panzeri, Stefano
2010-01-01
The recent and rapid development of open source software tools for the analysis of neurophysiological datasets consisting of simultaneous multiple recordings of spikes, field potentials and other neural signals holds the promise for a significant advance in the standardization, transparency, quality, reproducibility and variety of techniques used to analyze neurophysiological data and for the integration of information obtained at different spatial and temporal scales. In this review we focus on recent advances in open source toolboxes for the information theoretic analysis of neural responses. We also present examples of their use to investigate the role of spike timing precision, correlations across neurons, and field potential fluctuations in the encoding of sensory information. These information toolboxes, available both in MATLAB and Python programming environments, hold the potential to enlarge the domain of application of information theory to neuroscience and to lead to new discoveries about how neurons encode and transmit information.
An Information-Theoretic Approach for Indirect Train Traffic Monitoring Using Building Vibration
Susu Xu
2017-05-01
This paper introduces an indirect train traffic monitoring method to detect and infer real-time train events based on the vibration response of a nearby building. Monitoring and characterizing traffic events are important for cities seeking to improve the efficiency of transportation systems (e.g., trains passing, heavy trucks, and traffic). Most prior work falls into two categories: (1) methods that require intensive labor to manually record events, or (2) systems that require deployment of dedicated sensors. These approaches are difficult and costly to execute and maintain. In addition, most prior work uses dedicated sensors designed for a single purpose, resulting in deployment of multiple sensor systems, which further increases costs. Meanwhile, with the increasing demands of structural health monitoring, many vibration sensors are being deployed in commercial buildings. Traffic events create ground vibration that propagates to nearby building structures, inducing noisy vibration responses. We present an information-theoretic method for train event monitoring using the vibration sensors commonly deployed for building health monitoring. The key idea is to represent the wave propagation in a building induced by train traffic as information conveyed in noisy measurement signals. Our technique first uses wavelet analysis to detect train events. Then, by analyzing information exchange patterns of building vibration signals, we infer the category of the events (i.e., southbound or northbound train). Our algorithm is evaluated with an 11-story building where trains pass by frequently. The results show that the method can robustly achieve a train event detection accuracy of up to a 93% true positive rate and an 80% true negative rate. For direction categorization, compared with the traditional signal processing method, our information-theoretic approach reduces categorization error from 32.1% to 12.1%, a 2.5× improvement.
A multivariate rank test for comparing mass size distributions
Lombard, F.
2012-04-01
Particle size analyses of a raw material are commonplace in the mineral processing industry. Knowledge of particle size distributions is crucial in planning milling operations to enable an optimum degree of liberation of valuable mineral phases, to minimize plant losses due to an excess of oversize or undersize material, or to attain a size distribution that fits a contractual specification. The problem addressed in the present paper is how to test the equality of two or more underlying size distributions. A distinguishing feature of these size distributions is that they are not based on counts of individual particles. Rather, they are mass size distributions giving the fractions of the total mass of a sampled material lying in each of a number of size intervals. As such, the data are compositional in nature, using the terminology of Aitchison [1]; that is, they are multivariate vectors whose components add to 100%. In the literature, various versions of Hotelling's T² have been used to compare matched pairs of such compositional data. In this paper, we propose a robust test procedure based on ranks as a competitor to Hotelling's T². In contrast to the latter statistic, the power of the rank test is not unduly affected by the presence of outliers or of zeros among the data. © 2012 Copyright Taylor and Francis Group, LLC.
HammerCloud: A Stress Testing System for Distributed Analysis
Ster, Daniel C van der; García, Mario Úbeda; Paladin, Massimo; Elmsheuser, Johannes
2011-01-01
Distributed analysis of LHC data is an I/O-intensive activity which places large demands on the internal network, storage, and local disks at remote computing facilities. Commissioning and maintaining a site to provide an efficient distributed analysis service is therefore a challenge which can be aided by tools that help evaluate a variety of infrastructure designs and configurations. HammerCloud is one such tool; it is a stress testing service used by central operations teams, regional coordinators, and local site admins to (a) submit an arbitrary number of analysis jobs to a number of sites, (b) maintain at steady state a predefined number of jobs running at the sites under test, (c) produce web-based reports summarizing the efficiency and performance of the sites under test, and (d) present a web interface for historical test results to both evaluate progress and compare sites. HammerCloud was built around the distributed analysis framework Ganga, exploiting its API for grid job management. HammerCloud has been employed by the ATLAS experiment for continuous testing of many sites worldwide, and also during large-scale computing challenges such as STEP'09 and UAT'09, where the scale of the tests exceeded 10,000 concurrently running and 1,000,000 total jobs over multi-day periods. In addition, HammerCloud is being adopted by the CMS experiment; the plugin structure of HammerCloud allows the execution of CMS jobs using their official tool (CRAB).
242A Distributed Control System Year 2000 Acceptance Test Report
TEATS, M.C.
1999-08-31
This report documents acceptance test results for the 242-A Evaporator distributed control system upgrade to D/3 version 9.0-2 for year 2000 compliance, obtained by acceptance testing as directed by procedure HNF-2695. The verification procedure documents the initial testing and evaluation of potential 242-A Distributed Control System (DCS) operating difficulties across the year 2000 boundary and the calendar adjustments needed for the leap year. Baseline system performance data are recorded using the current, as-is operating system software. Data are also collected for operating system software that has been modified to correct year 2000 problems. The verification procedure is intended to be generic, such that it may be performed on any D/3(TM) (GSE Process Solutions, Inc.) distributed control system that runs on the VMS(TM) (Digital Equipment Corporation) operating system. The test may be run on simulation or production systems depending upon facility status. On production systems, DCS outages will occur nine times throughout performance of the test; these outages are expected to last about 10 minutes each.
Distributed Sensor Network Software Development Testing through Simulation
Brennan, Sean M. [Univ. of New Mexico, Albuquerque, NM (United States)
2003-12-01
The distributed sensor network (DSN) presents a novel and highly complex computing platform with difficulties and opportunities that are just beginning to be explored. The potential of sensor networks extends from monitoring for threat reduction, to conducting instant and remote inventories, to ecological surveys. Developing and testing robust and scalable applications is currently practiced almost exclusively in hardware. The Distributed Sensors Simulator (DSS) is an infrastructure that allows the user to debug and test software for DSNs independent of hardware constraints. The flexibility of DSS allows developers and researchers to investigate topological, phenomenological, networking, robustness and scaling issues, to explore arbitrary algorithms for distributed sensors, and to defeat those algorithms through simulated failure. The user specifies the topology, the environment, the application, and any number of arbitrary failures; DSS provides the virtual environmental embedding.
Nevada test site radionuclide inventory and distribution: project operations plan
Kordas, J.F.; Anspaugh, L.R.
1982-01-01
This document is the operational plan for conducting the Radionuclide Inventory and Distribution Program (RIDP) at the Nevada Test Site (NTS). The basic objective of this program is to inventory the significant radionuclides of NTS origin in NTS surface soil. The expected duration of the program is five years. This plan includes the program objectives, methods, organization, and schedules
Sodium flow distribution test of the air cooler tubes
Uchida, Hiroyuki; Ohta, Hidehisa; Shimazu, Hisashi
1980-01-01
In the heat transfer tubes of the air cooler installed in the auxiliary core cooling system of the prototype fast breeder reactor "Monju", sodium freezing may be caused by undercooling of the sodium induced by an extremely unbalanced sodium flow in the tubes. Thus, a sodium flow distribution test of the air cooler tubes was performed to examine the flow distribution among the tubes and to estimate the possibility of sodium freezing in the tubes. The test was performed using a one-fourth-scale air cooler model installed in the water flow test facility. The test results show that the flow distribution from the inlet header to each tube is almost equal at any operating condition; that is, the velocity deviation from the normalized mean velocity is less than 6%, and sodium freezing does not occur up to a 250% air velocity deviation at the stand-by condition. It is clear that the proposed air cooler design for "Monju" will have a good sodium flow distribution at any operating condition. (author)
da Fonseca, María; Samengo, Inés
2016-12-01
The accuracy with which humans detect chromatic differences varies throughout color space. For example, we are far more precise when discriminating two similar orange stimuli than two similar green stimuli. In order for two colors to be perceived as different, the neurons representing chromatic information must respond differently, and the difference must be larger than the trial-to-trial variability of the response to each separate color. Photoreceptors constitute the first stage in the processing of color information; many more stages are required before humans can consciously report whether two stimuli are perceived as chromatically distinguishable. Therefore, although photoreceptor absorption curves are expected to influence the accuracy of conscious discriminability, there is no reason to believe that they should suffice to explain it. Here we develop information-theoretical tools based on the Fisher metric that demonstrate that photoreceptor absorption properties explain about 87% of the variance of human color discrimination ability, as tested by previous behavioral experiments. In the context of this theory, the bottleneck in chromatic information processing is determined by photoreceptor absorption characteristics. Subsequent encoding stages modify only marginally the chromatic discriminability at the photoreceptor level.
Moisture distribution in sludges based on different testing methods
Wenyi Deng; Xiaodong Li; Jianhua Yan; Fei Wang; Yong Chi; Kefa Cen
2011-01-01
Moisture distributions in municipal sewage sludge, printing and dyeing sludge, and paper mill sludge were experimentally studied using four different methods: a drying test, a thermogravimetric-differential thermal analysis (TG-DTA) test, a thermogravimetric-differential scanning calorimetry (TG-DSC) test and a water activity test. The results indicated that the moisture in the mechanically dewatered sludges comprised interstitial water, surface water and bound water. The interstitial water accounted for more than 50% wet basis (wb) of the total moisture content. The bond strength of sludge moisture increased with decreasing moisture content, especially when the moisture content was lower than 50% wb. Furthermore, a comparison among the four testing methods is presented. The drying test had the advantage of being able to quantify free water, interstitial water, surface water and bound water, while the TG-DSC, TG-DTA and water activity tests were capable of determining the bond strength of moisture in sludge. It was found that the results from the TG-DSC and TG-DTA tests are more persuasive than those of the water activity test.
Scopatz, Anthony; Schneider, Erich; Li, Jun; Yim, Man-Sung
2011-01-01
A light water reactor-fast reactor symbiotic fuel cycle scenario was modeled and parameterized based on thirty independent inputs. By simultaneously and stochastically choosing different values for each of these inputs and performing the associated fuel cycle mass-balance calculation, the fuel cycle itself underwent Monte Carlo simulation. A novel information-theoretic metric is postulated as a measure of system-wide covariance. This metric is the coefficient of variation of the set of uncertainty coefficients generated from 2D slices of a 3D contingency table. It is then applied to the fuel cycle, taking fast reactor used-fuel plutonium and americium separations as independent variables and the capacity of a fully loaded tuff repository as the response. This set of parameters is known from prior studies to have a strong covariance. Among all 435 possible input-parameter pairs, the fast reactor plutonium and americium separations pair was found to be ranked the second most covariant. This verifies that the coefficient of variation metric captures the desired covariance effects in the nuclear fuel cycle. (author)
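The building block of the metric described above, the uncertainty coefficient of a 2D contingency table, can be sketched as follows. The paper's metric is the coefficient of variation over many such coefficients taken from slices of a 3D table; this helper is an illustrative reconstruction, not the authors' code:

```python
import math

def uncertainty_coefficient(table):
    """U(X|Y) = (H(X) - H(X|Y)) / H(X) for a 2D contingency table
    (rows = X, columns = Y): the fraction of uncertainty in X
    removed by knowing Y."""
    total = sum(sum(row) for row in table)
    px = [sum(row) / total for row in table]
    py = [sum(col) / total for col in zip(*table)]
    hx = -sum(p * math.log(p) for p in px if p > 0)
    hxy = 0.0  # accumulates the conditional entropy H(X|Y)
    for row in table:
        for j, n in enumerate(row):
            if n > 0:
                pij = n / total
                hxy += pij * math.log(py[j] / pij)
    return (hx - hxy) / hx

# Perfect association: knowing the column pins down the row.
print(round(uncertainty_coefficient([[5, 0], [0, 5]]), 4))   # → 1.0
# Independence: the column carries no information about the row.
print(round(uncertainty_coefficient([[2, 2], [2, 2]]), 4))   # → 0.0
```

Because the coefficient is normalized to [0, 1], coefficients from different table slices are comparable, which is what makes a coefficient of variation across them meaningful.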
Information-Theoretic Data Discarding for Dynamic Trees on Data Streams
Christoforos Anagnostopoulos
2013-12-01
Ubiquitous automated data collection at an unprecedented scale is making available streaming, real-time information flows in a wide variety of settings, transforming both science and industry. Learning algorithms deployed in such contexts often rely on single-pass inference, where the data history is never revisited. Learning may also need to be temporally adaptive to remain up-to-date against unforeseen changes in the data generating mechanism. Online Bayesian inference remains challenged by such transient, evolving data streams. Nonparametric modeling techniques can prove particularly ill-suited, as the complexity of the model is allowed to increase with the sample size. In this work, we take steps to overcome these challenges by porting information theoretic heuristics, such as exponential forgetting and active learning, into a fully Bayesian framework. We showcase our methods by augmenting a modern non-parametric modeling framework, dynamic trees, and illustrate its performance on a number of practical examples. The end product is a powerful streaming regression and classification tool, whose performance compares favorably to the state-of-the-art.
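The exponential-forgetting heuristic mentioned above can be sketched for a streaming mean. This is a minimal illustration of the idea; the forgetting factor and names are assumptions, not details from the paper:

```python
def forgetting_mean(stream, lam=0.9):
    """Exponentially forgetting mean: each new observation gets weight 1
    and all history is down-weighted by lam, so old data fades in a
    single pass over the stream."""
    weight_sum = 0.0
    weighted = 0.0
    out = []
    for x in stream:
        weight_sum = lam * weight_sum + 1.0
        weighted = lam * weighted + x
        out.append(weighted / weight_sum)
    return out

# A level shift mid-stream: the estimate tracks the new level quickly.
est = forgetting_mean([0.0] * 5 + [10.0] * 5, lam=0.5)
print(round(est[-1], 3))  # → 9.697, close to the new level of 10
```

A plain running mean over the same stream would end at 5.0; smaller lam forgets faster but gives noisier estimates, the temporal-adaptivity trade-off the abstract refers to.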
Matthias Dehmer
This paper aims to investigate information-theoretic network complexity measures which have already been intensely used in mathematical- and medicinal chemistry including drug design. Numerous such measures have been developed so far but many of them lack a meaningful interpretation, e.g., we want to examine which kind of structural information they detect. Therefore, our main contribution is to shed light on the relatedness between some selected information measures for graphs by performing a large scale analysis using chemical networks. Starting from several sets containing real and synthetic chemical structures represented by graphs, we study the relatedness between a classical (partition-based) complexity measure called the topological information content of a graph and some others inferred by a different paradigm leading to partition-independent measures. Moreover, we evaluate the uniqueness of network complexity measures numerically. Generally, a high uniqueness is an important and desirable property when designing novel topological descriptors having the potential to be applied to large chemical databases.
Cao, Xiaofang; Rong, Chunying; Zhong, Aiguo; Lu, Tian; Liu, Shubin
2018-01-15
Molecular acidity is one of the important physiochemical properties of a molecular system, yet its accurate calculation and prediction are still an unresolved problem in the literature. In this work, we propose to make use of the quantities from the information-theoretic (IT) approach in density functional reactivity theory and provide an accurate description of molecular acidity from a completely new perspective. To illustrate our point, five different categories of acidic series, singly and doubly substituted benzoic acids, singly substituted benzenesulfinic acids, benzeneseleninic acids, phenols, and alkyl carboxylic acids, have been thoroughly examined. We show that using IT quantities such as Shannon entropy, Fisher information, Ghosh-Berkowitz-Parr entropy, information gain, Onicescu information energy, and relative Rényi entropy, one is able to simultaneously predict experimental pKa values of these different categories of compounds. Because of the universality of the quantities employed in this work, which are all density dependent, our approach should be general and be applicable to other systems as well. © 2017 Wiley Periodicals, Inc.
Information-Theoretic Limits on Broadband Multi-Antenna Systems in the Presence of Mutual Coupling
Taluja, Pawandeep Singh
2011-12-01
Multiple-input, multiple-output (MIMO) systems have received considerable attention over the last decade due to their ability to provide high throughputs and mitigate multipath fading effects. While most of these benefits are obtained for ideal arrays with large separation between the antennas, practical devices are often constrained in physical dimensions. With smaller inter-element spacings, signal correlation and mutual coupling between the antennas start to degrade the system performance, thereby limiting the deployment of a large number of antennas. Various studies have proposed transceiver designs based on optimal matching networks to compensate for this loss. However, such networks are considered impractical due to their multiport structure and sensitivity to the RF bandwidth of the system. In this dissertation, we investigate two aspects of compact transceiver design. First, we consider simpler architectures that exploit coupling between the antennas, and second, we establish information-theoretic limits of broadband communication systems with closely-spaced antennas. We begin with a receiver model of a diversity antenna selection system and propose novel strategies that make use of inactive elements by virtue of mutual coupling. We then examine the limits on the matching efficiency of a single antenna system using broadband matching theory. Next, we present an extension to this theory for coupled MIMO systems to elucidate the impact of coupling on the RF bandwidth of the system, and derive optimal transceiver designs. Lastly, we summarize the main findings of this dissertation and suggest open problems for future work.
An information theoretic approach to use high-fidelity codes to calibrate low-fidelity codes
Lewis, Allison, E-mail: lewis.allison10@gmail.com [Department of Mathematics, North Carolina State University, Raleigh, NC 27695 (United States); Smith, Ralph [Department of Mathematics, North Carolina State University, Raleigh, NC 27695 (United States); Williams, Brian [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Figueroa, Victor [Sandia National Laboratories, Albuquerque, NM 87185 (United States)
2016-11-01
For many simulation models, it can be prohibitively expensive or physically infeasible to obtain a complete set of experimental data to calibrate model parameters. In such cases, one can alternatively employ validated higher-fidelity codes to generate simulated data, which can be used to calibrate the lower-fidelity code. In this paper, we employ an information-theoretic framework to determine the reduction in parameter uncertainty that is obtained by evaluating the high-fidelity code at a specific set of design conditions. These conditions are chosen sequentially, based on the amount of information that they contribute to the low-fidelity model parameters. The goal is to employ Bayesian experimental design techniques to minimize the number of high-fidelity code evaluations required to accurately calibrate the low-fidelity model. We illustrate the performance of this framework using heat and diffusion examples, a 1-D kinetic neutron diffusion equation, and a particle transport model, and include initial results from the integration of the high-fidelity thermal-hydraulics code Hydra-TH with a low-fidelity exponential model for the friction correlation factor.
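The sequential selection step can be sketched with a toy Bayesian design loop. Everything below is a hypothetical stand-in for the framework described (a scalar linear "low-fidelity" model y = theta * x with a discrete prior on one parameter and Gaussian noise): the expected information gain of a candidate design point is estimated as the Monte Carlo average of the KL divergence from prior to posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical low-fidelity model y = theta * x with unknown theta;
# "high-fidelity data" at design point x are simulated as theta*x + noise.
thetas = np.linspace(0.0, 2.0, 201)           # discrete prior support
prior = np.full(thetas.size, 1.0 / thetas.size)
sigma = 0.1                                    # assumed noise std

def expected_info_gain(x, n_mc=300):
    """Monte Carlo estimate of the expected KL divergence between
    posterior and prior when observing the model at design point x."""
    gain = 0.0
    for _ in range(n_mc):
        theta = rng.choice(thetas, p=prior)    # draw a "true" parameter
        y = theta * x + rng.normal(0.0, sigma) # simulated observation
        like = np.exp(-0.5 * ((y - thetas * x) / sigma) ** 2)
        post = like * prior
        post /= post.sum()
        gain += np.sum(post * np.log(post / prior + 1e-300))
    return gain / n_mc

# A larger |x| separates the candidate thetas more, so it should carry
# more information about theta than a design point near zero.
g_small, g_large = expected_info_gain(0.1), expected_info_gain(2.0)
```

In the sequential scheme described above, one would evaluate this gain over all candidate design conditions and query the high-fidelity code at the maximizer before updating the prior.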
Information-theoretic semi-supervised metric learning via entropy regularization.
Niu, Gang; Dai, Bo; Yamada, Makoto; Sugiyama, Masashi
2014-08-01
We propose a general information-theoretic approach to semi-supervised metric learning called SERAPH (SEmi-supervised metRic leArning Paradigm with Hypersparsity) that does not rely on the manifold assumption. Given the probability parameterized by a Mahalanobis distance, we maximize its entropy on labeled data and minimize its entropy on unlabeled data following entropy regularization. For metric learning, entropy regularization improves manifold regularization by considering the dissimilarity information of unlabeled data in the unsupervised part, and hence it allows the supervised and unsupervised parts to be integrated in a natural and meaningful way. Moreover, we regularize SERAPH by trace-norm regularization to encourage low-dimensional projections associated with the distance metric. The nonconvex optimization problem of SERAPH can be solved efficiently and stably by either a gradient projection algorithm or an EM-like iterative algorithm whose M-step is convex. Experiments demonstrate that SERAPH compares favorably with many well-known metric learning methods, and the learned Mahalanobis distance possesses high discriminability even in noisy environments.
Dimensional Information-Theoretic Measurement of Facial Emotion Expressions in Schizophrenia
Jihun Hamm
2014-01-01
Altered facial expressions of emotions are characteristic impairments in schizophrenia. Ratings of affect have traditionally been limited to clinical rating scales and facial muscle movement analysis, which require extensive training and have limitations based on methodology and ecological validity. To improve reliable assessment of dynamic facial expression changes, we have developed automated measurements of facial emotion expressions based on information-theoretic measures of expressivity: the ambiguity and distinctiveness of facial expressions. These measures were examined in matched groups of persons with schizophrenia (n=28) and healthy controls (n=26) who underwent video acquisition to assess expressivity of basic emotions (happiness, sadness, anger, fear, and disgust) in evoked conditions. Persons with schizophrenia scored higher on ambiguity, the measure of conditional entropy within the expression of a single emotion, and they scored lower on distinctiveness, the measure of mutual information across expressions of different emotions. The automated measures compared favorably with observer-based ratings. This method can be applied for delineating dynamic emotional expressivity in healthy and clinical populations.
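The two measures can be made concrete with a small sketch (hypothetical frame counts, not the study's data): treating video frames as draws of (evoked emotion, displayed expression) pairs, ambiguity is the conditional entropy of the expression given the evoked emotion, and distinctiveness is their mutual information.

```python
import numpy as np

def expressivity_measures(counts):
    """Given counts[i, j] = number of frames where emotion i was evoked
    and expression j was displayed, return (H(expr|evoked), I(evoked; expr)).
    High conditional entropy ~ 'ambiguity'; high mutual information ~
    'distinctiveness' (illustrative definitions, in bits)."""
    p = counts / counts.sum()
    p_evoked = p.sum(axis=1)
    p_expr = p.sum(axis=0)
    h = lambda q: -np.sum(q[q > 0] * np.log2(q[q > 0]))
    h_joint, h_evoked, h_expr = h(p.ravel()), h(p_evoked), h(p_expr)
    ambiguity = h_joint - h_evoked            # H(expr | evoked)
    distinctiveness = h_evoked + h_expr - h_joint   # I(evoked; expr)
    return ambiguity, distinctiveness

# Sharp expresser: each evoked emotion maps to exactly one expression.
sharp = np.eye(3) * 100
# Blurred expresser: displayed expressions spread across categories.
blurred = np.array([[60, 20, 20], [20, 60, 20], [20, 20, 60]], float)

h_sharp, mi_sharp = expressivity_measures(sharp)
h_blur, mi_blur = expressivity_measures(blurred)
```

The blurred expresser has higher ambiguity and lower distinctiveness than the sharp one, matching the direction of the group difference reported for the schizophrenia sample.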
Modified Distribution-Free Goodness-of-Fit Test Statistic.
Chun, So Yeon; Browne, Michael W; Shapiro, Alexander
2018-03-01
Covariance structure analysis and its structural equation modeling extensions have become one of the most widely used methodologies in social sciences such as psychology, education, and economics. An important issue in such analysis is to assess the goodness of fit of a model under analysis. One of the most popular test statistics used in covariance structure analysis is the asymptotically distribution-free (ADF) test statistic introduced by Browne (Br J Math Stat Psychol 37:62-83, 1984). The ADF statistic can be used to test models without any specific distribution assumption (e.g., multivariate normal distribution) of the observed data. Despite its advantage, it has been shown in various empirical studies that unless sample sizes are extremely large, this ADF statistic could perform very poorly in practice. In this paper, we provide a theoretical explanation for this phenomenon and further propose a modified test statistic that improves the performance in samples of realistic size. The proposed statistic deals with the possible ill-conditioning of the involved large-scale covariance matrices.
HammerCloud: A Stress Testing System for Distributed Analysis
van der Ster, Daniel C; Ubeda Garcia, Mario; Paladin, Massimo
2011-01-01
Distributed analysis of LHC data is an I/O-intensive activity which places large demands on the internal network, storage, and local disks at remote computing facilities. Commissioning and maintaining a site to provide an efficient distributed analysis service is therefore a challenge which can be aided by tools to help evaluate a variety of infrastructure designs and configurations. HammerCloud (HC) is one such tool; it is a stress-testing service which is used by central operations teams, regional coordinators, and local site admins to (a) submit an arbitrary number of analysis jobs to a number of sites, (b) maintain at a steady state a predefined number of jobs running at the sites under test, (c) produce web-based reports summarizing the efficiency and performance of the sites under test, and (d) present a web interface for historical test results to both evaluate progress and compare sites. HC was built around the distributed analysis framework Ganga, exploiting its API for grid job management. HC has been ...
LEDA RF distribution system design and component test results
Roybal, W.T.; Rees, D.E.; Borchert, H.L.; McCarthy, M.; Toole, L.
1998-01-01
The 350 MHz and 700 MHz RF distribution systems for the Low Energy Demonstration Accelerator (LEDA) have been designed and are currently being installed at Los Alamos National Laboratory. Since 350 MHz is a familiar frequency used at other accelerator facilities, most of the major high-power components were available. The 700 MHz, 1.0 MW, CW RF delivery system designed for LEDA is a new development. Therefore, high-power circulators, waterloads, phase shifters, switches, and harmonic filters had to be designed and built for this application. The final Accelerator Production of Tritium (APT) RF distribution system design will be based on much of the same technology as the LEDA systems and will have many of the RF components tested for LEDA incorporated into the design. Low-power and high-power tests performed on various components of these LEDA systems and their results are presented here.
Sodium flow distribution in test fuel assembly P-23B
Taylor, J.P.S.
1978-08-01
Relatively large cladding diametral increases in the exterior fuel pins of HEDL's test fuel subassembly P-23B were successfully explained by a thermal-hydraulic/solid mechanics analysis. This analysis indicates that while at power, the subassembly flow was less than planned and that the fuel pins were considerably displaced and bowed from their nominal position. In accomplishing this analysis, a method was developed to estimate the sodium flow distribution and pin distortions in a fuel subassembly at power
Factors affecting daughters distribution among progeny testing Holstein bulls
Martino Cassandro
2012-01-01
The aim of this study was to investigate factors influencing the number of daughters of Holstein bulls during progeny testing, using data provided by the Italian Holstein Friesian Cattle Breeders Association. The hypothesis is that there are no differences among artificial insemination studs (AIS) in the distribution of daughters among progeny-testing bulls. For each bull, beginning from 21 months of age, the distribution of daughters over the progeny-testing period was calculated. Data were available on 1973 bulls born between 1986 and 2004, progeny tested in Italy and with at least 4 paternal half-sibs. On average, bulls exited the genetic centre at 11.3±1.1 months and reached their first official genetic proof at 58.0±3.1 months of age. An analysis of variance was performed on the cumulative frequency of daughters at 24, 36, 48, and 60 months. The generalized linear model included the fixed effects of year of birth of the bull (18 levels), artificial insemination stud (4 levels), and sire of bull (137 levels). All effects significantly affected the variability of the studied traits. Artificial insemination stud was the most important source of variation, followed by year of birth and sire of bull. Significant differences among AI studs exist, probably reflecting different strategies adopted during progeny testing.
Bai, D.S.; Chun, Y.R.; Kim, J.G.
1995-01-01
This paper considers the design of life-test sampling plans based on failure-censored accelerated life tests. The lifetime distribution of products is assumed to be Weibull with a scale parameter that is a log linear function of a (possibly transformed) stress. Two levels of stress higher than the use condition stress, high and low, are used. Sampling plans with equal expected test times at high and low test stresses which satisfy the producer's and consumer's risk requirements and minimize the asymptotic variance of the test statistic used to decide lot acceptability are obtained. The properties of the proposed life-test sampling plans are investigated
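A minimal sketch of the assumed lifetime model (Weibull with a scale parameter log-linear in stress; all parameter values below are hypothetical) shows how data collected at two stress levels recover the stress coefficient, using the identity E[ln T] = ln eta(s) - gamma/k for a Weibull lifetime T with shape k and scale eta:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Weibull accelerated-life-test model: fixed shape k, scale
# log-linear in stress, eta(s) = exp(b0 + b1*s) with b1 < 0
# (higher stress -> shorter life).
k, b0, b1 = 2.0, 5.0, -1.5

def sample_lifetimes(stress, n):
    eta = np.exp(b0 + b1 * stress)
    return eta * rng.weibull(k, size=n)   # standard Weibull scaled by eta

low, high = 1.0, 2.0                       # two stress levels above use stress
t_low = sample_lifetimes(low, 5000)
t_high = sample_lifetimes(high, 5000)

# Since E[ln T] = ln(eta(s)) - gamma/k, the slope of mean log-life
# versus stress recovers the log-linear coefficient b1.
slope = (np.log(t_high).mean() - np.log(t_low).mean()) / (high - low)
```

An actual sampling plan would additionally impose failure censoring and choose sample sizes to meet the producer's and consumer's risk requirements, which this sketch omits.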
Jang, Kwang Eun; Lee, Jongha; Sung, Younghun; Lee, SeongDeok
2013-01-01
Purpose: X-ray photons generated from a typical x-ray source for clinical applications exhibit a broad range of wavelengths, and the interactions between individual particles and biological substances depend on particles' energy levels. Most existing reconstruction methods for transmission tomography, however, neglect this polychromatic nature of measurements and rely on the monochromatic approximation. In this study, we developed a new family of iterative methods that incorporates the exact polychromatic model into tomographic image recovery, which improves the accuracy and quality of reconstruction. Methods: The generalized information-theoretic discrepancy (GID) was employed as a new metric for quantifying the distance between the measured and synthetic data. By using special features of the GID, the objective function for polychromatic reconstruction which contains a double integral over the wavelength and the trajectory of incident x-rays was simplified to a paraboloidal form without using the monochromatic approximation. More specifically, the original GID was replaced with a surrogate function with two auxiliary, energy-dependent variables. Subsequently, the alternating minimization technique was applied to solve the double minimization problem. Based on the optimization transfer principle, the objective function was further simplified to the paraboloidal equation, which leads to a closed-form update formula. Numerical experiments on the beam-hardening correction and material-selective reconstruction were conducted to compare and assess the performance of conventional methods and the proposed algorithms. Results: The authors found that the GID determines the distance between its two arguments in a flexible manner. In this study, three groups of GIDs with distinct data representations were considered. The authors demonstrated that one type of GIDs that comprises “raw” data can be viewed as an extension of existing statistical reconstructions; under a
Pant, Sanjay; Lombardi, Damiano
2015-10-01
A new approach for assessing parameter identifiability of dynamical systems in a Bayesian setting is presented. The concept of Shannon entropy is employed to measure the inherent uncertainty in the parameters. The expected reduction in this uncertainty is seen as the amount of information one expects to gain about the parameters due to the availability of noisy measurements of the dynamical system. Such expected information gain is interpreted in terms of the variance of a hypothetical measurement device that can measure the parameters directly, and is related to practical identifiability of the parameters. If the individual parameters are unidentifiable, correlation between parameter combinations is assessed through conditional mutual information to determine which sets of parameters can be identified together. The information theoretic quantities of entropy and information are evaluated numerically through a combination of Monte Carlo and k-nearest neighbour methods in a non-parametric fashion. Unlike many methods to evaluate identifiability proposed in the literature, the proposed approach takes the measurement-noise into account and is not restricted to any particular noise-structure. Whilst computationally intensive for large dynamical systems, it is easily parallelisable and is non-intrusive as it does not necessitate re-writing of the numerical solvers of the dynamical system. The application of such an approach is presented for a variety of dynamical systems--ranging from systems governed by ordinary differential equations to partial differential equations--and, where possible, validated against results previously published in the literature. Copyright © 2015 Elsevier Inc. All rights reserved.
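The non-parametric entropy evaluation mentioned here can be illustrated with the classic Kozachenko-Leonenko nearest-neighbour estimator in one dimension (a simplified sketch of the same idea; the paper's multivariate setting would use k-d trees and k > 1):

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329

def knn_entropy_1d(x):
    """Kozachenko-Leonenko nearest-neighbour (k=1) estimate of the
    differential entropy (in nats) of a 1-D sample:
        H ~ psi(N) - psi(1) + ln(2) + mean(ln eps_i),
    where eps_i is the distance from point i to its nearest neighbour.
    Uses psi(N) ~ ln(N) - 1/(2N) for large N and psi(1) = -gamma."""
    x = np.sort(np.asarray(x, float))
    n = x.size
    gaps = np.diff(x)
    eps = np.empty(n)
    eps[0], eps[-1] = gaps[0], gaps[-1]
    eps[1:-1] = np.minimum(gaps[:-1], gaps[1:])  # nearest-neighbour distance
    psi_n = np.log(n) - 1.0 / (2 * n)
    psi_1 = -EULER_GAMMA
    return psi_n - psi_1 + np.log(2.0) + np.mean(np.log(eps))

rng = np.random.default_rng(2)
sample = rng.normal(0.0, 1.0, 4000)
h_est = knn_entropy_1d(sample)
h_true = 0.5 * np.log(2 * np.pi * np.e)   # differential entropy of N(0, 1)
```

The same estimator, applied to joint and marginal samples, yields the entropy and (conditional) mutual information terms used to rank parameter identifiability.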
Li, Yue; Jha, Devesh K; Ray, Asok; Wettergren, Thomas A
2018-06-01
This paper presents information-theoretic performance analysis of passive sensor networks for detection of moving targets. The proposed method falls largely under the category of data-level information fusion in sensor networks. To this end, a measure of information contribution for sensors is formulated in a symbolic dynamics framework. The network information state is approximately represented as the largest principal component of the time series collected across the network. To quantify each sensor's contribution for generation of the information content, Markov machine models as well as x-Markov (pronounced as cross-Markov) machine models, conditioned on the network information state, are constructed; the difference between the conditional entropies of these machines is then treated as an approximate measure of information contribution by the respective sensors. The x-Markov models represent the conditional temporal statistics given the network information state. The proposed method has been validated on experimental data collected from a local area network of passive sensors for target detection, where the statistical characteristics of environmental disturbances are similar to those of the target signal in the sense of time scale and texture. A distinctive feature of the proposed algorithm is that the network decisions are independent of the behavior and identity of the individual sensors, which is desirable from computational perspectives. Results are presented to demonstrate the proposed method's efficacy to correctly identify the presence of a target with very low false-alarm rates. The performance of the underlying algorithm is compared with that of a recent data-driven, feature-level information fusion algorithm. It is shown that the proposed algorithm outperforms the other algorithm.
Cryogenic distribution system for ITER proto-type cryoline test
Bhattacharya, R.; Shah, N.; Badgujar, S.; Sarkar, B.
2012-01-01
Design validation for the ITER cryoline will be carried out by a prototype test. The major objectives of the test are to verify mechanical integrity, reliability, thermal stress, and heat load, as well as to check assembly and fabrication procedures. The cryogenic system has to satisfy the functional operating scenario of the cryoline. A cryoplant and a distribution box (DB), including a liquid helium (LHe) tank, constitute the cryogenic system for the test. A conceptual system architecture is proposed with a commercially available refrigerator/liquefier and a custom-designed DB housing a cold compressor, a cold circulator, and a phase separator with a submerged heat exchanger. System-level optimization, mainly of the DB and LHe tank options, has been studied to minimize the cold power required for the system. Aspen HYSYS is used for process simulation. The paper describes the system architecture and the optimized design, as well as the process simulation with associated results. (author)
3D nonrigid medical image registration using a new information theoretic measure
Li, Bicao; Yang, Guanyu; Coatrieux, Jean Louis; Li, Baosheng; Shu, Huazhong
2015-11-01
This work presents a novel method for the nonrigid registration of medical images based on the Arimoto entropy, a generalization of the Shannon entropy. The proposed method employed the Jensen-Arimoto divergence measure as a similarity metric to measure the statistical dependence between medical images. Free-form deformations were adopted as the transformation model and the Parzen window estimation was applied to compute the probability distributions. A penalty term is incorporated into the objective function to smooth the nonrigid transformation. The goal of registration is to optimize an objective function consisting of a dissimilarity term and a penalty term, which would be minimal when two deformed images are perfectly aligned using the limited memory BFGS optimization method, and thus to get the optimal geometric transformation. To validate the performance of the proposed method, experiments on both simulated 3D brain MR images and real 3D thoracic CT data sets were designed and performed on the open source elastix package. For the simulated experiments, the registration errors of 3D brain MR images with various magnitudes of known deformations and different levels of noise were measured. For the real data tests, four data sets of 4D thoracic CT from four patients were selected to assess the registration performance of the method, including ten 3D CT images for each 4D CT data covering an entire respiration cycle. These results were compared with the normalized cross correlation and the mutual information methods and show a slight but true improvement in registration accuracy.
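The Arimoto entropy and a Jensen-type divergence built from it can be sketched for discrete distributions (one common construction, shown here only to illustrate the quantities; the registration method applies the divergence to Parzen-estimated image intensity distributions):

```python
import numpy as np

def arimoto_entropy(p, alpha=1.5):
    """Arimoto entropy H_a(p) = a/(1-a) * ((sum p_i^a)^(1/a) - 1),
    a one-parameter generalization of the Shannon entropy (a -> 1)."""
    p = np.asarray(p, float)
    return alpha / (1.0 - alpha) * (np.sum(p ** alpha) ** (1.0 / alpha) - 1.0)

def jensen_arimoto(p, q, alpha=1.5):
    """Jensen-type divergence built from the Arimoto entropy, by analogy
    with the Jensen-Shannon divergence (illustrative definition):
    JA(p, q) = H_a((p+q)/2) - (H_a(p) + H_a(q)) / 2."""
    m = 0.5 * (np.asarray(p, float) + np.asarray(q, float))
    return arimoto_entropy(m, alpha) - 0.5 * (
        arimoto_entropy(p, alpha) + arimoto_entropy(q, alpha))

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.1, 0.2, 0.7])
d_same = jensen_arimoto(p, p)   # identical distributions -> 0
d_diff = jensen_arimoto(p, q)   # distinct distributions -> positive
```

In the registration setting, minimizing such a dissimilarity over free-form deformation parameters drives the two images into alignment.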
Malevergne, Yannick; Pisarenko, Vladilen; Sornette, Didier
2011-03-01
Fat-tail distributions of sizes abound in natural, physical, economic, and social systems. The lognormal and the power laws have historically competed for recognition with sometimes closely related generating processes and hard-to-distinguish tail properties. This state of affairs is illustrated with the debate between Eeckhout [Amer. Econ. Rev. 94, 1429 (2004)] and Levy [Amer. Econ. Rev. 99, 1672 (2009)] on the validity of Zipf's law for US city sizes. By using a uniformly most powerful unbiased (UMPU) test between the lognormal and the power laws, we show that conclusive results can be achieved to end this debate. We advocate the UMPU test as a systematic tool to address similar controversies in the literature of many disciplines involving power laws, scaling, "fat" or "heavy" tails. In order to demonstrate that our procedure works for data sets other than the US city size distribution, we also briefly present the results obtained for the power-law tail of the distribution of personal identity (ID) losses, which constitute one of the major emergent risks at the interface between cyberspace and reality.
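The intuition behind a tail test of this kind can be sketched as a plain likelihood-ratio comparison (a simplification for illustration, not the UMPU statistic of the paper): above a threshold x_min, log-exceedances are exponentially distributed under a Pareto (power-law) tail and approximately normally distributed under a lognormal one.

```python
import numpy as np

rng = np.random.default_rng(3)

def loglik_ratio(sizes, xmin):
    """For sizes above xmin, compare a Pareto tail (log-exceedances ~
    exponential) against a lognormal tail (log-exceedances ~ normal,
    used here as a rough stand-in). Returns the log-likelihood
    difference, power law minus lognormal: positive favours the
    power law. Plain likelihood-ratio sketch, not the UMPU statistic."""
    x = np.log(sizes[sizes >= xmin] / xmin)   # log-exceedances
    n = x.size
    rate = 1.0 / x.mean()                      # exponential MLE
    ll_pareto = n * np.log(rate) - rate * x.sum()
    mu, sd = x.mean(), x.std()                 # normal MLE
    ll_logn = (-0.5 * n * np.log(2 * np.pi * sd**2)
               - 0.5 * np.sum((x - mu)**2) / sd**2)
    return ll_pareto - ll_logn

pareto_sizes = rng.pareto(1.5, 20000) + 1.0    # Pareto tail, xmin = 1
logn_sizes = rng.lognormal(3.0, 0.8, 20000)    # lognormal body and tail

r_pl = loglik_ratio(pareto_sizes, 1.0)         # positive: favours power law
r_ln = loglik_ratio(logn_sizes, 1.0)           # negative: favours lognormal
```

The UMPU construction of the paper sharpens this naive comparison into a test with exact size and maximal power among unbiased tests.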
Distributed training, testing, and decision aids within one solution
Strini, Robert A.; Strini, Keith
2002-07-01
Military air operations in the European theater require U.S. and NATO participants to send various mission experts to 10 Combined Air Operations Centers (CAOCs). Little or no training occurs prior to their arrival for tours of duty ranging from 90 days to 3 years. When training does occur, there is little assessment of its effectiveness in raising CAOC mission readiness. A comprehensive training management system has been developed that utilizes traditional and web-based distance-learning methods for providing instruction and task practice, as well as distributed simulation to provide mission rehearsal training opportunities on demand for the C2 warrior. This system incorporates new technologies, such as voice interaction and virtual tutors, and a Learning Management System (LMS) that tracks trainee progress from academic learning through procedural practice and mission training exercises. Supervisors can monitor their subordinates' progress through synchronous or asynchronous methods. Embedded within this system are virtual tutors, which provide automated performance measurement as well as tutoring. The training system offers true time-management savings for current instructors and training providers, who today must perform On the Job Training (OJT) duties before, during, and after each event. Many units do not have the resources to support OJT and are forced to maintain an overlap of several days to minimally maintain unit readiness. One CAOC Commander affected by this paradigm has advocated supporting a beta version of this system to test its ability to offer training on demand and track the progress of its personnel and unit readiness. If successful, aircrew simulation devices can be connected through either Distributed Interactive Simulation or High Level Architecture methods to provide a DMT-C2 air operations training environment in Europe. This paper presents an approach to establishing a training, testing and decision aid capability and means to assess
Sucheston, Lara
2010-09-01
Background: Multifactorial diseases such as cancer and cardiovascular diseases are caused by the complex interplay between genes and environment. The detection of these interactions remains challenging due to computational limitations. Information-theoretic approaches use computationally efficient directed search strategies and thus provide a feasible solution to this problem. However, the power of information-theoretic methods for interaction analysis has not been systematically evaluated. In this work, we compare the power and Type I error of an information-theoretic approach to existing interaction analysis methods. Methods: The k-way interaction information (KWII) metric for identifying variable combinations involved in gene-gene interactions (GGI) was assessed using several simulated data sets under models of genetic heterogeneity driven by susceptibility-increasing loci with varying allele frequency, penetrance values, and heritability. The power and proportion of false positives of the KWII were compared to multifactor dimensionality reduction (MDR), the restricted partitioning method (RPM), and logistic regression. Results: The power of the KWII was considerably greater than that of MDR on all six simulation models examined. For a given disease prevalence at high values of heritability, the power of both RPM and KWII was greater than 95%. For models with low heritability and/or genetic heterogeneity, the power of the KWII was consistently greater than that of RPM; the improvements in power for the KWII over RPM ranged from 4.7% to 14.2% for α = 0.001 in the three models at the lowest heritability values examined. The KWII performed similarly to logistic regression. Conclusions: Information-theoretic models are flexible and have excellent power to detect GGI under a variety of conditions that characterize complex diseases.
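The KWII can be written as an alternating sum of joint Shannon entropies over all non-empty subsets of the chosen variables. A small sketch (using a synthetic XOR phenotype as the textbook pure gene-gene interaction, not the paper's simulation models):

```python
import numpy as np
from itertools import combinations

def entropy(cols):
    """Joint Shannon entropy (bits) of one or more discrete columns."""
    _, counts = np.unique(np.column_stack(cols), axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def kwii(cols):
    """k-way interaction information:
        KWII(V) = -sum over non-empty T subset of V of (-1)^(|V|-|T|) H(T).
    Positive values indicate synergy among the variables, e.g. a pure
    gene-gene interaction with a phenotype."""
    k = len(cols)
    total = 0.0
    for r in range(1, k + 1):
        for sub in combinations(cols, r):
            total += (-1) ** (k - r) * entropy(sub)
    return -total

rng = np.random.default_rng(4)
a = rng.integers(0, 2, 10000)       # genotype A (binary, toy)
b = rng.integers(0, 2, 10000)       # genotype B
pheno = a ^ b                       # purely epistatic (XOR) phenotype
indep = rng.integers(0, 2, 10000)   # unrelated phenotype

kwii_xor = kwii([a, b, pheno])      # ~ +1 bit: pure 3-way synergy
kwii_null = kwii([a, b, indep])     # ~ 0: no interaction
```

The XOR case is the canonical situation where neither single locus is marginally informative, yet the three-way KWII is maximal, which is exactly the signal the directed search exploits.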
Independent test assessment using the extreme value distribution theory.
Almeida, Marcio; Blondell, Lucy; Peralta, Juan M; Kent, Jack W; Jun, Goo; Teslovich, Tanya M; Fuchsberger, Christian; Wood, Andrew R; Manning, Alisa K; Frayling, Timothy M; Cingolani, Pablo E; Sladek, Robert; Dyer, Thomas D; Abecasis, Goncalo; Duggirala, Ravindranath; Blangero, John
2016-01-01
The new generation of whole genome sequencing platforms offers great possibilities and challenges for dissecting the genetic basis of complex traits. With a very high number of sequence variants, a naïve multiple hypothesis threshold correction hinders the identification of reliable associations by the overreduction of statistical power. In this report, we examine 2 alternative approaches to improve the statistical power of a whole genome association study to detect reliable genetic associations. The approaches were tested using the Genetic Analysis Workshop 19 (GAW19) whole genome sequencing data. The first tested method estimates the real number of effective independent tests actually being performed in a whole genome association project by the use of an extreme value distribution and a set of phenotype simulations. Given the familial nature of the GAW19 data and the finite number of pedigree founders in the sample, the number of correlations between genotypes is greater than in a set of unrelated samples. Using our procedure, we estimate that the effective number represents only 15% of the total number of independent tests performed. However, even using this corrected significance threshold, no genome-wide significant association could be detected for systolic and diastolic blood pressure traits. The second approach implements a biological relevance-driven hypothesis test by exploiting prior computational predictions on the effect of nonsynonymous genetic variants detected in a whole genome sequencing association study. This guided testing approach was able to identify 2 promising single-nucleotide polymorphisms (SNPs), 1 for each trait, targeting biologically relevant genes that could help shed light on the genesis of human hypertension. The first gene, PFH14, associated with systolic blood pressure, interacts directly with genes involved in calcium-channel formation and the second gene, MAP4, encodes a microtubule-associated protein and had already
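The first approach, estimating the effective number of independent tests from simulated null minima, can be sketched as follows. The Beta(1, M_eff) form for the minimum p-value is one standard extreme-value-style construction and is an assumption of this sketch, not necessarily the authors' exact procedure:

```python
import numpy as np

rng = np.random.default_rng(5)

def effective_tests(min_pvalues):
    """Under the null, the minimum of M_eff independent uniform p-values
    follows Beta(1, M_eff); the MLE of M_eff from a set of simulated
    minima is M_eff = -n / sum(log(1 - p_min_i))."""
    m = np.asarray(min_pvalues, float)
    return -m.size / np.sum(np.log1p(-m))

# 50 independent tests, each duplicated 10 times: 500 nominal tests
# but only ~50 effective ones (perfect correlation within each block,
# a crude proxy for genotype correlation in pedigrees).
n_sim, n_indep, dup = 2000, 50, 10
p_indep = rng.uniform(size=(n_sim, n_indep))     # simulated null p-values
p_all = np.repeat(p_indep, dup, axis=1)          # correlated copies
m_eff = effective_tests(p_all.min(axis=1))
```

Here the recovered effective number is about 10% of the nominal test count, the same kind of reduction (15%) that the authors report for the GAW19 pedigree data.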
Experimental test of nuclear magnetization distribution and nuclear structure models
Beiersdorfer, P.; Crespo-Lopez-Urrutia, J. R.; Utter, S. B.
1999-01-01
Models exist that ascribe the nuclear magnetic fields to the presence of a single nucleon whose spin is not neutralized by pairing it up with that of another nucleon; other models assume that the generation of the magnetic field is shared among some or all nucleons throughout the nucleus. All models predict the same magnetic field external to the nucleus, since this is an anchor provided by experiments. The models differ, however, in their predictions of the magnetic field arrangement within the nucleus, for which no data exist. The only way to distinguish which model gives the correct description of the nucleus would be to use a probe inserted into the nucleus. The goal of our project was to develop exactly such a probe and to use it to measure fundamental nuclear quantities that have eluded experimental scrutiny. The need for accurately knowing such quantities extends far beyond nuclear physics and has ramifications in parity violation experiments on atomic traps and the testing of the standard model in elementary particle physics. Unlike scattering experiments that employ streams of free particles, our technique to probe the internal magnetic field distribution of the nucleus rests on using a single bound electron. Quantum mechanics shows that an electron in the innermost orbital surrounding the nucleus constantly dives into the nucleus and thus samples the fields that exist inside. This sampling of the nucleus usually results in only minute shifts in the electron's average orbital, which would be difficult to detect. By studying two particular energy states of the electron, we can, however, dramatically enhance the effects of the distribution of the magnetic fields in the nucleus. In fact, about 2% of the energy difference between the two states, dubbed the hyperfine splitting, is determined by effects related to the distribution of magnetic fields in the nucleus. A precise measurement of this energy difference (better than 0.01%) would then allow us to place
Sileshi, G
2006-10-01
Researchers and regulatory agencies often make statistical inferences from insect count data using modelling approaches that assume homogeneous variance. Such models do not allow for formal appraisal of variability, which in its different forms is the subject of interest in ecology. Therefore, the objectives of this paper were to (i) compare models suitable for handling variance heterogeneity and (ii) select optimal models to ensure valid statistical inferences from insect count data. The log-normal, standard Poisson, Poisson corrected for overdispersion, zero-inflated Poisson, negative binomial and zero-inflated negative binomial models were compared using six count datasets on foliage-dwelling insects and five families of soil-dwelling insects. Akaike's and Schwarz's Bayesian information criteria were used for comparing the various models. Over 50% of the counts were zeros, even in locally abundant species such as Ootheca bennigseni Weise, Mesoplatys ochroptera Stål and Diaecoderus spp. The Poisson model after correction for overdispersion and the standard negative binomial model provided a better description of the probability distribution of seven of the 11 insects than the log-normal, standard Poisson, zero-inflated Poisson or zero-inflated negative binomial models. It is concluded that excess zeros and variance heterogeneity are common phenomena in insect count data. If not properly modelled, these properties can invalidate normal-distribution assumptions, resulting in biased estimation of ecological effects and jeopardizing the integrity of the scientific inferences. It is therefore recommended that statistical models appropriate for handling these data properties be selected using objective criteria, to ensure efficient statistical inference.
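The model-comparison step the abstract describes (ranking a Poisson fit against a negative binomial fit by AIC) can be sketched as follows. The simulated counts, the method-of-moments estimator, and all parameter values are illustrative placeholders, not the study's data or fitting procedure:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical overdispersed insect counts: many zeros, a few large values.
counts = rng.negative_binomial(n=0.5, p=0.2, size=200)

def aic(loglik, k):
    """Akaike's information criterion: -2 log L + 2k."""
    return -2.0 * loglik + 2.0 * k

# Poisson fit: the MLE of the rate is the sample mean (1 parameter).
lam = counts.mean()
ll_pois = stats.poisson.logpmf(counts, lam).sum()

# Negative binomial fit: crude method-of-moments estimates (2 parameters).
mean, var = counts.mean(), counts.var(ddof=1)
n_hat = mean**2 / (var - mean)          # dispersion parameter
p_hat = n_hat / (n_hat + mean)
ll_nb = stats.nbinom.logpmf(counts, n_hat, p_hat).sum()

# Lower AIC wins; on overdispersed data the NB model should be preferred.
print(aic(ll_pois, 1) > aic(ll_nb, 2))
```

A lower AIC indicates the better trade-off between fit and parameter count, which is exactly the "objective criteria" the authors recommend.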
Population distribution around the Nevada Test Site, 1984
Smith, D.D.; Coogan, J.S.
1984-08-01
The Environmental Monitoring Systems Laboratory (EMSL-LV) conducts an offsite radiological safety program outside the boundaries of the Nevada Test Site. As part of this program, the EMSL-LV maintains a comprehensive and current listing of all rural offsite residents and dairy animals within the controllable sectors (areas where the EMSL-LV could implement protective or remedial actions that would assure public safety). This report was produced to give a brief overview of the population distribution and of the activities within the controllable sectors. The number of people in a sector naturally changes with the season of the year, and with such diverse factors as mineral prices, which drive the opening and closing of mining operations. Currently, the controllable sectors out to 200 kilometers from the Control Point on the NTS are considered to be the entire northeast, north-northeast, north, north-northwest and west-northwest sectors, together with portions of the east and east-northeast sectors. The west-southwest and south-southwest sectors are considered controllable out to 40 to 80 kilometers. No major population centers or dairy farms lie within these sectors. 7 references, 5 figures, 2 tables
Similarity Analysis for Reactor Flow Distribution Test and Its Validation
Hong, Soon Joon; Ha, Jung Hui [Heungdeok IT Valley, Yongin (Korea, Republic of); Lee, Taehoo; Han, Ji Woong [KAERI, Daejeon (Korea, Republic of)
2015-05-15
facility. It was clearly found in Hong et al. In this study, the feasibility of the similarity analysis of Hong et al. was examined. The similarity analysis was applied to the SFR designed at KAERI (Korea Atomic Energy Research Institute) in order to design the reactor flow distribution test. The length scale was assumed to be 1/5, and the velocity scale 1/2, which bounds the square root of the length scale (1/√5). CFX calculations for both the prototype and the model were carried out, and the flow fields were compared.
Grau-Moya, Jordi; Ortega, Pedro A; Braun, Daniel A
2016-01-01
A number of recent studies have investigated differences in human choice behavior depending on task framing, especially comparing economic decision-making to choice behavior in equivalent sensorimotor tasks. Here we test whether decision-making under ambiguity exhibits effects of task framing in motor vs. non-motor context. In a first experiment, we designed an experience-based urn task with varying degrees of ambiguity and an equivalent motor task where subjects chose between hitting partially occluded targets. In a second experiment, we controlled for the different stimulus design in the two tasks by introducing an urn task with bar stimuli matching those in the motor task. We found ambiguity attitudes to be mainly influenced by stimulus design. In particular, we found that the same subjects tended to be ambiguity-preferring when choosing between ambiguous bar stimuli, but ambiguity-avoiding when choosing between ambiguous urn sample stimuli. In contrast, subjects' choice pattern was not affected by changing from a target hitting task to a non-motor context when keeping the stimulus design unchanged. In both tasks subjects' choice behavior was continuously modulated by the degree of ambiguity. We show that this modulation of behavior can be explained by an information-theoretic model of ambiguity that generalizes Bayes-optimal decision-making by combining Bayesian inference with robust decision-making under model uncertainty. Our results demonstrate the benefits of information-theoretic models of decision-making under varying degrees of ambiguity for a given context, but also demonstrate the sensitivity of ambiguity attitudes across contexts that theoretical models struggle to explain.
Howe, Alex R.; Burrows, Adam; Deming, Drake
2017-01-01
We provide an example of an analysis to explore the optimization of observations of transiting hot Jupiters with the James Webb Space Telescope (JWST) to characterize their atmospheres based on a simple three-parameter forward model. We construct expansive forward model sets for 11 hot Jupiters, 10 of which are relatively well characterized, exploring a range of parameters such as equilibrium temperature and metallicity, as well as considering host stars over a wide range in brightness. We compute posterior distributions of our model parameters for each planet with all of the available JWST spectroscopic modes and several programs of combined observations and compute their effectiveness using the metric of estimated mutual information per degree of freedom. From these simulations, clear trends emerge that provide guidelines for designing a JWST observing program. We demonstrate that these guidelines apply over a wide range of planet parameters and target brightnesses for our simple forward model.
The quantization of the attention function under a Bayes information theoretic model
Wynn, H.P.; Sebastiani, P.
2001-01-01
Bayes experimental design using entropy, or equivalently negative information, as a criterion is fairly well developed. The present work applies this model, but at a primitive level in statistical sampling. It is assumed that the observer/experimenter is allowed to place a window over the support of a sampling distribution and only 'pay for' observations that fall in this window. The window can be modeled with an 'attention function', simply the indicator function of the window. The understanding is that the cost of the experiment is only the number of paid-for observations, n. For fixed n and under the information model, it turns out that for standard problems the optimal structure for the window, in the limit amongst all types of window including disjoint regions, is discrete. That is to say, it is optimal to observe the world (in this sense) through discrete slits. It also follows that Bayesians with different priors will receive different samples, because typically the optimal attention windows will be disjoint. This property we refer to as the quantization of the attention function.
Clifford, Jacob; Adami, Christoph
2015-09-02
Transcription factor binding to DNA regulatory regions is one of the primary mechanisms regulating gene expression levels. A probabilistic approach to modeling protein-DNA interactions at the sequence level is through position weight matrices (PWMs), which estimate the joint probability of a DNA binding site sequence by assuming positional independence within the DNA sequence. Here we construct conditional PWMs that depend on the motif signatures in the flanking DNA sequence, by conditioning known binding site loci on the presence or absence of additional binding sites in the flanking sequence of each site's locus. Pooling known sites with similar flanking-sequence patterns allows for the estimation of the conditional distribution function over the binding site sequences. We apply our model to the Dorsal transcription factor binding sites active in patterning the Dorsal-Ventral axis of Drosophila development. We find that those binding sites that cooperate with nearby Twist sites on average contain about 0.5 bits of information about the presence of Twist transcription factor binding sites in the flanking sequence. We also find that Dorsal binding site detectors conditioned on flanking-sequence information make better predictions about what is a Dorsal site relative to background DNA than detection without information about flanking-sequence features.
Shell model test of the Porter-Thomas distribution
Grimes, S.M.; Bloom, S.D.
1981-01-01
Eigenvectors have been calculated for the A=18, 19, 20, 21, and 26 nuclei in an sd shell basis. The decomposition of these states into their shell model components shows, in agreement with other recent work, that this distribution is not a single Gaussian. We find that the largest amplitudes are distributed approximately in a Gaussian fashion. Thus, many experimental measurements should be consistent with the Porter-Thomas predictions. We argue that the non-Gaussian form of the complete distribution can be simply related to the structure of the Hamiltonian
A multivariate rank test for comparing mass size distributions
Lombard, F.; Potgieter, C. J.
2012-01-01
Particle size analyses of a raw material are commonplace in the mineral processing industry. Knowledge of particle size distributions is crucial in planning milling operations to enable an optimum degree of liberation of valuable mineral phases
Experimental tests of charge symmetry violation in parton distributions
Londergan, J.T.; Murdock, D.P.; Thomas, A.W.
2005-01-01
Recently, a global phenomenological fit to high energy data has included charge symmetry breaking terms, leading to limits on the allowed magnitude of such effects. We discuss two possible experiments that could search for isospin violation in valence parton distributions. We show that, given the magnitude of charge symmetry violation consistent with existing global data, such experiments might expect to see effects at a level of several percent. Alternatively, such experiments could significantly decrease the upper limits on isospin violation in parton distributions
Horgan, S.; Iannucci, J.; Whitaker, C.; Cibulka, L.; Erdman, W.
2002-05-01
The objective of this project was to evaluate the Nevada Test Site (NTS) as a location for performing dedicated, in-depth testing of distributed resources (DR) integrated with the electric distribution system. In this large scale testing, it is desired to operate multiple DRs and loads in an actual operating environment, in a series of controlled tests to concentrate on issues of interest to the DR community. This report includes an inventory of existing facilities at NTS, an assessment of site attributes in relation to DR testing requirements, and an evaluation of the feasibility and cost of upgrades to the site that would make it a fully qualified DR testing facility.
Wu, Wenjie; Wu, Zemin; Rong, Chunying; Lu, Tian; Huang, Ying; Liu, Shubin
2015-07-23
The electrophilic aromatic substitution reactions of nitration, halogenation, sulfonation, and acylation form a vastly important category of chemical transformation. Their reactivity and regioselectivity are predominantly determined by the nucleophilicity of the carbon atoms on the aromatic ring, which in turn is strongly influenced by the group attached to the aromatic ring a priori. In this work, taking advantage of recent developments in quantifying nucleophilicity (electrophilicity) with descriptors from the information-theoretic approach in density functional reactivity theory, we examine the reactivity properties of this reaction system from three perspectives. These include scaling patterns of information-theoretic quantities such as Shannon entropy, Fisher information, Ghosh-Berkowitz-Parr entropy and information gain at both the molecular and atomic levels; quantitative predictions of the barrier height with both the Hirshfeld charge and information gain; and energetic decomposition analyses of the barrier height for the reactions. To that end, we focused in this work on the identity reaction of the monosubstituted-benzene molecule reacting with hydrogen fluoride using boron trifluoride as the catalyst in the gas phase. We also considered 19 substituting groups, 9 of which are ortho/para directing and the other 9 meta directing, besides the case of R = -H. Similar scaling patterns for these information-theoretic quantities, found for stable species elsewhere, were disclosed for these reaction systems. We also unveiled novel scaling patterns for information gain at the atomic level. The barrier height of the reactions can reliably be predicted using both the Hirshfeld charge and the information gain at the regioselective carbon atom. The ensuing energy decomposition analysis yields an unambiguous picture of the origin of the barrier height, where we showed that it is the electrostatic interaction that plays the dominant role, while the roles played by exchange-correlation and
Improved Testing of Distributed Lag Model in Presence of ...
The finite distributed lag models (DLM) are often used in econometrics and statistics. Application of the ordinary least square (OLS) directly on the DLM for estimation may have serious problems. To overcome these problems, some alternative estimation procedures are available in the literature. One popular method to ...
10 CFR 431.193 - Test procedures for measuring energy consumption of distribution transformers.
2010-01-01
10 CFR § 431.193 (Title 10, Energy; 2010 ed.), Department of Energy energy conservation regulations: test procedures for measuring energy consumption of distribution transformers.
Przybyla, Jay; Taylor, Jeffrey; Zhou, Xuesong
2010-01-01
In this paper, a spatial information-theoretic model is proposed to locate sensors for detecting source-to-target patterns of special nuclear material (SNM) smuggling. In order to ship the nuclear materials from a source location with SNM production to a target city, the smugglers must employ global and domestic logistics systems. This paper focuses on locating a limited set of fixed and mobile radiation sensors in a transportation network, with the intent to maximize the expected information gain and minimize the estimation error for the subsequent nuclear material detection stage. A Kalman filtering-based framework is adapted to assist the decision-maker in quantifying the network-wide information gain and SNM flow estimation accuracy.
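The information-gain quantity that such a Kalman-filtering framework maximizes can be illustrated with a single linear-Gaussian measurement update: the gain in bits is half the log-ratio of prior to posterior covariance determinants. The helper name `information_gain_bits` and all matrices below are made-up placeholders, not the paper's network model:

```python
import numpy as np

def information_gain_bits(P_prior, H, R):
    """Information gain 0.5*log2(det(P_prior)/det(P_post)) of one
    linear-Gaussian (Kalman) measurement update."""
    S = H @ P_prior @ H.T + R                  # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)       # Kalman gain
    P_post = (np.eye(P_prior.shape[0]) - K @ H) @ P_prior
    return 0.5 * np.log2(np.linalg.det(P_prior) / np.linalg.det(P_post))

P = np.diag([4.0, 1.0])      # prior uncertainty about flow on two links
H = np.array([[1.0, 0.0]])   # a sensor that observes the first link only
R = np.array([[1.0]])        # measurement noise variance
print(round(information_gain_bits(P, H, R), 3))
```

Evaluating candidate sensor sites by this scalar lets a placement algorithm rank them by expected reduction in SNM-flow estimation uncertainty.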
A Test Generation Framework for Distributed Fault-Tolerant Algorithms
Goodloe, Alwyn; Bushnell, David; Miner, Paul; Pasareanu, Corina S.
2009-01-01
Heavyweight formal methods such as theorem proving have been successfully applied to the analysis of safety critical fault-tolerant systems. Typically, the models and proofs performed during such analysis do not inform the testing process of actual implementations. We propose a framework for generating test vectors from specifications written in the Prototype Verification System (PVS). The methodology uses a translator to produce a Java prototype from a PVS specification. Symbolic (Java) PathFinder is then employed to generate a collection of test cases. A small example is employed to illustrate how the framework can be used in practice.
Uncovering Bugs in Distributed Storage Systems during Testing (not in Production!)
Deligiannis, P; McCutchen, M; Thomson, P; Chen, S; Donaldson, AF; Erickson, J; Huang, C; Lal, A; Mudduluru, R; Qadeer, S; Schulte, W
2016-01-01
Testing distributed systems is challenging due to multiple sources of nondeterminism. Conventional testing techniques, such as unit, integration and stress testing, are ineffective in preventing serious but subtle bugs from reaching production. Formal techniques, such as TLA+, can only verify high-level specifications of systems at the level of logic-based models, and fall short of checking the actual executable code. In this paper, we present a new methodology for testing distributed systems...
Project W-320 acceptance test report for AY-farm electrical distribution
Bevins, R.R.
1998-01-01
This Acceptance Test Procedure (ATP) has been prepared to demonstrate that the AY-Farm Electrical Distribution System functions as required by the design criteria. The test is divided into three parts to support the planned construction schedule: Section 8 tests Mini-Power Panel AY102-PPI and the EES; Section 9 tests the SSS support systems; Section 10 tests the SSS and the Multi-Pak Group Control Panel. This test does not include the operation of end-use components (loads) supplied from the distribution system; tests of the end-use components (loads) will be performed by other W-320 ATPs.
Real-time flight test data distribution and display
Nesel, Michael C.; Hammons, Kevin R.
1988-01-01
Enhancements to the real-time processing and display systems of the NASA Western Aeronautical Test Range are described. Display processing has been moved out of the telemetry and radar acquisition processing systems super-minicomputers into user/client interactive graphic workstations. Real-time data is provided to the workstations by way of Ethernet. Future enhancement plans include use of fiber optic cable to replace the Ethernet.
Asymptotically Distribution-Free Goodness-of-Fit Testing for Copulas
Can, S.U.; Einmahl, John; Laeven, R.J.A.
2017-01-01
Consider a random sample from a continuous multivariate distribution function F with copula C. In order to test the null hypothesis that C belongs to a certain parametric family, we construct a process that is asymptotically distribution-free under H0 and serves as a test generator. The process is a
A CLASS OF DISTRIBUTION-FREE TESTS FOR INDEPENDENCE AGAINST POSITIVE QUADRANT DEPENDENCE
Parameshwar V Pandit
2014-02-01
A class of distribution-free tests based on a convex combination of two U-statistics is considered for testing independence against positive quadrant dependence. The class of tests proposed by Kochar and Gupta (1987) and Kendall's test are members of the proposed class. The performance of the proposed class is evaluated in terms of Pitman asymptotic relative efficiency for the Block-Basu (1974) model and the Woodworth family of distributions. It has been observed that some members of the class perform better than the existing tests in the literature. Unbiasedness and consistency of the proposed class of tests have been established.
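Kendall's test, named above as a member of the proposed class, is easy to demonstrate on synthetic positively quadrant dependent data; the data-generating model, sample size, and thresholds here are illustrative only:

```python
import numpy as np
from scipy import stats

# Synthetic PQD data: Y = X + small noise, so large X tends to go with large Y.
rng = np.random.default_rng(42)
x = rng.normal(size=100)
y = x + rng.normal(scale=0.3, size=100)

# Kendall's tau and its two-sided p-value; under independence tau ~ 0.
tau, p = stats.kendalltau(x, y)
print(tau > 0 and p < 0.01)
```

Because the statistic depends only on the ranks of concordant and discordant pairs, its null distribution does not depend on the marginals, which is what makes the test distribution-free.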
Clergeau, Jean-Francois; Ferraton, Matthieu; Guerard, Bruno; Khaplanov, Anton; Piscitelli, Francesco; Platz, Martin; Rigal, Jean-Marie; Van Esch, Patrick; Daulle, Thibault
2013-06-01
1D or 2D neutron imaging detectors with individual wire or strip readout using discriminators have the advantage of being able to treat several neutron impacts partially overlapping in time, hence reducing global dead time. A single neutron impact usually gives rise to several discriminator signals. In this paper, we introduce an information-theoretical definition of image resolution. Two point-like spots of neutron impacts with a given distance between them act as a source of information (each neutron hit belongs to one spot or the other), and the detector plus signal treatment is regarded as an imperfect communication channel that transmits this information. The maximal mutual information obtained from this channel as a function of the distance between the spots allows us to define a calibration-independent measure of resolution. We then apply this measure to quantify the resolving power of different algorithms treating these individual discriminator signals, which can be implemented in firmware. The method is then applied to different detectors existing at the ILL. Center-of-gravity methods usually improve the resolution over best-wire algorithms, which are the standard way of treating these signals. (authors)
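The communication-channel view of resolution can be mimicked numerically: treat the spot identity as a one-bit source, blur the reported position with Gaussian noise standing in for the detector response, and estimate the mutual information from a 2D histogram. The helper name `mutual_info_bits` and every parameter below are invented for illustration:

```python
import numpy as np

def mutual_info_bits(d, sigma=1.0, n=200_000, seed=1):
    """Estimate I(spot; reading) in bits for two spots separated by d,
    read out through Gaussian blur of width sigma."""
    rng = np.random.default_rng(seed)
    spot = rng.integers(0, 2, n)                    # which spot, equiprobable
    x = (spot - 0.5) * d + rng.normal(0, sigma, n)  # blurred position reading
    bins = np.linspace(x.min(), x.max(), 60)
    h = np.array([np.histogram(x[spot == s], bins)[0] for s in (0, 1)]) + 1e-12
    p = h / h.sum()                                 # joint distribution
    px = p.sum(axis=0)                              # marginal over readings
    ps = p.sum(axis=1, keepdims=True)               # marginal over spots
    return float((p * np.log2(p / (ps * px))).sum())

# Larger separation transmits more of the one available bit.
print(mutual_info_bits(0.5) < mutual_info_bits(4.0))
```

The distance at which this curve approaches its one-bit ceiling plays the role of the calibration-independent resolution figure described in the abstract.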
Mohamed Idhammad
2018-01-01
Cloud computing services are often delivered through the HTTP protocol. This facilitates access to services and reduces costs for both providers and end-users, but it also increases the vulnerability of Cloud services to HTTP DDoS attacks. HTTP request methods are often used to exploit web servers' vulnerabilities and create multiple scenarios of HTTP DDoS attack, such as Low-and-Slow or Flooding attacks. Existing HTTP DDoS detection systems are challenged by the large volumes of network traffic generated by these attacks, low detection accuracy, and high false positive rates. In this paper we present a detection system for HTTP DDoS attacks in a Cloud environment based on information-theoretic entropy and the Random Forest ensemble learning algorithm. A time-based sliding-window algorithm is used to estimate the entropy of the network header features of the incoming network traffic. When the estimated entropy exceeds its normal range, the preprocessing and classification tasks are triggered. To assess the proposed approach, various experiments were performed on the CIDDS-001 public dataset. The proposed approach achieves satisfactory results with an accuracy of 99.54%, a FPR of 0.4%, and a running time of 18.5 s.
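The sliding-window entropy stage can be sketched in a few lines. The feature choice (source IP), the window length, and the toy traffic below are invented for illustration and are not drawn from CIDDS-001:

```python
import math
from collections import Counter, deque

def stream_entropy(values, window=8):
    """Shannon entropy (bits) of the last `window` items, per time step."""
    buf, counts, out = deque(), Counter(), []
    for v in values:
        buf.append(v); counts[v] += 1
        if len(buf) > window:                  # slide the window
            old = buf.popleft()
            counts[old] -= 1
            if counts[old] == 0:
                del counts[old]
        n = len(buf)
        out.append(-sum(c / n * math.log2(c / n) for c in counts.values()))
    return out

normal = ["ip%d" % (i % 8) for i in range(16)]  # diverse source IPs
attack = ["ip0"] * 16                           # single-source flood
ent = stream_entropy(normal + attack)
print(ent[15] > ent[-1])                        # entropy collapses in the flood
```

A sudden departure of the windowed entropy from its normal range is the trigger that, in the paper's pipeline, hands the suspect traffic to the Random Forest classifier.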
Tourassi, Georgia D.; Harrawood, Brian; Singh, Swatee; Lo, Joseph Y.; Floyd, Carey E.
2007-01-01
The purpose of this study was to evaluate image similarity measures employed in an information-theoretic computer-assisted detection (IT-CAD) scheme. The scheme was developed for content-based retrieval and detection of masses in screening mammograms. The study is aimed toward an interactive clinical paradigm where physicians query the proposed IT-CAD scheme on mammographic locations that are either visually suspicious or indicated as suspicious by other cuing CAD systems. The IT-CAD scheme provides an evidence-based, second opinion for query mammographic locations using a knowledge database of mass and normal cases. In this study, eight entropy-based similarity measures were compared with respect to retrieval precision and detection accuracy using a database of 1820 mammographic regions of interest. The IT-CAD scheme was then validated on a separate database for false positive reduction of progressively more challenging visual cues generated by an existing, in-house mass detection system. The study showed that the image similarity measures fall into one of two categories; one category is better suited to the retrieval of semantically similar cases while the second is more effective with knowledge-based decisions regarding the presence of a true mass in the query location. In addition, the IT-CAD scheme yielded a substantial reduction in false-positive detections while maintaining high detection rate for malignant masses
Lizier, Joseph T; Heinzle, Jakob; Horstmann, Annette; Haynes, John-Dylan; Prokopenko, Mikhail
2011-02-01
The human brain undertakes highly sophisticated information processing facilitated by the interaction between its sub-regions. We present a novel method for interregional connectivity analysis, using multivariate extensions to the mutual information and transfer entropy. The method allows us to identify the underlying directed information structure between brain regions, and how that structure changes according to behavioral conditions. This method is distinguished in using asymmetric, multivariate, information-theoretical analysis, which captures not only directional and non-linear relationships, but also collective interactions. Importantly, the method is able to estimate multivariate information measures with only relatively little data. We demonstrate the method to analyze functional magnetic resonance imaging time series to establish the directed information structure between brain regions involved in a visuo-motor tracking task. Importantly, this results in a tiered structure, with known movement planning regions driving visual and motor control regions. Also, we examine the changes in this structure as the difficulty of the tracking task is increased. We find that task difficulty modulates the coupling strength between regions of a cortical network involved in movement planning and between motor cortex and the cerebellum which is involved in the fine-tuning of motor control. It is likely these methods will find utility in identifying interregional structure (and experimentally induced changes in this structure) in other cognitive tasks and data modalities.
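A minimal plug-in estimate of transfer entropy, one of the directed measures mentioned above, shows its asymmetry on toy binary series where one series copies the other with a one-step lag. The helper `transfer_entropy`, the series, and the lag are invented; real fMRI analyses use far more careful estimators:

```python
import numpy as np
from collections import Counter

def transfer_entropy(src, dst):
    """Plug-in TE(src -> dst) = I(dst_t ; src_{t-1} | dst_{t-1}), in bits,
    for discrete series with history length 1."""
    triples = Counter(zip(dst[1:], dst[:-1], src[:-1]))
    n = sum(triples.values())
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_xyz = c / n
        p_y0x0 = sum(v for (a, b, d), v in triples.items() if b == y0 and d == x0) / n
        p_y1y0 = sum(v for (a, b, d), v in triples.items() if a == y1 and b == y0) / n
        p_y0 = sum(v for (a, b, d), v in triples.items() if b == y0) / n
        te += p_xyz * np.log2(p_xyz * p_y0 / (p_y0x0 * p_y1y0))
    return te

rng = np.random.default_rng(3)
x = rng.integers(0, 2, 5000)
y = np.roll(x, 1)                  # y follows x with a one-step lag
print(transfer_entropy(x, y) > transfer_entropy(y, x))
```

The estimate is large in the driving direction and near zero in the reverse, which is the kind of directed-structure signature the method extracts from interregional brain signals.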
Stamoulis, Catherine; Schomer, Donald L; Chang, Bernard S
2013-08-01
How a seizure terminates is still under-studied and, despite its clinical importance, remains an obscure phase of seizure evolution. Recent studies of seizure-related scalp EEGs at frequencies >100 Hz suggest that neural activity, in the form of oscillations and/or neuronal network interactions, may play an important role in preictal/ictal seizure evolution (Andrade-Valenca et al., 2011; Stamoulis et al., 2012). However, the role of high-frequency activity in seizure termination, if it exists at all, is unknown. Using information-theoretic measures of network coordination, this study investigated ictal and immediate postictal neurodynamic interactions encoded in scalp EEGs from a relatively small sample of 8 patients with focal epilepsy and multiple seizures originating in temporal and/or frontal brain regions, at frequencies ≤ 100 Hz and >100 Hz, respectively. Despite some heterogeneity in the dynamics of these interactions, consistent patterns were also estimated. Specifically, in several seizures, a linear or non-linear increase in high-frequency neuronal coordination during ictal intervals coincided with a corresponding decrease in coordination at lower frequencies, a decrease that continues during the postictal interval. This may be one of several possible mechanisms that facilitate seizure termination. In fact, inhibition of pairwise interactions between EEGs by other signals in their spatial neighborhood, quantified by negative interaction information, was estimated at frequencies ≤ 100 Hz, at least in some seizures. Copyright © 2013 Elsevier B.V. All rights reserved.
Clergeau, Jean-Francois; Ferraton, Matthieu; Guerard, Bruno; Khaplanov, Anton; Piscitelli, Francesco; Platz, Martin; Rigal, Jean-Marie; Van Esch, Patrick [Institut Laue Langevin, Neutron Detector Service, Grenoble (France); Daulle, Thibault [PHELMA Grenoble - INP Grenoble (France)]
2013-06-15
1D or 2D neutron imaging detectors with individual wire or strip readout using discriminators have the advantage of being able to treat several neutron impacts partially overlapping in time, hence reducing global dead time. A single neutron impact usually gives rise to several discriminator signals. In this paper, we introduce an information-theoretical definition of image resolution. Two point-like spots of neutron impacts with a given distance between them act as a source of information (each neutron hit belongs to one spot or the other), and the detector plus signal treatment is regarded as an imperfect communication channel that transmits this information. The maximal mutual information obtained from this channel as a function of the distance between the spots allows one to define a calibration-independent measure of resolution. We then apply this measure to quantify the resolving power of different algorithms treating these individual discriminator signals which can be implemented in firmware. The method is then applied to different detectors existing at the ILL. Center-of-gravity methods usually improve the resolution over best-wire algorithms, which are the standard way of treating these signals. (authors)
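The construction described (two equiprobable spots, the detector as a noisy channel, resolution as maximal mutual information) can be sketched for the idealized case of a Gaussian detector response with a nearest-spot classifier, which together form a binary symmetric channel. The response model and function names below are illustrative assumptions, not the paper's algorithms.

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def misclassification_rate(distance, sigma=1.0):
    """P(a hit from one spot is assigned to the other) when each spot has a
    Gaussian response of width sigma and hits go to the nearer spot."""
    return 0.5 * math.erfc(distance / (2.0 * math.sqrt(2.0) * sigma))

def resolution_information(distance, sigma=1.0):
    """Mutual information (bits) through the two-spot channel: the capacity
    of a binary symmetric channel, 1 - h2(error rate)."""
    return 1.0 - h2(misclassification_rate(distance, sigma))

for d in (0.5, 1.0, 2.0, 4.0):
    print(f"spot distance {d}: {resolution_information(d):.3f} bit")
```

Coincident spots transmit 0 bits and well-separated spots approach the full 1 bit, so the distance at which a chosen fraction of a bit is reached serves as a calibration-independent resolution figure.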
Erik Olofsen
2015-07-01
Akaike's information-theoretic criterion for model discrimination (AIC) is often stated to "overfit", i.e., it selects models with a higher dimension than the dimension of the model that generated the data. However, with experimental pharmacokinetic data it may not be possible to identify the correct model, because of the complexity of the processes governing drug disposition. Instead of trying to find the correct model, a more useful objective might be to minimize the prediction error of drug concentrations in subjects with unknown disposition characteristics. In that case, the AIC might be the selection criterion of choice. We performed Monte Carlo simulations using a model of pharmacokinetic data (a power function of time) with the property that fits with common multi-exponential models can never be perfect, thus resembling the situation with real data. Prespecified models were fitted to simulated data sets, and AIC and AICc (the criterion with a correction for small sample sizes) values were calculated and averaged. The average predictive performances of the models, quantified using simulated validation sets, were compared to the means of the AICs. The data for fits and validation consisted of 11 concentration measurements each obtained in 5 individuals, with three degrees of interindividual variability in the pharmacokinetic volume of distribution. Mean AICc corresponded very well, and better than mean AIC, with mean predictive performance. With increasing interindividual variability, there was a trend towards larger optimal models, both with respect to lowest AICc and with respect to best predictive performance. Furthermore, it was observed that the mean square prediction error itself became less suitable as a validation criterion, and that a predictive performance measure should incorporate interindividual variability. This simulation study showed that, at least in a relatively simple mixed-effects modelling context with a set of prespecified models
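The two criteria compared in this study have simple closed forms for least-squares fits with Gaussian errors. The sketch below uses made-up residual sums of squares for two hypothetical fits to 11 concentration measurements; it illustrates the formulas, not the study's simulations.

```python
import math

def aic(n, k, rss):
    """AIC for a least-squares fit: n observations, k estimated parameters
    (including the error variance), rss = residual sum of squares."""
    return n * math.log(rss / n) + 2 * k

def aicc(n, k, rss):
    """AIC with the small-sample correction term 2k(k+1)/(n - k - 1)."""
    return aic(n, k, rss) + 2 * k * (k + 1) / (n - k - 1)

# Hypothetical fits to n = 11 measurements: a 3-parameter model with
# RSS 2.0 against a 5-parameter model with RSS 1.5.
print(f"AICc simple:  {aicc(11, 3, 2.0):.2f}")
print(f"AICc complex: {aicc(11, 5, 1.5):.2f}")
```

With so few observations the correction term punishes extra parameters heavily, which is why AICc rather than AIC tracks predictive performance for small samples.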
Testing iOS apps with HadoopUnit rapid distributed GUI testing
Tilley, Scott
2014-01-01
Smartphone users have come to expect high-quality apps. This has increased the importance of software testing in mobile software development. Unfortunately, testing apps, particularly the GUI, can be very time-consuming. Exercising every user interface element and verifying transitions between different views of the app under test quickly becomes problematic. For example, execution of iOS GUI test suites using Apple's UI Automation framework can take an hour or more if the app's interface is complicated. The longer it takes to run a test, the less frequently the test can be run, which in turn re
Confidence bounds and hypothesis tests for normal distribution coefficients of variation
Steve Verrill; Richard A. Johnson
2007-01-01
For normally distributed populations, we obtain confidence bounds on a ratio of two coefficients of variation, provide a test for the equality of k coefficients of variation, and provide confidence bounds on a coefficient of variation shared by k populations.
Testing the anisotropy in the angular distribution of Fermi/GBM gamma-ray bursts
Tarnopolski, M.
2017-12-01
Gamma-ray bursts (GRBs) were confirmed to be of extragalactic origin due to their isotropic angular distribution, combined with the fact that they exhibited an intensity distribution that deviated strongly from the -3/2 power law. This finding was later confirmed with the first redshift, equal to at least z = 0.835, measured for GRB970508. Despite this result, the data from CGRO/BATSE and Swift/BAT indicate that long GRBs are indeed distributed isotropically, but the distribution of short GRBs is anisotropic. Fermi/GBM has detected 1669 GRBs to date, and their sky distribution is examined in this paper. A number of statistical tests are applied: nearest neighbour analysis, fractal dimension, dipole and quadrupole moments of the distribution function decomposed into spherical harmonics, binomial test and the two-point angular correlation function. Monte Carlo benchmark testing of each test is performed in order to evaluate its reliability. It is found that short GRBs are distributed anisotropically in the sky, and long ones have an isotropic distribution. The probability that these results are not a chance occurrence is equal to at least 99.98 per cent and 30.68 per cent for short and long GRBs, respectively. The cosmological context of this finding and its relation to large-scale structures is discussed.
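One of the statistics listed, the dipole moment of the angular distribution, can be sketched with Monte Carlo calibration as follows. The uniform-sky generator and function names are illustrative; this is a toy version, not the battery of tests used in the paper.

```python
import math
import random

def unit_vector(ra, dec):
    """Unit vector for right ascension and declination in radians."""
    return (math.cos(dec) * math.cos(ra),
            math.cos(dec) * math.sin(ra),
            math.sin(dec))

def dipole_statistic(points):
    """Length of the mean resultant vector; near 0 for an isotropic sky."""
    n = len(points)
    sx = sum(p[0] for p in points) / n
    sy = sum(p[1] for p in points) / n
    sz = sum(p[2] for p in points) / n
    return math.sqrt(sx * sx + sy * sy + sz * sz)

def isotropic_sky(n, rng):
    """Uniform points on the sphere: ra ~ U(0, 2*pi), sin(dec) ~ U(-1, 1)."""
    return [unit_vector(rng.uniform(0.0, 2.0 * math.pi),
                        math.asin(rng.uniform(-1.0, 1.0)))
            for _ in range(n)]

def monte_carlo_pvalue(points, trials=2000, seed=1):
    """Fraction of simulated isotropic skies with a dipole at least as large."""
    rng = random.Random(seed)
    observed = dipole_statistic(points)
    hits = sum(dipole_statistic(isotropic_sky(len(points), rng)) >= observed
               for _ in range(trials))
    return (hits + 1) / (trials + 1)

# A sky confined to the northern hemisphere is strongly anisotropic, so its
# Monte Carlo p-value is tiny.
rng = random.Random(0)
north = [unit_vector(rng.uniform(0.0, 2.0 * math.pi),
                     math.asin(rng.uniform(0.0, 1.0))) for _ in range(300)]
print(monte_carlo_pvalue(north))
```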
Impact of peak electricity demand in distribution grids: a stress test
Hoogsteen, Gerwin; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria; Schuring, Friso; Kootstra, Ben
2015-01-01
The number of (hybrid) electric vehicles is growing, leading to a higher demand for electricity in distribution grids. To investigate the effects of the expected peak demand on distribution grids, a stress test with 15 electric vehicles in a single street is conducted and described in this paper.
Preliminary investigation on determination of radionuclide distribution in field tracing test site
Tanaka, Tadao; Mukai, Masayuki; Takebe, Shinichi; Guo Zede; Li Shushen; Kamiyama, Hideo.
1993-12-01
Field tracing tests for radionuclide migration have been conducted using 3H, 60Co, 85Sr and 134Cs in the natural unsaturated loess zone at the field test site of the China Institute for Radiation Protection. It is necessary to obtain reliable distribution data for the radionuclides in the test site, in order to evaluate accurately the migration behavior of the radionuclides in situ. An available method to determine the distribution was proposed on the basis of preliminary results concerning the method of sampling soils from the test site and the method of analyzing radioactivity in the soils. (author)
In-core flow rate distribution measurement test of the JOYO irradiation core
Suzuki, Toshihiro; Isozaki, Kazunori; Suzuki, Soju
1996-01-01
A flow rate distribution measurement test was carried out for the JOYO irradiation core (the MK-II core) after the 29th duty cycle operation. The main objective of the test was to confirm the proper flow rate distribution in the final phase of the MK-II core. The flow rate at the outlet of each subassembly was measured by a permanent magnetic flowmeter inserted through the fuel exchange hole in the rotating plug. This is the third such test in the MK-II core, performed 10 years after the previous test (1985). A total of 550 subassemblies were exchanged and the accumulated reactor operation time reached 38,000 hours since the previous test. In conclusion, it was confirmed that the flow rate distribution remained suitable in the final phase of the MK-II core. (author)
The Application of Hardware in the Loop Testing for Distributed Engine Control
Thomas, George L.; Culley, Dennis E.; Brand, Alex
2016-01-01
The essence of a distributed control system is the modular partitioning of control function across a hardware implementation. This type of control architecture requires embedding electronics in a multitude of control element nodes for the execution of those functions, and their integration as a unified system. As the field of distributed aeropropulsion control moves toward reality, questions about building and validating these systems remain. This paper focuses on the development of hardware-in-the-loop (HIL) test techniques for distributed aero engine control, and the application of HIL testing as it pertains to potential advanced engine control applications that may now be possible due to the intelligent capability embedded in the nodes.
Loparo, Kenneth [Case Western Reserve Univ., Cleveland, OH (United States); Kolacinski, Richard [Case Western Reserve Univ., Cleveland, OH (United States); Threeanaew, Wanchat [Case Western Reserve Univ., Cleveland, OH (United States); Agharazi, Hanieh [Case Western Reserve Univ., Cleveland, OH (United States)
2017-01-30
A central goal of the work was to enable both the extraction of all relevant information from sensor data, and the application of information gained from appropriate processing and fusion at the system level to operational control and decision-making at various levels of the control hierarchy through: 1. Exploiting the deep connection between information theory and the thermodynamic formalism, 2. Deployment using distributed intelligent agents with testing and validation in a hardware-in-the-loop simulation environment. Enterprise architectures are the organizing logic for key business processes and IT infrastructure and, while the generality of current definitions provides sufficient flexibility, the current architecture frameworks do not inherently provide the appropriate structure. Of particular concern is that existing architecture frameworks often do not make a distinction between "data" and "information." This work defines an enterprise architecture for health and condition monitoring of power plant equipment and further provides the appropriate foundation for addressing shortcomings in current architecture definition frameworks through the discovery of the information connectivity between the elements of a power generation plant. That is, to identify the correlative structure between available observation streams using informational measures. The principal focus here is on the implementation and testing of an emergent, agent-based algorithm based on the foraging behavior of ants for eliciting this structure and on measures for characterizing differences between communication topologies. The elicitation algorithms are applied to data streams produced by a detailed numerical simulation of Alstom’s 1000 MW ultra-super-critical boiler and steam plant. The elicitation algorithm and topology characterization can be based on different informational metrics for detecting connectivity, e.g. mutual information and linear correlation.
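The informational metrics mentioned in the last sentence can be sketched with a plug-in mutual-information estimator over discretized observation streams, with connectivity read off by thresholding pairwise values. The threshold and function names are illustrative, and this deliberately omits the ant-foraging elicitation algorithm itself.

```python
import math
from collections import Counter

def mutual_information(xs, ys, base=2):
    """Plug-in mutual information (bits) between two discrete streams."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)), base)
               for (x, y), c in pxy.items())

def connectivity(streams, threshold=0.1):
    """Undirected edges between streams whose pairwise MI exceeds threshold."""
    names = sorted(streams)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if mutual_information(streams[a], streams[b]) > threshold]

# Two perfectly coupled binary streams and one constant stream: only the
# informative pair is connected.
streams = {"a": [0, 1] * 100, "b": [0, 1] * 100, "c": [0] * 200}
print(connectivity(streams))
```

Unlike linear correlation, mutual information also picks up non-linear dependence, which is the stated reason for preferring informational measures here.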
Real time testing of intelligent relays for synchronous distributed generation islanding detection
Zhuang, Davy
As electric power systems continue to grow to meet ever-increasing energy demand, their security, reliability, and sustainability requirements also become more stringent. The deployment of distributed energy resources (DER), including generation and storage, in conventional passive distribution feeders, gives rise to integration problems involving protection and unintentional islanding. Distributed generators need to be islanded for safety reasons when disconnected or isolated from the main feeder as distributed generator islanding may create hazards to utility and third-party personnel, and possibly damage the distribution system infrastructure, including the distributed generators. This thesis compares several key performance indicators of a newly developed intelligent islanding detection relay, against islanding detection devices currently used by the industry. The intelligent relay employs multivariable analysis and data mining methods to arrive at decision trees that contain both the protection handles and the settings. A test methodology is developed to assess the performance of these intelligent relays on a real time simulation environment using a generic model based on a real-life distribution feeder. The methodology demonstrates the applicability and potential advantages of the intelligent relay, by running a large number of tests, reflecting a multitude of system operating conditions. The testing indicates that the intelligent relay often outperforms frequency, voltage and rate of change of frequency relays currently used for islanding detection, while respecting the islanding detection time constraints imposed by standing distributed generator interconnection guidelines.
On the asymptotic distribution of a unit root test against ESTAR alternatives
Hanck, Christoph
We derive the null distribution of the nonlinear unit root test proposed in Kapetanios et al. [Kapetanios, G., Shin, Y., Snell, A., 2003. Testing for a unit root in the nonlinear STAR framework, Journal of Econometrics 112, 359-379] when nonzero means or both means and deterministic trends are
Test of methods for retrospective activity size distribution determination from filter samples
Meisenberg, Oliver; Tschiersch, Jochen
2015-01-01
Determining the activity size distribution of radioactive aerosol particles requires sophisticated and heavy equipment, which makes measurements at a large number of sites difficult and expensive. Therefore three methods for the retrospective determination of size distributions from aerosol filter samples in the laboratory were tested for their applicability. Extraction into a carrier liquid with subsequent nebulisation showed size distributions with a slight but correctable bias towards larger diameters compared with the original size distribution. Yields on the order of 1% could be achieved. Sonication-assisted extraction into a carrier liquid caused a coagulation mode to appear in the size distribution. Sonication-assisted extraction into the air did not show acceptable results due to small yields. The method of extraction into a carrier liquid without sonication was applied to aerosol samples from Chernobyl in order to calculate inhalation dose coefficients for 137Cs based on the individual size distribution. The effective dose coefficient is about half of that calculated with a default reference size distribution. - Highlights: • Activity size distributions can be recovered after aerosol sampling on filters. • Extraction into a carrier liquid and subsequent nebulisation is appropriate. • This facilitates the determination of activity size distributions for individuals. • Size distributions from this method can be used for individual dose coefficients. • Dose coefficients were calculated for the workers at the new Chernobyl shelter
Different goodness of fit tests for Rayleigh distribution in ranked set sampling
Amer Al-Omari
2016-03-01
In this paper, different goodness-of-fit tests for the Rayleigh distribution are considered based on simple random sampling (SRS) and ranked set sampling (RSS) techniques. The performance of the suggested tests is evaluated in terms of power by using Monte Carlo simulation. It is found that the suggested RSS tests perform better than their SRS counterparts.
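A Monte Carlo power study of this kind is straightforward to sketch. The fragment below calibrates a one-sample Kolmogorov-Smirnov test of a fully specified Rayleigh null under SRS and estimates its power against an exponential alternative; the RSS ranking schemes and the particular alternatives of the paper are not reproduced, and the sample sizes are illustrative.

```python
import math
import random

def rayleigh_cdf(x, sigma=1.0):
    return 1.0 - math.exp(-x * x / (2.0 * sigma * sigma))

def ks_statistic(sample, cdf):
    """One-sample Kolmogorov-Smirnov statistic D = sup |F_n - F|."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d

def mc_power(n=30, trials=500, alpha=0.05, seed=7):
    rng = random.Random(seed)
    rayleigh = lambda: math.sqrt(-2.0 * math.log(1.0 - rng.random()))  # sigma=1
    # Critical value of D under the Rayleigh null, by simulation.
    null = sorted(ks_statistic([rayleigh() for _ in range(n)], rayleigh_cdf)
                  for _ in range(trials))
    crit = null[int((1 - alpha) * trials)]
    # Power: rejection rate when the data are actually exponential(1).
    rejections = sum(
        ks_statistic([rng.expovariate(1.0) for _ in range(n)],
                     rayleigh_cdf) > crit
        for _ in range(trials))
    return rejections / trials

print(f"estimated power: {mc_power():.2f}")
```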
ON ESTIMATION AND HYPOTHESIS TESTING OF THE GRAIN SIZE DISTRIBUTION BY THE SALTYKOV METHOD
Yuri Gulbin
2011-05-01
The paper considers the problem of the validity of unfolding the grain size distribution with the back-substitution method. Due to the ill-conditioned nature of unfolding matrices, it is necessary to evaluate the accuracy and precision of parameter estimation and to verify whether an expected grain size distribution can be tested on the basis of intersection size histogram data. In order to review these questions, computer modeling was used to compare size distributions obtained stereologically with those possessed by three-dimensional model aggregates of grains with a specified shape and random size. Results of simulations are reported and ways of improving the conventional stereological techniques are suggested. It is shown that new improvements in estimating and testing procedures enable grain size distributions to be unfolded more efficiently.
A subchannel and CFD analysis of void distribution for the BWR fuel bundle test benchmark
In, Wang-Kee; Hwang, Dae-Hyun; Jeong, Jae Jun
2013-01-01
Highlights: ► We analyzed subchannel void distributions using subchannel, system and CFD codes. ► The mean error and standard deviation at steady states were compared. ► The deviation of the CFD simulation was greater than those of the others. ► The large deviation of the CFD prediction is due to interface model uncertainties. -- Abstract: The subchannel grade and microscopic void distributions in the NUPEC (Nuclear Power Engineering Corporation) BFBT (BWR Full-Size Fine-Mesh Bundle Tests) facility have been evaluated with a subchannel analysis code MATRA, a system code MARS and a CFD code CFX-10. Sixteen test series from five different test bundles were selected for the analysis of the steady-state subchannel void distributions. Four test cases for a high burn-up 8 × 8 fuel bundle with a single water rod were simulated using CFX-10 for the microscopic void distribution benchmark. Two transient cases, a turbine trip without a bypass as a typical power transient and a re-circulation pump trip as a flow transient, were also chosen for this analysis. It was found that the steady-state void distributions calculated by both the MATRA and MARS codes coincided well with the measured data in the range of thermodynamic qualities from 5 to 25%. The results of the transient calculations were also similar to each other and very reasonable. The CFD simulation reproduced the overall radial void distribution trend which produces less vapor in the central part of the bundle and more vapor in the periphery. However, the predicted variation of the void distribution inside the subchannels is small, while the measured one is large showing a very high concentration in the center of the subchannels. The variations of the void distribution between the center of the subchannels and the subchannel gap are estimated to be about 5–10% for the CFD prediction and more than 20% for the experiment
Pascual Pañach, Josep
2010-01-01
Leaks are present in all water distribution systems. In this paper a method for leakage detection and localisation is presented. It uses pressure measurements and simulation models. The leakage localisation methodology is based on the pressure sensitivity matrix. Sensitivity is normalised and binarised using a common threshold for all nodes, so that a signature matrix is obtained. An optimal pressure-sensor placement methodology is developed too, but it is not used in the real test. To validate this...
Nagai, Keiichi; Hirabayashi, Masaru; Onojima, T.; Gunji, Minoru; Ara, Kuniaki; Oki, Yoshihisa
1999-04-01
In order to develop a numerical code simulating sodium fires initiated by the dispersion of droplets, measured data on droplet diameter as well as its distribution are needed. In the present experiment the distribution of droplet diameter was measured using water, oil and sodium. The tests elucidated the influential factors with respect to the droplet diameter. In addition, we sought to develop a similarity law between water and sodium. The droplet size distribution of sodium using the large diameter droplet (Elnozzle) was predicted. (J.P.N.)
Impulse tests on distribution transformers protected by means of spark gaps
Pykaelae, M.L.; Palva, V. [Helsinki Univ. of Technology, Otaniemi (Finland). High Voltage Institute; Niskanen, K. [ABB Corporate Research, Vaasa (Finland)
1997-12-31
Distribution transformers in rural networks have to cope with transient overvoltages, even with those caused by the direct lightning strokes to the lines. In Finland the 24 kV network conditions, such as wooden pole lines, high soil resistivity and isolated neutral network, lead into fast transient overvoltages. Impulse testing of pole-mounted distribution transformers ({<=} 200 kVA) protected by means of spark gaps were studied. Different failure detection methods were used. Results can be used as background information for standardization work dealing with distribution transformers protected by means of spark gaps. (orig.) 9 refs.
Development of Ada language control software for the NASA power management and distribution test bed
Wright, Ted; Mackin, Michael; Gantose, Dave
1989-01-01
The Ada language software developed to control the NASA Lewis Research Center's Power Management and Distribution testbed is described. The testbed is a reduced-scale prototype of the electric power system to be used on space station Freedom. It is designed to develop and test hardware and software for a 20-kHz power distribution system. The distributed, multiprocessor, testbed control system has an easy-to-use operator interface with an understandable English-text format. A simple interface for algorithm writers that uses the same commands as the operator interface is provided, encouraging interactive exploration of the system.
Maatta, E; CERN. Geneva; Swoboda, Detlef; Lecoeur, G
1999-01-01
The sub-detectors and systems in the ALICE experiment [1] are of various types. However, during physics runs, all devices necessary for the operation of the detector must be accessible and controllable through a common computer interface. Throughout all other periods each sub-detector requires maintenance, upgrading or test operation. To this end, access independent of the other sub-detectors must be guaranteed. These basic requirements impose a fair number of constraints on the architecture and components of the Detector Control System (DCS). The purpose of the TESt project was the construction of a stand-alone unit for a specific sub-system of an ALICE detector in order to gain first experience with commercial products for detector control. Although the control system includes only a small number of devices and is designed for a particular application, it covers nevertheless all layers of a complete system and can be extended or used in different applications. The control system prototype has been...
A practical test for the choice of mixing distribution in discrete choice models
Fosgerau, Mogens; Bierlaire, Michel
2007-01-01
The choice of a specific distribution for random parameters of discrete choice models is a critical issue in transportation analysis. Indeed, various pieces of research have demonstrated that an inappropriate choice of the distribution may lead to serious bias in model forecast and in the estimated means of random parameters. In this paper, we propose a practical test, based on seminonparametric techniques. The test is analyzed both on synthetic and real data, and is shown to be simple and powerful. (c) 2007 Elsevier Ltd. All rights reserved.
A Generic Danish Distribution Grid Model for Smart Grid Technology Testing
Cha, Seung-Tae; Wu, Qiuwei; Østergaard, Jacob
2012-01-01
This paper describes the development of a generic Danish distribution grid model for smart grid technology testing based on the Bornholm power system. The frequency dependent network equivalent (FDNE) method has been used in order to accurately preserve the desired properties and characteristics. The model has been validated by comparing the transient response of the original Bornholm power system model and the developed generic model under significant fault conditions. The results clearly show that the equivalent generic distribution grid model retains the dynamic characteristics of the original system, and can be used as a generic Smart Grid benchmark model for testing purposes.
Kerschbamer, Rudolf
2015-05-01
This paper proposes a geometric delineation of distributional preference types and a non-parametric approach for their identification in a two-person context. It starts with a small set of assumptions on preferences and shows that this set (i) naturally results in a taxonomy of distributional archetypes that nests all empirically relevant types considered in previous work; and (ii) gives rise to a clean experimental identification procedure - the Equality Equivalence Test - that discriminates between archetypes according to core features of preferences rather than properties of specific modeling variants. As a by-product the test yields a two-dimensional index of preference intensity.
Distributed analysis functional testing using GangaRobot in the ATLAS experiment
Legger, Federica; ATLAS Collaboration
2011-12-01
Automated distributed analysis tests are necessary to ensure smooth operations of the ATLAS grid resources. The HammerCloud framework allows for easy definition, submission and monitoring of grid test applications. Both functional and stress test applications can be defined in HammerCloud. Stress tests are large-scale tests meant to verify the behaviour of sites under heavy load. Functional tests are light user applications running at each site with high frequency, to ensure that the site functionalities are available at all times. Success or failure rates of these tests jobs are individually monitored. Test definitions and results are stored in a database and made available to users and site administrators through a web interface. In this work we present the recent developments of the GangaRobot framework. GangaRobot monitors the outcome of functional tests, creates a blacklist of sites failing the tests, and exports the results to the ATLAS Site Status Board (SSB) and to the Service Availability Monitor (SAM), providing on the one hand a fast way to identify systematic or temporary site failures, and on the other hand allowing for an effective distribution of the work load on the available resources.
Wu, Hao
2018-05-01
In structural equation modelling (SEM), a robust adjustment to the test statistic or to its reference distribution is needed when its null distribution deviates from a χ2 distribution, which usually arises when data do not follow a multivariate normal distribution. Unfortunately, existing studies on this issue typically focus on only a few methods and neglect the majority of alternative methods in statistics. Existing simulation studies typically consider only non-normal distributions of data that either satisfy asymptotic robustness or lead to an asymptotic scaled χ2 distribution. In this work we conduct a comprehensive study that involves both typical methods in SEM and less well-known methods from the statistics literature. We also propose the use of several novel non-normal data distributions that are qualitatively different from the non-normal distributions widely used in existing studies. We found that several under-studied methods give the best performance under specific conditions, but the Satorra-Bentler method remains the most viable method for most situations. © 2017 The British Psychological Society.
Goodness-of-Fit Tests for Generalized Normal Distribution for Use in Hydrological Frequency Analysis
Das, Samiran
2018-04-01
The use of the three-parameter generalized normal (GNO) as a hydrological frequency distribution is well recognized, but its application is limited due to the unavailability of popular goodness-of-fit (GOF) test statistics. This study develops popular empirical distribution function (EDF)-based test statistics to investigate the goodness of fit of the GNO distribution. The focus is on the case most relevant to the hydrologist, namely, that in which the parameter values are unknown and are estimated from a sample using the method of L-moments. The widely used EDF tests such as Kolmogorov-Smirnov, Cramer-von Mises, and Anderson-Darling (AD) are considered in this study. A modified version of AD, namely, the Modified Anderson-Darling (MAD) test, is also considered and its performance is assessed against other EDF tests using a power study that incorporates six specific Wakeby distributions (WA-1, WA-2, WA-3, WA-4, WA-5, and WA-6) as the alternative distributions. The critical values of the proposed test statistics are approximated using Monte Carlo techniques and are summarized in chart and regression-equation form to show their dependence on the shape parameter and sample size. The performance results obtained from the power study suggest that the AD and a variant of the MAD (MAD-L) are the most powerful tests. Finally, the study performs case studies involving annual maximum flow data of selected gauged sites from Irish and US catchments to show the application of the derived critical values and recommends further assessments to be carried out on flow data sets of rivers with various hydrological regimes.
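Of the EDF statistics named, the Anderson-Darling statistic is the least obvious to code, so a sketch is given below. A standard normal CDF stands in for the fitted GNO distribution, whose CDF and L-moment estimation are beyond this fragment; the sample values are made up.

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def anderson_darling(sample, cdf):
    """A^2 = -n - (1/n) * sum_i (2i-1) * [ln F(x_(i)) + ln(1 - F(x_(n+1-i)))],
    with x_(1) <= ... <= x_(n) and a fully specified CDF F."""
    xs = sorted(sample)
    n = len(xs)
    s = 0.0
    for i in range(1, n + 1):
        s += (2 * i - 1) * (math.log(cdf(xs[i - 1]))
                            + math.log(1.0 - cdf(xs[n - i])))
    return -n - s / n

sample = [-1.5, -0.7, -0.2, 0.1, 0.4, 0.9, 1.6]
print(f"A2 against N(0, 1): {anderson_darling(sample, normal_cdf):.3f}")
```

The logarithms give A² an implicit 1/[F(1-F)] weighting that emphasizes the tails, which is one reason AD variants tend to outperform Kolmogorov-Smirnov in power studies like the one described.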
Ojaghi, Mobin; Martínez, Ignacio Lamata; Dietz, Matt S.; Williams, Martin S.; Blakeborough, Anthony; Crewe, Adam J.; Taylor, Colin A.; Madabhushi, S. P. Gopal; Haigh, Stuart K.
2018-01-01
Distributed Hybrid Testing (DHT) is an experimental technique designed to capitalise on advances in modern networking infrastructure to overcome traditional laboratory capacity limitations. By coupling the heterogeneous test apparatus and computational resources of geographically distributed laboratories, DHT provides the means to take on complex, multi-disciplinary challenges with new forms of communication and collaboration. To introduce the opportunity and practicability afforded by DHT, here an exemplar multi-site test is addressed in which a dedicated fibre network and suite of custom software is used to connect the geotechnical centrifuge at the University of Cambridge with a variety of structural dynamics loading apparatus at the University of Oxford and the University of Bristol. While centrifuge time-scaling prevents real-time rates of loading in this test, such experiments may be used to gain valuable insights into physical phenomena, test procedure and accuracy. These and other related experiments have led to the development of the real-time DHT technique and the creation of a flexible framework that aims to facilitate future distributed tests within the UK and beyond. As a further example, a real-time DHT experiment between structural labs using this framework for testing across the Internet is also presented.
Posterior cerebral artery Wada test: sodium amytal distribution and functional deficits
Urbach, H.; Schild, H.H. [Dept. of Radiology/Neuroradiology, Univ. of Bonn (Germany); Klemm, E.; Biersack, H.J. [Bonn Univ. (Germany). Klinik fuer Nuklearmedizin; Linke, D.B.; Behrends, K.; Schramm, J. [Dept. of Neurosurgery, Univ. of Bonn (Germany)
2001-04-01
Inadequate sodium amytal delivery to the posterior hippocampus during the intracarotid Wada test has led to the development of selective tests. Our purpose was to show the sodium amytal distribution in the posterior cerebral artery (PCA) Wada test and to relate it to functional deficits during the test. We simultaneously injected 80 mg sodium amytal and 14.8 MBq 99mTc-hexamethylpropyleneamine oxime (HMPAO) into the P2-segment of the PCA in 14 patients with temporal lobe epilepsy. To show the skull, we injected 116 MBq 99mTc-HDP intravenously. Sodium amytal distribution was determined by high-resolution single-photon emission computed tomography (SPECT). In all patients, HMPAO was distributed throughout the parahippocampal gyrus and hippocampus; it was also seen in the occipital lobe in all cases and in the thalamus in 11. Eleven patients were awake and cooperative; one was slightly uncooperative due to speech comprehension difficulties and perseveration. All patients showed contralateral hemianopia during the test. Four patients had nominal dysphasia for 1-3 min. None developed motor deficits or had permanent neurological deficits. Neurological deficits due to inactivation of extrahippocampal areas thus do not grossly interfere with neuropsychological testing during the test. (orig.)
Huang, Shuguang; Yeo, Adeline A; Li, Shuyu Dan
2007-10-01
The Kolmogorov-Smirnov (K-S) test is a statistical method often used for comparing two distributions. In high-throughput screening (HTS) studies, such distributions usually arise from the phenotypes of independent cell populations. However, the K-S test has been criticized for being overly sensitive in applications, often detecting a statistically significant difference that is not biologically meaningful. One major reason is that systematic drifting commonly exists among the distributions in HTS studies, due to instrument variation, plate edge effects, accidental differences in sample handling, etc. In particular, in high-content cellular imaging experiments, the location shift can be dramatic because some compounds are themselves fluorescent. This oversensitivity of the K-S test is especially pronounced in cellular assays, where sample sizes are very large (usually several thousand). In this paper, a modified K-S test is proposed to deal with the nonspecific location-shift problem in HTS studies. Specifically, we propose that the distributions be "normalized" by density curve alignment before the K-S test is conducted. In applications to simulated and real experimental data, the results show that the proposed method has improved specificity.
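A minimal sketch of the idea: remove the nonspecific location shift before running the two-sample K-S comparison. The paper aligns full density curves; here median alignment stands in as a simple proxy, and all names are illustrative.

```python
import statistics

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the two empirical CDFs, found with a two-pointer sweep."""
    a, b = sorted(a), sorted(b)
    i = j = 0
    d = 0.0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            i += 1
        else:
            j += 1
        d = max(d, abs(i / len(a) - j / len(b)))
    return d

def shift_aligned_ks(a, b):
    """Median-align both samples before the K-S comparison, so that a pure
    location drift no longer drives the statistic."""
    ma, mb = statistics.median(a), statistics.median(b)
    return ks_statistic([x - ma for x in a], [x - mb for x in b])
```

On two samples that differ only by a location shift, `ks_statistic` reports a large difference while `shift_aligned_ks` reports essentially none, which is the specificity gain the abstract describes.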
Jackola, Arthur S.; Hartjen, Gary L.
1992-01-01
The plans for a new test facility, including new environmental test systems presently under construction, and the major environmental Test Support Equipment (TSE) used therein are addressed. This all-new Rocketdyne facility will perform space simulation environmental tests on Power Management and Distribution (PMAD) hardware for Space Station Freedom (SSF) at the Engineering Model, Qualification Model, and Flight Model levels of fidelity. Testing will include Random Vibration in three axes, Thermal Vacuum, Thermal Cycling and Thermal Burn-in, as well as numerous electrical functional tests. The facility is designed to support a relatively high throughput of hardware under test, while maintaining the high standards required for a man-rated space program.
Abeer Abd-Alla EL-Helbawy
2016-09-01
Accelerated life tests provide quick information on lifetime distributions by testing materials or products at higher-than-usual levels of stress, such as pressure, high temperature, vibration, voltage or load, to induce failures. In this paper, the acceleration model assumed is a log-linear model. Constant-stress tests are discussed based on Type I and Type II censoring. The Kumaraswamy Weibull distribution is used. The estimators of the parameters, the reliability and hazard rate functions, and the p-th percentile at normal condition, low stress, and high stress are obtained. In addition, credible intervals for the parameters of the models are constructed. Optimum test plans are designed. Some numerical methods, such as Laplace and Markov Chain Monte Carlo methods, are used to solve the complicated integrals.
Wilkins, M.; Moyer, E. J.; Hussein, Islam I.; Schumacher, P. W., Jr.
will explore the effects of choosing ɛ as a function of α and β. Our intent is that this work will help bridge understanding between the well-trodden grounds of Type I and Type II errors and changes in information theoretic content.
Reibnegger, Gilbert
2013-10-21
Usual evaluation tools for diagnostic tests, such as sensitivity/specificity and ROC analyses, are designed for discrimination between two diagnostic categories using dichotomous test results. Information-theoretical quantities such as mutual information allow in-depth analysis of more complex discrimination problems, including continuous test results, but are rarely used in clinical chemistry. This paper provides a primer on useful information-theoretical concepts with a strong focus on typical diagnostic scenarios. Information-theoretical concepts are briefly explained. Mathematica CDF documents are provided which compute entropies and mutual information as a function of pretest probabilities and the distribution of test results among the categories, and allow interactive exploration of the behavior of these quantities in comparison with more conventional diagnostic measures. Using data from a previously published study, the application of information theory to practical diagnostic problems involving up to 4×4 contingency tables is demonstrated. Information-theoretical concepts are particularly useful for diagnostic problems requiring more than the usual binary classification. Quantitative test results can be properly analyzed, and in contrast to popular concepts such as ROC analysis, the effects of variations of pre-test probabilities of the diagnostic categories can be explicitly taken into account.
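The central quantity, mutual information between diagnostic category and test result, can be computed directly from a joint contingency table of counts (the paper's examples go up to 4×4). A small illustrative sketch; the function name is ours:

```python
import math

def mutual_information(table):
    """Mutual information I(D;T) in bits from a joint count table:
    rows = diagnostic categories, columns = test result categories."""
    total = sum(sum(row) for row in table)
    row_p = [sum(row) / total for row in table]                      # P(D)
    col_p = [sum(table[r][c] for r in range(len(table))) / total     # P(T)
             for c in range(len(table[0]))]
    mi = 0.0
    for r, row in enumerate(table):
        for c, n in enumerate(row):
            if n:
                p = n / total                                        # P(D, T)
                mi += p * math.log2(p / (row_p[r] * col_p[c]))
    return mi
```

A perfect binary test with equal priors carries 1 bit of information; a test independent of the diagnosis carries 0 bits, which is the scale against which pre-test probability effects can be explored.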
Test Protocol for Room-to-Room Distribution of Outside Air by Residential Ventilation Systems
Barley, C. D.; Anderson, R.; Hendron, B.; Hancock, E.
2007-12-01
This test and analysis protocol has been developed as a practical approach for measuring outside air distribution in homes. It has been used successfully in field tests and has led to significant insights on ventilation design issues. Performance advantages of more sophisticated ventilation systems over simpler, less-costly designs have been verified, and specific problems, such as airflow short-circuiting, have been identified.
Testing nuclear parton distributions with pA collisions at the LHC
Quiroga-Arias, Paloma; Wiedemann, Urs Achim
2010-01-01
Global perturbative QCD analyses, based on large data sets from electron-proton and hadron collider experiments, provide tight constraints on the parton distribution function (PDF) in the proton. The extension of these analyses to nuclear parton distributions (nPDF) has attracted much interest in recent years. nPDFs are needed as benchmarks for the characterization of hot QCD matter in nucleus-nucleus collisions, and attract further interest since they may show novel signatures of non-linear density-dependent QCD evolution. However, it is not known from first principles whether the factorization of long-range phenomena into process-independent parton distribution, which underlies global PDF extractions for the proton, extends to nuclear effects. As a consequence, assessing the reliability of nPDFs for benchmark calculations goes beyond testing the numerical accuracy of their extraction and requires phenomenological tests of the factorization assumption. Here we argue that a proton-nucleus collision program at...
Testing collinear factorization and nuclear parton distributions with pA collisions at the LHC
Quiroga-Arias, Paloma; Wiedemann, Urs Achim
2011-01-01
Global perturbative QCD analyses, based on large data sets from electron-proton and hadron collider experiments, provide tight constraints on the parton distribution function (PDF) in the proton. The extension of these analyses to nuclear parton distributions (nPDF) has attracted much interest in recent years. nPDFs are needed as benchmarks for the characterization of hot QCD matter in nucleus-nucleus collisions, and attract further interest since they may show novel signatures of non- linear density-dependent QCD evolution. However, it is not known from first principles whether the factorization of long-range phenomena into process-independent parton distribution, which underlies global PDF extractions for the proton, extends to nuclear effects. As a consequence, assessing the reliability of nPDFs for benchmark calculations goes beyond testing the numerical accuracy of their extraction and requires phenomenological tests of the factorization assumption. Here we argue that a proton-nucleus collision program a...
The Analysis of process optimization during the loading distribution test for steam turbine
Li Jiangwei; Cao Yuhua; Li Dawei
2014-01-01
The loading distribution test of a steam turbine must be completed six times in total: the first is completed when the turbine cylinder buckles, and the rest must be completed in order during the installation of the GVP pipe. Completing the five loading distribution tests and the GVP pipe installation usually takes around 90 days at most nuclear plants, while Unit 1 of Fuqing Nuclear Power Station compressed this to about 45 days by optimizing the installation process. This article describes the successful experience of how Unit 1 of Fuqing Nuclear Power Station finished the five loading distribution tests and the GVP pipe installation in 45 days by optimizing the process. The authors also analyze the advantages and disadvantages by comparing the optimized process with that provided by the suppliers, and offer some rationalization proposals for the installation work on the follow-up units of the plant. (authors)
Standard test method for distribution coefficients of inorganic species by the batch method
American Society for Testing and Materials. Philadelphia
2010-01-01
1.1 This test method covers the determination of distribution coefficients of chemical species to quantify uptake onto solid materials by a batch sorption technique. It is a laboratory method primarily intended to assess sorption of dissolved ionic species subject to migration through pores and interstices of site-specific geomedia. It may also be applied to other materials such as manufactured adsorption media and construction materials. Application of the results to long-term field behavior is not addressed in this method. Distribution coefficients for radionuclides in selected geomedia are commonly determined for the purpose of assessing potential migratory behavior of contaminants in the subsurface of contaminated sites and waste disposal facilities. This test method is also applicable to parametric studies of the variables and mechanisms which contribute to the measured distribution coefficient. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement a...
Mustafa, Ghullam; Bak-Jensen, Birgitte; Mahat, Pukar
2013-01-01
The fluctuating nature of some of the Distributed Generation (DG) sources can cause power quality related problems like power frequency oscillations, voltage fluctuations etc. In future, the DG penetration is expected to increase and hence this requires some control actions to deal with the power quality issues. The main focus of this paper is on development of controllers for a distribution system with different DG's and especially development of a Photovoltaic (PV) controller using a Static Compensator (STATCOM) controller and on modeling of a Battery Storage System (BSS) also based on a STATCOM controller. The control system is tested in the distribution test network set up by CIGRE. The new approach of the PV controller is done in such a way that it can control AC and DC voltage of the PV converter during dynamic conditions. The battery controller is also developed in such a way that it can...
Eskildsen, Anne; LeRoux, Peter C.; Heikkinen, Risto K.
2013-01-01
Aim. To quantify whether species distribution models (SDMs) can reliably forecast species distributions under observed climate change. In particular, to test whether the predictive ability of SDMs depends on species traits or the inclusion of land cover and soil type, and whether distributional changes at expanding range margins can be predicted accurately. Location. Finland. Methods. Using 10-km resolution butterfly atlas data from two periods, 1992–1999 (t1) and 2002–2009 (t2), with a significant between-period temperature increase, we modelled the effects of climatic warming on butterfly distributions under climate change. Model performance was lower with independent compared to non-independent validation and improved when land cover and soil type variables were included, compared to climate-only models. SDMs performed less well for highly mobile species and for species with long...
Wan Xianrong
2017-02-01
Digital broadcasting and television signals are important classes of illuminators of opportunity for passive radar. Distributed and multistatic structures are the development trend for passive radar. Most modern digital broadcasting and television systems work on a network, which not only provides a natural condition for distributed passive radar but also puts forward higher requirements on the design of passive radar systems. Among those requirements, precise synchronization among the receivers and transmitters as well as among multiple receiving stations, which mainly involves frequency and time synchronization, is the first to be solved. To satisfy the synchronization requirements of distributed passive radar, a synchronization scheme based on GPS is presented in this paper. Moreover, an effective scheme based on the China Mobile Multimedia Broadcasting signal is proposed to test the system's synchronization performance. Finally, the reliability of the synchronization design is verified via distributed multistatic passive radar experiments.
Effect of distributive mass of spring on power flow in engineering test
Sheng, Meiping; Wang, Ting; Wang, Minqing; Wang, Xiao; Zhao, Xuan
2018-06-01
The mass of a spring is usually neglected in theoretical and simulation analyses, but it may be significant in practical engineering. This paper is concerned with the distributive mass of a steel spring used as an isolator to simulate the isolation performance of a water pipe in a heating system. A theoretical derivation of the effect of the spring's distributive mass on vibration is presented, and multiple eigenfrequencies are obtained, which show that distributive mass results in extra modes and complex impedance properties. Furthermore, numerical simulation visually shows several anti-resonances of the steel spring corresponding to the impedance and power flow curves. When anti-resonances emerge, the spring stores large amounts of energy, which may cause damage and unexpected consequences in practical engineering and needs to be avoided. Finally, experimental tests are conducted, and the results are consistent with the simulation of the spring with distributive mass.
U.S.: proposed federal legislation to allow condom distribution and HIV testing in prison.
Dolinsky, Anna
2007-05-01
Representative Barbara Lee (D-CA) is reintroducing legislation in the U.S. House of Representatives that would require federal correctional facilities to allow community organizations to distribute condoms and provide voluntary counselling and testing for HIV and STDs for inmates. The bill has been referred to the House Judiciary Committee's Subcommittee on Crime, Terrorism, and Homeland Security.
The design and test of VME clock distribution module of the Daya Bay RPC readout system
Zhao Heng; Liang Hao; Zhou Yongzhao
2011-01-01
It describes the design of the VME Clock Distribution module of the Daya Bay RPC readout system, including the function and the hardware structure of the module and the logic design of the FPGA on the module. After the building and debugging of the module, a series of tests have been made to check its function and stability. (authors)
An Analysis of Variance Approach for the Estimation of Response Time Distributions in Tests
Attali, Yigal
2010-01-01
Generalizability theory and analysis of variance methods are employed, together with the concept of objective time pressure, to estimate response time distributions and the degree of time pressure in timed tests. By estimating response time variance components due to person, item, and their interaction, and fixed effects due to item types and…
MCNP(TM) Release 6.1.1 beta: Creating and Testing the Code Distribution
Cox, Lawrence J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Casswell, Laura [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2014-06-12
This report documents the preparations for and testing of the production release of MCNP6™1.1 beta through RSICC at ORNL. It addresses tests on supported operating systems (Linux, MacOSX, Windows) with the supported compilers (Intel, Portland Group and gfortran). Verification and Validation test results are documented elsewhere. This report does not address in detail the overall packaging of the distribution. Specifically, it does not address the nuclear and atomic data collection, the other included software packages (MCNP5, MCNPX and MCNP6) and the collection of reference documents.
Final comparison report on ISP-35: Nupec hydrogen mixing and distribution test (Test M-7-1)
1994-12-01
This final comparison report summarizes the results of the OECD/CSNI-sponsored ISP-35 exercise, which was based on NUPEC's Hydrogen Mixing and Distribution Test M-7-1. Twelve organizations from 10 different countries took part in the exercise. For the ISP-35 test, a steam/light gas (helium) mixture was released into the lower region of a simplified model of a PWR containment. At the same time, the dome cooling spray was also activated. The transient time histories for gas temperature and concentrations were recorded for each of the 25 compartments of the model containment. The wall temperatures as well as the dome pressure were also recorded. The ISP-35 participants simulated the test conditions and attempted to predict the time histories using their accident analysis codes. Results of these analyses are presented, and comparisons are made between the experimental data and the calculated data. In general, predictions for pressure, helium concentration and gas distribution patterns were achieved with acceptable accuracy.
On-line test of power distribution prediction system for boiling water reactors
Nishizawa, Y.; Kiguchi, T.; Kobayashi, S.; Takumi, K.; Tanaka, H.; Tsutsumi, R.; Yokomi, M.
1982-01-01
A power distribution prediction system for boiling water reactors has been developed and its on-line performance test has proceeded at an operating commercial reactor. This system predicts the power distribution or thermal margin in advance of control rod operations and core flow rate change. This system consists of an on-line computer system, an operator's console with a color cathode-ray tube, and plant data input devices. The main functions of this system are present power distribution monitoring, power distribution prediction, and power-up trajectory prediction. The calculation method is based on a simplified nuclear thermal-hydraulic calculation, which is combined with a method of model identification to the actual reactor core state. It has been ascertained by the on-line test that the predicted power distribution (readings of traversing in-core probe) agrees with the measured data within 6% root-mean-square. The computing time required for one prediction calculation step is less than or equal to 1.5 min by an HIDIC-80 on-line computer
Equivalence Testing of Complex Particle Size Distribution Profiles Based on Earth Mover's Distance.
Hu, Meng; Jiang, Xiaohui; Absar, Mohammad; Choi, Stephanie; Kozak, Darby; Shen, Meiyu; Weng, Yu-Ting; Zhao, Liang; Lionberger, Robert
2018-04-12
Particle size distribution (PSD) is an important property of particulates in drug products. In the evaluation of generic drug products formulated as suspensions, emulsions, and liposomes, PSD comparisons between a test product and the branded product can provide useful information regarding in vitro and in vivo performance. Historically, the FDA has recommended the population bioequivalence (PBE) statistical approach to compare the PSD descriptors D50 and SPAN from test and reference products to support product equivalence. In this study, the earth mover's distance (EMD) is proposed as a new metric for comparing PSDs, particularly when the PSD profile exhibits a complex distribution (e.g., multiple peaks) that is not accurately described by the D50 and SPAN descriptors. EMD is a statistical metric that measures the discrepancy (distance) between size distribution profiles without a prior assumption on the distribution. PBE is then adopted to perform a statistical test to establish equivalence based on the calculated EMD distances. Simulations show that the proposed EMD-based approach is effective in comparing test and reference profiles for equivalence testing and is superior to commonly used distance measures, e.g., Euclidean and Kolmogorov-Smirnov distances. The proposed approach was demonstrated by evaluating the equivalence of cyclosporine ophthalmic emulsion PSDs manufactured under different conditions. Our results show that the proposed approach can effectively pass an equivalent product (e.g., the reference product against itself) and reject an inequivalent product (e.g., the reference product against a negative control), suggesting its usefulness in supporting bioequivalence determination when test and reference products both possess multimodal PSDs.
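For 1-D profiles, the earth mover's distance reduces to the area between the two cumulative distribution curves, which makes the metric easy to compute without any distributional assumption. A sketch of this step (the subsequent PBE test on the EMD values is omitted); the names are illustrative:

```python
def emd_1d(p, q, bin_width=1.0):
    """Earth mover's distance between two discretized size distributions on
    a common grid; for 1-D histograms this equals the area between the CDFs."""
    assert len(p) == len(q), "profiles must share the same size grid"
    sp, sq = sum(p), sum(q)
    cdf_gap = cp = cq = 0.0
    for pi, qi in zip(p, q):
        cp += pi / sp          # normalize so both profiles sum to 1
        cq += qi / sq
        cdf_gap += abs(cp - cq)
    return cdf_gap * bin_width
```

Moving all mass two bins to the right costs exactly two bin widths of "work", so EMD captures multimodal shape differences that D50 and SPAN can miss.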
2010-04-01
... OTC test sample collection systems for drugs of abuse testing. 809.40 Section 809.40 Food and Drugs... Restrictions on the sale, distribution, and use of OTC test sample collection systems for drugs of abuse testing. (a) Over-the-counter (OTC) test sample collection systems for drugs of abuse testing (§ 864.3260...
Optimal design of accelerated life tests for an extension of the exponential distribution
Haghighi, Firoozeh
2014-01-01
Accelerated life tests provide information quickly on the lifetime distribution of products by testing them at higher than usual levels of stress. In this paper, the lifetime of a product at any level of stress is assumed to follow an extension of the exponential distribution. This new family was recently introduced by Nadarajah and Haghighi (2011 [1]); it can be used as an alternative to the gamma, Weibull and exponentiated exponential distributions. The scale parameter of the lifetime distribution at constant stress levels is assumed to be a log-linear function of the stress levels, and a cumulative exposure model holds. For this model, the maximum likelihood estimates (MLEs) of the parameters, as well as the Fisher information matrix, are derived. The asymptotic variance of the scale parameter at a design stress is adopted as the optimization objective, and its expression is derived using the maximum likelihood method. A Monte Carlo simulation study is carried out to examine the performance of these methods. Asymptotic confidence intervals for the parameters and hypothesis tests for the parameter of interest are constructed.
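The log-linear link between stress and the scale parameter can be illustrated with a much simpler stand-in model: plain exponential lifetimes, two constant stress levels, and per-level MLEs (the sample means) combined into a log-linear fit. This is not the paper's Nadarajah-Haghighi model or its full ML machinery, and all names are ours.

```python
import math
import random

def fit_log_linear_exponential(times_by_stress):
    """Per-stress exponential MLE (the sample mean), then a log-linear fit
    theta(s) = exp(a + b*s) through two stress levels."""
    (s1, t1), (s2, t2) = times_by_stress
    m1, m2 = sum(t1) / len(t1), sum(t2) / len(t2)    # MLEs of the means
    b = (math.log(m2) - math.log(m1)) / (s2 - s1)    # slope of log-mean vs stress
    a = math.log(m1) - b * s1
    return a, b

def predict_mean_life(a, b, stress):
    """Extrapolate mean life to any stress, e.g. the design (use) stress."""
    return math.exp(a + b * stress)
```

Extrapolating with `predict_mean_life` down to the design stress is exactly the step whose asymptotic variance the paper optimizes when choosing the test plan.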
Scherpelz, R.I.; MacLellan, J.A.
1987-09-01
The Pacific Northwest Laboratory (PNL) is sending a torso phantom with radioactive material uniformly distributed in the lungs to in vivo bioassay laboratories for analysis. Although the radionuclides ultimately chosen for the studies had relatively long half-lives, future accreditation testing will require repeated tests with short half-life test nuclides. Computer modeling was used to simulate the major components of the phantom. Radiation transport calculations were then performed using the computer models to calculate dose rates either 15 cm from the chest or at its surface. For 144 Ce and 60 Co, three configurations were used for the lung comparison tests. Calculations show that, for most detector positions, a single plug containing 40 K located in the back of the heart provides a good approximation to a uniform distribution of 40 K. The approximation would lead, however, to a positive bias for the detector reading if the detector were located at the chest surface near the center. Loading the 40 K in a uniform layer inside the chest wall is not a good approximation of the uniform distribution in the lungs, because most of the radionuclides would be situated close to the detector location and the only shielding would be the thickness of the chest wall. The calculated dose rates for 60 Co and 144 Ce were similar at all calculated reference points. 3 refs., 5 figs., 10 tabs
The Space Station Module Power Management and Distribution automation test bed
Lollar, Louis F.
1991-01-01
The Space Station Module Power Management And Distribution (SSM/PMAD) automation test bed project was begun at NASA/Marshall Space Flight Center (MSFC) in the mid-1980s to develop an autonomous, user-supportive power management and distribution test bed simulating the Space Station Freedom Hab/Lab modules. As the test bed has matured, many new technologies and projects have been added. The author focuses on three primary areas. The first area is the overall accomplishments of the test bed itself. These include a much-improved user interface, a more efficient expert system scheduler, improved communication among the three expert systems, and initial work on adding intermediate levels of autonomy. The second area is the addition of a more realistic power source to the SSM/PMAD test bed; this project is called the Large Autonomous Spacecraft Electrical Power System (LASEPS). The third area is the completion of a virtual link between the SSM/PMAD test bed at MSFC and the Autonomous Power Expert at Lewis Research Center.
Westinghouse-GOTHIC modeling of NUPEC's hydrogen mixing and distribution test M-4-3
Ofstun, R.P.; Woodcock, J.; Paulsen, D.L.
1994-01-01
NUPEC (NUclear Power Engineering Corporation) ran a series of hydrogen mixing and distribution tests which were completed in April 1992. These tests were performed in a 1/4 linearly scaled model containment and were specifically designed to be used for computer code validation. The results of test M-4-3, along with predictions from several computer codes, were presented to the participants of ISP-35 (a blind test comparison of code calculated results with data from NUPEC test M-7-1) at a meeting in March 1993. Test M-4-3, which was similar to test M-7-1, released a mixture of steam and helium into a steam generator compartment located on the lower level of containment. The majority of codes did well at predicting the global pressure and temperature trends, however, some typical lumped parameter modeling problems were identified at that time. In particular, the models had difficulty predicting the temperature and helium concentrations in the so called 'dead ended volumes' (pressurizer compartment and in-core chase region). Modeling of the dead-ended compartments using a single lumped parameter volume did not yield the appropriate temperature and helium response within that volume. The Westinghouse-GOTHIC (WGOTHIC) computer code is capable of modeling in one, two or three dimensions (or any combination thereof). This paper describes the WGOTHIC modeling of the dead-ended compartments for NUPEC test M-4-3 and gives comparisons to the test data. 1 ref., 1 tab., 14 figs
Distribution of base rock depth estimated from Rayleigh wave measurement by forced vibration tests
Hiroshi Hibino; Toshiro Maeda; Chiaki Yoshimura; Yasuo Uchiyama
2005-01-01
This paper presents an application of Rayleigh wave methods at a real site, performed to determine the spatial distribution of base rock depth from the ground surface. At a site in the Sagami Plain in Japan, the base rock depth from the surface is assumed, from boring investigations, to range up to 10 m. An accurate distribution of base rock depth was needed for pile design and construction. In order to measure Rayleigh wave phase velocity, forced vibration tests were conducted with a 500 N vertical shaker and linear arrays of three vertical sensors situated at several points in two zones around the edges of the site. Then, inversion analysis of the soil profile was carried out by a genetic algorithm, matching the measured Rayleigh wave phase velocity with its computed counterpart. The distribution of base rock depth obtained from the inversion was consistent with the roughly estimated inclination of the base rock obtained from the boring tests; that is, the base rock is shallow around the edge of the site and gradually inclines towards the center. From the inversion analysis, the depth of base rock was determined as 5-6 m at the edge of the site and 10 m at the center. The determined distribution of base rock depth showed good agreement at most of the points where boring investigations were performed. As a result, it was confirmed that forced vibration tests using Rayleigh wave methods can be a useful, practical technique for estimating surface soil profiles to depths of up to 10 m. (authors)
Kim, J.
2016-12-01
Considering high levels of uncertainty, epistemological conflicts over facts and values, and a sense of urgency, normal paradigm-driven science will be insufficient to mobilize people and nations toward sustainability. The conceptual framework to bridge societal system dynamics with that of the natural ecosystems in which humanity operates remains deficient. The key to understanding their coevolution is to understand 'self-organization.' An information-theoretic approach may shed light on a potential framework that enables us not only to bridge humanity and nature but also to generate useful knowledge for understanding and sustaining the integrity of ecological-societal systems. How can information theory help understand the interface between ecological systems and social systems? How to delineate self-organizing processes and ensure that they fulfil sustainability? How to evaluate the flow of information from data through models to decision-makers? These are the core questions posed by sustainability science, in which visioneering (i.e., the engineering of vision) is an essential framework. Yet visioneering has neither a quantitative measure nor an information-theoretic framework to work with and teach. This presentation is an attempt to accommodate the framework of self-organizing hierarchical open systems with visioneering in a common information-theoretic framework. A case study is presented with the UN/FAO's communal vision of climate-smart agriculture (CSA), which pursues a trilemma of efficiency, mitigation, and resilience. Challenges of delineating and facilitating self-organizing systems are discussed using transdisciplinary tools such as complex systems thinking, dynamic process network analysis and multi-agent systems modeling. Acknowledgments: This study was supported by the Korea Meteorological Administration Research and Development Program under Grant KMA-2012-0001-A (WISE project).
Comment on the asymptotics of a distribution-free goodness of fit test statistic.
Browne, Michael W; Shapiro, Alexander
2015-03-01
In a recent article, Jennrich and Satorra (Psychometrika 78: 545-552, 2013) showed that a proof by Browne (British Journal of Mathematical and Statistical Psychology 37: 62-83, 1984) of the asymptotic distribution of a goodness-of-fit test statistic is incomplete because it fails to prove that the orthogonal component function employed is continuous. Jennrich and Satorra showed how Browne's proof can be completed satisfactorily, but this required the development of an extensive and mathematically sophisticated framework for continuous orthogonal component functions. This short note provides a simple proof of the asymptotic distribution of Browne's test statistic by using an equivalent form of the statistic that does not involve orthogonal component functions, and consequently avoids all of the complicating issues associated with them.
Preliminary Calculations of Bypass Flow Distribution in a Multi-Block Air Test
Kim, Min Hwan; Tak, Nam Il
2011-01-01
The development of a methodology for bypass flow assessment in a prismatic VHTR (Very High Temperature Reactor) core has been conducted at KAERI. A preliminary estimation of the variation in local bypass flow gap size between graphite blocks in the NHDD core was carried out. With the predicted gap sizes, their influence on the bypass flow distribution and the core hot spot was assessed. Due to the complexity of the gap distributions, a system thermo-fluid analysis code is suggested as the tool for the core thermo-fluid analysis, whose models and correlations should be validated. In order to generate data for validating the bypass flow analysis model, an experimental facility for a multi-block air test was constructed at Seoul National University (SNU). This study focuses on the preliminary evaluation of the flow distribution in the test section, to understand how the flow is distributed and to aid the selection of experimental cases. A commercial CFD code, ANSYS CFX, is used for the analyses
von Hirschhausen, Christian R.; Cullmann, Astrid
2005-01-01
Abstract This paper applies parametric and non-parametric tests to assess the efficiency of electricity distribution companies in Germany. We address traditional issues in electricity sector benchmarking, such as the role of scale effects and optimal utility size, as well as new evidence specific to the situation in Germany. We use labour, capital, and peak load capacity as inputs, and units sold and the number of customers as outputs. The data cover 307 (out of 553) ...
Performance and life time test on a 5 kW SOFC system for distributed cogeneration
Barrera, Rosa; De Biase, Sabrina; Ginocchio, Stefano [Edison S.p.A, Via Giorgio La Pira, 2, 10028 Trofarello (Italy); Bedogni, Stefano; Montelatici, Lorenzo [Edison S.p.A, Foro Bonaparte 31, 20121 Milano (Italy)
2008-06-15
The Edison R and D Centre is committed to testing a wide range of commercial and prototype fuel cell systems. The activities aim to evaluate the current state of the art of these technologies and their maturity for the relevant market. The laboratory is equipped with ad hoc test benches designed to study single cells, stacks and systems. The characterization of commercial and new-generation PEMFC, including high-temperature operation (160 C), together with the analysis of the behaviour of SOFC, represents the core activity of the laboratory. In January 2007 a new 5 kW SOFC system supplied by Acumentrics was installed. The claimed electrical power output is 5 kW and the thermal power is 3 kW. The aim of the test is a technical and economical assessment for future applications of small SOFC plants in distributed cogeneration. Performance and lifetime tests of the system are shown. (author)
A distribution-free test for anomalous gamma-ray spectra
Chan, Kung-sik; Li, Jinzheng; Eichinger, William; Bai, Er-Wei
2014-01-01
Gamma-ray spectra are increasingly acquired in monitoring cross-border traffic, or in an area search for lost or orphan special nuclear material (SNM). The signal in such data is generally weak, resulting in poorly resolved spectra, thereby making it hard to detect the presence of SNM. We develop a new test for detecting anomalous spectra by characterizing the complete shape change in a spectrum from background radiation; the proposed method may serve as a tripwire for routine screening for SNM. We show that, with increasing detection time, the limiting distribution of the test is given by some functional of the Brownian bridge. The efficacy of the proposed method is illustrated by simulations. - Highlights: • We develop a new non-parametric test for detecting anomalous gamma-ray spectra. • The proposed test has good empirical power for detecting weak signals. • It can serve as an effective tripwire for invoking more thorough scrutiny of the source
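The shape-change idea in the abstract above can be sketched as a KS-type comparison of normalized cumulative spectra, whose null limit is the supremum of a Brownian bridge (the Kolmogorov law). This is a simplified stand-in, not the authors' exact statistic; the exponential background shape, channel count, peak parameters, and the effective-sample-size heuristic are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import kstwobign  # limiting law of the Brownian-bridge supremum

def spectrum_shape_test(counts, background):
    """KS-type test that an observed spectrum shares the background's shape.

    Compares normalized cumulative spectra; under the null the scaled
    statistic tends to the supremum of a Brownian bridge. The effective
    sample size is a rough binned-Poisson heuristic, not a derived result.
    """
    F = np.cumsum(counts) / counts.sum()
    G = np.cumsum(background) / background.sum()
    d = float(np.abs(F - G).max())
    n_eff = counts.sum() * background.sum() / (counts.sum() + background.sum())
    return d, float(kstwobign.sf(np.sqrt(n_eff) * d))

rng = np.random.default_rng(0)
ch = np.arange(100)
bg = 1e4 * np.exp(-ch / 30.0) / np.exp(-ch / 30.0).sum()  # expected background
d_same, p_same = spectrum_shape_test(rng.poisson(bg), bg)  # background only
peak = 500.0 * np.exp(-0.5 * ((ch - 60) / 3.0) ** 2)       # anomalous line
d_anom, p_anom = spectrum_shape_test(rng.poisson(bg + peak), bg)
```

A spectrum drawn from the background shape alone yields a large p-value, while the spectrum with an added line is flagged decisively.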
Non-parametric comparison of histogrammed two-dimensional data distributions using the Energy Test
Reid, Ivan D; Lopes, Raul H C; Hobson, Peter R
2012-01-01
When monitoring complex experiments, comparison is often made between regularly acquired histograms of data and reference histograms which represent the ideal state of the equipment. With the larger HEP experiments now ramping up, there is a need for automation of this task since the volume of comparisons could overwhelm human operators. However, the two-dimensional histogram comparison tools available in ROOT have been noted in the past to exhibit shortcomings. We discuss a newer comparison test for two-dimensional histograms, based on the Energy Test of Aslan and Zech, which provides more conclusive discrimination between histograms of data coming from different distributions than methods provided in a recent ROOT release.
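A minimal sketch of the Energy Test core (the Aslan–Zech statistic with logarithmic distance weighting) for two 2-D point samples might look as follows. The ROOT tool discussed above operates on histogrammed data and calibrates the null distribution by permutation; neither refinement is reproduced here, and the sample sizes and shift are made-up illustrative values.

```python
import numpy as np

def energy_statistic(x, y, eps=1e-6):
    """Aslan-Zech energy statistic for 2-D samples x (n,2) and y (m,2),
    with distance weighting R(r) = -ln(r + eps). Values near zero suggest
    a common parent distribution; in practice the null distribution is
    calibrated by permuting the pooled sample (not done here)."""
    def cross(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return -np.log(d + eps).sum()
    def within(a):
        d = np.linalg.norm(a[:, None, :] - a[None, :, :], axis=-1)
        iu = np.triu_indices(len(a), k=1)  # each unordered pair counted once
        return -np.log(d[iu] + eps).sum()
    n, m = len(x), len(y)
    return within(x) / n**2 + within(y) / m**2 - cross(x, y) / (n * m)

rng = np.random.default_rng(1)
x = rng.normal(size=(150, 2))
x2 = rng.normal(size=(150, 2))          # same parent distribution as x
y = rng.normal(loc=1.5, size=(150, 2))  # shifted parent distribution
phi_same = energy_statistic(x, x2)
phi_diff = energy_statistic(x, y)
```

Samples from a common parent give a markedly smaller statistic than samples from shifted distributions, which is the discrimination property the comparison tool exploits.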
Equilibrium quality and mass flux distributions in an adiabatic three-subchannel test section
Yadigaroglu, G.; Maganas, A.
1993-01-01
An experiment was designed to measure the fully developed quality and mass flux distributions in an adiabatic three-subchannel test section. The three subchannels had the geometrical characteristics of the corner, side, and interior subchannels of a BWR-5 rod bundle. Data collected with Refrigerant-144 at pressures ranging from 7 to 14 bar, simulating operation with water in the range 55 to 103 bar, are reported. The average mass flux and quality in the test section were in the ranges 1300 to 1750 kg/m² s and -0.03 to 0.25, respectively. The data are analyzed and presented in various forms
Wu, Huijuan; Sun, Zhenshi; Qian, Ya; Zhang, Tao; Rao, Yunjiang
2015-07-01
A hydrostatic leak test of a water pipeline with a distributed optical fiber vibration sensing (DOVS) system based on phase-sensitive OTDR technology is studied in this paper. By monitoring one end of a common communication optical fiber cable laid on the inner wall of the pipe, water leakages can be detected and located easily. Different apertures under different pressures are tested, and the results show that the DOVS responds well when the aperture is equal to or larger than 4 mm and the inner pressure reaches 0.2 MPa for a steel pipe of DN 91 cm × EN 2 cm.
Improvement of the CULTEX® exposure technology by radial distribution of the test aerosol.
Aufderheide, Michaela; Heller, Wolf-Dieter; Krischenowski, Olaf; Möhle, Niklas; Hochrainer, Dieter
2017-07-05
The exposure of cellular based systems cultivated on microporous membranes at the air-liquid interface (ALI) has been accepted as an appropriate approach to simulate the exposure of cells of the respiratory tract to native airborne substances. The efficiency of such an exposure procedure with regard to stability and reproducibility depends on the optimal design of the interface between the cellular test system and the exposure technique. Current exposure systems favor the dynamic guidance of the airborne substances to the surface of the cells in specially designed exposure devices. Two module types, based on a linear or radial feed of the test atmosphere to the test system, were used for these studies. In our technical history, development started with the linear version, the CULTEX® glass modules, which fulfilled the basic requirements for running ALI exposure studies (Mohr and Durst, 2005). Instability in the distribution of different atmospheres to the cells led us to create a new exposure module, characterized by stable and reproducible radial guidance of the aerosol to the cells: the CULTEX® RFS (Mohr et al., 2010). In this study, we describe the differences between the two systems with regard to particle distribution and deposition, clarifying the advantages and disadvantages of a radial versus a linear aerosol distribution concept. Copyright © 2017 Elsevier GmbH. All rights reserved.
Testing collinear factorization and nuclear parton distributions with pA collisions at the LHC
Quiroga-Arias, Paloma [Departamento de Física de Partículas and IGFAE, Universidade de Santiago de Compostela, 15706 Santiago de Compostela (Spain); Milhano, Jose Guilherme [CENTRA, Departamento de Física, Instituto Superior Técnico (IST), Av. Rovisco Pais 1, P-1049-001 Lisboa (Portugal); Wiedemann, Urs Achim, E-mail: pquiroga@fpaxpl.usc.es [Physics Department, Theory Unit, CERN, CH-1211 Genève 23 (Switzerland)
2011-01-01
Global perturbative QCD analyses, based on large data sets from electron-proton and hadron collider experiments, provide tight constraints on the parton distribution function (PDF) in the proton. The extension of these analyses to nuclear parton distributions (nPDF) has attracted much interest in recent years. nPDFs are needed as benchmarks for the characterization of hot QCD matter in nucleus-nucleus collisions, and attract further interest since they may show novel signatures of non-linear density-dependent QCD evolution. However, it is not known from first principles whether the factorization of long-range phenomena into process-independent parton distributions, which underlies global PDF extractions for the proton, extends to nuclear effects. As a consequence, assessing the reliability of nPDFs for benchmark calculations goes beyond testing the numerical accuracy of their extraction and requires phenomenological tests of the factorization assumption. Here we argue that a proton-nucleus collision program at the LHC would provide a set of measurements allowing for unprecedented tests of the factorization assumption underlying global nPDF fits.
Testing nuclear parton distributions with pA collisions at the TeV scale
Quiroga-Arias, Paloma; Milhano, Jose Guilherme; Wiedemann, Urs Achim
2010-01-01
Global perturbative QCD analyses, based on large data sets from electron-proton and hadron collider experiments, provide tight constraints on the parton distribution function (PDF) in the proton. The extension of these analyses to nuclear parton distribution functions (nPDFs) has attracted much interest in recent years. nPDFs are needed as benchmarks for the characterization of hot QCD matter in nucleus-nucleus collisions, and attract further interest since they may show novel signatures of nonlinear density-dependent QCD evolution. However, it is not known from first principles whether the factorization of long-range phenomena into process-independent parton distributions, which underlies global PDF extractions for the proton, extends to nuclear effects. As a consequence, assessing the reliability of nPDFs for benchmark calculations goes beyond testing the numerical accuracy of their extraction and requires phenomenological tests of the factorization assumption. Here, we argue that a proton-nucleus collision program at the Large Hadron Collider would provide a set of measurements that allow for unprecedented tests of the factorization assumption underlying global nPDF fits.
Acceptance sampling for attributes via hypothesis testing and the hypergeometric distribution
Samohyl, Robert Wayne
2017-10-01
This paper questions some aspects of attribute acceptance sampling in light of the original concepts of hypothesis testing from Neyman and Pearson (NP). Attribute acceptance sampling in industry, as developed by Dodge and Romig (DR), generally follows the international standards of ISO 2859, and similarly the Brazilian standards NBR 5425 to NBR 5427 and the United States standards ANSI/ASQC Z1.4. The paper evaluates and extends the area of acceptance sampling in two directions. First, it suggests the use of the hypergeometric distribution to calculate the parameters of sampling plans, avoiding the unnecessary use of approximations such as the binomial or Poisson distributions. We show that, under usual conditions, the discrepancies can be large. The conclusion is that the hypergeometric distribution, ubiquitously available in commonly used software, is more appropriate than other distributions for acceptance sampling. Second, and more importantly, we elaborate the theory of acceptance sampling in terms of hypothesis testing, rigorously following the original concepts of NP. By offering a common theoretical structure, hypothesis testing in the NP sense can produce a better understanding of applications even beyond the usual areas of industry and commerce, such as public health and political polling. With the new procedures, both sample size and sample error can be reduced. What is unclear in traditional acceptance sampling is the necessity of linking the acceptable quality limit (AQL) exclusively to the producer and the lot tolerance percent defective (LTPD) exclusively to the consumer. In reality, the consumer should also be preoccupied with a value of AQL, as should the producer with LTPD. Furthermore, we can also question why type I error is always uniquely associated with the producer as producer risk; the same question arises with consumer risk, which is necessarily associated with type II error. The resolution of these questions is new to the literature.
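The exact-hypergeometric calculation the paper advocates can be sketched as below. The plan parameters (lot size 1000, 50 defectives, acceptance number 0, consumer risk 0.10) are made-up illustrative values, and the binomial comparison shows the kind of discrepancy the paper discusses.

```python
from scipy.stats import binom, hypergeom

def min_sample_size(lot_size, lot_defectives, c, beta=0.10):
    """Smallest sample size n of a single-sampling plan (accept on <= c
    defects) such that a lot containing `lot_defectives` bad items is
    accepted with probability at most beta, computed with the exact
    hypergeometric distribution (sampling without replacement)."""
    for n in range(c + 1, lot_size + 1):
        # hypergeom.cdf(k, M, K, N): P(<= k successes in N draws from a
        # population of size M containing K successes)
        if hypergeom.cdf(c, lot_size, lot_defectives, n) <= beta:
            return n
    return lot_size

n_exact = min_sample_size(1000, 50, 0)  # exact hypergeometric plan
# binomial approximation with p = 50/1000, for comparison
n_binom = next(n for n in range(1, 1001) if binom.cdf(0, n, 0.05) <= 0.10)
```

Because sampling is without replacement, the exact plan never needs a larger sample than the binomial approximation suggests; the gap widens as the sample becomes a larger fraction of the lot.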
A more powerful test based on ratio distribution for retention noninferiority hypothesis.
Deng, Ling; Chen, Gang
2013-03-11
Rothmann et al. (2003) proposed a method for the statistical inference of the fraction retention noninferiority (NI) hypothesis. A fraction retention hypothesis is defined as a ratio of the new treatment effect versus the control effect in the context of a time-to-event endpoint. One of the major concerns in using this method in the design of an NI trial is that, with a limited sample size, the power of the study is usually very low. This can make an NI trial impractical, particularly with a time-to-event endpoint. To improve power, Wang et al. (2006) proposed a ratio test based on asymptotic normality theory. Under a strong assumption (equal variance of the NI test statistic under the null and alternative hypotheses), the sample size using Wang's test was much smaller than that using Rothmann's test. However, in practice, the assumption of equal variance is generally questionable for an NI trial design. This assumption is removed in the ratio test proposed in this article, which is derived directly from a Cauchy-like ratio distribution. In addition, using this method, the fundamental assumption of Rothmann's test, that the observed control effect is always positive (that is, the observed hazard ratio of placebo over control is greater than 1), is no longer necessary. Without assuming equal variance under the null and alternative hypotheses, the sample size required for an NI trial can be significantly reduced by using the proposed ratio test for a fraction retention NI hypothesis.
Noyes, Richard W
1933-01-01
The pressure distribution data discussed in this report represent the results of part of an investigation into the factors affecting the aerodynamic safety of airplanes. The present tests were made on semispan, circular-tipped Clark Y airfoil models mounted in the conventional manner on a separation plane. Pressure readings were made simultaneously at all test orifices at each of 20 angles of attack between -8 degrees and +90 degrees. The results of the tests on each wing arrangement are compared on the bases of maximum normal force coefficient, lateral stability at a low rate of roll, and relative longitudinal stability. Tabular data are also presented giving the center of pressure location for each wing.
Sang-Yun Yun
2014-02-01
This paper summarizes the development and demonstration of an optimization program, voltage VAR optimization (VVO), in the Korean Smart Distribution Management System (KSDMS). KSDMS was developed to address the lack of receptivity to distributed generators (DGs), the lack of standardization and compatibility, and manual failure recovery in the existing Korean automated distribution system. Focusing on receptivity to DGs, we developed a real-time system analysis and control program. The KSDMS VVO enhances the manual operation of the existing distribution system and provides a solution with all control equipment operated at the system level. The developed VVO is an optimal power flow (OPF) method that resolves violations, minimizes switching costs, and minimizes loss; its function can vary depending on the operator's command. The sequential mixed integer linear programming (SMILP) method was adopted to solve the OPF. We tested the precision of the proposed VVO on selected simulated systems and its applicability to actual systems at two substations on Jeju Island. Running the KSDMS VVO on a regular basis improved system stability, and it raised no issues regarding its applicability to actual systems.
John R. Jones
1985-01-01
Quaking aspen is the most widely distributed native North American tree species (Little 1971, Sargent 1890). It grows in a great diversity of regions, environments, and communities (Harshberger 1911). Only one deciduous tree species in the world, the closely related Eurasian aspen (Populus tremula), has a wider range (Weigle and Frothingham 1911)....
Development and testing of a diagnostic system for intelligent distributed control at EBR-II
Edwards, R.M.; Ruhl, D.W.; Klevans, E.H.; Robinson, G.E.
1990-01-01
A diagnostic system is under development for demonstration of intelligent distributed control at the Experimental Breeder Reactor (EBR-II). In the first phase of the project, a diagnostic system is being developed for the EBR-II steam plant based on the DISYS expert systems approach. Current testing uses recorded plant data and data from simulated plant faults. The dynamical simulation of the EBR-II steam plant uses the Babcock and Wilcox (B&W) Modular Modeling System (MMS). At EBR-II the diagnostic system runs on a UNIX workstation and receives live plant data from the plant Data Acquisition System (DAS). Future work will seek to implement the steam plant diagnostic in a distributed manner using UNIX-based computers and a Bailey microprocessor-based control system. 10 refs., 6 figs
Field test of a continuous-variable quantum key distribution prototype
Fossier, S; Debuisschert, T; Diamanti, E; Villing, A; Tualle-Brouri, R; Grangier, P
2009-01-01
We have designed and realized a prototype that implements a continuous-variable quantum key distribution (QKD) protocol based on coherent states and reverse reconciliation. The system uses time and polarization multiplexing for optimal transmission and detection of the signal and phase reference, and employs sophisticated error-correction codes for reconciliation. The security of the system is guaranteed against general coherent eavesdropping attacks. The performance of the prototype was tested over preinstalled optical fibres as part of a quantum cryptography network combining different QKD technologies. The stable and automatic operation of the prototype over 57 h yielded an average secret key distribution rate of 8 kbit s⁻¹ over a 3 dB loss optical fibre, including the key extraction process and all quantum and classical communication. This system is therefore ideal for securing communications in metropolitan-size networks with high-speed requirements.
Zhang, Fode; Shi, Yimin; Wang, Ruibing
2017-02-01
In the information geometry suggested by Amari (1985) and Amari et al. (1987), a parametric statistical model can be regarded as a differentiable manifold with the parameter space as a coordinate system. Noting that the q-exponential distribution plays an important role in Tsallis statistics (see Tsallis, 2009), this paper investigates the geometry of the q-exponential distribution with dependent competing risks and accelerated life testing (ALT). A copula function based on the q-exponential function, which can be considered a generalized Gumbel copula, is discussed to illustrate the structure of the dependent random variables. Employing two iterative algorithms, simulation results are given to compare the performance of the estimations and the levels of association under different hybrid progressive censoring schemes (HPCSs).
Araújo, Thiago Antonio Sousa; Almeida, Alyson Luiz Santos; Melo, Joabe Gomes; Medeiros, Maria Franco Trindade; Ramos, Marcelo Alves; Silva, Rafael Ricardo Vasconcelos; Almeida, Cecília Fátima Castelo Branco Rangel; Albuquerque, Ulysses Paulino
2012-03-15
We propose a new quantitative measure that enables the researcher to make decisions and test hypotheses about the distribution of knowledge in a community, and to estimate the richness and sharing of information among informants. In our study, this measure has two levels of analysis: intracultural and intrafamilial. Using data collected in northeastern Brazil, we evaluated how these new estimators of richness and sharing behave for different use categories. We observed trends in the distribution of the characteristics of informants. We were also able to evaluate how outliers interfere with these analyses and how further analyses may be conducted using these indices, such as determining the distance between the knowledge of a community and that of experts, and demonstrating the importance of these individuals' communal knowledge of biological resources. One of the primary applications of these indices is to supply the researcher with an objective tool to evaluate the scope and behavior of the collected data.
Griffiths, K. R.; Hicks, B. J.; Keogh, P. S.; Shires, D.
2016-08-01
In general, vehicle vibration is non-stationary and has a non-Gaussian probability distribution; yet existing testing methods for packaging design employ Gaussian distributions to represent vibration induced by road profiles. This frequently results in over-testing and/or over-design of the packaging to meet a specification, and correspondingly leads to wasteful packaging and product waste, which represent 15bn per year in the USA and €3bn per year in the EU. The purpose of the paper is to enable a measured non-stationary acceleration signal to be replaced by a constructed signal that includes, as far as possible, any non-stationary characteristics of the original signal. The constructed signal consists of a concatenation of decomposed shorter-duration signals, each having its own kurtosis level. Wavelet analysis is used to decompose the signal into inner and outlier components. The constructed signal has a PSD similar to that of the original signal, without incurring excessive acceleration levels. This allows an improved and more representative simulated input signal to be generated that can be used on the current generation of shaker tables. The wavelet decomposition method is also demonstrated experimentally through two correlation studies. It is shown that significant improvements over current international standards for packaging testing are achievable; hence there is potential for more efficient packaging system design.
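A first step toward the decomposition described above is simply to measure kurtosis window by window, which localizes the non-Gaussian bursts. The wavelet inner/outlier separation itself is not reproduced here, and the signal, window length, and burst parameters are illustrative assumptions.

```python
import numpy as np
from scipy.stats import kurtosis

def segment_kurtosis(signal, win):
    """Excess kurtosis of consecutive windows of a vibration record.
    A stationary Gaussian input gives values near 0; transient bursts
    produce windows with strongly positive kurtosis."""
    n = len(signal) // win
    segs = signal[: n * win].reshape(n, win)  # drop any trailing partial window
    return kurtosis(segs, axis=1, fisher=True)

rng = np.random.default_rng(7)
accel = rng.normal(size=4000)                       # Gaussian road background
accel[1200:1220] += rng.normal(scale=8.0, size=20)  # pothole-like burst
k = segment_kurtosis(accel, 500)                    # burst falls in window 2
```

Windows containing only the Gaussian background score near zero, while the window holding the burst stands out with a large positive value, flagging the segment a decomposition method would route to the outlier component.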
Testing DARKexp against energy and density distributions of Millennium-II halos
Nolting, Chris; Williams, Liliya L.R. [School of Physics and Astronomy, University of Minnesota, 116 Church Street SE, Minneapolis, MN, 55454 (United States); Boylan-Kolchin, Michael [Department of Astronomy, The University of Texas at Austin, 2515 Speedway, Stop C1400, Austin, TX, 78712 (United States); Hjorth, Jens, E-mail: nolting@astro.umn.edu, E-mail: llrw@astro.umn.edu, E-mail: mbk@astro.as.utexas.edu, E-mail: jens@dark-cosmology.dk [Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Juliane Maries Vej 30, Copenhagen, DK-2100 Denmark (Denmark)
2016-09-01
We test the DARKexp model for relaxed, self-gravitating, collisionless systems against equilibrium dark matter halos from the Millennium-II simulation. While limited tests of DARKexp against simulations and observations have been carried out elsewhere, this is the first time the testing is done with a large sample of simulated halos spanning a factor of ∼50 in mass, and using independent fits to density and energy distributions. We show that DARKexp, a one-shape-parameter family, provides very good fits to the shapes of density profiles, ρ(r), and differential energy distributions, N(E), of individual simulated halos. The best-fit shape parameters φ₀ obtained from the two types of fits are correlated, though with scatter. Our most important conclusions come from ρ(r) and N(E) that have been averaged over many halos. These show that the bulk of the deviations between DARKexp and individual Millennium-II halos come from halo-to-halo fluctuations, likely driven by substructure and other density perturbations. The average ρ(r) and N(E) are quite smooth and follow DARKexp very closely. The only deviation that remains after averaging is small, and located at the most bound energies for N(E) and the smallest radii for ρ(r). Since the deviation is confined to 3–4 smoothing lengths, and is larger for low-mass halos, it is likely due to numerical resolution effects.
Clerkin, L.; Kirk, D.; Manera, M.; Lahav, O.; Abdalla, F.
2016-01-01
It is well known that the probability distribution function (PDF) of galaxy density contrast is approximately lognormal; whether the PDF of mass fluctuations derived from weak lensing convergence (κWL) is lognormal is less well established. We derive PDFs of the galaxy and projected matter density distributions via the counts-in-cells (CiC) method. We use maps of galaxies and weak lensing convergence produced from the Dark Energy Survey Science Verification data over 139 deg². We test whether the underlying density contrast is well described by a lognormal distribution for the galaxies, the convergence and their joint PDF. We confirm that the galaxy density contrast distribution is well modelled by a lognormal PDF convolved with Poisson noise at angular scales from 10 to 40 arcmin (corresponding to physical scales of 3–10 Mpc). We note that as κWL is a weighted sum of the mass fluctuations along the line of sight, its PDF is expected to be only approximately lognormal. We find that the κWL distribution is well modelled by a lognormal PDF convolved with Gaussian shape noise at scales between 10 and 20 arcmin, with a best-fitting χ²/dof of 1.11 compared to 1.84 for a Gaussian model, corresponding to p-values of 0.35 and 0.07, respectively, at a scale of 10 arcmin. Above 20 arcmin a simple Gaussian model is sufficient. The joint PDF is also reasonably fitted by a bivariate lognormal. As a consistency check, we compare the variances derived from the lognormal modelling with those directly measured via CiC. Lastly, our methods are validated against maps from the MICE Grand Challenge N-body simulation.
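The lognormal-versus-Gaussian model comparison can be sketched on synthetic counts-in-cells samples. No DES data are used here; the σ of the toy field and the sample size are illustrative assumptions, and a simple KS statistic stands in for the paper's χ²/dof comparison.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# toy density-contrast samples: 1 + delta is lognormal by construction,
# with the log-mean chosen so that delta averages roughly zero
delta = rng.lognormal(mean=-0.125, sigma=0.5, size=5000) - 1.0

# lognormal model for 1 + delta (location pinned at 0, as for a density field)
shape, loc, scale = stats.lognorm.fit(1.0 + delta, floc=0.0)
ks_logn = stats.kstest(1.0 + delta, 'lognorm', args=(shape, loc, scale)).statistic

# moment-matched Gaussian model for delta, for comparison
ks_norm = stats.kstest(delta, 'norm', args=(delta.mean(), delta.std())).statistic
```

For a genuinely lognormal field the lognormal fit tracks the empirical CDF far more closely than the Gaussian, mirroring the goodness-of-fit gap reported in the abstract.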
Schaeffer, G.J.; Warmer, C.J.; Hommelberg, M.P.F.; Kamphuis, I.G.; Kok, J.K. [Energy in the Built Environment and Networks, Petten (Netherlands)
2007-01-15
Multi-agent technology is state-of-the-art ICT. It is not yet widely applied in power control systems; however, it has large potential for bottom-up, distributed control of networks with large-scale renewable energy sources (RES) and distributed energy resources (DER) in future power systems. At least two major European R and D projects (MicroGrids and CRISP) have investigated its potential. Both grid-related and market-related applications have been studied. This paper focuses on two field tests, performed in the Netherlands, applying multi-agent control by means of the PowerMatcher concept. The first field test focuses on the application of multi-agent technology in a commercial setting, i.e. reducing the need for balancing power in the case of intermittent energy sources such as wind energy. In this case the flexibility of demand and supply of industrial and residential consumers and producers is used. Imbalance reduction rates of over 40% have been achieved with the PowerMatcher, and with a proper portfolio even larger rates are expected. In the second field test the multi-agent technology is used in the design and implementation of a virtual power plant (VPP). This VPP digitally connects a number of micro-CHP units, installed in residential dwellings, into a cluster that is controlled to reduce the local peak demand of the common low-voltage grid segment to which the micro-CHP units are connected. In this way the VPP supports the local distribution system operator (DSO) in deferring reinforcements of the grid infrastructure (substations and cables)
Schaeffer, G.J.; Warmer, C.J.; Hommelberg, M.P.F.; Kamphuis, I.G.; Kok, J.K.
2007-01-01
Multi-agent technology is state-of-the-art ICT. It is not yet widely applied in power control systems; however, it has large potential for bottom-up, distributed control of networks with large-scale renewable energy sources (RES) and distributed energy resources (DER) in future power systems. At least two major European R and D projects (MicroGrids and CRISP) have investigated its potential. Both grid-related and market-related applications have been studied. This paper focuses on two field tests, performed in the Netherlands, applying multi-agent control by means of the PowerMatcher concept. The first field test focuses on the application of multi-agent technology in a commercial setting, i.e. reducing the need for balancing power in the case of intermittent energy sources such as wind energy. In this case the flexibility of demand and supply of industrial and residential consumers and producers is used. Imbalance reduction rates of over 40% have been achieved with the PowerMatcher, and with a proper portfolio even larger rates are expected. In the second field test the multi-agent technology is used in the design and implementation of a virtual power plant (VPP). This VPP digitally connects a number of micro-CHP units, installed in residential dwellings, into a cluster that is controlled to reduce the local peak demand of the common low-voltage grid segment to which the micro-CHP units are connected. In this way the VPP supports the local distribution system operator (DSO) in deferring reinforcements of the grid infrastructure (substations and cables)
Pi-Sat: A Low Cost Small Satellite and Distributed Spacecraft Mission System Test Platform
Cudmore, Alan
2015-01-01
Current technology and budget trends indicate a shift in satellite architectures from large, expensive single-satellite missions to small, low-cost distributed spacecraft missions. At the center of this shift is the SmallSat/Cubesat architecture. The primary goal of the Pi-Sat project is to create a low-cost and easy-to-use Distributed Spacecraft Mission (DSM) test bed to facilitate the research and development of next-generation DSM technologies and concepts. This test bed also serves as a realistic software development platform for Small Satellite and Cubesat architectures. The Pi-Sat is based on the popular $35 Raspberry Pi single-board computer featuring a 700 MHz ARM processor, 512MB of RAM, a flash memory card, and a wealth of IO options. The Raspberry Pi runs the Linux operating system and can easily run Code 582's Core Flight System flight software architecture. The low cost and high availability of the Raspberry Pi make it an ideal platform for Distributed Spacecraft Mission and Cubesat software development. The Pi-Sat models currently include a Pi-Sat 1U Cube, a Pi-Sat Wireless Node, and a Pi-Sat Cubesat processor card. The Pi-Sat project takes advantage of many popular trends in the Maker community, including low-cost electronics, 3D printing, and rapid prototyping, in order to provide a realistic platform for flight software testing, training, and technology development. The Pi-Sat has also provided fantastic hands-on training opportunities for NASA summer interns and Pathways students.
Beach, R. F.; Kimnach, G. L.; Jett, T. A.; Trash, L. M.
1989-01-01
The Lewis Research Center's Power Management and Distribution (PMAD) System testbed and its use in the evaluation of control concepts applicable to the NASA Space Station Freedom electric power system (EPS) are described. The facility was constructed to allow testing of control hardware and software in an environment functionally similar to the space station electric power system. Control hardware and software have been developed to allow operation of the testbed power system in a manner similar to a supervisory control and data acquisition (SCADA) system employed by utility power systems for control. The system hardware and software are described.
Acceptance Sampling Plans Based on Truncated Life Tests for Sushila Distribution
Amer Ibrahim Al-Omari
2018-03-01
An acceptance sampling plan problem based on truncated life tests, in which the lifetime follows a Sushila distribution, is considered in this paper. For various acceptance numbers, confidence levels, and ratios of the fixed experiment time to the specified mean lifetime, the minimum sample sizes required to ascertain a specified mean life were found. The operating characteristic function values of the suggested sampling plans and the producer's risk are presented. Some tables are provided and the results are illustrated by an example based on a real data set.
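In plans of this type, the minimum sample size is typically the smallest n for which the binomial probability of seeing at most c failures by the truncation time, evaluated at the specified mean life, drops below 1 - P*. A sketch of that search, using an exponential lifetime CDF as a stand-in since the Sushila CDF is not reproduced here:

```python
from math import comb, exp

def min_sample_size(c, t_over_m, conf, cdf):
    """Smallest n such that, if the true mean life equals the specified value,
    the chance of accepting the lot (at most c failures by the truncation
    time) is no more than 1 - conf."""
    p = cdf(t_over_m)                      # failure probability by time t
    n = c + 1
    while True:
        accept = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(c + 1))
        if accept <= 1 - conf:
            return n
        n += 1

# Exponential lifetime as a stand-in: F(t) = 1 - exp(-t/m), argument is t/m.
exp_cdf = lambda ratio: 1.0 - exp(-ratio)

n_min = min_sample_size(c=2, t_over_m=0.5, conf=0.95, cdf=exp_cdf)
```

Swapping in the Sushila CDF for `exp_cdf` would reproduce the paper's setting; the search logic is unchanged.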
Beatley, J.C.
1976-01-01
The physical environment of the Nevada Test Site and surrounding area is described with regard to physiography, geology, soils, and climate. A discussion of plant associations is given for the Mojave Desert, Transition Desert, and Great Basin Desert. The vegetation of disturbed sites is discussed with regard to introduced species as well as endangered and threatened species. Collections of vascular plants were made during 1959 to 1975. The plants, belonging to 1093 taxa and 98 families are listed together with information concerning ecologic and geographic distributions. Indexes to families, genera, and species are included. (HLW)
Coelho, Carlos A.; Marques, Filipe J.
2013-09-01
In this paper the authors combine the equicorrelation and equivariance test introduced by Wilks [13] with the likelihood ratio test (l.r.t.) for independence of groups of variables to obtain the l.r.t. of block equicorrelation and equivariance. This test or its single block version may find applications in many areas as in psychology, education, medicine, genetics and they are important "in many tests of multivariate analysis, e.g. in MANOVA, Profile Analysis, Growth Curve analysis, etc" [12, 9]. By decomposing the overall hypothesis into the hypotheses of independence of groups of variables and the hypothesis of equicorrelation and equivariance we are able to obtain the expressions for the overall l.r.t. statistic and its moments. From these we obtain a suitable factorization of the characteristic function (c.f.) of the logarithm of the l.r.t. statistic, which enables us to develop highly manageable and precise near-exact distributions for the test statistic.
TESTING THE GRAIN-SIZE DISTRIBUTION DETERMINED BY LASER DIFFRACTOMETRY FOR SICILIAN SOILS
Costanza Di Stefano
2012-06-01
In this paper the soil grain-size distribution determined by the Laser Diffraction method (LDM) is tested against the Sieve-Hydrometer method (SHM), applied to 747 soil samples of different texture classes collected in Sicily. The analysis showed that the sand content measured by SHM can be assumed equal to that determined by LDM. An underestimation of the clay fraction measured by LDM was obtained with respect to the SHM, and a set of equations useful to refer laser diffraction measurements to SHM was calibrated using the measurements carried out on 635 soil samples. Finally, the proposed equations were tested using independent measurements carried out by LDM and SHM on 112 soil samples of different texture classes.
Equilibrium quality and mass flux distributions in an adiabatic three-subchannel test section
Yadigaroglu, G.; Maganas, A.
1995-01-01
An experiment was designed to measure the fully developed quality and mass flux distributions in an adiabatic three-subchannel test section. The three subchannels had the geometrical characteristics of the corner, side, and interior subchannels of a boiling water reactor (BWR-5) rod bundle. Data collected with Refrigerant-114 at pressures ranging from 7 to 14 bars, simulating operation with water in the range 55 to 103 bars, are reported. The average mass flux and quality in the test section were in the ranges 1,300 to 1,750 kg/m²·s and -0.03 to 0.25, respectively. The data are analyzed and presented in various forms
Extension of the pseudo dynamic method to test structures with distributed mass
Renda, V.; Papa, L.; Bellorini, S.
1993-01-01
The PsD method is a mixed numerical and experimental procedure. At each time step the dynamic deformation of the structure, computed by solving the equation of motion for a given input signal, is reproduced in the laboratory by means of actuators attached to the sample at specific points. The reaction forces at those points are measured and used to compute the deformation for the next time step. Because the reaction forces are measured, knowledge of the stiffness of the structure is not needed, so the method remains effective even for deformations leading to strongly nonlinear behaviour of the structure. On the other hand, the mass matrix and the applied forces must be well known. For this reason the PsD method can be applied without approximation when the masses can be considered lumped at the testing points of the sample. The present work investigates the possibility of extending the PsD method to test structures with distributed mass. A standard procedure is proposed to provide an equivalent mass matrix and force vector reduced to the testing points and to verify the reliability of the model. The verification is obtained by comparing the results of a multi-degree-of-freedom dynamic analysis, performed with a finite element (FE) program, with a simulation of the PsD method based on the reduced mass matrix and external forces, using in place of the experimental reactions those computed with the general FE model. The method has been applied to a numerical simulation of the behaviour of a realistic and complex structure with distributed mass, a masonry building of two floors. The FE model has about two thousand degrees of freedom and the condensation was made for four testing points. A dynamic analysis was performed with the general FE model and the reactions of the structure were recorded in a file and used as input for the PsD simulation with the four-degree-of-freedom model. The comparison between
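The PsD loop described above (solve the equation of motion numerically, impose the displacement, feed the measured reaction back in) is commonly realized with an explicit central-difference update. A single-degree-of-freedom sketch, where a hypothetical linear spring stands in for the laboratory measurement:

```python
def psd_step(d_prev, d_curr, r_meas, f_ext, m, dt):
    # Explicit central-difference step of m*a + r(d) = f, with the restoring
    # force r measured on the test structure rather than modeled.
    return 2.0 * d_curr - d_prev + (dt**2 / m) * (f_ext - r_meas)

# Stand-in for the laboratory measurement: a hypothetical linear spring.
k, m, dt = 4.0, 1.0, 0.05            # natural frequency 2 rad/s; dt well below 2/omega
measure_reaction = lambda d: k * d

d = [1.0, 1.0]                       # released from rest at unit displacement
for _ in range(200):
    d.append(psd_step(d[-2], d[-1], measure_reaction(d[-1]), 0.0, m, dt))
```

In a real test `measure_reaction` would be the actuator/load-cell loop, which is exactly why the stiffness never needs to be modeled; only `m` and `f_ext` enter the update.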
Development of a Test Facility to Simulate the Reactor Flow Distribution of APR+
Euh, D. J.; Cho, S.; Youn, Y. J.; Kim, J. T.; Kang, H. S.; Kwon, T. S.
2011-01-01
Recently a new reactor design, APR+, is being developed as an advanced type of APR1400. In order to analyze the thermal margin and hydraulic characteristics of APR+, quantification tests of the flow and pressure distribution, conserving the flow geometry, are necessary. Hetsroni (1967) proposed four principal parameters for a hydraulic model representing a nuclear reactor prototype: geometry, relative roughness, Reynolds number, and Euler number. He concluded that the Euler number should be similar in the prototype and model under preservation of the aspect ratio of the flow path. The effect of the Reynolds number at its higher values on the Euler number is rather small, since the dependency of the form and frictional loss coefficients on the Reynolds number is small. ABB-CE has carried out several reactor flow model test programs, mostly for its prototype reactors; a series of tests was conducted using a 3/16-scale reactor model (see Lee et al., 2001). Lee et al. (1991) performed experimental studies using a 1/5.03-scale reactor flow model of Yonggwang nuclear units 3 and 4. They showed that the measured data met the acceptance criteria and were suitable for their intended use in performance and safety analyses. The design of the current test facility was based on conservation of the Euler number, the ratio of pressure drop to dynamic pressure, in a sufficiently turbulent region with a high Reynolds number. Following the previous studies, the APR+ design is linearly reduced at a 1/5 ratio with a 1/2 velocity scale, which yields a 1/39.7 Reynolds number scaling ratio. In the present study, the design features of the facility, named ACOP, built to investigate the flow and pressure distribution, are described
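The quoted scale factors can be checked against the similarity relations Eu = Δp/(ρV²) (preserved by design) and Re = VL/ν. The kinematic-viscosity ratio below is an inference, not a value stated in the source: with the same fluid a 1/5 length and 1/2 velocity scale would give a 1/10 Reynolds ratio, so the quoted 1/39.7 implies a model fluid roughly 4x more viscous than the prototype coolant.

```python
def scale_ratios(length_ratio, velocity_ratio, viscosity_ratio=1.0):
    """Model-to-prototype similarity ratios: the Euler number is preserved
    by design, while the Reynolds number scales as V*L/nu."""
    re_ratio = length_ratio * velocity_ratio / viscosity_ratio
    eu_ratio = 1.0
    return re_ratio, eu_ratio

# Same working fluid: Re ratio is simply (1/5)*(1/2) = 1/10.
re_same, _ = scale_ratios(1/5, 1/2)
# Assumed kinematic-viscosity ratio of ~3.97 reproduces the quoted 1/39.7.
re_quoted, _ = scale_ratios(1/5, 1/2, viscosity_ratio=3.97)
```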
Smart-DS: Synthetic Models for Advanced, Realistic Testing: Distribution Systems and Scenarios
Krishnan, Venkat K [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Palmintier, Bryan S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Hodge, Brian S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Hale, Elaine T [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Elgindy, Tarek [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bugbee, Bruce [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Rossol, Michael N [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Lopez, Anthony J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Krishnamurthy, Dheepak [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Vergara, Claudio [MIT; Domingo, Carlos Mateo [IIT Comillas; Postigo, Fernando [IIT Comillas; de Cuadra, Fernando [IIT Comillas; Gomez, Tomas [IIT Comillas; Duenas, Pablo [MIT; Luke, Max [MIT; Li, Vivian [MIT; Vinoth, Mohan [GE Grid Solutions; Kadankodu, Sree [GE Grid Solutions
2017-08-09
The National Renewable Energy Laboratory (NREL) in collaboration with Massachusetts Institute of Technology (MIT), Universidad Pontificia Comillas (Comillas-IIT, Spain) and GE Grid Solutions, is working on an ARPA-E GRID DATA project, titled Smart-DS, to create: 1) High-quality, realistic, synthetic distribution network models, and 2) Advanced tools for automated scenario generation based on high-resolution weather data and generation growth projections. Through these advancements, the Smart-DS project is envisioned to accelerate the development, testing, and adoption of advanced algorithms, approaches, and technologies for sustainable and resilient electric power systems, especially in the realm of U.S. distribution systems. This talk will present the goals and overall approach of the Smart-DS project, including the process of creating the synthetic distribution datasets using reference network model (RNM) and the comprehensive validation process to ensure network realism, feasibility, and applicability to advanced use cases. The talk will provide demonstrations of early versions of synthetic models, along with the lessons learnt from expert engagements to enhance future iterations. Finally, the scenario generation framework, its development plans, and co-ordination with GRID DATA repository teams to house these datasets for public access will also be discussed.
Wind Tunnel Tests for Wind Pressure Distribution on Gable Roof Buildings
2013-01-01
Gable roof buildings are widely used in industrial buildings. Based on wind tunnel tests with rigid models, wind pressure distributions on gable roof buildings with different aspect ratios were measured simultaneously. Several characteristics of the measured wind pressure field on the surfaces of the models were analyzed, including mean wind pressure, fluctuating wind pressure, peak negative wind pressure, and proper orthogonal decomposition results of the measured wind pressure field. The results show that extremely high local suctions often occur at the leading edges of the longitudinal wall and windward roof, the roof corner, and the roof ridge, which are the severely damaged locations under strong wind. The aspect ratio of the building has a certain effect on the mean wind pressure coefficients, and the effect depends on the wind attack angle. Compared with the experimental results, the region division of roof corner and roof ridge from AIJ2004 is more reasonable than those from CECS102:2002 and MBMA2006. The contributions of the first several eigenvectors to the overall wind pressure distributions become much larger. The investigation offers some basic understanding for estimating wind load distributions on gable roof buildings and facilitates wind-resistant design of cladding components and their connections considering the wind load path.
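The mean, fluctuating, and peak pressure coefficients reported in such tests are simple functionals of each pressure-tap time series, Cp = (p - p_ref)/(0.5 ρ U²). A sketch with synthetic data (the tap record and flow conditions below are illustrative, not from the experiments):

```python
import numpy as np

def pressure_coefficients(p, p_ref, rho, U):
    """Mean, fluctuating (rms) and peak-negative pressure coefficients
    from a surface-pressure time series, Cp = (p - p_ref)/(0.5*rho*U^2)."""
    q = 0.5 * rho * U**2                 # reference dynamic pressure
    cp = (np.asarray(p, dtype=float) - p_ref) / q
    return cp.mean(), cp.std(), cp.min()

# Hypothetical tap record: strong mean suction with fluctuations, of the kind
# observed near leading edges, roof corners and roof ridges.
rng = np.random.default_rng(0)
p = -600.0 + 150.0 * rng.standard_normal(2000)   # Pa, illustrative only
mean_cp, rms_cp, peak_cp = pressure_coefficients(p, p_ref=0.0, rho=1.225, U=30.0)
```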
Testing the mutual information expansion of entropy with multivariate Gaussian distributions.
Goethe, Martin; Fita, Ignacio; Rubi, J Miguel
2017-12-14
The mutual information expansion (MIE) represents an approximation of the configurational entropy in terms of low-dimensional integrals. It is frequently employed to compute entropies from simulation data of large systems, such as macromolecules, for which brute-force evaluation of the full configurational integral is intractable. Here, we test the validity of MIE for systems consisting of more than m = 100 degrees of freedom (dofs). The dofs are distributed according to multivariate Gaussian distributions which were generated from protein structures using a variant of the anisotropic network model. For the Gaussian distributions, we have semi-analytical access to the configurational entropy as well as to all contributions of MIE. This allows us to accurately assess the validity of MIE for different situations. We find that MIE diverges for systems containing long-range correlations which means that the error of consecutive MIE approximations grows with the truncation order n for all tractable n ≪ m. This fact implies severe limitations on the applicability of MIE, which are discussed in the article. For systems with correlations that decay exponentially with distance, MIE represents an asymptotic expansion of entropy, where the first successive MIE approximations approach the exact entropy, while MIE also diverges for larger orders. In this case, MIE serves as a useful entropy expansion when truncated up to a specific truncation order which depends on the correlation length of the system.
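For Gaussian distributions both the exact configurational entropy and the low-order MIE terms are closed-form in the covariance matrix, which is what gives the authors semi-analytical access to every contribution. A sketch of the order-1 and order-2 truncations (generic formulas, not the authors' code; the 3-dof covariance is a made-up example):

```python
import numpy as np

def gauss_entropy(cov):
    """Differential entropy of a multivariate Gaussian:
    0.5 * ln((2*pi*e)^m * det(cov))."""
    cov = np.atleast_2d(cov)
    m = cov.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** m * np.linalg.det(cov))

def mie(cov, order):
    """Mutual information expansion of the entropy, truncated at order 1
    (sum of marginals) or order 2 (minus pairwise mutual informations)."""
    m = cov.shape[0]
    h1 = sum(gauss_entropy(cov[i, i]) for i in range(m))
    if order == 1:
        return h1
    mi = 0.0
    for i in range(m):
        for j in range(i + 1, m):
            sub = cov[np.ix_([i, j], [i, j])]
            mi += gauss_entropy(cov[i, i]) + gauss_entropy(cov[j, j]) - gauss_entropy(sub)
    return h1 - mi

# Weakly, short-range correlated 3-dof example: MIE(2) should sit close
# to the exact entropy, consistent with the asymptotic-expansion regime.
cov = np.array([[1.0, 0.2, 0.0],
                [0.2, 1.0, 0.2],
                [0.0, 0.2, 1.0]])
exact = gauss_entropy(cov)
```

For long-range correlations the higher-order terms stop shrinking, which is the divergence the paper analyzes; this sketch only covers the tractable low orders.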
RECONSTRUCTING REDSHIFT DISTRIBUTIONS WITH CROSS-CORRELATIONS: TESTS AND AN OPTIMIZED RECIPE
Matthews, Daniel J.; Newman, Jeffrey A.
2010-01-01
Many of the cosmological tests to be performed by planned dark energy experiments will require extremely well-characterized photometric redshift measurements. Current estimates for cosmic shear are that the true mean redshift of the objects in each photo-z bin must be known to better than 0.002(1 + z), and the width of the bin must be known to ∼0.003(1 + z) if errors in cosmological measurements are not to be degraded significantly. A conventional approach is to calibrate these photometric redshifts with large sets of spectroscopic redshifts. However, at the depths probed by Stage III surveys (such as DES), let alone Stage IV (LSST, JDEM, and Euclid), existing large redshift samples have all been highly (25%-60%) incomplete, with a strong dependence of success rate on both redshift and galaxy properties. A powerful alternative approach is to exploit the clustering of galaxies to perform photometric redshift calibrations. Measuring the two-point angular cross-correlation between objects in some photometric redshift bin and objects with known spectroscopic redshift, as a function of the spectroscopic z, allows the true redshift distribution of a photometric sample to be reconstructed in detail, even if it includes objects too faint for spectroscopy or if spectroscopic samples are highly incomplete. We test this technique using mock DEEP2 Galaxy Redshift survey light cones constructed from the Millennium Simulation semi-analytic galaxy catalogs. From this realistic test, which incorporates the effects of galaxy bias evolution and cosmic variance, we find that the true redshift distribution of a photometric sample can, in fact, be determined accurately with cross-correlation techniques. We also compare the empirical error in the reconstruction of redshift distributions to previous analytic predictions, finding that additional components must be included in error budgets to match the simulation results. This extra error contribution is small for surveys that sample
Plant management tools tested with a small-scale distributed generation laboratory
Ferrari, Mario L.; Traverso, Alberto; Pascenti, Matteo; Massardo, Aristide F.
2014-01-01
Highlights: • Innovative thermal grid layouts. • Experimental rig for distributed generation. • Real-time management tool. • Experimental results for plant management. • Comparison with results from a complete optimization software. - Abstract: Optimization of power generation with smart grids is an important issue for the extensive sustainable development of distributed generation. Since an experimental approach is essential for implementing validated optimization software, the TPG research team of the University of Genoa has installed a laboratory facility for carrying out studies on polygeneration grids. The facility consists of two co-generation prime movers based on conventional technology: a 100 kWe gas turbine (mGT) and a 20 kWe internal combustion engine (ICE). The rig's high flexibility allows integration with renewable-source-based devices, such as biomass-fed boilers and solar panels. Special attention was devoted to the design of the thermal distribution grid. To ensure applicability in medium-large districts composed of several buildings, including energy users, generators or both, an innovative layout based on two ring pipes was examined. Thermal storage devices were also included in order to have a complete hardware platform suitable for assessing the performance of different management tools. The test presented in this paper was carried out with both the mGT and the ICE connected to this innovative thermal grid, while users were emulated by means of fan coolers controlled by inverters. During this test the plant is controlled by a real-time model capable of calculating a machine performance ranking, which is necessary in order to split power demands between the prime movers (with the objective of decreasing marginal cost). A complete optimization tool devised by TPG (the ECoMP program) was also used in order to obtain theoretical results considering the same machines and load values. The data obtained with ECoMP were compared with the
Patil, Riya Raghuvir
Networks of communicating agents require distributed algorithms for a variety of tasks in the field of network analysis and control. For applications such as swarms of autonomous vehicles, ad hoc and wireless sensor networks, and military and civilian applications such as exploring and patrolling, a robust autonomous system that uses a distributed algorithm for self-partitioning can be significantly helpful. A single team of autonomous vehicles in a field may need to self-disassemble into multiple teams conducive to completing multiple control tasks. Moreover, because networks of communicating agents are subject to changes, namely the addition or failure of an agent or link, a distributed or decentralized algorithm is preferable to having a central agent. A framework for studying the self-partitioning of such multi-agent systems under the most basic mobility model not only saves conception time but also gives a cost-effective prototype without compromising the physical realization of the proposed idea. In this thesis I present my work on the implementation of a flexible and distributed stochastic partitioning algorithm on the Lego Mindstorms NXT, on a graphical programming platform using National Instruments' LabVIEW, forming a team of communicating agents via the NXT-Bee radio module. We single out mobility, communication and self-partitioning as the core elements of the work. The goal is to randomly explore a precinct for reference sites. Agents who have discovered the reference sites announce their target acquisition, and a network is formed based on the distances between agents, within which the self-partitioning seeks an optimal partition. To illustrate the work, an experimental test bench of five Lego NXT robots is presented.
Jeong, Ji Hwan; Hwang, Seong Won; Choi, Kyeong Sik
2010-05-01
In this study, a 3-dimensional thermal hydraulic analysis was carried out focusing on the thermal hydraulic behavior inside the reactor pools of both KALIMER-600 and the one-fifth scale-down test facility. STAR-CD, a commercial CFD code, was used to analyze the 3-dimensional incompressible steady-state thermal hydraulic behavior of both the KALIMER-600 design and the scale-down test facility. In the KALIMER-600 CFD analysis, the pressure drops in the core and IHX showed good agreement within a 1% error range. The porous media model was found appropriate for analyzing the pressure distribution inside the reactor core and IHX. A validation analysis also showed that the pressure drop through the porous media under the condition of 80% flow rate and thermal power was calculated to be 64% less than under the 100% condition, a physically reasonable result. Since the temperatures in the hot-side and cold-side pools were estimated to be very close to the design values of 540 and 390 °C, respectively, the CFD modeling of the heat source and sink was confirmed. Through the study, a methodology for 3-dimensional CFD analysis of KALIMER-600 has been established and proven. Using this methodology, analysis data such as flow velocity, temperature and pressure distribution were compared after normalization for the full-scale and scale-down models. As a result, the characteristics of the thermal hydraulic behavior were almost identical for the full-scale and scale-down models, and the similarity scaling law used by KAERI in the design of the sodium test facility was found to be correct
Fernando E. Postigo Marcos
2017-11-01
Under the increasing penetration of distributed energy resources and new smart network technologies, distribution utilities face new challenges and opportunities to ensure reliable operations, manage service quality, and reduce operational and investment costs. Simultaneously, the research community is developing algorithms for advanced controls and distribution automation that can help address some of these challenges. However, there is a shortage of realistic test systems that are publicly available for the development, testing, and evaluation of such new algorithms. Concerns around revealing critical infrastructure details and customer privacy have severely limited the number of actual networks published and available for testing. In recent decades, several distribution test feeders and US-featured representative networks have been published, but their scale, complexity, and control data vary widely. This paper presents a first-of-a-kind structured literature review of published distribution test networks, with a special emphasis on classifying their main characteristics and identifying the types of studies for which they have been used. This both aids researchers in choosing suitable test networks for their needs and highlights opportunities and directions for further test system development. In particular, we highlight the need for building large-scale synthetic networks to overcome the identified drawbacks of current distribution test feeders.
Full scale lightning surge tests of distribution transformers and secondary systems
Goedde, G.L.; Dugan, R.C. Sr.; Rowe, L.D.
1992-01-01
This paper reports that low-side surges are known to cause failures of distribution transformers. They also subject load devices to overvoltages. A full-scale model of a residential service was set up in a laboratory and subjected to impulses approximating lightning strokes. The tests were made to determine the impulse characteristics of the secondary system and to test the validity of previous analyses. Among the variables investigated were stroke location, the balance of the surges in the service cable, and the effectiveness of arrester protection. Low-side surges were found to consist of two basic components: the natural frequency of the system and the inductive response of the system to the stroke current. The latter component is responsible for transformer failures, while the former may be responsible for discharge spots often found around secondary bushings. Arresters at the service entrance are effective in diverting most of the energy from a lightning strike, but may not protect sensitive loads; additional local protection is also needed. The tests affirmed previous simulations and uncovered additional phenomena as well
Ala-Aho, Pertti; Tetzlaff, Doerthe; McNamara, James P; Laudon, Hjalmar; Kormos, Patrick; Soulsby, Chris
2017-07-01
Use of stable water isotopes has become increasingly popular in quantifying water flow paths and travel times in hydrological systems using tracer-aided modeling. In snow-influenced catchments, snowmelt produces a traceable isotopic signal, which differs from original snowfall isotopic composition because of isotopic fractionation in the snowpack. These fractionation processes in snow are relatively well understood, but representing their spatiotemporal variability in tracer-aided studies remains a challenge. We present a novel, parsimonious modeling method to account for the snowpack isotope fractionation and estimate isotope ratios in snowmelt water in a fully spatially distributed manner. Our model introduces two calibration parameters that alone account for the isotopic fractionation caused by sublimation from interception and ground snow storage, and snowmelt fractionation progressively enriching the snowmelt runoff. The isotope routines are linked to a generic process-based snow interception-accumulation-melt model facilitating simulation of spatially distributed snowmelt runoff. We use a synthetic modeling experiment to demonstrate the functionality of the model algorithms in different landscape locations and under different canopy characteristics. We also provide a proof-of-concept model test and successfully reproduce isotopic ratios in snowmelt runoff sampled with snowmelt lysimeters in two long-term experimental catchment with contrasting winter conditions. To our knowledge, the method is the first such tool to allow estimation of the spatially distributed nature of isotopic fractionation in snowpacks and the resulting isotope ratios in snowmelt runoff. The method can thus provide a useful tool for tracer-aided modeling to better understand the integrated nature of flow, mixing, and transport processes in snow-influenced catchments.
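The progressive enrichment that the two calibration parameters account for can be illustrated with a generic Rayleigh fractionation step for the remaining snow reservoir. This is a textbook sketch under stated assumptions, not the paper's calibrated scheme; the fractionation factor and initial delta value are hypothetical:

```python
def rayleigh_delta(delta0_permil, f_remaining, alpha):
    """Delta value (permil) of the remaining reservoir after a fraction
    (1 - f_remaining) has been removed, under Rayleigh fractionation:
    delta = (delta0 + 1000) * f^(alpha - 1) - 1000."""
    return (delta0_permil + 1000.0) * f_remaining ** (alpha - 1.0) - 1000.0

# Hypothetical numbers: snowpack at -20 permil, effective alpha = 0.98 for the
# removed phase, so the residual snow enriches as melt/sublimation proceeds.
d_start = rayleigh_delta(-20.0, 1.0, 0.98)
d_half = rayleigh_delta(-20.0, 0.5, 0.98)
```

The model in the paper applies separate parameters for sublimation from the intercepted and ground snow storages and for melt-out enrichment; the curve above only shows the qualitative shape such parameters control.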
Sampling, testing and modeling particle size distribution in urban catch basins.
Garofalo, G; Carbone, M; Piro, P
2014-01-01
The study analyzed the particle size distribution of particulate matter (PM) retained in two catch basins located, respectively, near a parking lot and a traffic intersection, both with high levels of traffic activity. The treatment performance of a filter medium was also evaluated by laboratory testing. The experimental treatment results and the field data were then used as inputs to a numerical model which described, on a qualitative basis, the hydrological response of the two catchments draining into each catch basin and the quality of treatment provided by the filter during the measured rainfall. The results show that PM concentrations were on average around 300 mg/L (parking lot site) and 400 mg/L (road site) for the 10 rainfall-runoff events observed. PM with a particle diameter of ... The model showed that a catch basin with a filter unit can remove 30 to 40% of the PM load depending on the storm characteristics.
Basic distribution free identification tests for small size samples of environmental data
Federico, A.G.; Musmeci, F.
1998-01-01
Testing two or more data sets for the hypothesis that they are sampled from the same population is often required in environmental data analysis. Typically the available samples contain few data points, and the assumption of normal distributions is often unrealistic. On the other hand, the diffusion of today's powerful personal computers opens new opportunities based on massive use of CPU resources. The paper reviews the problem, introducing the feasibility of two non-parametric approaches based on intrinsic equiprobability properties of the data samples. The first is based on full resampling, while the second is based on a bootstrap approach. An easy-to-use program is presented. A case study is given, based on the Chernobyl children contamination data
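A minimal version of the resampling idea is a permutation test on the pooled data: under the null hypothesis that both samples come from the same population, every relabeling of the pooled values is equiprobable, which is exactly the distribution-free property the paper exploits. A sketch (the two small samples are hypothetical, not the Chernobyl data):

```python
import random

def permutation_pvalue(x, y, n_resamples=10000, seed=1):
    """Distribution-free two-sample test: p-value for the observed absolute
    difference of means under random relabeling of the pooled data."""
    rng = random.Random(seed)
    pooled = list(x) + list(y)
    nx = len(x)
    observed = abs(sum(x) / nx - sum(y) / len(y))
    hits = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)
        rx, ry = pooled[:nx], pooled[nx:]
        if abs(sum(rx) / nx - sum(ry) / len(ry)) >= observed:
            hits += 1
    return (hits + 1) / (n_resamples + 1)   # add-one to avoid a zero p-value

# Hypothetical small environmental samples with a clear shift in level.
a = [3.1, 2.8, 3.4, 3.0, 2.9]
b = [3.9, 4.1, 3.8, 4.0, 4.2]
p = permutation_pvalue(a, b)
```

With samples this small the full set of relabelings could even be enumerated exactly, which is the "full resampling" variant; the shuffle loop is the Monte Carlo shortcut.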
Using an Integrated Distributed Test Architecture to Develop an Architecture for Mars
Othon, William L.
2016-01-01
The creation of a crew-rated spacecraft architecture capable of sending humans to Mars requires the development and integration of multiple vehicle systems and subsystems. Important new technologies will be identified and matured within each technical discipline to support the mission. Architecture maturity also requires coordination with mission operations elements and ground infrastructure. During early architecture formulation, many of these assets will not be co-located and will require integrated, distributed tests to show that the technologies and systems are being developed in a coordinated way. When complete, technologies must be shown to function together to achieve mission goals. In this presentation, an architecture will be described that promotes and advances the integration of disparate systems within JSC and across NASA centers.
Multiplicity distributions of gluon and quark jets and a test of QCD analytic calculations
Gary, J. William
1999-01-01
Gluon jets are identified in e+e- hadronic annihilation events by tagging two quark jets in the same hemisphere of an event. The gluon jet is defined inclusively as all the particles in the opposite hemisphere. Gluon jets defined in this manner correspond closely to gluon jets as they are defined for analytic calculations, and are almost independent of a jet finding algorithm. The mean and first few higher moments of the gluon jet charged particle multiplicity distribution are compared to the analogous results found for light quark (uds) jets, also defined inclusively. Large differences are observed between the mean, skew and kurtosis values of the gluon and quark jets, but not between their dispersions. The cumulant factorial moments of the distributions are also measured, and are used to test the predictions of QCD analytic calculations. A calculation which includes next-to-next-to-leading order corrections and energy conservation is observed to provide a much improved description of the separated gluon and quark jet cumulant moments compared to a next-to-leading order calculation without energy conservation. There is good quantitative agreement between the data and calculations for the ratios of the cumulant moments between gluon and quark jets. The data sample used is the LEP-1 sample of the OPAL experiment at LEP
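The low-order quantities compared in the paper (mean, dispersion, skew, kurtosis, and normalized factorial moments F_q = <n(n-1)...(n-q+1)>/<n>^q) can be estimated directly from an event-by-event multiplicity sample. A generic sketch with standard estimators, not the paper's cumulant machinery; a Poisson sample serves as a sanity check since Poisson multiplicities have F_q = 1 for all q:

```python
import numpy as np

def multiplicity_moments(counts):
    """Mean, dispersion, skewness, excess kurtosis and the normalized
    factorial moments F_2, F_3 of a charged-multiplicity sample."""
    n = np.asarray(counts, dtype=float)
    mean = n.mean()
    disp = n.std()
    skew = ((n - mean) ** 3).mean() / disp ** 3
    kurt = ((n - mean) ** 4).mean() / disp ** 4 - 3.0
    F = []
    for q in (2, 3):
        fall = np.ones_like(n)
        for k in range(q):
            fall = fall * (n - k)       # falling factorial n(n-1)...(n-q+1)
        F.append(fall.mean() / mean ** q)
    return mean, disp, skew, kurt, F

# Sanity check on a large Poisson(10) sample of simulated event multiplicities.
rng = np.random.default_rng(2)
sample = rng.poisson(10.0, 200000)
mean, disp, skew, kurt, F = multiplicity_moments(sample)
```

The cumulant factorial moments used for the QCD comparison are combinations of these F_q; the estimators above cover only the raw ingredients.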
Multiplicity distributions of gluon and quark jets and a test of QCD analytic calculations
Gary, J. William
1999-03-01
Gluon jets are identified in e{sup +}e{sup -} hadronic annihilation events by tagging two quark jets in the same hemisphere of an event. The gluon jet is defined inclusively as all the particles in the opposite hemisphere. Gluon jets defined in this manner have a close correspondence to gluon jets as they are defined for analytic calculations, and are almost independent of a jet finding algorithm. The mean and first few higher moments of the gluon jet charged particle multiplicity distribution are compared to the analogous results found for light quark (uds) jets, also defined inclusively. Large differences are observed between the mean, skew and curtosis values of the gluon and quark jets, but not between their dispersions. The cumulant factorial moments of the distributions are also measured, and are used to test the predictions of QCD analytic calculations. A calculation which includes next-to-next-to-leading order corrections and energy conservation is observed to provide a much improved description of the separated gluon and quark jet cumulant moments compared to a next-to-leading order calculation without energy conservation. There is good quantitative agreement between the data and calculations for the ratios of the cumulant moments between gluon and quark jets. The data sample used is the LEP-1 sample of the OPAL experiment at LEP.
Distribution of the Chuckwalla, Western Burrowing Owl, and Six Bat Species on the Nevada Test Site
Cathy A. Willis
1997-05-01
Field surveys were conducted in 1996 to determine the current distribution of several animal species of concern on the Nevada Test Site (NTS). They included the chuckwalla (Sauromalus obesus), western burrowing owl (Speotyto cunicularia), and six species of bats. Nineteen chuckwallas and 118 scat locations were found during the chuckwalla field study. Eighteen western burrowing owls were found at 12 sighting locations during the 1996 field study. Of the eleven bat species of concern which might occur on the NTS, five, and possibly six, were captured during this survey. The U.S. Department of Energy, Nevada Operations Office, takes certain management actions to protect and conserve the chuckwalla, western burrowing owl, and bats on the NTS. These actions are described and include: (1) conducting surveys at sites of proposed land-disturbing activities; (2) altering projects whenever possible to avoid or minimize impacts to these species; (3) maintaining a geospatial database of known habitat for species of concern; (4) sharing sighting and trap location data gathered on the NTS with other local land and resource managers; and (5) conducting periodic field surveys to monitor these species' distribution and relative abundance on the NTS.
Beatley, J C
1965-04-01
A checklist of vascular plants of the Nevada Test Site is presented for use in studies of plant ecology. Data on the occurrence and distribution of plant species are included. Collections were made from both undisturbed and disturbed sites.
Comparison of measured and calculated reaction rate distributions in an SCWR-like test lattice
Raetz, Dominik, E-mail: dominik.raetz@psi.ch [Paul Scherrer Institut, CH-5232 Villigen PSI (Switzerland); Jordan, Kelly A., E-mail: kelly.jordan@psi.ch [Paul Scherrer Institut, CH-5232 Villigen PSI (Switzerland); Murphy, Michael F., E-mail: mike.murphy@psi.ch [Paul Scherrer Institut, CH-5232 Villigen PSI (Switzerland); Perret, Gregory, E-mail: gregory.perret@psi.ch [Paul Scherrer Institut, CH-5232 Villigen PSI (Switzerland); Chawla, Rakesh, E-mail: rakesh.chawla@psi.ch [Paul Scherrer Institut, CH-5232 Villigen PSI (Switzerland); Ecole Polytechnique Federale de Lausanne (EPFL), CH-1015 Lausanne, EPFL (Switzerland)
2011-04-15
High resolution gamma-ray spectroscopy measurements were performed on 61 rods of an SCWR-like fuel lattice, after irradiation in the central test zone of the PROTEUS zero-power research reactor at the Paul Scherrer Institute in Switzerland. The derived reaction rates are the capture rate in ²³⁸U (C8) and the total fission rate (F_tot), as well as the reaction rate ratio C8/F_tot. Each of these has been mapped rod-wise on the lattice and compared to calculated results from whole-reactor Monte Carlo simulations with MCNPX. Ratios of calculated to experimental values (C/E's) have been assessed for the C8, F_tot and C8/F_tot distributions across the lattice. These C/E's show excellent agreement between the calculations and the measurements. For the ²³⁸U capture rate distribution, the 1σ level in the comparisons corresponds to an uncertainty of ±0.8%, while for the total fission rate the corresponding value is ±0.4%. The uncertainty for C8/F_tot, assessed as a reaction rate ratio characterizing each individual rod position in the test lattice, is significantly higher at ±2.2%. To determine the reproducibility of these results, the measurements were performed twice, once in 2006 and again in 2009. The agreement between these two measurement sets is within the respective statistical uncertainties.
Subekti, Muhammad; Ohno, Tomio; Kudo, Kazuhiko; Takamatsu, Kuniyoshi; Nabeshima, Kunihiko
2005-01-01
A new monitoring system scheme based on a distributed architecture is proposed for the High Temperature Engineering Test Reactor (HTTR) to ensure consistent real-time processing in the expanded system. Distributing the monitoring tasks across client PCs, as an alternative architecture, maximizes the throughput and capabilities of the system even if the monitoring tasks suffer a shortage of bandwidth. The prototype of the on-line monitoring system has been developed successfully and will be tested at the actual HTTR site. (author)
Sikora, W; Chodura, J [Politechnika Sladska, Gliwice (Poland). Instytut Mechanizacji Gornictwa
1989-01-01
Evaluates a method for forecasting size distribution of black coal mined by shearer loaders in one coal seam. Laboratory tests for determining coal comminution during cutting and haulage along the face are analyzed. Methods for forecasting grain size distribution of coal under operational conditions using formulae developed on the basis of laboratory tests are discussed. Recommendations for design of a test stand and test conditions are discussed. A laboratory stand should accurately model operational conditions of coal cutting, especially dimensions of the individual elements of the shearer loader, geometry of the cutting drum and cutting tools, and strength characteristics of the coal seam. 9 refs.
Nadim Nachar
2008-03-01
It is often difficult, particularly when conducting research in psychology, to have access to large normally distributed samples. Fortunately, there are statistical tests for comparing two independent groups that do not require large normally distributed samples. The Mann-Whitney U is one of these tests. In the following work, a summary of this test is presented, together with an explanation of the logic underlying it and its application. Moreover, the strengths and weaknesses of the Mann-Whitney U are discussed. One major limitation of the Mann-Whitney U is that the type I error rate, alpha (α), is inflated in a situation of heteroscedasticity.
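As an illustrative sketch of the test summarized above (not code from the paper; the helper name is hypothetical), the U statistic for two small independent samples can be computed directly by counting pairwise comparisons:

```python
from itertools import product

def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for two independent samples.

    Over all pairs (xi, yj), count how often xi > yj, crediting
    0.5 for ties; the reported U is the smaller of U_x and U_y.
    Small U values suggest the two groups differ in location.
    """
    u_x = sum(1.0 if xi > yj else 0.5 if xi == yj else 0.0
              for xi, yj in product(x, y))
    u_y = len(x) * len(y) - u_x  # the two counts always sum to n_x * n_y
    return min(u_x, u_y)
```

The statistic itself makes no normality assumption, which is why the test suits the small, skewed samples the abstract describes; a p-value would still require the U null distribution or a normal approximation.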
Morishita, Yuki; Yoshioka, Yasuo; Satoh, Hiroyoshi; Nojiri, Nao; Nagano, Kazuya; Abe, Yasuhiro; Kamada, Haruhiko; Tsunoda, Shin-ichi; Nabeshi, Hiromi; Yoshikawa, Tomoaki; Tsutsumi, Yasuo
2012-01-01
Highlights: ► There is rising concern regarding the potential health risks of nanomaterials. ► Few studies have investigated the effect of nanomaterials on the reproductive system. ► Here, we evaluated the intra-testicular distribution of nanosilica particles. ► We showed that nanosilica particles can penetrate the blood-testis barrier. ► These data provide basic information on ways to create safer nanomaterials. -- Abstract: Amorphous nanosilica particles (nSP) are being utilized in an increasing number of applications such as medicine, cosmetics, and foods. The reduction of the particle size to the nanoscale not only provides benefits to diverse scientific fields but also poses potential risks. Several reports have described the in vivo and in vitro toxicity of nSP, but few studies have examined their effects on the male reproductive system. The aim of this study was to evaluate the testicular distribution and histologic effects of systemically administered nSP. Mice were injected intravenously with nSP with diameters of 70 nm (nSP70) or conventional microsilica particles with diameters of 300 nm (nSP300) on two consecutive days. The intratesticular distribution of these particles 24 h after the second injection was analyzed by transmission electron microscopy. nSP70 were detected within Sertoli cells and spermatocytes, including in the nuclei of spermatocytes. No nSP300 were observed in the testis. Next, mice were injected intravenously with 0.4 or 0.8 mg nSP70 every other day for a total of four administrations. Testes were harvested 48 h and 1 week after the last injection and stained with hematoxylin–eosin for histologic analysis. Histologic findings in the testes of nSP70-treated mice did not differ from those of control mice. Taken together, our results suggest that nSP70 can penetrate the blood-testis barrier and the nuclear membranes of spermatocytes without producing apparent testicular injury.
Reexamination of shell model tests of the Porter-Thomas distribution
Grimes, S.M.
1983-01-01
Recent shell model calculations have yielded width amplitude distributions which have apparently not agreed with the Porter-Thomas distribution. This result conflicts with the present experimental evidence. A reanalysis of these calculations suggests that, although correct, they do not imply that the Porter-Thomas distribution will fail to describe the width distributions observed experimentally. The conditions for validity of the Porter-Thomas distribution are discussed
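The Porter-Thomas distribution discussed above is the chi-squared distribution with one degree of freedom for reduced widths, which follows from taking the width amplitudes as zero-mean Gaussian. A minimal sampling sketch (an illustration of that relationship, not the shell-model calculation itself; the function name is hypothetical):

```python
import random

def porter_thomas_sample(n, mean_width=1.0, seed=0):
    """Draw n Porter-Thomas widths.

    Each reduced width amplitude is sampled as a zero-mean Gaussian,
    so the width (amplitude squared) follows a chi-squared
    distribution with one degree of freedom, scaled so the widths
    average to mean_width. Small widths dominate; a few are large.
    """
    rng = random.Random(seed)
    return [mean_width * rng.gauss(0.0, 1.0) ** 2 for _ in range(n)]
```

For chi-squared with one degree of freedom, roughly 68% of the scaled widths fall below the mean, which is the strongly peaked-at-zero shape that experimental width analyses test against.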
New aspects in distribution of population dose loads in Semipalatinsk Nuclear Test Site region
Hill, P.; Pivovarov, S.; Rukhin, A.; Seredavina, T.; Sushkova, N.
2008-01-01
Full text: The question of the dose loads received by the population of the Semipalatinsk Nuclear Test Site (SNTS) region is still not fully resolved. Estimates of the doses received by people in the settlements nearest to the SNTS differ considerably, which may be explained by the absence of individual dosimeters during and after the nuclear weapon tests, as well as by the many different pathways of radiation exposure. Over the past several years we have estimated population dose loads by the electron paramagnetic resonance (EPR) tooth-enamel dosimetry method, one of the best and most reliable methods for retrospective dosimetry. Tooth enamel was studied from people of the settlements Dolon, Bodene, Cheremushki and Mostik, which were irradiated mainly by the first atomic explosion in 1949; from the settlement Sarjal, irradiated by the first thermonuclear explosion in 1953; and from the control settlement Maysk, which is sited close to the SNTS but received no radioactive traces owing to the east wind. The results display an unexpected and rather surprising picture: in all settlements, including the control settlement Maysk, the dose-load distributions were rather similar, showing a pronounced bimodal form with rather high doses in the second mode. Possible reasons for this situation are discussed. The results obtained are compared with recent estimates of population dose loads in the Semipalatinsk region, which were discussed in detail at international symposia in Hiroshima (Japan, 2005) and Bethesda (MD, USA, 2006). (author)
Integration of the SSPM and STAGE with the MPACT Virtual Facility Distributed Test Bed.
Cipiti, Benjamin B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Shoman, Nathan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-08-01
The Material Protection Accounting and Control Technologies (MPACT) program within DOE NE is working toward a 2020 milestone to demonstrate a Virtual Facility Distributed Test Bed. The goal of the Virtual Test Bed is to link all MPACT modeling tools, technology development, and experimental work to create a Safeguards and Security by Design capability for fuel cycle facilities. The Separation and Safeguards Performance Model (SSPM) forms the core safeguards analysis tool, and the Scenario Toolkit and Generation Environment (STAGE) code forms the core physical security tool. These models are used to design and analyze safeguards and security systems and generate performance metrics. Work over the past year has focused on how these models will integrate with the other capabilities in the MPACT program and specific model changes to enable more streamlined integration in the future. This report describes the model changes and plans for how the models will be used more collaboratively. The Virtual Facility is not designed to integrate all capabilities into one master code, but rather to maintain stand-alone capabilities that communicate results between codes more effectively.
Foray, G.; Descamps-Mandine, A.; R’Mili, M.; Lamon, J.
2012-01-01
The present paper investigates glass fibre flaw size distributions. Two commercial fibre grades (HP and HD) mainly used in cement-based composite reinforcement were studied. Glass fibre fractography is a difficult and time consuming exercise, and thus is seldom carried out. An approach based on tensile tests on multifilament bundles and examination of the fibre surface by atomic force microscopy (AFM) was used. Bundles of more than 500 single filaments each were tested. Thus a statistically significant database of failure data was built up for the HP and HD glass fibres. Gaussian flaw distributions were derived from the filament tensile strength data or extracted from the AFM images. The two distributions were compared. Defect sizes computed from raw AFM images agreed reasonably well with those derived from tensile strength data. Finally, the pertinence of a Gaussian distribution was discussed. The alternative Pareto distribution provided a fair approximation when dealing with AFM flaw size.
Basic distribution free identification tests for small size samples of environmental data
Federico, A.G.; Musmeci, F. [ENEA, Centro Ricerche Casaccia, Rome (Italy). Dipt. Ambiente
1998-01-01
Testing two or more data sets for the hypothesis that they are sampled from the same population is often required in environmental data analysis. Typically the available samples contain few data points, and the assumption of normal distributions is often unrealistic. On the other hand, the spread of today's powerful personal computers opens new opportunities based on massive use of CPU resources. The paper reviews the problem and introduces two feasible non-parametric approaches based on intrinsic equiprobability properties of the data samples. The first is based on full resampling, while the second is based on a bootstrap approach. An easy-to-use program is presented, together with a case study based on the Chernobyl children contamination data.
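The bootstrap approach mentioned above can be sketched for two small samples as follows (a hypothetical illustration under the null hypothesis that both samples come from one population; the paper's actual program and statistics are not reproduced here):

```python
import random

def bootstrap_mean_diff_test(a, b, n_boot=2000, seed=0):
    """Approximate bootstrap p-value for equal means of two samples.

    Under the null hypothesis the samples come from the same
    population, so both are resampled (with replacement) from the
    pooled data. The p-value is the fraction of resamples whose
    mean difference is at least as extreme as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    extreme = 0
    for _ in range(n_boot):
        ra = [rng.choice(pooled) for _ in a]  # resample of size len(a)
        rb = [rng.choice(pooled) for _ in b]  # resample of size len(b)
        if abs(sum(ra) / len(ra) - sum(rb) / len(rb)) >= observed:
            extreme += 1
    return extreme / n_boot
```

No distributional assumption is made, which is the point of such tests for small, non-normal environmental samples; only the exchangeability of the pooled observations is assumed.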
Ercan, İlke; Suyabatmaz, Enes
2018-06-01
The saturation in the efficiency and performance scaling of conventional electronic technologies brings about the development of novel computational paradigms. Brownian circuits are among the promising alternatives that can exploit fluctuations to increase the efficiency of information processing in nanocomputing. A Brownian cellular automaton, where signals propagate randomly and are driven by local transition rules, can be made computationally universal by embedding arbitrary asynchronous circuits on it. One of the potential realizations of such circuits is via single electron tunneling (SET) devices, since SET technology enables the simulation of noise and fluctuations in a fashion similar to Brownian search. In this paper, we perform a physical-information-theoretic analysis of the efficiency limitations in Brownian NAND and half-adder circuits implemented using SET technology. The method we employ here establishes a solid ground for studying computational and physical features of this emerging technology on an equal footing, and yields fundamental lower bounds that provide valuable insights into how far its efficiency can be improved in principle. In order to provide a basis for comparison, we also analyze a NAND gate and half-adder circuit implemented in complementary metal oxide semiconductor technology to show how the fundamental bound of the Brownian circuit compares against a conventional paradigm.
Thompson, William L.; Lee, Danny C.
2000-11-01
Many anadromous salmonid stocks in the Pacific Northwest are at their lowest recorded levels, which has raised questions regarding their long-term persistence under current conditions. There are a number of factors, such as freshwater spawning and rearing habitat, that could potentially influence their numbers. Therefore, we used the latest advances in information-theoretic methods in a two-stage modeling process to investigate relationships between landscape-level habitat attributes and maximum recruitment of 25 index stocks of chinook salmon (Oncorhynchus tshawytscha) in the Columbia River basin. Our first-stage model selection results indicated that the Ricker-type stock recruitment model with a constant Ricker a (i.e., recruits-per-spawner at low numbers of fish) across stocks was the only plausible one given these data, which contrasted with previous unpublished findings. Our second-stage results revealed that maximum recruitment of chinook salmon had a strongly negative relationship with the percentage of surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and private moderate-high impact managed forest. That is, our model predicted that average maximum recruitment of chinook salmon would decrease by at least 247 fish for every increase of 33% in surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and privately managed forest. Conversely, mean annual air temperature had a positive relationship with salmon maximum recruitment, with an average increase of at least 179 fish for every 2 °C increase in mean annual air temperature.
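The Ricker-type stock-recruitment relationship referenced above has the standard form R = a·S·exp(−b·S), where a is recruits-per-spawner at low stock size and b sets the density dependence. A minimal sketch (the parameter values below are illustrative only, not fitted values from the study):

```python
import math

def ricker_recruits(spawners, a, b):
    """Ricker stock-recruitment curve: R = a * S * exp(-b * S).

    'a' is recruits-per-spawner at low spawner numbers S; 'b'
    controls density dependence, with peak recruitment at S = 1/b.
    """
    return a * spawners * math.exp(-b * spawners)
```

The curve rises roughly linearly (slope a) at low stock size and is suppressed at high stock size, which is why a shared a across stocks, as in the paper's first-stage model, constrains only the low-density behavior.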
2013-12-03
... FEDERAL DEPOSIT INSURANCE CORPORATION 12 CFR Part 325 Policy Statement on the Principles for... stress test horizon. The variables specified for each scenario generally address economic activity, asset..., 2012, that articulated the principles the FDIC will apply to develop and distribute the stress test...
Wei, Xiaojing; Savage, Jessica A; Riggs, Charlotte E; Cavender-Bares, Jeannine
2017-05-01
Environmental filtering is an important community assembly process influencing species distributions. Contrasting species abundance patterns along environmental gradients are commonly used to provide evidence for environmental filtering. However, the same abundance patterns may result from alternative or concurrent assembly processes. Experimental tests are an important means to decipher whether species fitness varies with environment, in the absence of dispersal constraints and biotic interactions, and to draw conclusions about the importance of environmental filtering in community assembly. We performed an experimental test of environmental filtering in 14 closely related willow and poplar species (family Salicaceae) by transplanting cuttings of each species into 40 common gardens established along a natural hydrologic gradient in the field, where competition was minimized and herbivory was controlled. We analyzed species fitness responses to the hydrologic environment based on cumulative growth and survival over two years using aster fitness models. We also examined variation in nine drought and flooding tolerance traits expected to contribute to performance based on a priori understanding of plant function in relation to water availability and stress. We found substantial evidence that environmental filtering along the hydrologic gradient played a critical role in determining species distributions. Fitness variation of each species in the field experiment was used to model their water table depth optima. These optima predicted 68% of the variation in species realized hydrologic niches based on peak abundance in naturally assembled communities in the surrounding region. Multiple traits associated with water transport efficiency and water stress tolerance were correlated with species hydrologic niches, but they did not necessarily covary with each other. As a consequence, species occupying similar hydrologic niches had different combinations of trait values
Shi, Jing; Ausloos, Marcel; Zhu, Tingting
2018-02-01
We discuss a common suspicion about reported financial data in 10 industrial sectors of the 6 so-called "main developing countries" over the time interval [2000-2014]. These data are examined through Benford's law first-significant-digit test and through distribution-distance tests. It is shown that several visually anomalous data points have to be removed a priori. Thereafter, the distributions follow the first-significant-digit law much better, indicating the usefulness of a Benford's law test from the research starting line. The same holds true for the distance tests. A few outliers are pointed out.
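Benford's first-significant-digit law predicts P(d) = log10(1 + 1/d) for leading digit d. A small sketch of such a first-digit comparison (not the authors' code; the chi-square helper and digit extraction are illustrative only):

```python
import math
from collections import Counter

def benford_expected(d):
    """Benford's law: probability that the first significant digit is d."""
    return math.log10(1 + 1 / d)

def first_digit_counts(values):
    """Count leading significant digits of nonzero numbers.

    Scientific notation puts the first significant digit first,
    e.g. 123 -> '1.230000e+02' -> 1. (Rounding at the 9.999...
    boundary is ignored in this sketch.)
    """
    return Counter(int(f"{abs(v):e}"[0]) for v in values if v != 0)

def chi_square_stat(values):
    """Chi-square distance between observed first-digit counts
    and the counts expected under Benford's law."""
    counts = first_digit_counts(values)
    n = sum(counts.values())
    return sum((counts.get(d, 0) - n * benford_expected(d)) ** 2
               / (n * benford_expected(d)) for d in range(1, 10))
```

Large chi-square values flag data whose leading digits depart from the Benford pattern, which is the screening idea the abstract applies before the distance tests.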
Westinghouse-GOTHIC distributed parameter modelling for HDR test E11.2
Narula, J.S.; Woodcock, J.
1994-01-01
The Westinghouse-GOTHIC (WGOTHIC) code is a sophisticated mathematical computer code designed specifically for the thermal hydraulic analysis of nuclear power plant containment and auxiliary buildings. The code is capable of sophisticated flow analysis via the solution of mass, momentum, and energy conservation equations. Westinghouse has investigated the use of subdivided noding to model the flow patterns of hydrogen following its release into a containment atmosphere. For the investigation, several simple models were constructed to represent a scale similar to the German HDR containment. The calculational models were simplified to test the basic capability of the plume modeling methods to predict stratification while minimizing the number of parameters. A large empty volume was modeled, with the same volume and height as HDR. A scenario was selected that would be expected to stratify stably, and the effects of noding on the prediction of stratification were studied. A single-phase hot gas was injected into the volume at a height similar to that of HDR test E11.2, and no heat sinks were modeled. Helium was released into the calculational models, and the resulting flow patterns were judged relative to the expected results. For each model, only the number of subdivisions within the containment volume was varied. The investigation of noding schemes has provided evidence of the capability of subdivided (distributed parameter) noding. The results also showed that highly inaccurate flow patterns could be obtained by using an insufficient number of subdivided nodes. This presents a significant challenge to the containment analyst, who must weigh the benefits of increased noding against the penalties the noding may incur on computational efficiency. Clearly, however, an incorrect noding choice may yield erroneous results even if great care has been taken in modeling accurately all other characteristics of containments. (author). 9 refs., 9 figs
Tygert, Mark
2010-09-21
We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
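The cumulative-distribution comparison underlying the Kolmogorov-Smirnov test mentioned above can be sketched as follows (a plain one-sample KS statistic for illustration; Kuiper's variant and the paper's density-based tests are not shown):

```python
def ks_statistic(draws, cdf):
    """One-sample Kolmogorov-Smirnov statistic.

    The largest vertical gap between the empirical CDF of the
    draws and a specified CDF. The empirical CDF jumps by 1/n at
    each sorted draw, so both the value just before and just after
    each jump must be checked.
    """
    xs = sorted(draws)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        d = max(d, abs((i + 1) / n - f), abs(f - i / n))
    return d
```

Because the statistic looks only at the cumulative function, a narrow region of anomalously low density can leave it nearly unchanged, which is exactly the deficiency the abstract's proposed tests are meant to address.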
Mavin, S; Watson, E J; Evans, R
2015-01-01
This study examines the distribution of laboratory-confirmed cases of Lyme borreliosis in Scotland and the clinical spectrum of presentations within NHS Highland. Methods: General demographic data (age/sex/referring Health Board) from all cases of Lyme borreliosis serologically confirmed by the National Lyme Borreliosis Testing Laboratory from 1 January 2008 to 31 December 2013 were analysed. Clinical features of confirmed cases were ascertained from questionnaires sent to referring clinicians within NHS Highland during the study period. Results: The number of laboratory-confirmed cases of Lyme borreliosis in Scotland peaked at 440 in 2010. From 2008 to 2013 the estimated average annual incidence was 6.8 per 100,000 (44.1 per 100,000 in NHS Highland). Of 594 questionnaires from NHS Highland patients, 76% had clinically confirmed Lyme borreliosis; 48% had erythema migrans; and 17% had rash, 25% joint, 15% neurological and 1% cardiac symptoms. Only 61% could recall a tick bite. Conclusion: The incidence of Lyme borreliosis may be stabilising in Scotland, but NHS Highland remains an area of high incidence. Lyme borreliosis should be considered in symptomatic patients who have had exposure to ticks, not just those with a definite tick bite.
The SimPort-based simulators and distributed control systems development, upgrade and testing
Danilov, Victor A.; Yanushevich, Dmitry I.; Zenkov, Andrey D.
2004-01-01
This paper describes experience in applying the SimPort computer technology to the development of power plant simulators and Distributed Control System (DCS) emulators. The paper presents the basic characteristics of SimPort, a Windows NT-based implementation of the AIS technology for mathematical modeling and simulator creation developed at RRC 'Kurchatov Institute'. New approaches to, and benefits of, using simulation for DCS development and, especially, upgrade, testing and on-plant tuning are briefly presented. DCS-inclusive power plant simulators based on SimPort are also considered. The principal problems of DCS emulation, their solutions, and an experience-based methodology for DCS emulator development are presented as well. The quality and efficiency of the presented technology, as well as the DCS emulation solutions, were verified by the lowest development man-hours among full-scope power plant computer simulators (at least a factor of two lower than for similar simulators), combined with the simulators' conformance to standards and the high quality expected of nuclear power plant simulators. (Author)
Millard, J.B.
1986-01-01
Radioactive leaching ponds adjacent to the Test Reactor Area (TRA) located on the Idaho National Engineering Laboratory (INEL) site were investigated to determine the seasonal distribution and ecological behavior of gamma-emitting radionuclides in various pond compartments. The physical, chemical and biological properties of the TRA ponds were documented, including basic morphometry, water chemistry and species identification. Penetrating radiation exposure rates at the ponds ranged from 35 to 65 mR/d at the water surface and up to 3400 mR/d one meter above bottom sediments. Seasonal concentrations and concentration ratios were determined for 16 principal radionuclides in filtered water, sediment, seston, zooplankton, net plankton, nannoplankton, periphyton, macrophytes, thistle, speedwell and willow. Seston and nannoplankton had the highest concentration ratios, with substantial decreases observed for higher trophic level compartments. Significant (P < 0.01 to P < 0.001) seasonal effects were found for concentration ratios. Radionuclides without nutrient analogs had the highest ratios in spring for periphyton, macrophytes and littoral plants. Concentration ratios were highest in summer, fall or winter for radionuclides with nutrient analogs
Distributed medical services within the ATM-based Berlin regional test bed
Thiel, Andreas; Bernarding, Johannes; Krauss, Manfred; Schulz, Sandra; Tolxdorff, Thomas
1996-05-01
The ATM-based Metropolitan Area Network (MAN) of Berlin connects two university hospitals (Benjamin Franklin University Hospital and Charite) with the computer resources of the Technical University of Berlin (TUB). Distributed new medical services have been implemented and will be evaluated within the high-speed MAN of Berlin. The network, with its data transmission rates of up to 155 Mbit/s, renders these medical services externally available to practicing physicians. Resource and application sharing is demonstrated by the use of two software systems. The first is an interactive 3D reconstruction tool (3D-Medbild), based on a client-server mechanism. This structure allows the use of high-performance computers at the TUB from the low-level workstations in the hospitals. A second software system, RAMSES, utilizes a tissue database of Magnetic Resonance Images. For remote control of the software, the developed applications use standards such as DICOM 3.0 and features of the World Wide Web. Data security concepts are being tested and integrated for the needs of the sensitive medical data. The high-speed network is the necessary prerequisite for the clinical evaluation of data in a joint teleconference. The transmission of digitized real-time sequences such as video and ultrasound and the interactive manipulation of data are made possible by multimedia tools.
Conradsen, Knut; Nielsen, Allan Aasbjerg; Schou, Jesper
2003-01-01
[…] Based on this distribution, a test statistic for equality of two such matrices and an associated asymptotic probability for obtaining a smaller value of the test statistic are derived and applied successfully to change detection in polarimetric SAR data. In a case study, EMISAR L-band data from April 17 […] to HH, VV, or HV data alone, the derived test statistic reduces to the well-known gamma likelihood-ratio test statistic. The derived test statistic and the associated significance value can also be applied as a line or edge detector in fully polarimetric SAR data.
2003-04-01
This report describes RealEnergy's evolving distributed generation command and control system, called the "Distributed Energy Information System" (DEIS). This system uses algorithms to determine how to operate distributed generation systems efficiently and profitably. The report describes the system and RealEnergy's experiences in installing and applying the system to manage distributed generators for commercial building applications. The report is divided into six tasks. The first five describe the DEIS; the sixth describes RealEnergy's regulatory and contractual obligations: Task 1: Define Information and Communications Requirements; Task 2: Develop Command and Control Algorithms for Optimal Dispatch; Task 3: Develop Codes and Modules for Optimal Dispatch Algorithms; Task 4: Test Codes Using Simulated Data; Task 5: Install and Test Energy Management Software; Task 6: Contractual and Regulatory Issues.
Park, Hyeonwoo; Teramoto, Akinobu; Kuroda, Rihito; Suwa, Tomoyuki; Sugawa, Shigetoshi
2018-04-01
Localized stress-induced leakage current (SILC) has become a major problem for the reliability of flash memories. Clarifying the SILC mechanism is important for reducing it, and this requires statistical measurement and analysis. In this study, we applied an array test circuit that can measure the SILC distribution of more than 80,000 nMOSFETs with various gate areas at high speed (within 80 s) and high accuracy (on the order of 10⁻¹⁷ A). The results clarified that the distributions of localized SILC for different gate areas follow a universal distribution assuming the same SILC defect density distribution per unit area, and that the current of localized SILC defects does not scale down with the gate area. Moreover, the distribution of SILC defect density and its dependence on the oxide field for measurement (E_OX-Measure) were experimentally determined for the fabricated devices.
Cooke, R.M.
1989-01-01
Comments are made on papers by Bari and Park ('Uncertainty Characterization of Data for Probabilistic Risk Assessment') and Unwin et al. ('An Information-Theoretic Basis for Uncertainty Analysis'). These comments raise issues with the work expounded. The main one is that entropy is not a proper measure of uncertainty, especially for a continuous variable, mainly because the entropy of a continuous distribution is not invariant under a change of variable and can be negative. The use of expert opinion and the method proposed for evaluating uncertainty are discussed. One comment makes three points: first, good elicitation practice is more useful than a few constraints on probability distributions; second, the maximum entropy approach can be a useful addition to good elicitation practice; third, the maximum entropy approach is misused. (UK)
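The objection that differential entropy is not invariant and can be negative is easy to demonstrate with a closed form: for a uniform density of width w, h = log(w). The helper name below is ours, chosen for illustration.

```python
import numpy as np

# Differential entropy of a uniform density on an interval of width w is
# h = log(w): negative whenever w < 1, and rescaling the variable by c
# shifts it by log(c) -- so the "uncertainty" depends on the units chosen.

def uniform_entropy(width):
    return np.log(width)

h_wide = uniform_entropy(2.0)    # uniform on [0, 2]:   h = log 2  > 0
h_narrow = uniform_entropy(0.5)  # uniform on [0, 0.5]: h = -log 2 < 0

# the second density is the first one rescaled by c = 0.25,
# and the entropies differ by exactly log(0.25)
assert np.isclose(h_narrow, h_wide + np.log(0.25))
print(h_wide, h_narrow)
```

A discrete entropy is always nonnegative and unit-free; the comment's point is that neither property survives the passage to continuous distributions.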
Ko, Y.J.; Euh, D.J.; Youn, Y.J.; Chu, I.C.; Kwon, T.S.
2011-01-01
A design of the SMART reactor has been developed, whose primary system is composed of four internal circulation pumps, a core of 57 fuel assemblies, eight cassettes of steam generators, flow mixing head assemblies, and other internal structures. Since the primary design features are very different from those of conventional reactors, the characteristics of flow and pressure distribution are expected to differ accordingly. In order to analyze the thermal margin and hydraulic design characteristics of the SMART reactor, design quantification tests for flow and pressure distribution with preservation of the flow geometry are necessary. In the present study, the design features of the test facility built to investigate flow and pressure distribution, named 'SCOP', are described. In order to preserve the flow distribution characteristics, the SCOP is linearly reduced with a scaling ratio of 1/5. The core flow rate of each fuel assembly is measured by a venturi meter attached in the lower part of the core simulator, which has a similarity of pressure drop for nominally scaled flow conditions. All 57 core simulators and 8 S/G simulators were precisely calibrated before assembly into the test facility. The major parameters in the tests are pressures, differential pressures, and core flow distribution. (author)
Rothhaar, Paul M.; Murphy, Patrick C.; Bacon, Barton J.; Gregory, Irene M.; Grauer, Jared A.; Busan, Ronald C.; Croom, Mark A.
2014-01-01
Control of complex Vertical Take-Off and Landing (VTOL) aircraft traversing from hovering to wing-borne flight mode and back poses notoriously difficult modeling, simulation, control, and flight-testing challenges. This paper provides an overview of the techniques and advances required to develop the GL-10 tilt-wing, tilt-tail, long-endurance, VTOL aircraft control system. The GL-10 prototype's unusual and complex configuration requires application of state-of-the-art techniques and some significant advances in wind tunnel infrastructure automation, efficient Design Of Experiments (DOE) tunnel test techniques, modeling, multi-body equations of motion, multi-body actuator models, simulation, control algorithm design, and flight test avionics, testing, and analysis. The following compendium surveys the key disciplines required to develop an effective control system for this challenging vehicle in this ongoing effort.
Lebron, Ramon C.; Oliver, Angela C.; Bodi, Robert F.
1991-01-01
Power components hardware in support of the Space Station Freedom dc Electric Power System was tested. One type of breadboard hardware tested is the dc Load Converter Unit, which constitutes the power interface between the electric power system and the actual load. These units are dc-to-dc converters that provide the final system regulation before power is delivered to the load. Three load converters were tested: a series resonant converter, a series inductor switch-mode converter, and a switching full-bridge forward converter. The topology, operating principles, and test results are described in general. A comparative analysis of the three units is given with respect to efficiency, regulation, short-circuit behavior (protection), and transient characteristics.
Measurement of distribution coefficients using a radial injection dual-tracer test
Pickens, J.F.; Jackson, R.E.; Inch, K.J.; Merritt, W.F.
1981-01-01
The dispersive and adsorptive properties of a sandy aquifer were evaluated by using a radial injection dual-tracer test with ¹³¹I as the nonreactive tracer and ⁸⁵Sr as the reactive tracer. The tracer migration was monitored by using multilevel point-sampling devices located at various radial distances and depths. Nonequilibrium physical and chemical adsorption effects for ⁸⁵Sr were treated as a spreading or dispersion mechanism in the breakthrough curve analysis. The resulting effective dispersivity values for ⁸⁵Sr were typically a factor of 2 to 5 larger than those obtained for ¹³¹I. The distribution coefficient (K_d^Sr) values obtained from analysis of the breakthrough curves at three depths and two radial distances ranged from 2.6 to 4.5 ml/g. These compare favorably with values obtained by separation of fluids from solids in sediment cores, by batch experiments on core sediments, and by analysis of a 25-year-old radioactive waste plume in another part of the same aquifer. Correlations of adsorbed ⁸⁵Sr radioactivity with grain size fractions demonstrated preferential adsorption to the coarsest fraction and to the finest fraction. The relative amounts of electrostatically and specifically adsorbed ⁸⁵Sr on the aquifer sediments were determined with desorption experiments on core sediments using selective chemical extractants. The withdrawal phase breakthrough curves for the well, obtained immediately following the injection phase, showed essentially full tracer recoveries for both ¹³¹I and ⁸⁵Sr. Relatively slow desorption of ⁸⁵Sr provided further indication of the nonequilibrium nature of the adsorption-desorption phenomena
Temperature Distribution Simulation of a Polymer Bearing Based on Real Tribological Tests
Artur Król
2015-09-01
Polymer bearings are widely used due to their dry-lubrication mechanism, low weight, corrosion resistance and maintenance-free operation. They are applied in various tribological pairs, e.g. household appliances, mechatronic systems, medical devices, food machines and many more. However, their use is limited by a high coefficient of thermal expansion and softening at elevated temperature, especially when working outside the recommended pv factors. Modifying the bearing design to achieve better characteristics at more demanding conditions requires full understanding of the mechanical and thermal phenomena of bearing operation. The first step was to observe the thermal behavior of a polymer bearing under real test conditions (50, 100 and 150 rpm; 350 and 700 N) until constant values of temperature and moment of friction were achieved. Subsequently, the collected data were used in the design of a temperature distribution model. Thermal simulations of the polymer bearing were done using the commercial software package ANSYS Fluent, which is based on the finite volume method. All calculations were performed for a 3D geometrical model that included the polymer bearing, its housing, the shaft and some volume of the surrounding air. The heat generation caused by friction forces was implemented by a volumetric heat source. All three main heat transfer mechanisms (conduction, convection and radiation) were included in the heat transfer calculations, and the air flow around the bearing and adjacent parts was directly solved. The unknown parameters of the numerical model were adjusted by comparing the results of the computer calculations with the measured temperature rise. In the presented work the calculations were limited to steady-state conditions only, but the model may also be used in transient analysis. DOI: http://dx.doi.org/10.5755/j01.ms.21.3.7342
A. D. Khomonenko
2016-07-01
Subject of Research. Software reliability and test planning models are studied taking into account the probabilistic nature of error detection and discovery. Modeling of software testing enables planning of resources and final quality at early stages of project execution. Methods. Two dynamic models of test processes (strategies) are suggested for software testing, using an error detection probability for each software module. The Erlang distribution is used to approximate arbitrary distributions of fault resolution duration, and the exponential distribution to approximate fault discovery. For each strategy, modified labeled graphs are built, along with differential equation systems and their numerical solutions. The latter make it possible to compute probabilistic characteristics of the test processes and states: state probabilities, distribution functions for fault detection and elimination, mathematical expectations of random variables, and the number of detected or fixed errors. Evaluation of Results. Probabilistic characteristics for software development projects were calculated using the suggested models. The strategies were compared by their quality indexes, and the debugging time required to achieve specified quality goals was calculated. The calculation results are used for time and resource planning of new projects. Practical Relevance. The proposed models make it possible to use reliability estimates for each individual module. The Erlang approximation removes restrictions on the use of arbitrary time distributions for fault resolution duration. It improves the accuracy of software test process modeling and helps to take into account the viability (power) of the tests. With these models, one can search for ways to improve software reliability by generating tests which detect errors with the highest probability.
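The Erlang approximation used above for arbitrary fault-resolution-time distributions can be sketched by moment matching: choose an integer shape k ≈ (mean/sd)² and a rate that preserves the mean. The helper names and the numbers below are illustrative assumptions, not taken from the paper.

```python
import math

def erlang_cdf(t, k, rate):
    """CDF of an Erlang(k, rate) random variable (sum of k exponentials):
    P(T <= t) = 1 - sum_{i<k} (rate*t)^i * e^(-rate*t) / i!"""
    return 1.0 - sum((rate * t) ** i * math.exp(-rate * t) / math.factorial(i)
                     for i in range(k))

def fit_erlang(mean, sd):
    """Moment matching: shape k ~= (mean/sd)^2 rounded to an integer,
    rate chosen so the mean k/rate is preserved."""
    k = max(1, round((mean / sd) ** 2))
    return k, k / mean

# e.g. fault-resolution times with mean 4 h and standard deviation 2 h
k, rate = fit_erlang(4.0, 2.0)     # -> Erlang(4, 1.0)
p_done = erlang_cdf(8.0, k, rate)  # P(fault fixed within 8 h)
print(k, rate, round(p_done, 4))
```

Because an Erlang variable is a chain of k exponential stages, this substitution keeps the overall model Markovian, which is what allows the labeled-graph/ODE machinery described in the abstract to be applied.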
McArtor, Daniel B; Lubke, Gitta H; Bergeman, C S
2017-12-01
Person-centered methods are useful for studying individual differences in terms of (dis)similarities between response profiles on multivariate outcomes. Multivariate distance matrix regression (MDMR) tests the significance of associations of response profile (dis)similarities and a set of predictors using permutation tests. This paper extends MDMR by deriving and empirically validating the asymptotic null distribution of its test statistic, and by proposing an effect size for individual outcome variables, which is shown to recover true associations. These extensions alleviate the computational burden of permutation tests currently used in MDMR and render more informative results, thus making MDMR accessible to new research domains.
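A minimal numerical sketch of the permutation-based MDMR test described above, using a McArdle–Anderson-style pseudo-F statistic on a Gower-centered distance matrix. The function `mdmr_perm_test` and the toy data are illustrative assumptions, not the authors' implementation (whose contribution is precisely to replace the permutations with an asymptotic null distribution).

```python
import numpy as np

def mdmr_perm_test(D, X, n_perm=999, seed=0):
    """Permutation MDMR sketch: regress a Gower-centered distance matrix
    on predictors X and compare the observed pseudo-F against values
    obtained by jointly permuting the rows/columns of D."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    X = np.column_stack([np.ones(n), X])           # add intercept
    H = X @ np.linalg.pinv(X.T @ X) @ X.T          # hat matrix
    J = np.eye(n) - np.ones((n, n)) / n
    G = -0.5 * J @ (D ** 2) @ J                    # Gower-centered matrix
    m = X.shape[1]

    def pseudo_f(G):
        ssr = np.trace(H @ G @ H)                  # "explained" sum of squares
        sse = np.trace((np.eye(n) - H) @ G @ (np.eye(n) - H))
        return (ssr / (m - 1)) / (sse / (n - m))

    f_obs = pseudo_f(G)
    count = sum(pseudo_f(G[np.ix_(p, p)]) >= f_obs
                for p in (rng.permutation(n) for _ in range(n_perm)))
    return f_obs, (count + 1) / (n_perm + 1)

# toy data: 40 subjects, 3 outcome variables driven by one predictor
rng = np.random.default_rng(1)
x = rng.standard_normal(40)
Y = np.outer(x, [1.0, 0.5, -0.5]) + rng.standard_normal((40, 3))
D = np.sqrt(((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1))  # Euclidean

f_obs, p = mdmr_perm_test(D, x)
print(f_obs, p)  # strong association, so p bottoms out at 1/(n_perm+1)
```

The cost the paper targets is visible here: each permutation repeats two n-by-n matrix products, which is what makes the derived asymptotic null distribution attractive for large samples.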
A Study on the construction of 22.9 kV distribution test line of KEPCO
Yang, Hi Kyun; Choi, Hung Sik; Hwang, Si Dole; Jung, Young Ho [Korea Electric Power Corp. (KEPCO), Taejon (Korea, Republic of). Research Center; Kim, Dong Hwan; Choi, Chang Huk [Korea Power Engineering Company and Architecture Engineers (Korea, Republic of)
1995-12-31
In order to enhance the reliability of power supply and the quality of electricity, a study on the construction of 22.9 kV distribution test line was performed. The main objective of this study was to establish a construction plan and a basic design to perform the construction and the detailed design of the test line effectively. (author). 21 refs., 45 figs.
Livran, J; Parente, C; Riddone, G; Rybkowski, D; Veillet, N
2000-01-01
Three pre-series Test Cells of the LHC Cryogenic Distribution Line (QRL) [1], manufactured by three European industrial companies, will be tested in the year 2000 to qualify the design chosen and verify the thermal and mechanical performances. A dedicated test stand (170 m x 13 m) has been built for extensive testing and performance assessment of the pre-series units in parallel. They will be fed with saturated liquid helium at 4.2 K supplied by a mobile helium dewar. In addition, LN2 cooled helium will be used for cool-down and thermal shielding. For each of the three pre-series units, a set of end boxes has been designed and manufactured at CERN. This paper presents the layout of the cryogenic system for the pre-series units, the calorimetric methods as well as the results of the thermal calculation of the end box test.
Impact of uniform electrode current distribution on ETF. [Engineering Test Facility MHD generator
Bents, D. J.
1982-01-01
A basic reason for the complexity and sheer volume of electrode consolidation hardware in the MHD ETF Powertrain system is the channel electrode current distribution, which is non-uniform. If the channel design is altered to provide uniform electrode current distribution, the amount of hardware required decreases considerably, but at the possible expense of degraded channel performance. This paper explains the design impacts on the ETF electrode consolidation network associated with uniform channel electrode current distribution, and presents the alternate consolidation designs which occur. They are compared to the baseline (non-uniform current) design with respect to performance, and hardware requirements. A rational basis is presented for comparing the requirements for the different designs and the savings that result from uniform current distribution. Performance and cost impacts upon the combined cycle plant are discussed.
SMART-DS: Synthetic Models for Advanced, Realistic Testing: Distribution Systems and Scenarios
Palmintier, Bryan; Hodge, Bri-Mathias
2017-01-26
This presentation provides a Smart-DS project overview and status update for the ARPA-e GRID DATA program meeting 2017, including distribution systems, models, and scenarios, as well as opportunities for GRID DATA collaborations.
2013-10-28
[…] OCC-2012-0016] Policy Statement on the Principles for Development and Distribution of Annual Stress […] the stress test horizon. The variables specified for each scenario generally address economic activity […] institutions by November 15th of each year. This document articulates the principles that the OCC will apply to […]
Azman Ismail
2009-07-01
Compensation management literature highlights that income has three major features: salary, bonus and allowance. If the level and/or amount of income is distributed to employees based on proper rules, this may increase pay satisfaction. More importantly, a thorough investigation of this area reveals that the effect of income distribution on pay satisfaction is not consistent when perceived value of money is present in organizations. The nature of this relationship is less emphasized in the pay distribution literature. Therefore, this study was conducted to measure the effect of perceived value of money and income distribution on pay satisfaction, using 136 usable questionnaires gathered from employees of one city-based local authority in Sabah, Malaysia (MSLAUTHORITY). Outcomes of hierarchical regression analysis showed that the interaction between perceived value of money and income distribution significantly correlated with pay satisfaction. This result confirms that perceived value of money does act as a moderating variable in the income distribution model of the organizational sample. In addition, the discussion and implications of this study are elaborated.
Exeter, Daniel J; Moss, Lauren; Zhao, Jinfeng; Kyle, Cam; Riddell, Tania; Jackson, Rod; Wells, Susan
2015-09-01
National cardiovascular disease (CVD) guidelines recommend that adults have cholesterol levels monitored regularly. However, little is known about the extent and equity of cholesterol testing in New Zealand. To investigate the distribution and frequency of blood lipid testing by sociodemographic status in Auckland, New Zealand. We anonymously linked five national health datasets (primary care enrolment, laboratory tests, pharmaceuticals, hospitalisations and mortality) to identify adults aged ≥25 years without CVD or diabetes who had their lipids tested in 2006-2010, by age, gender, ethnicity and area of residence and deprivation. Multivariate logistic regression was used to estimate the likelihood of testing associated with these factors. Of the 627 907 eligible adults, 66.3% had at least one test between 2006 and 2010. Annual testing increased from 24.7% in 2006 to 35.1% in 2010. Testing increased with age similarly for men and women. Indian people were 87% more likely than New Zealand European and Others (NZEO) to be tested, Pacific people 8% more likely, but rates for Maori were similar to NZEO. There was marked variation within the region, with residents of the most deprived areas less likely to be tested than residents in least deprived areas. Understanding differences within and between population groups supports the development of targeted strategies for better service utilisation. While lipid testing has increased, sociodemographic variations persist by place of residence, and deprivation. Of the high CVD risk populations, lipid testing for Maori and Pacific is not being conducted according to need.
Halamish, Vered; Bjork, Robert A.
2011-01-01
Tests, as learning events, can enhance subsequent recall more than do additional study opportunities, even without feedback. Such advantages of testing tend to appear, however, only at long retention intervals and/or when criterion tests stress recall, rather than recognition, processes. We propose that the interaction of the benefits of testing…
Percentiles of the null distribution of 2 maximum lod score tests.
Ulgen, Ayse; Yoo, Yun Joo; Gordon, Derek; Finch, Stephen J; Mendell, Nancy R
2004-01-01
We here consider the null distribution of the maximum lod score (LOD-M) obtained upon maximizing over transmission model parameters (penetrance values, dominance, and allele frequency) as well as the recombination fraction. Also considered is the lod score maximized over a fixed choice of genetic model parameters and recombination-fraction values set prior to the analysis (MMLS), as proposed by Hodge et al. The objective is to fit parametric distributions to MMLS and LOD-M. Our results are based on 3,600 simulations of samples of n = 100 nuclear families ascertained for having one affected member and at least one other sibling available for linkage analysis. Each null distribution is approximately a mixture p·χ²(0) + (1 − p)·χ²(v). The values of MMLS appear to fit the mixture 0.20·χ²(0) + 0.80·χ²(1.6), and the mixture 0.13·χ²(0) + 0.87·χ²(2.8) appears to describe the null distribution of LOD-M. From these results we derive a simple method for obtaining critical values of LOD-M and MMLS. Copyright 2004 S. Karger AG, Basel
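Given the fitted mixtures, a "simple method for obtaining critical values" plausibly amounts to inverting the continuous chi-square component; the sketch below is an assumed reconstruction, not the authors' published method, using LOD = χ²/(2 ln 10) and SciPy's chi-square quantile function with fractional degrees of freedom.

```python
import numpy as np
from scipy.stats import chi2

def lod_critical_value(alpha, point_mass, df):
    """Critical value c with P(LOD > c) = alpha when the null distribution
    of the lod score is the mixture  p*chi2(0) + (1-p)*chi2(df)  and
    LOD = chi-square / (2 ln 10)."""
    tail = alpha / (1.0 - point_mass)    # tail prob required of chi2(df)
    return chi2.ppf(1.0 - tail, df) / (2.0 * np.log(10))

# mixtures reported in the abstract:
c_mmls = lod_critical_value(0.05, 0.20, 1.6)  # MMLS:  0.20*chi2(0) + 0.80*chi2(1.6)
c_lodm = lod_critical_value(0.05, 0.13, 2.8)  # LOD-M: 0.13*chi2(0) + 0.87*chi2(2.8)
print(round(c_mmls, 2), round(c_lodm, 2))
```

The point mass at zero means less than the full alpha must come from the continuous component, which lowers the critical value relative to a pure chi-square; the larger degrees of freedom for LOD-M (more parameters maximized over) push its critical value above that of MMLS.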
Wake, Kanako; Watanabe, Soichi; Taki, Masao; Varsier, Nadege; Wiart, Joe; Mann, Simon; Deltour, Isabelle; Cardis, Elisabeth
2009-01-01
A worldwide epidemiological study called 'INTERPHONE' has been conducted to investigate the hypothesized relationship between brain tumors and mobile phone use. In this study, we proposed a method to estimate the 3D distribution of the specific absorption rate (SAR) in the human head due to mobile phone use, to provide the exposure gradient for epidemiological studies. 3D SAR distributions due to exposure to an electromagnetic field from mobile phones are estimated from mobile phone compliance testing data for actual devices. The data for compliance testing are measured only on the surface in the region near the device, and in a small 3D region around the maximum on the surface, in a homogeneous phantom with a specific shape. The method includes an interpolation/extrapolation and a head shape conversion. With the interpolation/extrapolation, SAR distributions in the whole head are estimated from the limited measured data. 3D SAR distributions in the numerical head models, where the tumor location is identified in the epidemiological studies, are obtained from the measured SAR data with a head shape conversion by projection. Validation of the proposed method was performed experimentally and numerically. It was confirmed that the proposed method provides a good estimate of the 3D SAR distribution in the head, especially in the brain, which is the tissue of major interest in epidemiological studies. We conclude that it is possible to estimate 3D SAR distributions in a realistic head model from data obtained by compliance testing measurements, to provide a measure of the exposure gradient in specific locations of the brain for the purpose of exposure assessment in epidemiological studies. The proposed method has been used in several studies within the INTERPHONE project.
Hopkins, Philip F.; Hernquist, Lars
2009-01-01
We use the observed distribution of Eddington ratios as a function of supermassive black hole (BH) mass to constrain models of quasar/active galactic nucleus (AGN) lifetimes and light curves. Given the observed (well constrained) AGN luminosity function, a particular model for AGN light curves L(t) or, equivalently, the distribution of AGN lifetimes (time above a given luminosity t(>L)) translates directly and uniquely (without further assumptions) to a predicted distribution of Eddington ratios at each BH mass. Models for self-regulated BH growth, in which feedback produces a self-regulating 'decay' or 'blowout' phase after the AGN reaches some peak luminosity/BH mass and begins to expel gas and shut down accretion, make specific predictions for the light curves/lifetimes, distinct from, e.g., the expected distribution if AGN simply shut down by gas starvation (without feedback) and very different from the prediction of simple phenomenological 'light bulb' scenarios. We show that the present observations of the Eddington ratio distribution, spanning nearly 5 orders of magnitude in Eddington ratio, 3 orders of magnitude in BH mass, and redshifts z = 0-1, agree well with the predictions of self-regulated models, and rule out phenomenological 'light bulb' or pure exponential models, as well as gas starvation models, at high significance (∼5σ). We also compare with observations of the distribution of Eddington ratios at a given AGN luminosity, and find similar good agreement (but show that these observations are much less constraining). We fit the functional form of the quasar lifetime distribution and provide these fits for use, and show how the Eddington ratio distributions place precise, tight limits on the AGN lifetimes at various luminosities, in agreement with model predictions. We compare with independent estimates of episodic lifetimes and use this to constrain the shape of the typical AGN light curve, and provide simple analytic fits to these for use in
A qualitative study of secondary distribution of HIV self-test kits by female sex workers in Kenya.
Suzanne Maman
Promoting awareness of serostatus and frequent HIV testing is especially important among high-risk populations such as female sex workers (FSW) and their sexual partners. HIV self-testing is an approach that is gaining ground in sub-Saharan Africa as a strategy to increase knowledge of HIV status and promote safer sexual decisions. However, little is known about self-test distribution strategies that are optimal for increasing testing access among hard-to-reach and high-risk individuals. We conducted a qualitative study with 18 FSW who participated in a larger study that provided them with five oral fluid-based self-tests, training on how to use the tests, and encouragement to offer the self-tests to their sexual partners at their discretion. Women demonstrated agency in the strategies they used to introduce self-tests to their partners and to avoid conflict with partners. They carefully considered with whom to share self-tests, often assessing the possibility of negative reactions from partners as part of their decision-making process. When women faced negative reactions from partners, they drew on strategies they had used before to avoid conflict and physical harm, such as not responding to angry partners and forgoing payment to leave angry partners quickly. Some women also used self-tests to make more informed sexual decisions with their partners.
Test of Different Air Distribution Concepts for a Single-Aisle Aircraft Cabin
Nielsen, Peter V.; Damsgaard, Charlotte; Liu, Li
2013-01-01
Traditionally, air is supplied to the aircraft cabin either by individual nozzles or by supply slots. The air is expected to be fully mixed in the cabin, and the system is considered to be a mixing ventilation system. This paper will describe different air distribution systems known from other ap...
Understanding the Sampling Distribution and Its Use in Testing Statistical Significance.
Breunig, Nancy A.
Despite the increasing criticism of statistical significance testing by researchers, particularly in the publication of the 1994 American Psychological Association's style manual, statistical significance test results are still popular in journal articles. For this reason, it remains important to understand the logic of inferential statistics. A…
Software Development and Testing Approach and Challenges in a distributed HEP Collaboration
Burckhart-Chromek, Doris
2007-01-01
In developing the ATLAS [1] Trigger and Data Acquisition (TDAQ) software, the team is applying the iterative waterfall model, evolutionary process management, formal software inspection, and lightweight review techniques. The long preparation phase, with a geographically widespread development team, required that the standard techniques be adapted to this HEP environment. The testing process is receiving special attention. Unit tests and check targets in nightly project builds form the basis for the subsequent software project release testing. The integrated software is then run on computing farms that give further opportunities for gaining experience, fault finding, and acquiring ideas for improvement. Dedicated tests on a farm of up to 1000 nodes address the large-scale aspect of the project. Integration test activities on the experimental site include the special purpose-built event readout hardware. Deployment in detector commissioning starts the countdown towards running the final ATLAS experiment. T...
Development and Testing of Protection Scheme for Renewable-Rich Distribution System
Brahma, Sukumar [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ranade, Satish [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Elkhatib, Mohamed E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ellis, Abraham [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Reno, Matthew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-09-01
As the penetration of renewables increases in distribution systems, and microgrids are conceived with high penetration of such generation connected through inverters, fault location and protection of microgrids need consideration. This report proposes averaged models that help simulate fault scenarios in renewable-rich microgrids, models for locating faults in such microgrids, and comments on the protection models that may be considered for microgrids. Simulation studies are reported to justify the models.
Huang, Zhiyuan; Lam, Henry; Zhao, Ding
2017-01-01
This paper proposes a new framework based on joint statistical models for evaluating risks of automated vehicles in a naturalistic driving environment. The previous studies on the Accelerated Evaluation for automated vehicles are extended from multi-independent-variate models to joint statistics. The proposed toolkit includes exploration of the rare event (e.g. crash) sets and construction of accelerated distributions for Gaussian Mixture models using Importance Sampling techniques. Furthermo...
Empirical tests of Zipf's law mechanism in open source Linux distribution.
Maillart, T; Sornette, D; Spaeth, S; von Krogh, G
2008-11-21
Zipf's power law is a ubiquitous empirical regularity found in many systems, thought to result from proportional growth. Here, we establish empirically the usually assumed ingredients of stochastic growth models that have been previously conjectured to be at the origin of Zipf's law. We use exceptionally detailed data on the evolution of open source software projects in Linux distributions, which offer a remarkable example of a growing complex self-organizing adaptive system, exhibiting Zipf's law over four full decades.
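The rank-size check behind such empirical Zipf tests is short to compute: sort sizes in descending order and regress log-size on log-rank, with Zipf's law corresponding to a slope near -1. A minimal sketch on synthetic data (the Pareto sample below is illustrative, not the Linux package data from the study):

```python
import numpy as np

def zipf_exponent(sizes):
    """Estimate the rank-size exponent: log(size) ~ -alpha * log(rank)."""
    s = np.sort(np.asarray(sizes, dtype=float))[::-1]
    ranks = np.arange(1, len(s) + 1)
    # Least-squares slope of log(size) against log(rank); Zipf => slope near -1
    slope, _ = np.polyfit(np.log(ranks), np.log(s), 1)
    return -slope

# Synthetic sizes from a Pareto tail with exponent 1 (rank-size slope ~ -1)
rng = np.random.default_rng(0)
sizes = (1.0 - rng.random(5000)) ** -1.0
print(round(zipf_exponent(sizes), 2))
```

An exact Zipf sequence (size proportional to 1/rank) recovers an exponent of 1.0 to machine precision; sampled data fluctuates around it.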
Field distribution on an HVDC wall bushing during laboratory rain tests
Lampe, W.; Wikstrom, D.; Jacobson, B.
1991-01-01
This paper reports that an efficient countermeasure to suppress flashovers across HVDC wall bushings is to make their surfaces hydrophobic. This laboratory investigation reports the measured electric field along such a bushing under different environmental conditions. A significantly reduced radial field strength has been found for the hydrophobic bushing. Moreover, the total field strength distribution becomes almost independent of the prevailing dry zone. The flashover voltage for bushings with a hydrophobic surface is therefore significantly increased.
LaRue, Michelle A.; Stapleton, Seth P.; Porter, Claire; Atkinson, Stephen N.; Atwood, Todd C.; Dyck, Markus; Lecomte, Nicolas
2015-01-01
High-resolution satellite imagery is a promising tool for providing coarse information about polar species abundance and distribution, but current applications are limited. With polar bears (Ursus maritimus), the technique has only proven effective on landscapes with little topographic relief that are devoid of snow and ice, and time-consuming manual review of imagery is required to identify bears. Here, we evaluated mechanisms to further develop methods for satellite imagery by examining data from Rowley Island, Canada. We attempted to automate and expedite detection via a supervised spectral classification and image differencing to expedite image review. We also assessed what proportion of a region should be sampled to obtain reliable estimates of density and abundance. Although the spectral signature of polar bears differed from nontarget objects, these differences were insufficient to yield useful results via a supervised classification process. Conversely, automated image differencing—or subtracting one image from another—correctly identified nearly 90% of polar bear locations. This technique, however, also yielded false positives, suggesting that manual review will still be required to confirm polar bear locations. On Rowley Island, bear distribution approximated a Poisson distribution across a range of plot sizes, and resampling suggests that sampling >50% of the site facilitates reliable estimation of density in certain areas, but large-scale applications remain limited because of the challenges in automation and the limited environments in which the method can be effectively applied. Improvements in resolution may expand opportunities for its future uses.
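The image-differencing step described above amounts to thresholding a pixel-wise difference between two co-registered scenes. A minimal sketch with toy arrays (the threshold, image size, and pixel values are hypothetical, not drawn from the Rowley Island imagery):

```python
import numpy as np

def difference_detections(before, after, threshold):
    """Flag pixels whose brightness changed between two co-registered images."""
    diff = np.abs(after.astype(float) - before.astype(float))
    return diff > threshold

# Toy 5x5 "scenes": a bright two-pixel object appears in the second image
before = np.zeros((5, 5))
after = before.copy()
after[2, 2] = after[2, 3] = 200.0

mask = difference_detections(before, after, threshold=50.0)
print(int(mask.sum()))   # → 2 changed pixels flagged for manual review
```

As the abstract notes, real scenes produce false positives (ice movement, sensor noise), so the flagged mask narrows, rather than replaces, manual review.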
Tanaka, Kosuke; Hirosawa, Takashi; Obayashi, Hiroshi; Koyama, Shin Ichi; Yoshimochi, Hiroshi; Tanaka, Kenya
2008-01-01
In order to investigate the effect of americium addition to MOX fuels on irradiation behavior, the 'Am-1' program is being conducted in JAEA. The Am-1 program consists of two short-term irradiation tests, of 10 minutes and 24 hours, and a steady-state irradiation test. The short-term irradiation tests were successfully completed and the post-irradiation examinations (PIEs) are in progress. The PIEs for Am-containing MOX fuels focused on the microstructural evolution and redistribution behavior of Am at the initial stage of irradiation, and the results to date are reported.
Diaz, Francisco; Duran, Oscar; Henriquez, Pedro; Vega, Pedro; Padilla, Liliana; Gonzalez, David; Garcia Agudo, Edmundo
2000-01-01
This work was prepared by the Chilean and International Atomic Energy Agencies and covers the hydrodynamic functioning of sewage stabilization ponds using tracers. The selected plant, in the city of Cabrero, 500 km south of Santiago, is a rectangular facultative pond with a surface area of 7,100 m² and a maximum volume of 12,327 m³ that receives an average flow of 20 l/s, serving a population of 7,000 individuals. The work aims to characterize the flow that enters the pond using a radioactive tracer test, in which the incoming water is marked and its passage out is measured, to establish the residence time distribution. Tritium in the form of tritiated water was selected as the tracer, and it is emptied into the water flow from the distribution channel at the pond entrance. Samples are taken at the outflow to determine the concentration of tritium after distillation, simultaneously measuring the flow, and are analyzed in a liquid scintillation counter. A mean residence time of 5.3 days was obtained, and an analysis of the residence time distribution shows that the tracer leaves quickly, indicating bad flow distribution in the pond with a major short circuit and probable dead zones
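The residence-time analysis in such a tracer test reduces to taking the first moment of the outlet concentration curve, t̄ = ∫t·C(t)dt / ∫C(t)dt. A sketch under assumed synthetic data (the curve below stands in for the Cabrero tritium measurements, which are not given here):

```python
import numpy as np

def mean_residence_time(t, c):
    """Mean residence time: first moment of E(t) = C(t) / integral of C dt."""
    t = np.asarray(t, dtype=float)
    c = np.asarray(c, dtype=float)
    # Trapezoidal integration written out for portability across NumPy versions
    trapz = lambda y: float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)
    return trapz(t * c) / trapz(c)

# Synthetic outlet tracer response (arbitrary units), sampled over 15 days
t = np.linspace(0.0, 15.0, 151)
c = t * np.exp(-t / 2.5)   # skewed curve typical of a pond with short-circuiting
print(round(mean_residence_time(t, c), 2))
```

A measured mean well below the nominal volume/flow ratio is the signature of short-circuiting and dead zones reported in the abstract.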
Distribution and Fate of Energetics on DoD Test and Training Ranges
Pennington, Judith
2001-01-01
The current state of knowledge concerning the nature and extent of residual explosives contamination on military testing and firing ranges is inadequate to ensure management of these facilities as sustainable resources...
Dataset for Testing Contamination Source Identification Methods for Water Distribution Networks
U.S. Environmental Protection Agency — This dataset includes the results of a simulation study using the source inversion techniques available in the Water Security Toolkit. The data was created to test...
2010-07-01
Table E-2 to Subpart E of Part 53—Spectral Energy Distribution and Permitted Tolerance for Conducting Radiative Tests (Reference Methods and Class I and Class II Equivalent Methods for PM2.5 or PM10–2.5).
Năpăruş, Magdalena; Kuntner, Matjaž
2012-01-01
distributions of globally distributed terrestrial lineages. Its predictive potential may be tested in foreseeing species distribution shifts due to habitat destruction and global climate change.
Wing Kam Fung
2010-02-01
The case-control study is an important design for testing association between genetic markers and a disease. The Cochran-Armitage trend test (CATT) is one of the most commonly used statistics for the analysis of case-control genetic association studies. The asymptotically optimal CATT can be used when the underlying genetic model (mode of inheritance) is known. However, for most complex diseases, the underlying genetic models are unknown. Thus, tests robust to genetic model misspecification are preferable to the model-dependent CATT. Two robust tests, MAX3 and the genetic model selection (GMS), were recently proposed. Their asymptotic null distributions are often obtained by Monte Carlo simulations, because they either have not been fully studied or involve multiple integrations. In this article, we study how components of each robust statistic are correlated, and find a linear dependence among the components. Using this new finding, we propose simple algorithms to calculate asymptotic null distributions for MAX3 and GMS, which greatly reduce the computing intensity. Furthermore, we have developed the R package Rassoc implementing the proposed algorithms to calculate the empirical and asymptotic p values for MAX3 and GMS as well as other commonly used tests in case-control association studies. For illustration, Rassoc is applied to the analysis of case-control data for the 17 most significant SNPs reported in four genome-wide association studies.
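The CATT itself is compact: with genotype scores w and case counts r against column totals n, the statistic is T = Σwᵢ(rᵢ − Rnᵢ/N) normalized by its binomial-approximation variance. A minimal sketch for a 2×3 table with additive scores (the counts are made up for illustration, and a two-sided normal p-value is used):

```python
from math import erfc, sqrt

def catt(cases, controls, scores=(0, 1, 2)):
    """Cochran-Armitage trend test for a 2x3 case-control genotype table.
    Returns (Z statistic, two-sided normal-approximation p-value)."""
    n = [a + b for a, b in zip(cases, controls)]          # genotype column totals
    N, R = sum(n), sum(cases)
    p = R / N                                             # overall case fraction
    t = sum(w * (r - R * ni / N) for w, r, ni in zip(scores, cases, n))
    var = p * (1 - p) * (sum(w * w * ni for w, ni in zip(scores, n))
                         - sum(w * ni for w, ni in zip(scores, n)) ** 2 / N)
    z = t / sqrt(var)
    return z, erfc(abs(z) / sqrt(2))

# Hypothetical genotype counts (AA, Aa, aa) for cases and controls
z, pval = catt(cases=[10, 20, 30], controls=[30, 20, 10])
print(round(z, 3))
```

Changing the score vector (e.g. (0, 1, 1) for dominant, (0, 0, 1) for recessive) gives the model-dependent variants whose misspecification motivates MAX3 and GMS.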
Cao, Yuan; Li, Yu-Huai; Zou, Wen-Jie; Li, Zheng-Ping; Shen, Qi; Liao, Sheng-Kai; Ren, Ji-Gang; Yin, Juan; Chen, Yu-Ao; Peng, Cheng-Zhi; Pan, Jian-Wei
2018-04-01
Quantum entanglement was termed "spooky action at a distance" in the well-known paper by Einstein, Podolsky, and Rosen. Entanglement is expected to be distributed over longer and longer distances in both practical applications and fundamental research into the principles of nature. Here, we present a proposal for distributing entangled photon pairs between Earth and the Moon using a Lagrangian point at a distance of 1.28 light seconds. One of the most fascinating features of this long-distance distribution of entanglement is the following: one can perform a Bell test with humans supplying the random measurement settings and recording the results while still maintaining spacelike intervals. To realize a proof-of-principle experiment, we developed an entangled photon source with a 1 GHz generation rate, about 2 orders of magnitude higher than previous results. Violation of Bell's inequality was observed under a total simulated loss of 103 dB with measurement settings chosen by two experimenters. This demonstrates the feasibility of such a long-distance Bell test over extremely high-loss channels, paving the way for one of the ultimate tests of the foundations of quantum mechanics.
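For the singlet state targeted in such Bell tests, the quantum correlation at analyzer angles x, y is E(x, y) = −cos(x − y), and the CHSH combination reaches 2√2, beyond the classical bound of 2. A small check (the angle settings are the textbook optimal ones, not values from this proposal):

```python
from math import cos, pi, sqrt

def chsh(a, ap, b, bp):
    """CHSH value S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
    for a singlet state, where E(x, y) = -cos(x - y)."""
    E = lambda x, y: -cos(x - y)
    return E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)

# Standard optimal settings: a = 0, a' = pi/2, b = pi/4, b' = 3*pi/4
S = chsh(0.0, pi / 2, pi / 4, 3 * pi / 4)
print(abs(S))   # |S| = 2*sqrt(2) ≈ 2.828, violating the classical bound |S| <= 2
```

Channel loss, as in the 103 dB demonstration, reduces the detected pair rate rather than the ideal correlation, which is why a GHz-rate source is needed.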
Analysis of Radial Plutonium Isotope Distribution in Irradiated Test MOX Fuel Rods
Oh, Jae Yong; Lee, Byung Ho; Koo, Yang Hyun; Kim, Han Soo
2009-01-15
After Rods 3 and 6 (KAERI MOX) were irradiated in the Halden reactor, their post-irradiation examinations are now being carried out. In this report, the PLUTON code was used to analyze Rods 3 and 6 (KAERI MOX). In both rods, the ratio of the maximum burnup to the average burnup in the radial distribution was 1.3, and the content of ²³⁹Pu tended to increase as the radial position approached the periphery of the fuel pellet. The detailed radial distributions of ²³⁹Pu and ²⁴⁰Pu, however, were somewhat different. To find the reason for this difference, the contents of Pu isotopes were investigated as the burnup increased. The content of ²³⁹Pu decreased with burnup. The content of ²⁴⁰Pu increased with burnup up to 20 GWd/tM but decreased above 20 GWd/tM. The local burnup of Rod 3 is higher than that of Rod 6 due to the hole penetrating through the fuel rod. The content of ²³⁹Pu decreased more rapidly than that of ²⁴⁰Pu in Rod 6 with increasing burnup. This resulted in a radial distribution of ²³⁹Pu and ²⁴⁰Pu similar to Rod 3. The ratio of Xe to Kr is a parameter indicating where the fissions occur in the nuclear fuel. In both Rods 3 and 6, it was 18.3 over the whole fuel rod cross section, which showed that the fissions occurred in the plutonium.
Greenberg, S.; Cooley, C.
2005-01-01
This report details progress on subcontract NAD-1-30605-1 between the National Renewable Energy Laboratory and RealEnergy (RE), the purpose of which is to describe RE's approach to the challenges it faces in the implementation of a nationwide fleet of clean cogeneration systems to serve contemporary energy markets. The Phase 2 report covers: utility tariff risk and its impact on market development; the effect on incentives on distributed energy markets; the regulatory effectiveness of interconnection in California; a survey of practical field interconnection issues; trend analysis for on-site generation; performance of dispatch systems; and information design hierarchy for combined heat and power.
Singh, R.K.; Redlinger, R.; Breitung, W.
2005-09-01
Design and analysis of blast resistant structures is an important area of safety research in the nuclear, aerospace, chemical process and vehicle industries. The Institute for Nuclear and Energy Technologies (IKET) of Research Centre Karlsruhe (Forschungszentrum Karlsruhe, FZK) in Germany is pursuing active research on the entire spectrum of safety evaluation for efficient hydrogen management in case of postulated design-basis and beyond-design-basis severe accidents for nuclear and non-nuclear applications. This report concentrates on the consequence analysis of hydrogen combustion accidents with emphasis on structural safety assessment. The transient finite element simulation results obtained for the 2 g, 4 g, 8 g and 16 g hydrogen combustion experiments recently concluded on the test-cell structure are described. The frequencies and damping of the test-cell observed during the hammer tests and the combustion experiments are used for qualification of the present three-dimensional finite element model. For the numerical transient dynamic evaluation of the test-cell structure, the pressure time history data computed with the CFD code COM-3D are used for the four combustion experiments. Detailed comparisons of the present numerical results for the four combustion experiments with the observed time signals are carried out to evaluate the structural connection behavior. For all the combustion experiments, excellent agreement is noted for the computed accelerations and displacements at the standard transducer locations where the measurements were made during the different combustion tests. In addition, inelastic analysis is also presented for the test-cell structure to evaluate the limiting impulsive and quasi-static pressure loads. These results are used to evaluate the response of the test-cell structure for the postulated overpressurization of the test-cell due to the blast load generated in case of a 64 g hydrogen ignition, for which additional sets of computations were
Cohen, Jessica; Fink, Günther; Berg, Katrina; Aber, Flavia; Jordan, Matthew; Maloney, Kathleen; Dickens, William
2012-01-01
Despite the benefits of malaria diagnosis, most presumed malaria episodes are never tested. A primary reason is the absence of diagnostic tests in retail establishments, where many patients seek care. Malaria rapid diagnostic tests (RDTs) in drug shops hold promise for guiding appropriate treatment. However, retail providers generally lack awareness of RDTs and training to administer them. Further, unsubsidized RDTs may be unaffordable to patients and unattractive to retailers. This paper reports results from an intervention study testing the feasibility of RDT distribution in Ugandan drug shops. 92 drug shops in 58 villages were offered subsidized RDTs for sale after completing training. Data on RDT purchases, storage, administration and disposal were collected, and samples were sent for quality testing. Household surveys were conducted to capture treatment outcomes. Estimated daily RDT sales varied substantially across shops, from zero to 8.46 RDTs per day. Overall compliance with storage, treatment and disposal guidelines was excellent. All RDTs (100%) collected from shops passed quality testing. The median price charged for RDTs was 1000 USH ($0.40), corresponding to a 100% markup, and the same price as blood slides in local health clinics. RDTs affected treatment decisions. RDT-positive patients were 23 percentage points more likely to buy Artemisinin Combination Therapies (ACTs) (p = .005) and 33.1 percentage points more likely to buy other antimalarials than RDT-negative patients; tested patients were also more likely to buy ACTs (p = .05) and 31.4 percentage points more likely to buy other antimalarials (p < .001) than those not tested at all. Despite some heterogeneity, shops demonstrated a desire to stock RDTs and use them to guide treatment recommendations. Most shops stored, administered and disposed of RDTs properly and charged mark-ups similar to those charged on common medicines. Results from this study suggest that distributing RDTs through the retail sector is feasible and
Kokusho, T.; Nishi, K.; Okamoto, T.; Tanaka, Y.; Ueshima, T.; Kudo, K.; Kataoka, T.; Ikemi, M.; Kawai, T.; Sawada, Y.; Suzuki, K.; Yajima, K.; Higashi, S.
1997-01-01
An international joint research program called HLSST is in progress. HLSST is a large-scale seismic test (LSST) to investigate soil-structure interaction (SSI) during large earthquakes in the field at Hualien, a high-seismicity region of Taiwan. A 1/4-scale model building was constructed on the gravelly soil at this site, and backfill material of crushed stone was placed around the model building after excavation for the construction. The model building and the foundation ground were extensively instrumented to monitor structure and ground response. To accurately evaluate SSI during earthquakes, geotechnical investigations and forced vibration tests were performed during the construction process, namely before/after base excavation, after structure construction and after backfilling. The distributions of the mechanical properties of the gravelly soil and the backfill were measured after the completion of construction by penetration tests, PS-logging, etc. This paper describes the distribution and the change of the shear wave velocity (Vs) measured by the field tests. Discussion is made on the effect of overburden pressure during the construction process on Vs in the neighbouring soil and, further, on the numerical soil model for SSI analysis. (orig.)
Test results of distributed ion pump designs for the PEP-II Asymmetric B-Factory collider
Calderon, M.; Holdener, F.; Peterson, D. [Lawrence Livermore National Lab., CA (United States)]; and others
1994-07-01
The testing facility measurement methods and results of prototype distributed ion pump (DIP) designs for the PEP-II B-Factory High Energy Ring are presented. Two basic designs with 5- or 7-anode plates were tested at LLNL with Penning cell sizes of 15, 18, and 21 mm. Direct comparison of 5- and 7-plate anodes with 18 mm holes shows increased pumping speed with the 7-plate design. The 5-plate, 18 mm and 7-plate, 15 mm designs both gave an average pumping speed of 135 l/s/m at 1 × 10⁻⁸ Torr nitrogen base pressure in a varying 0.18 T peak B-field. Comparison of the three hole sizes indicates that cells smaller than the 15 mm tested can be efficiently used to obtain higher pumping speeds for the same anode plate sizes used.
EM algorithm for one-shot device testing with competing risks under exponential distribution
Balakrishnan, N.; So, H.Y.; Ling, M.H.
2015-01-01
This paper provides an extension of the work of Balakrishnan and Ling [1] by introducing a competing risks model into one-shot device testing analysis under an accelerated life test setting. An Expectation-Maximization (EM) algorithm is then developed for the estimation of the model parameters. An extensive Monte Carlo simulation study is carried out to assess the performance of the EM algorithm and compare the obtained results with the initial estimates obtained by the Inequality Constrained Least Squares (ICLS) method of estimation. Finally, we apply the EM algorithm to clinical data (ED01) to illustrate the method of inference developed here. - Highlights: • ALT data analysis for one-shot devices with competing risks is considered. • An EM algorithm is developed for the determination of the MLEs. • Estimation of lifetimes under normal operating conditions is presented. • The EM algorithm improves the convergence rate
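In the simplest one-risk exponential version of this setting, the EM iteration has a compact form: the E-step fills in the expected lifetime given only the pass/fail outcome at the inspection time, and the M-step is the usual exponential MLE over those expectations. A sketch under assumed data (single inspection time, no competing risks, made-up counts, not the ED01 data):

```python
from math import exp, log

def em_exponential_oneshot(n_failed, n_survived, tau, lam=1.0, iters=200):
    """EM for exponential(lam) lifetimes observed only as fail/survive at time tau."""
    n = n_failed + n_survived
    for _ in range(iters):
        # E-step: expected lifetime given failure by tau (truncated exponential)
        e_fail = 1.0 / lam - tau * exp(-lam * tau) / (1.0 - exp(-lam * tau))
        # ... and given survival past tau (memorylessness: tau + mean)
        e_surv = tau + 1.0 / lam
        # M-step: rate = count / total expected exposure
        lam = n / (n_failed * e_fail + n_survived * e_surv)
    return lam

# 63 of 100 devices failed by tau = 2.0; the closed-form MLE is -ln(0.37)/2
lam_hat = em_exponential_oneshot(63, 37, 2.0)
print(round(lam_hat, 4))
```

With a single inspection time the MLE is available in closed form, which makes a convenient check that the EM fixed point is correct; the paper's competing-risks, multi-stress version has no such shortcut.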
Influence of friction on stress and strain distributions in small punch creep test models
Dymáček, Petr; Seitl, Stanislav; Milička, Karel; Dobeš, Ferdinand
417-418, - (2010), s. 561-564 ISSN 1013-9826. [International Conference on Fracture and Damage Mechanics /8./. Malta, 08.09.2009-10.09.2009] R&D Projects: GA AV ČR(CZ) IAA200410801; GA AV ČR(CZ) 1QS200410502 Institutional research plan: CEZ:AV0Z20410507 Keywords : small punch test * creep * chromium steel * finite element method Subject RIV: JG - Metallurgy www.scientific.net/KEM.417-418.561
Tests to control the power distribution in the IRT-2,000 reactor
Filipcuk, E.V.; Potapenko, P.T.; Krjukov, A.P.; Trofimov, A.P.; Kosilov, A.N.; Nebojan, V.T.; Timochin, E.S.
1976-01-01
Results of investigations of several structures of such control systems, carried out on the IRT-2,000 MIFI reactor in 1973/74, are presented in this work. Within the framework of this study, the use of direct-charge transmitters in equipment to control the neutron field was successfully tested. (orig./TK)
Yang, Caiqian; Wu, Zhishen; Zhang, Yufeng
2008-06-01
The application of hybrid carbon fiber reinforced polymer (HCFRP) sensors was addressed to monitor the structural health of an existing prestressed concrete (PC) box girder bridge in a destructive test. The novel HCFRP sensors were fabricated with three types of carbon tows in order to realize distributed and broad-based sensing, which is characterized by long-gauge length and low cost. The HCFRP sensors were bonded on the bottom and side surfaces of the existing bridge to monitor its structural health. The gauge lengths of the sensors bonded on the bottom and side surfaces were 1.5 m and 1.0 m, respectively. The HCFRP sensors were distributed on the bridge for two purposes. One was to detect damage and monitor the structural health of the bridge, such as the initiation and propagation of new cracks, strain distribution and yielding of steel reinforcements. The other purpose was to monitor the propagation of existing cracks. The good relationship between the change in electrical resistance and load indicates that the HCFRP sensors can provide actual infrastructures with a distributed damage detection and structural health monitoring system. Corrections were made to this article on 13 May 2008. The corrected electronic version is identical to the print version.
Kim, Seonghoon
2013-01-01
With known item response theory (IRT) item parameters, Lord and Wingersky provided a recursive algorithm for computing the conditional frequency distribution of number-correct test scores, given proficiency. This article presents a generalized algorithm for computing the conditional distribution of summed test scores involving real-number item…
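The Lord-Wingersky recursion referenced here builds the summed-score distribution one item at a time: after adding an item, the probability of score s is the old probability of s times an incorrect response plus the old probability of s−1 times a correct response. A minimal sketch for dichotomous items (the three response probabilities are arbitrary stand-ins for IRT model values at a fixed proficiency):

```python
def lord_wingersky(p):
    """Conditional distribution of the number-correct score, given
    per-item probabilities of a correct response at fixed proficiency."""
    dist = [1.0]                        # P(score = 0) over zero items
    for pi in p:
        new = [0.0] * (len(dist) + 1)
        for s, mass in enumerate(dist):
            new[s] += mass * (1 - pi)   # item answered incorrectly: score stays
            new[s + 1] += mass * pi     # item answered correctly: score + 1
        dist = new
    return dist

probs = [0.8, 0.6, 0.5]
print([round(x, 3) for x in lord_wingersky(probs)])   # → [0.04, 0.26, 0.46, 0.24]
```

The article's generalization to real-number summed scores replaces the integer score offsets with arbitrary item score values, but the recursion shape is the same.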
Multiplicity distributions of gluon and quark jets and tests of QCD analytic predictions
OPAL Collaboration; Ackerstaff, K.; et al.
Gluon jets are identified in e+e^- hadronic annihilation events by tagging two quark jets in the same hemisphere of an event. The gluon jet is defined inclusively as all the particles in the opposite hemisphere. Gluon jets defined in this manner have a close correspondence to gluon jets as they are defined for analytic calculations, and are almost independent of a jet finding algorithm. The charged particle multiplicity distribution of the gluon jets is presented, and is analyzed for its mean, dispersion, skew, and kurtosis values, and for its factorial and cumulant moments. The results are compared to the analogous results found for a sample of light quark (uds) jets, also defined inclusively. We observe differences between the mean, skew and kurtosis values of gluon and quark jets, but not between their dispersions. The cumulant moment results are compared to the predictions of QCD analytic calculations. A calculation which includes next-to-next-to-leading order corrections and energy conservation is observed to provide a much improved description of the data compared to a next-to-leading order calculation without energy conservation. There is agreement between the data and calculations for the ratios of the cumulant moments between gluon and quark jets.
1999-01-01
Gluon jets are identified in e+e- hadronic annihilation events by tagging two quark jets in the same hemisphere of an event. The gluon jet is defined inclusively as all the particles in the opposite hemisphere. Gluon jets defined in this manner have a close correspondence to gluon jets as they are defined for analytic calculations, and are almost independent of the jet finding algorithm. The charged particle multiplicity distribution of the gluon jets is presented, and is analyzed for its mean, dispersion, skew, and kurtosis values, and for its factorial and cumulant moments. The results are compared to the analogous results found for a sample of light quark (uds) jets, also defined inclusively. We observe differences between the mean, skew and kurtosis values of gluon and quark jets, but not between their dispersions. The cumulant moment results are compared to the predictions of QCD analytic calculations. A calculation which includes next-to-next-to-leading order corrections and energy conservation is observed...
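The shape measures and factorial moments named in this abstract are standard textbook quantities. As an illustrative sketch (not the OPAL analysis code; the function name and interface are hypothetical), they can be computed from a measured multiplicity distribution P(n) as follows:

```python
import numpy as np

def multiplicity_stats(n, p):
    """Mean, dispersion, skew, excess kurtosis and unnormalized factorial
    moments F_q = <n(n-1)...(n-q+1)> of a multiplicity distribution P(n).
    n: multiplicity values; p: probabilities (must sum to 1)."""
    n = np.asarray(n, float)
    p = np.asarray(p, float)
    mean = np.sum(n * p)
    # central moments of order 2, 3, 4
    mu = [np.sum((n - mean) ** q * p) for q in (2, 3, 4)]
    disp = np.sqrt(mu[0])            # dispersion D
    skew = mu[1] / disp ** 3
    kurt = mu[2] / disp ** 4 - 3.0   # excess kurtosis
    # factorial moments F_1..F_4
    fact = [np.sum(np.prod([n - k for k in range(q)], axis=0) * p)
            for q in range(1, 5)]
    return mean, disp, skew, kurt, fact
```

Cumulant moments, if needed, follow from the factorial moments by the usual combinatorial relations.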
Murrihy, Rachael C; Byrne, Mitchell K; Gonsalvez, Craig J
2009-02-01
Internationally, family doctors seeking to enhance their skills in evidence-based mental health treatment are attending brief training workshops, despite clear evidence in the literature that short-term, massed formats are not likely to improve skills in this complex area. Reviews of the educational literature suggest that an optimal model of training would incorporate distributed practice techniques: repeated practice over a lengthy time period, small-group interactive learning, mentoring relationships, skills-based training and an ongoing discussion of actual patients. This study investigates the potential role of group-based training incorporating multiple aspects of good pedagogy for training doctors in basic competencies in brief cognitive behaviour therapy (BCBT). Six groups of family doctors (n = 32) completed eight 2-hour sessions of BCBT group training over a 6-month period. A baseline control design was utilised with pre- and post-training measures of doctors' BCBT skills, knowledge and engagement in BCBT treatment. Family doctors' knowledge, skills in and actual use of BCBT with patients improved significantly over the course of training compared with the control period. This research demonstrates preliminary support for the efficacy of an empirically derived group training model for family doctors. Brief CBT group-based training could prove to be an effective and viable model for future doctor training.
Distribution and characterization of radionuclides in soils from Nevada Test Site
Lee, S.Y.; Tamura, T.
1985-01-01
Selected physicochemical properties of plutonium-bearing radioactive particles and their association with host soils from the Nevada Test Site (NTS) were studied to aid in the environmental assessment of the radionuclides in the area and to provide technological concepts for potential cleanup operations. The dominant radioactive particles were amorphous to X-ray diffraction, very fragile by compression tests, and extremely porous, with low particle density. The physical properties of the particles suggest that they can be broken into smaller respirable sizes by saltation during wind erosion and that their unique physical properties may be useful for mechanically separating them from the nonradioactive soil particles. Experimental results revealed that more than 90% of the total radioactivity was recovered in about 25% of the total sample weight through density separation techniques and in about 18% of the total weight by a grinding-sieving process. Radioactive particles might therefore be removed from the contaminated soil by a controlled vacuum collector, density separation, grinding-sieving separation, or a combination of these techniques on the basis of the density and compressibility differences between radioactive and nonradioactive particles. 21 references, 5 figures, 5 tables
HIGH POWER TESTS OF A MULTIMODE X-BAND RF DISTRIBUTION SYSTEM
Tantawi, S
2004-01-01
We present a multimode X-band rf pulse compression system suitable for the Next Linear Collider (NLC). The NLC main linacs operate at 11.424 GHz. A single NLC rf unit is required to produce 400 ns pulses with 600 MW of peak power. Each rf unit should power approximately 5 meters of accelerator structures. These rf units consist of two 75 MW klystrons and a dual-moded resonant delay line pulse compression system [1] that produces a flat output pulse. The pulse compression system components are all overmoded, and most components are designed to operate with two modes at the same time. This approach increases the power handling capability of the system while maintaining a compact, inexpensive system. We detail the design of this system and present experimental cold test results. The high power testing of the system is carried out using four 50-MW solenoid-focused klystrons. These klystrons should be able to push the system beyond NLC requirements
Effects of pocket gopher burrowing on cesium-133 distribution on engineered test plots
Gonzales, G.J.; Saladen, M.T.; Hakonson, T.E.
1995-01-01
Very low levels of radionuclides exist on soil surfaces. Biological factors including vegetation and animal burrowing can influence the fate of these surface contaminants. Animal burrowing introduces variability in radionuclide migration that confounds estimation of nuclide migration pathways, risk assessment, and assessment of waste burial performance. A field study on the surface and subsurface erosional transport of surface-applied 133Cs as affected by pocket gopher (Thomomys bottae) burrowing was conducted on simulated waste landfill caps at the Los Alamos National Laboratory in north central New Mexico. Surface loss of Cs, adhered to five soil particle size ranges, was measured several times over an 18-mo period while simulated rainfalls were in progress. Gophers reduced Cs surface loss by a significant amount, 43%. Cesium surface loss on plots with only gophers totalled 0.8 kg for the study period. This compared with 1.4 kg for control plots, 0.5 kg for vegetated plots, and 0.2 kg for plots with both gophers and vegetation. The change in Cs surface loss over time was also significant. Vegetation-bearing plots had significantly more total subsurface Cs (μ = 1.7 g kg⁻¹) than plots without vegetation (μ = 0.8 g kg⁻¹). An average of 97% of the subsurface Cs in plots with vegetation was located in the upper 15 cm of soil (SDR1 + SDR2) compared with 67% for plots without vegetation. Vegetation moderated the influence of gopher activity on the transport of Cs to the soil subsurface, and stabilized subsurface Cs by concentrating it in the rhizosphere. Gopher activity may have caused Cs transport to depths below that sampled, 30 cm. The results provide distribution coefficients for models of contaminant migration where animal burrowing occurs. 35 refs., 2 figs., 3 tabs
Jacak, Monika; Jacak, Janusz; Jóźwiak, Piotr; Jóźwiak, Ireneusz
2016-06-01
An overview of the current status of quantum cryptography is given in regard to quantum key distribution (QKD) protocols, implemented both on nonentangled and entangled flying qubits. Two commercial R&D platforms of QKD systems are described (the Clavis II platform by idQuantique, implemented on nonentangled photons, and the EPR S405 Quelle platform by AIT, based on entangled photons) and tested for the feasibility of their usage in commercial TELECOM fiber metropolitan networks. A comparison of the systems' efficiency, stability and resistance to noise and hacker attacks is given, with some suggestions for system improvement, along with an assessment of two models of QKD.
Ilić, M.; Schlindwein, G., E-mail: georg.schlindwein@kit.edu; Meyder, R.; Kuhn, T.; Albrecht, O.; Zinn, K.
2016-02-15
Highlights: • Experimental investigations of flow distribution in HCPB TBM are presented. • Flow rates in channels close to the first wall are lower than nominal ones. • Flow distribution in central chambers of manifold 2 is close to the nominal one. • Flow distribution in the whole manifold 3 agrees well with the nominal one. - Abstract: This paper deals with investigations of flow distribution in the coolant system of the Helium-Cooled-Pebble-Bed Test Blanket Module (HCPB TBM) for ITER. The investigations have been performed by manufacturing and testing of an experimental facility named GRICAMAN. The facility involves the upper poloidal half of HCPB TBM bounded at outlets of the first wall channels, at outlet of by-pass pipe and at outlets of cooling channels in breeding units. In this way, the focus is placed on the flow distribution in two mid manifolds of the 4-manifold system: (i) manifold 2 to which outlets of the first wall channels and inlet of by-pass pipe are attached and (ii) manifold 3 which supplies channels in breeding units with helium coolant. These two manifolds are connected with cooling channels in vertical/horizontal grids and caps. The experimental facility has been built keeping the internal structure of manifold 2 and manifold 3 exactly as designed in HCPB TBM. The cooling channels in stiffening grids, caps and breeding units are substituted by so-called equivalent channels which provide the same hydraulic resistance and inlet/outlet conditions, but have significantly simpler geometry than the real channels. Using the conditions of flow similarity, the air pressurized at 0.3 MPa and at ambient temperature has been used as working fluid instead of HCPB TBM helium coolant at 8 MPa and an average temperature of 370 °C. The flow distribution has been determined by flow rate measurements at each of 28 equivalent channels, while the pressure distribution has been obtained measuring differential pressure at more than 250 positions. The
Phase I Project: Fiber Optic Distributed Acoustic Sensing for Periodic Hydraulic Tests
Becker, Matthew
2017-12-31
The extraction of heat from hot rock requires circulation of fluid through fracture networks. Because the geometry and connectivity of these fractures determines the efficiency of fluid circulation, many tools are used to characterize fractures before and after development of the reservoir. Under this project, a new tool was developed that allows hydraulic connectivity between geothermal boreholes to be identified. Nanostrain in rock fractures is measured using fiber optic distributed acoustic sensing (DAS). This strain is measured in one borehole in response to periodic pressure pulses induced in another borehole. The strain in the fractures represents hydraulic connectivity between wells. DAS is typically used at frequencies of Hz to kHz, but strain at mHz frequencies was measured for this project. The tool was demonstrated in the laboratory and in the field. In the laboratory, strain in fiber optic cables was measured in response to compression due to oscillating fluid pressure. DAS recorded strains as small as 10 picometer/m in response to 1 cm of water level change. At a fractured crystalline rock field site, strain was measured in boreholes. Fiber-optic cable was mechanically coupled to the borehole walls using pressurized flexible liners. In one borehole 30 m from the oscillating pumping source, pressure and strain were measured simultaneously. The DAS system measured fracture displacement at frequencies of less than 1 mHz (18 min periods) and amplitudes of less than 1 nm, in response to fluid pressure changes of less than 20 Pa (2 mm of water). The attenuation and phase shift of the monitored strain signal is indicative of the permeability and storage (compliance) of the fracture network that connects the two wells. The strain response as a function of oscillation frequency is characteristic of the hydraulic structure of the formation. This is the first application of DAS to the measurement of low frequency strain in boreholes. It has enormous potential for monitoring
narges javidan
2017-02-01
Introduction: Flood routing is a procedure to calculate flood stage and water depth along a river, or to estimate the flood hydrograph at river downstream or at reservoir outlets, using the upstream hydrograph. In river basins, excess rainfall is routed to the basin outlet using flow routing techniques to generate the flow hydrograph. A GIS-based distributed hydrological model, Wet Spa, suitable for flood prediction and watershed management on a catchment scale, has been under development. The model predicts outflow hydrographs at the basin outlet or at any converging point in the watershed, and it does so in a user-specified time step. The model is physically based, spatially distributed and time-continuous, and simulates hydrological processes of precipitation, snowmelt, interception, depression, surface runoff, infiltration, evapotranspiration, percolation, interflow, groundwater flow, etc. continuously both in time and space, for which the water and energy balance are maintained on each raster cell. Surface runoff is produced using a modified coefficient method based on the cellular characteristics of slope, land use, and soil type, and is allowed to vary with soil moisture, rainfall intensity and storm duration. Interflow is computed based on Darcy's law and the kinematic approximation as a function of the effective hydraulic conductivity and the hydraulic gradient, while groundwater flow is estimated with a linear reservoir method on a small subcatchment scale as a function of groundwater storage and a recession coefficient. Special emphasis is given to overland flow and channel flow routing using the method of linear diffusive wave approximation, which is capable of predicting flow discharge at any converging point downstream by a unit response function. The model accounts for spatially distributed hydrological and geophysical characteristics of the catchment. Determination of the river flow hydrograph is a main target in hydrology
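The linear-reservoir groundwater component mentioned in the abstract (outflow proportional to storage) can be sketched as a simple recursion. The function and parameter names below are illustrative, not WetSpa code:

```python
def linear_reservoir(recharge, k, s0=0.0):
    """Linear-reservoir routing: outflow is proportional to storage,
    Q_t = S_t / k, where k is the recession constant (in time steps).
    recharge: per-step inflow series; s0: initial storage."""
    s, q = s0, []
    for r in recharge:
        s += r           # add this step's recharge to storage
        out = s / k      # outflow proportional to current storage
        s -= out         # deplete storage by the released volume
        q.append(out)
    return q
```

A unit pulse of recharge then decays geometrically, which is the characteristic recession behaviour this method is used to reproduce.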
Testing and evaluating storage technology to build a distributed Tier1 for SuperB in Italy
Pardi, S; Delprete, D; Russo, G; Fella, A; Corvo, M; Bianchi, F; Ciaschini, V; Giacomini, F; Simone, A Di; Donvito, G; Santeramo, B; Gianoli, A; Luppi, E; Manzali, M; Tomassetti, L; Longo, S; Stroili, R; Luitz, S; Perez, A; Rama, M
2012-01-01
The SuperB asymmetric energy e+e− collider and detector to be built at the newly founded Nicola Cabibbo Lab will provide a uniquely sensitive probe of New Physics in the flavor sector of the Standard Model. Studying minute effects in the heavy quark and heavy lepton sectors requires a data sample of 75 ab⁻¹ and a luminosity target of 10³⁶ cm⁻² s⁻¹. This luminosity translates into the requirement of storing more than 50 PByte of additional data each year, making SuperB an interesting challenge to the data management infrastructure, both at the site level and at the wide area network level. A new Tier1, distributed among 3 or 4 sites in the south of Italy, is planned as part of the SuperB computing infrastructure. Data storage is a relevant topic whose development affects the way to configure and set up storage infrastructure both in a local computing cluster and in a distributed paradigm. In this work we report the tests on the software for data distribution and data replica, focusing on the experiences made with Hadoop and GlusterFS.
Jessica Cohen
BACKGROUND: Despite the benefits of malaria diagnosis, most presumed malaria episodes are never tested. A primary reason is the absence of diagnostic tests in retail establishments, where many patients seek care. Malaria rapid diagnostic tests (RDTs) in drug shops hold promise for guiding appropriate treatment. However, retail providers generally lack awareness of RDTs and training to administer them. Further, unsubsidized RDTs may be unaffordable to patients and unattractive to retailers. This paper reports results from an intervention study testing the feasibility of RDT distribution in Ugandan drug shops. METHODS AND FINDINGS: 92 drug shops in 58 villages were offered subsidized RDTs for sale after completing training. Data on RDT purchases, storage, administration and disposal were collected, and samples were sent for quality testing. Household surveys were conducted to capture treatment outcomes. Estimated daily RDT sales varied substantially across shops, from zero to 8.46 RDTs per day. Overall compliance with storage, treatment and disposal guidelines was excellent. All RDTs (100%) collected from shops passed quality testing. The median price charged for RDTs was 1000 USH ($0.40), corresponding to a 100% markup, and the same price as blood slides in local health clinics. RDTs affected treatment decisions. RDT-positive patients were 23 percentage points more likely to buy Artemisinin Combination Therapies (ACTs) (p = .005) and 33.1 percentage points more likely to buy other antimalarials (p < .001) than RDT-negative patients, and were 5.6 percentage points more likely to buy ACTs (p = .05) and 31.4 percentage points more likely to buy other antimalarials (p < .001) than those not tested at all. CONCLUSIONS: Despite some heterogeneity, shops demonstrated a desire to stock RDTs and use them to guide treatment recommendations. Most shops stored, administered and disposed of RDTs properly and charged mark-ups similar to those charged on common
Using modern human cortical bone distribution to test the systemic robusticity hypothesis.
Baab, Karen L; Copes, Lynn E; Ward, Devin L; Wells, Nora; Grine, Frederick E
2018-06-01
The systemic robusticity hypothesis links the thickness of cortical bone in both the cranium and limb bones. This hypothesis posits that thick cortical bone is in part a systemic response to circulating hormones, such as growth hormone and thyroid hormone, possibly related to physical activity or cold climates. Although this hypothesis has gained popular traction, only rarely has robusticity of the cranium and postcranial skeleton been considered jointly. We acquired computed tomographic scans from associated crania, femora and humeri from single individuals representing 11 populations in Africa and North America (n = 228). Cortical thickness in the parietal, frontal and occipital bones and cortical bone area in limb bone diaphyses were analyzed using correlation, multiple regression and general linear models to test the hypothesis. Absolute thickness values from the crania were not correlated with cortical bone area of the femur or humerus, which is at odds with the systemic robusticity hypothesis. However, measures of cortical bone scaled by total vault thickness and limb cross-sectional area were positively correlated between the cranium and postcranium. When accounting for a range of potential confounding variables, including sex, age and body mass, variation in relative postcranial cortical bone area explained ∼20% of variation in the proportion of cortical cranial bone thickness. While these findings provide limited support for the systemic robusticity hypothesis, cranial cortical thickness did not track climate or physical activity across populations. Thus, some of the variation in cranial cortical bone thickness in modern humans is attributable to systemic effects, but the driving force behind this effect remains obscure. Moreover, neither absolute nor proportional measures of cranial cortical bone thickness are positively correlated with total cranial bone thickness, complicating the extrapolation of these findings to extinct species where only cranial
Yoon, Dukyong; Schuemie, Martijn J; Kim, Ju Han; Kim, Dong Ki; Park, Man Young; Ahn, Eun Kyoung; Jung, Eun-Young; Park, Dong Kyun; Cho, Soo Yeon; Shin, Dahye; Hwang, Yeonsoo; Park, Rae Woong
2016-03-01
Distributed research networks (DRNs) afford statistical power by integrating observational data from multiple partners for retrospective studies. However, laboratory test results across care sites are derived using different assays from varying patient populations, making it difficult to simply combine data for analysis. Additionally, existing normalization methods are not suitable for retrospective studies. We normalized laboratory results from different data sources by adjusting for heterogeneous clinico-epidemiologic characteristics of the data and called this the subgroup-adjusted normalization (SAN) method. Subgroup-adjusted normalization renders the means and standard deviations of distributions identical under population structure-adjusted conditions. To evaluate its performance, we compared SAN with existing methods for simulated and real datasets consisting of blood urea nitrogen, serum creatinine, hematocrit, hemoglobin, serum potassium, and total bilirubin. Various clinico-epidemiologic characteristics can be applied together in SAN. For simplicity of comparison, age and gender were used to adjust population heterogeneity in this study. In simulations, SAN had the lowest standardized difference in means (SDM) and Kolmogorov-Smirnov values for all tests; SAN performed better than normalization using other methods. The SAN method is applicable in a DRN environment and should facilitate analysis of data integrated across DRN partners for retrospective observational studies. Copyright © 2015 John Wiley & Sons, Ltd.
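The abstract does not spell out the SAN algorithm in full; the following is a hypothetical minimal sketch of the underlying idea only: standardizing laboratory values within clinico-epidemiologic subgroups (here, generic group labels standing in for age and gender strata) so that every subgroup shares a common mean and standard deviation. The function name and interface are made up for illustration:

```python
import numpy as np

def subgroup_normalize(values, groups, target_mean=0.0, target_sd=1.0):
    """Standardize lab values within each subgroup so that all subgroups
    share the same mean and SD -- a simplified stand-in for the
    subgroup-adjusted normalization (SAN) idea, not the published method."""
    values = np.asarray(values, float)
    out = np.empty_like(values)
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        v = values[idx]
        sd = v.std() if v.std() > 0 else 1.0  # guard against constant groups
        out[idx] = (v - v.mean()) / sd * target_sd + target_mean
    return out
```

After this transform, distributional comparisons across sites are no longer dominated by differences in population structure, which is the property the paper evaluates with SDM and Kolmogorov-Smirnov statistics.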
Marshall, Leon; Carvalheiro, Luísa G; Aguirre-Gutiérrez, Jesús; Bos, Merijn; de Groot, G Arjen; Kleijn, David; Potts, Simon G; Reemer, Menno; Roberts, Stuart; Scheper, Jeroen; Biesmeijer, Jacobus C
2015-10-01
Species distribution models (SDM) are increasingly used to understand the factors that regulate variation in biodiversity patterns and to help plan conservation strategies. However, these models are rarely validated with independently collected data and it is unclear whether SDM performance is maintained across distinct habitats and for species with different functional traits. Highly mobile species, such as bees, can be particularly challenging to model. Here, we use independent sets of occurrence data collected systematically in several agricultural habitats to test how the predictive performance of SDMs for wild bee species depends on species traits, habitat type, and sampling technique. We used a species distribution modeling approach parametrized for the Netherlands, with presence records from 1990 to 2010 for 193 Dutch wild bees. For each species, we built a Maxent model based on 13 climate and landscape variables. We tested the predictive performance of the SDMs with independent datasets collected from orchards and arable fields across the Netherlands from 2010 to 2013, using transect surveys or pan traps. Model predictive performance depended on species traits and habitat type. Occurrence of bee species specialized in habitat and diet was better predicted than generalist bees. Predictions of habitat suitability were also more precise for habitats that are temporally more stable (orchards) than for habitats that suffer regular alterations (arable), particularly for small, solitary bees. As a conservation tool, SDMs are best suited to modeling rarer, specialist species than more generalist and will work best in long-term stable habitats. The variability of complex, short-term habitats is difficult to capture in such models and historical land use generally has low thematic resolution. To improve SDMs' usefulness, models require explanatory variables and collection data that include detailed landscape characteristics, for example, variability of crops and
Jogenfors, Jonathan; Elhassan, Ashraf Mohamed; Ahrens, Johan; Bourennane, Mohamed; Larsson, Jan-Åke
2015-12-01
Photonic systems based on energy-time entanglement have been proposed to test local realism using the Bell inequality. A violation of this inequality normally also certifies security of device-independent quantum key distribution (QKD) so that an attacker cannot eavesdrop or control the system. We show how this security test can be circumvented in energy-time entangled systems when using standard avalanche photodetectors, allowing an attacker to compromise the system without leaving a trace. We reach Bell values up to 3.63 at 97.6% faked detector efficiency using tailored pulses of classical light, which exceeds even the quantum prediction. This is the first demonstration of a violation-faking source that gives both tunable violation and high faked detector efficiency. The implications are severe: the standard Clauser-Horne-Shimony-Holt inequality cannot be used to show device-independent security for energy-time entanglement setups based on Franson's configuration. However, device-independent security can be reestablished, and we conclude by listing a number of improved tests and experimental setups that would protect against all current and future attacks of this type.
Domene, Xavier; Ramirez, Wilson; Mattana, Stefania; Alcaniz, Josep Maria; Andres, Pilar
2008-01-01
Safe amendment rates (the predicted no-effect concentration or PNEC) of seven organic wastes were estimated from the species sensitivity distribution of a battery of soil biota tests and compared with different realistic amendment scenarios (different predicted environmental concentrations or PEC). None of the wastes was expected to exert noxious effects on soil biota if applied according either to the usual maximum amendment rates in Europe or phosphorus demands of crops (below 2 tonnes DM ha⁻¹). However, some of the wastes might be problematic if applied according to nitrogen demands of crops (above 2 tonnes DM ha⁻¹). Ammonium content and organic matter stability of the studied wastes are the most influential determinants of the maximum amendment rates derived in this study, but not pollutant burden. This finding indicates the need to stabilize wastes prior to their reuse in soils in order to avoid short-term impacts on soil communities. - Ecological risk assessment of organic waste amendments
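A species sensitivity distribution is commonly handled by fitting a log-normal distribution to the toxicity endpoints of the test battery and taking a low percentile, often the 5th (HC5), as the basis for a PNEC. A minimal sketch under that common convention follows; assessment factors and the paper's specific test battery are omitted, and the endpoint values in the test are invented for illustration:

```python
import math
import statistics
from statistics import NormalDist

def hc5(endpoints):
    """HC5: the 5th percentile of a log-normal species sensitivity
    distribution fitted to toxicity endpoints (e.g. EC50 values).
    A conventional basis for a PNEC; assessment factors not applied."""
    logs = [math.log10(x) for x in endpoints]
    mu = statistics.fmean(logs)      # mean of log10 endpoints
    sigma = statistics.stdev(logs)   # sample SD of log10 endpoints
    return 10 ** NormalDist(mu, sigma).inv_cdf(0.05)
```

Because the fit is on log10-transformed endpoints, the HC5 is returned on the original concentration scale.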
none,
2002-05-01
More than 50 experts from energy and information technology industries, Federal and State government agencies, universities, and National Laboratories participated in the “Communication and Control Systems for Distributed Energy Implementation and Testing Workshop” in Reston, Virginia, on May 14-15, 2002. This was a unique workshop in that, for the first time, representatives from the information technology sector and those from energy-related industries, Federal and State government agencies, universities, and National Laboratories, gathered to discuss these issues and develop a set of action-oriented implementation strategies. A planning committee of industry, consultant, and government representatives laid the groundwork for the workshop by identifying key participants and developing an appropriate agenda. This document reflects the ideas and priorities discussed by workshop participants.
ATLAS, Collaboration
2013-01-01
Expected distributions of the test statistics q=log(L(0^+)/L(2^+)) for the spin-0 and spin-2 (produced by gluon fusion) hypotheses. The observed value is indicated by a vertical line. The coloured areas correspond to the integrals of the expected distributions used to compute the p-values for the rejection of each hypothesis.
Marshall, A.C.; Brown, J.R.
1975-01-01
A description is given of a test to measure the axial flux distribution at several radial locations in the Fort St. Vrain core representing unrodded, rodded, and partially rodded regions. The measurements were intended to verify the calculational accuracy of the three-dimensional calculational model used to compute axial power distributions for the Fort St. Vrain core. (U.S.)
Takeshi Sato
BACKGROUND AND AIMS: In mammalian spermatogenesis, glial cell line-derived neurotrophic factor (GDNF) is one of the major Sertoli cell-derived factors which regulates the maintenance of undifferentiated spermatogonia, including spermatogonial stem cells (SSCs), through GDNF family receptor α1 (GFRα1). It remains unclear as to when, where and how GDNF molecules are produced and exposed to the GFRα1-positive spermatogonia in vivo. METHODOLOGY AND PRINCIPAL FINDINGS: Here we show the cyclical and patch-like distribution of immunoreactive GDNF-positive signals and their close co-localization with a subpopulation of GFRα1-positive spermatogonia along the basal surface of Sertoli cells in mice and hamsters. Anti-GDNF section immunostaining revealed that GDNF-positive signals are mainly cytoplasmic and observed specifically in the Sertoli cells in a species-specific as well as a seminiferous cycle- and spermatogenic activity-dependent manner. In contrast to the ubiquitous GDNF signals in mouse testes, high levels of its signals were cyclically observed in hamster testes prior to spermiation. Whole-mount anti-GDNF staining of the seminiferous tubules successfully visualized the cyclical and patch-like extracellular distribution of GDNF-positive granular deposits along the basal surface of Sertoli cells in both species. Double-staining of GDNF and GFRα1 demonstrated the close co-localization of GDNF deposits and a subpopulation of GFRα1-positive spermatogonia. In both species, GFRα1-positive cells showed a slender bipolar shape as well as a tendency for increased cell numbers in the GDNF-enriched area, as compared with those in the GDNF-low/negative area of the seminiferous tubules. CONCLUSION/SIGNIFICANCE: Our data provide direct evidence of a regionally defined patch-like GDNF-positive signal site in which GFRα1-positive spermatogonia possibly interact with GDNF in the basal compartment of the seminiferous tubules.
Papa Mze Nasserdine
2016-01-01
In the Union of Comoros, interventions for combating malaria have contributed to a spectacular decrease in the prevalence of the disease. We studied the current distribution of Plasmodium species on the island of Grande Comore using nested PCR. The rapid diagnostic tests (RDTs) currently used in the Comoros are able to identify Plasmodium falciparum but no other Plasmodium species. In this study, we tested 211 RDTs (158 positive and 53 negative). Among the 158 positive RDTs, 22 were positive for HRP2, 3 were positive only for pLDH, and 133 were positive for HRP2 and pLDH. DNA was extracted from a proximal part of the nitrocellulose membrane of RDTs. A total of 159 samples were positive by nested PCR. Of those, 156 (98.11%) were positive for P. falciparum, 2 (1.25%) were positive for P. vivax, and 1 (0.62%) was positive for P. malariae. None of the samples were positive for P. ovale. Our results show that P. falciparum is still the most dominant species on the island of Grande Comore, but P. vivax and P. malariae are present at a low prevalence.
Mei-Yu LEE
2014-11-01
This paper investigates the effect of nonzero autocorrelation coefficients on the sampling distributions of the Durbin-Watson test estimator in three time-series models that make different variance-covariance matrix assumptions. We show that the expected values and variances of the Durbin-Watson test estimator differ only slightly across the three models, but that the skewness and kurtosis coefficients differ considerably. The shapes of the four coefficients are similar between the Durbin-Watson model and our benchmark model, but not for the autoregressive model cut by a one-lagged period. Second, in the large-sample case the three models have the same expected values; however, the autoregressive model cut by a one-lagged period exhibits different shapes of the variance, skewness and kurtosis coefficients from the other two models. This implies that large samples lead to the same expected value, 2(1 − ρ0), whatever variance-covariance matrix of the errors is assumed. Finally, compared with the two sample cases, the shape of each coefficient is almost the same; moreover, the autocorrelation coefficients are negatively related to the expected values, inverted-U related to the variances, cubically related to the skewness coefficients, and U-related to the kurtosis coefficients.
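The statistic at the heart of this paper is simple to compute, so its sampling distribution can be explored directly by simulation. The sketch below is a minimal pure-Python illustration assuming a plain AR(1) error process, not the paper's three specific variance-covariance structures; it shows why the large-sample expected value tends toward 2(1 − ρ0):

```python
import random
import statistics

def durbin_watson(residuals):
    """DW statistic: sum of squared successive differences over the sum of squares."""
    num = sum((residuals[t] - residuals[t - 1]) ** 2 for t in range(1, len(residuals)))
    den = sum(e * e for e in residuals)
    return num / den

def simulate_dw(rho, n=50, reps=2000, seed=1):
    """Sample the DW statistic for AR(1) errors e_t = rho * e_{t-1} + u_t."""
    rng = random.Random(seed)
    draws = []
    for _ in range(reps):
        e, errors = 0.0, []
        for _ in range(n):
            e = rho * e + rng.gauss(0.0, 1.0)
            errors.append(e)
        draws.append(durbin_watson(errors))
    return draws

# With no autocorrelation the mean is near 2; with rho = 0.8 it drops
# toward 2 * (1 - rho) = 0.4, matching the large-sample limit in the text.
print(round(statistics.mean(simulate_dw(rho=0.0)), 2))
print(round(statistics.mean(simulate_dw(rho=0.8)), 2))
```

Note that for residuals from an actual regression the distribution also depends on the design matrix, which is one reason the paper's three models differ.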
Sang-Yun Yun
2014-03-01
Recently, distribution management systems (DMS) that can conduct periodic system analysis and control by mounting various application programs have been actively developed. In this paper, we summarize the development and demonstration of a database structure that can perform real-time system analysis and control of the Korean smart distribution management system (KSDMS). The developed database structure consists of a common information model (CIM)-based off-line database (DB), a physical DB (PDB) for DB establishment of the operating server, a real-time DB (RTDB) for real-time server operation and remote terminal unit data interconnection, and an application common model (ACM) DB for running application programs. The ACM DB for real-time system analysis and control of the application programs was developed by using a parallel table structure and a linked-list model, thereby providing fast input and output as well as high execution speed of application programs. Furthermore, the ACM DB was configured with hierarchical and non-hierarchical data models to reflect the system models, improving DB size and operation speed through the reduction of system elements unnecessary for analysis and control. The proposed database model was implemented and tested at the Gochang and Jeju offices using a real system. Through data measurement of the remote terminal units, and through the operation and control of the application programs using these measurements, the performance, speed, and integrity of the proposed database model were validated, thereby demonstrating that this model can be applied to real systems.
Nash, Charles A. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Hamm, L. Larry [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Smith, Frank G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); McCabe, Daniel J. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2014-12-19
The primary treatment of the tank waste at the DOE Hanford site will be done in the Waste Treatment and Immobilization Plant (WTP) that is currently under construction. The baseline plan for this facility is to treat the waste, splitting it into High Level Waste (HLW) and Low Activity Waste (LAW). Both waste streams are then separately vitrified as glass and poured into canisters for disposition. The LAW glass will be disposed onsite in the Integrated Disposal Facility (IDF). There are currently no plans to treat the waste to remove technetium, so its disposition path is the LAW glass. Due to the water solubility properties of pertechnetate and long half-life of ^{99}Tc, effective management of ^{99}Tc is important to the overall success of the Hanford River Protection Project mission. To achieve the full target WTP throughput, additional LAW immobilization capacity is needed, and options are being explored to immobilize the supplemental LAW portion of the tank waste. Removal of ^{99}Tc, followed by off-site disposal, would eliminate a key risk contributor for the IDF Performance Assessment (PA) for supplemental waste forms, and has potential to reduce treatment and disposal costs. Washington River Protection Solutions (WRPS) is developing some conceptual flow sheets for supplemental LAW treatment and disposal that could benefit from technetium removal. One of these flowsheets will specifically examine removing ^{99}Tc from the LAW feed stream to supplemental immobilization. To enable an informed decision regarding the viability of technetium removal, further maturation of available technologies is being performed. This report contains results of experimental ion exchange distribution coefficient testing and computer modeling using the resin SuperLig^{®} 639^{a} to selectively remove perrhenate from high ionic strength simulated LAW. It is advantageous to operate at higher concentration in order to treat the waste
Valhondo, Cristina; Martinez-Landa, Lurdes; Carrera, Jesús; Hidalgo, Juan J.; Ayora, Carlos
2017-04-01
Artificial recharge of aquifers (AR) is a standard technique to replenish and enhance groundwater resources that has been widely used due to the increasing demand for quality water. AR through infiltration basins consists of infiltrating surface water, which may be affected to a greater or lesser degree by treatment-plant effluents, runoff and other undesirable water sources, into an aquifer. Water quality improves during passage through the soil: organic matter, nutrients, organic contaminants, and bacteria are reduced, mainly through biodegradation and adsorption. Therefore, one of the goals of AR is to ensure a good quality status of the aquifer even if lower-quality water is used for recharge. Understanding the behavior and transport of potential contaminants is essential for appropriate management of the artificial recharge system. Knowledge of the flux distribution around the recharge system and of the relationship between the recharge system and the aquifer (area affected by the recharge, mixing ratios of recharged and native groundwater, travel times) is essential to achieve this goal. Evaluating the flux distribution is not always simple because of the complexity and heterogeneity of natural systems. Indeed, it is governed not so much by the hydraulic conductivity of the different geological units as by their continuity and inter-connectivity, particularly in the vertical direction. In summary, appropriate management of an artificial recharge system requires acknowledging the heterogeneity of the medium. Aiming to characterize the residence time distributions (RTDs) of a pilot artificial recharge system and the extent to which heterogeneity affects RTDs, we performed and evaluated a pulse-injection tracer test. The artificial recharge system was simulated as a multilayer model, which was used to evaluate the measured breakthrough curves at six monitoring points. Flow and transport parameters were calibrated under two hypotheses. The first
Information-theoretic metamodel of organizational evolution
Sepulveda, Alfredo
2011-12-01
Social organizations are abstractly modeled by holarchies---self-similar connected networks---and intelligent complex adaptive multiagent systems---large networks of autonomous reasoning agents interacting via scaled processes. However, little is known of how information shapes evolution in such organizations, a gap that can lead to misleading analytics. The research problem addressed in this study was the ineffective manner in which classical model-predict-control methods used in business analytics attempt to define organization evolution. The purpose of the study was to construct an effective metamodel for organization evolution based on a proposed complex adaptive structure---the info-holarchy. Theoretical foundations of this study were holarchies, complex adaptive systems, evolutionary theory, and quantum mechanics, among other recently developed physical and information theories. Research questions addressed how information evolution patterns gleaned from the study's inductive metamodel more aptly explained volatility in organizations. In this study, a hybrid grounded theory based on abstract inductive extensions of information theories was utilized as the research methodology. An overarching heuristic metamodel was framed from the theoretical analysis of the properties of these extension theories and applied to business, neural, and computational entities. This metamodel resulted in the synthesis of a metaphor for, and generalization of, organization evolution, serving as the recommended and appropriate analytical tool to view business dynamics for future applications. This study may manifest positive social change through a fundamental understanding of complexity in business from general information theories, resulting in more effective management.
Information-theoretic equilibrium and observable thermalization
Anzà, F.; Vedral, V.
2017-03-01
A crucial point in statistical mechanics is the definition of the notion of thermal equilibrium, which can be given as the state that maximises the von Neumann entropy under the validity of some constraints. Arguing that such a notion can never be experimentally probed, in this paper we propose a new notion of thermal equilibrium focused on observables rather than on the full state of the quantum system. We characterise such a notion of thermal equilibrium for an arbitrary observable via the maximisation of its Shannon entropy, and we bring to light the thermal properties that it heralds. The relation with Gibbs ensembles is studied and understood. We apply such a notion of equilibrium to a closed quantum system and show that there is always a class of observables which exhibits thermal equilibrium properties, and we give a recipe to explicitly construct them. Finally, an intimate connection with the Eigenstate Thermalisation Hypothesis is brought to light.
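The quantity being maximised here is elementary to compute once the outcome probabilities p_i of the observable are known (in the paper these are p_i = Tr(ρ Π_i) for the state ρ and the observable's spectral projectors Π_i; the numbers below are purely illustrative):

```python
import math

def shannon_entropy(probs):
    """H(p) = -sum_i p_i ln p_i, skipping zero-probability outcomes."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

# Entropy of a non-uniform outcome distribution of some observable.
p = [0.5, 0.25, 0.25]
print(math.isclose(shannon_entropy(p), 1.5 * math.log(2)))  # True

# H is maximal, ln(d), exactly on the uniform distribution: the constrained
# maximisation of this quantity is what defines the equilibrium notion above.
d = 3
print(math.isclose(shannon_entropy([1.0 / d] * d), math.log(d)))  # True
```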
Information-theoretic equilibrium and observable thermalization
Anza, Fabio; Vedral, Vlatko
2015-01-01
To understand under which conditions thermodynamics emerges from the microscopic dynamics is the ultimate goal of statistical mechanics. Despite the fact that the theory is more than 100 years old, we are still discussing its foundations and its regime of applicability. A point of crucial importance is the definition of the notion of thermal equilibrium, which is given as the state that maximises the von Neumann entropy. Here we argue that it is necessary to propose a new way of describing th...
Information theoretic resources in quantum theory
Meznaric, Sebastian
Resource identification and quantification is an essential element of both classical and quantum information theory. Entanglement is one of these resources, arising when quantum communication and nonlocal operations are expensive to perform. In the first part of this thesis we quantify the effective entanglement when operations are additionally restricted to account for both fundamental restrictions on operations, such as those arising from superselection rules, as well as experimental errors arising from the imperfections in the apparatus. For an important class of errors we find a linear relationship between the usual and effective higher-dimensional generalizations of concurrence, a measure of entanglement. Following the treatment of effective entanglement, we focus on a related concept of nonlocality in the presence of superselection rules (SSR). Here we propose a scheme that may be used to activate nongenuinely multipartite nonlocality, in that a single copy of a state is not multipartite nonlocal, while two or more copies exhibit nongenuinely multipartite nonlocality. The states used exhibit the more powerful genuinely multipartite nonlocality when SSR are not enforced, but not when they are, raising the question of what is needed for genuinely multipartite nonlocality. We show that whenever the number of particles is insufficient, the degrading of genuinely multipartite to nongenuinely multipartite nonlocality is necessary. While in the first few chapters we focus our attention on understanding the resources present in quantum states, in the final part we turn the picture around and instead treat operations themselves as a resource. We provide our observers with free access to classical operations, i.e. those that cannot detect or generate quantum coherence. We show that the operation of interest can then be used to either generate or detect quantum coherence if and only if it violates a particular commutation relation.
Using the relative entropy, the commutation relation provides us with a measure of nonclassicality of operations. We show that the measure is a sum of two contributions, the generating power and the distinguishing power, each of which is separately an essential ingredient in quantum communication and information processing. The measure also sheds light on the operational meaning of quantum discord - we show it can be interpreted as the difference in superdense coding capacity between a quantum state and a classical state.
Vector-Quantization using Information Theoretic Concepts
Lehn-Schiøler, Tue; Hegde, Anant; Erdogmus, Deniz
2005-01-01
The process of representing a large data set with a smaller number of vectors in the best possible way, also known as vector quantization, has been intensively studied in recent years. Very efficient algorithms like the Kohonen Self Organizing Map (SOM) and the Linde Buzo Gray (LBG) algorithm have been devised. In this paper a physical approach to the problem is taken, and it is shown that by considering the processing elements as points moving in a potential field an algorithm equally efficient as the before-mentioned can be derived. Unlike SOM and LBG, this algorithm has a clear physical interpretation and relies on minimization of a well-defined cost-function. It is also shown how the potential field approach can be linked to information theory by use of the Parzen density estimator. In the light of information theory it becomes clear that minimizing the free energy of the system is in fact …
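The information-theoretic link mentioned above runs through the Parzen density estimator, which turns a point set into a smooth density whose entropy (and hence free energy) can then be evaluated. A minimal one-dimensional sketch with a Gaussian kernel and a hypothetical bandwidth:

```python
import math

def parzen_density(x, samples, sigma=0.5):
    """Gaussian-kernel Parzen estimate of the density at point x."""
    norm = 1.0 / (len(samples) * sigma * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-((x - s) ** 2) / (2.0 * sigma ** 2)) for s in samples)

# Six "processing elements" in two clusters: the estimate is high near the
# data, low in the gap between clusters, and integrates to ~1 overall.
data = [-2.0, -1.8, -2.2, 2.0, 1.9, 2.1]
print(parzen_density(0.0, data) < parzen_density(2.0, data))  # True
```

In the paper's picture, the codebook vectors move in the potential field induced by such a density so as to minimize an information-theoretic cost.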
Bendixen, Carsten
2014-01-01
A brief, introductory, perspective-setting, and concept-clarifying presentation of the concept of testing in the educational field.
Giffin, Paxton K.; Parsons, Michael S.; Unz, Ronald J.; Waggoner, Charles A.
2012-05-01
The Institute for Clean Energy Technology (ICET) at Mississippi State University has developed a test stand capable of lifecycle testing of high efficiency particulate air filters and other filters specified in the American Society of Mechanical Engineers Code on Nuclear Air and Gas Treatment (AG-1). The test stand is currently equipped to test AG-1 Section FK radial flow filters, and expansion is underway to increase testing capabilities for other types of AG-1 filters. The test stand is capable of producing differential pressures of 12.45 kPa (50 in. w.c.) at volumetric air flow rates up to 113.3 m3/min (4000 CFM). Testing is performed at elevated and ambient conditions of temperature and relative humidity. Current testing utilizes three challenge aerosols: carbon black, alumina, and Arizona road dust (A1-Ultrafine). Each aerosol has a different mass median diameter to test loading over a wide range of particle sizes. The test stand is designed to monitor and maintain relative humidity and temperature to required specifications. Instrumentation is implemented on the upstream and downstream sections of the test stand as well as on the filter housing itself. Representative data are presented herein illustrating the test stand's capabilities. Digital images of the filter pack collected during and after testing are displayed after the representative data are discussed. In conclusion, the ICET test stand with AG-1 filter testing capabilities has been developed, and hurdles such as test-parameter stability and design flexibility have been overcome.
Rhee, Bo W.; Kim, Hyoung T. [Severe Accident and PHWR Safety Research Division, Daejeon (Korea, Republic of); Kim, Tongbeum [University of the Witwatersrand, Johannesburg (South Africa); Im, Sunghyuk [KAIST, Daejeon (Korea, Republic of)
2015-10-15
threshold temperature and no further deformation is expected. Consequently, a sufficient condition to ensure fuel channel integrity following a large LOCA is the avoidance of sustained calandria tube dryout. If the moderator available subcooling at the onset of a large LOCA is greater than the subcooling requirements, sustained calandria tube dryout is avoided. The temperature oscillations observed in-reactor and in test measurements such as MTF need to be characterized and quantified to show that they do not jeopardize the currently available safety margins. Because of the importance of an accurate prediction of moderator temperature distributions and the related moderator subcooling, a 1/4 scaled-down moderator tank of a CANDU-6 reactor, called the Moderator Circulation Test (MCT), was erected at KAERI. The current status of the MCT experiment's progress is described, and further experiments are expected to be carried out to generate the experimental data necessary to validate the computer codes that will be used in the accident analysis of operating CANDU-6 plants.
Harrington, Douglas E.; Burley, Richard R.; Corban, Robert R.
1986-01-01
Wall Mach number distributions were determined over a range of test-section free-stream Mach numbers from 0.2 to 0.92. The test section was slotted and had a nominal porosity of 11 percent. Reentry flaps located at the test-section exit were varied from 0 (fully closed) to 9 (fully open) degrees. Flow was bled through the test-section slots by means of a plenum evacuation system (PES) and varied from 0 to 3 percent of tunnel flow. Variations in reentry flap angle or PES flow rate had little or no effect on the Mach number distributions in the first 70 percent of the test section. However, in the aft region of the test section, flap angle and PES flow rate had a major impact on the Mach number distributions. Optimum PES flow rates were nominally 2 to 2.5 percent with the flaps fully closed and less than 1 percent when the flaps were fully open. The standard deviation of the test-section wall Mach numbers at the optimum PES flow rates was 0.003 or less.
Thirumurthy, Harsha; Masters, Samuel H; Mavedzenge, Sue Napierala; Maman, Suzanne; Omanga, Eunice; Agot, Kawango
2016-06-01
Increased uptake of HIV testing by men in sub-Saharan Africa is essential for the success of combination prevention. Self-testing is an emerging approach with high acceptability, but little evidence exists on the best strategies for test distribution. We assessed an approach of providing multiple self-tests to women at high risk of HIV acquisition to promote partner HIV testing and to facilitate safer sexual decision making. In this cohort study, HIV-negative women aged 18-39 years were recruited at two sites in Kisumu, Kenya: a health facility with antenatal and post-partum clinics and a drop-in centre for female sex workers. Participants gave informed consent and were instructed on use of oral fluid based rapid HIV tests. Participants enrolled at the health facility received three self-tests and those at the drop-in centre received five self-tests. Structured interviews were conducted with participants at enrolment and over 3 months to determine how self-tests were used. Outcomes included the number of self-tests distributed by participants, the proportion of participants whose sexual partners used a self-test, couples testing, and sexual behaviour after self-testing. Between Jan 14, 2015, and March 13, 2015, 280 participants were enrolled (61 in antenatal care, 117 in post-partum care, and 102 female sex workers); follow-up interviews were completed for 265 (96%). Most participants with primary sexual partners distributed self-tests to partners: 53 (91%) of 58 participants in antenatal care, 91 (86%) of 106 in post-partum care, and 64 (75%) of 85 female sex workers. 82 (81%) of 101 female sex workers distributed more than one self-test to commercial sex clients. Among self-tests distributed to and used by primary sexual partners of participants, couples testing occurred in 27 (51%) of 53 in antenatal care, 62 (68%) of 91 from post-partum care, and 53 (83%) of 64 female sex workers. Among tests received by primary and non-primary sexual partners, two (4%) of 53
Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F
2016-01-01
In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
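Of the transformations studied, the rank-based inverse normal transformation is the one designed to restore normality regardless of the trait's original shape. The sketch below assumes the common Blom-type offset c = 3/8 (the workshop analysis may have used a different constant) and ignores ties, which a continuous trait will not produce:

```python
import random
import statistics

def rank_based_inverse_normal(values, c=3.0 / 8.0):
    """Blom-type transform: value -> Phi^{-1}((rank - c) / (n - 2c + 1))."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    ranks = [0] * n
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    nd = statistics.NormalDist()
    return [nd.inv_cdf((r - c) / (n - 2 * c + 1)) for r in ranks]

# A strongly right-skewed, gamma-like trait becomes approximately standard normal.
rng = random.Random(0)
trait = [rng.gammavariate(1.0, 2.0) for _ in range(500)]
z = rank_based_inverse_normal(trait)
print(round(abs(statistics.mean(z)), 2), round(statistics.pstdev(z), 2))
```

Because the transform depends only on ranks, the result is identical for the raw gamma trait and its log10 version, which is consistent with comparing transformations on the same simulated data.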
Derkach, Ivan D.; Peuntinger, Christian; Ruppert, László; Heim, Bettina; Gunthner, Kevin; Usenko, Vladyslav C.; Elser, Dominique; Marquardt, Christoph; Filip, Radim; Leuchs, Gerd
2016-10-01
Continuous-variable quantum key distribution is a practical application of quantum information theory that is aimed at the generation of a secret cryptographic key between two remote trusted parties and that uses multi-photon quantum states as carriers of key bits. The remote parties share the secret key via a quantum channel that is presumably under the control of an eavesdropper, and whose properties must be taken into account in the security analysis. Well-studied fiber-optical quantum channels commonly possess stable transmittance and low noise levels, while free-space channels represent a simpler, less demanding and more flexible alternative, but suffer from atmospheric effects such as turbulence that in particular cause a non-uniform transmittance distribution referred to as fading. Nonetheless, free-space channels, providing an unobstructed line-of-sight, are more apt for short, mid-range and potentially long-range (using satellites) communication and will play an important role in the future development and implementation of QKD networks. It was previously shown theoretically that coherent-state CV QKD should in principle be possible to implement over a free-space fading channel, but strong transmittance fluctuations result in significant modulation-dependent channel excess noise. In this regime the post-selection of highly transmitting sub-channels may be needed, which can even restore the security of the protocol in strongly turbulent channels. We now report the first proof-of-principle experimental test of a coherent-state CV QKD protocol using different levels of Gaussian modulation over a mid-range (1.6-kilometer long) free-space atmospheric quantum channel. The transmittance of the link was characterized using intensity measurements for reference, but channel estimation using the modulated coherent states was also studied. We consider security against Gaussian collective attacks, which were shown to be optimal against CV QKD protocols. We assumed a
Olivero, J.
2016-03-01
Statistical downscaling is used to improve the knowledge of spatial distributions from broad–scale to fine–scale maps with higher potential for conservation planning. We assessed the effectiveness of downscaling in two commonly used species distribution models: Maximum Entropy (MaxEnt) and the Favourability Function (FF). We used atlas data (10 x 10 km) of the fire salamander Salamandra salamandra distribution in southern Spain to derive models at a 1 x 1 km resolution. Downscaled models were assessed using an independent dataset of the species’ distribution at 1 x 1 km. The Favourability model showed better downscaling performance than the MaxEnt model, and the models that were based on linear combinations of environmental variables performed better than models allowing higher flexibility. The Favourability model minimized model overfitting compared to the MaxEnt model.
Olivero, J.; Toxopeus, A.G.; Skidmore, A.K.; Real, R.
2016-07-01
Statistical downscaling is used to improve the knowledge of spatial distributions from broad–scale to fine–scale maps with higher potential for conservation planning. We assessed the effectiveness of downscaling in two commonly used species distribution models: Maximum Entropy (MaxEnt) and the Favourability Function (FF). We used atlas data (10 x 10 km) of the fire salamander Salamandra salamandra distribution in southern Spain to derive models at a 1 x 1 km resolution. Downscaled models were assessed using an independent dataset of the species’ distribution at 1 x 1 km. The Favourability model showed better downscaling performance than the MaxEnt model, and the models that were based on linear combinations of environmental variables performed better than models allowing higher flexibility. The Favourability model minimized model overfitting compared to the MaxEnt model. (Author)
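For reference, the Favourability Function is usually written F = [P/(1 − P)] / [n1/n0 + P/(1 − P)], where P is the modelled probability and n1 and n0 are the numbers of presence and absence cells; this removes the effect of species prevalence from the output. A small sketch of that formulation (stated here as an assumption; the paper's exact variant may differ):

```python
def favourability(p, n1, n0):
    """Favourability from modelled probability p, presences n1 and absences n0."""
    odds = p / (1.0 - p)
    return odds / (n1 / n0 + odds)

# When p equals the prevalence n1 / (n1 + n0), F = 0.5: the locality is
# neither favoured nor disfavoured relative to chance.
n1, n0 = 100, 300
print(favourability(n1 / (n1 + n0), n1, n0))  # 0.5
```

This prevalence-independence is what makes favourability values from a 10 x 10 km model directly comparable with those of a 1 x 1 km model.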
A study was conducted in the U.S. EPA Indoor Air Quality Test House to determine the spatial and temporal distribution of chlorpyrifos following a professional crack and crevice application in the kitchen. Following the application, measurements were made in the kitchen, den a...
Rayner, Keith; Juhasz, Barbara J.; Brown, Sarah J.
2007-01-01
Two experiments tested predictions derived from serial lexical processing and parallel distributed models of eye movement control in reading. The boundary paradigm (K. Rayner, 1975) was used, and the boundary location was set either at the end of word n - 1 (the word just to the left of the target word) or at the end of word n - 2. Serial lexical…
Dong Yunming; Lu Tan
2009-01-01
We investigate the redshift distributions of three long-burst samples: the first containing 131 long bursts with observed redshifts, the second 220 long bursts with pseudo-redshifts calculated from the variability-luminosity relation, and the third 1194 long bursts with pseudo-redshifts calculated from the lag-luminosity relation. In the redshift range 0-1, the Kolmogorov-Smirnov probability between the observed redshift distribution and that of the variability-luminosity relation is large. In the redshift ranges 1-2, 2-3, 3-6.3 and 0-37, the Kolmogorov-Smirnov probabilities between the redshift distribution from the lag-luminosity relation and the observed redshift distribution are also large. For the GRBs which appear in both pseudo-redshift burst samples, the KS probability between the pseudo-redshift distribution from the lag-luminosity relation and the observed redshift distribution is 0.447, which is very large. Based on these results, some conclusions are drawn: i) the V-L_iso relation might be more believable than the τ-L_iso relation in low redshift ranges, and the τ-L_iso relation might be more realistic than the V-L_iso relation in high redshift ranges; ii) if we do not consider the redshift ranges, the τ-L_iso relation might be more physical and intrinsic than the V-L_iso relation.
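The Kolmogorov-Smirnov probabilities quoted above all derive from one statistic: the largest vertical gap between two empirical cumulative distribution functions. A self-contained two-sample sketch with illustrative redshift-like values (not the burst catalogues analysed in the paper):

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: largest vertical gap
    between the two empirical cumulative distribution functions."""
    a, b = sorted(a), sorted(b)
    i = j = 0
    d = 0.0
    while i < len(a) and j < len(b):
        x = min(a[i], b[j])
        # Step both empirical CDFs past every observation equal to x.
        while i < len(a) and a[i] == x:
            i += 1
        while j < len(b) and b[j] == x:
            j += 1
        d = max(d, abs(i / len(a) - j / len(b)))
    return d

# Identical samples agree perfectly; disjoint samples differ maximally.
print(ks_statistic([0.3, 0.9, 1.4], [0.3, 0.9, 1.4]))  # 0.0
print(ks_statistic([0.1, 0.2], [3.0, 4.0]))            # 1.0
```

In practice the statistic is converted to a KS probability (p-value) via the Kolmogorov distribution; for example, SciPy's `ks_2samp` performs both steps.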
Liu, Guoliang; Zhang, Feng; Hao, Lizhen
2012-01-01
We previously introduced a time record model for use in studying the duration of sand–dust storms. In the model, X is the normalized wind speed and Xr is the normalized wind speed threshold for the sand–dust storm. X is represented by a random signal with a normal Gaussian distribution, and storms occur when X ≥ Xr. From this model, the time interval distribution N = A exp(−bt) can be deduced, wherein N is the number of time intervals with length greater than t, A and b are constants, and b is related to Xr. In this study, sand–dust storm data recorded in spring at the Yanchi meteorological station in China were analysed to verify whether the time interval distribution of the sand–dust storms agrees with this expression. We found that the distribution of the time intervals between successive sand–dust storms in April agrees well with the exponential equation. However, the interval distribution for the sand–dust storm data for the entire spring period displayed a better fit to the Weibull equation and depended on the variation of the sand–dust storm threshold wind speed.
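The prediction N = A exp(−bt) can be checked directly in simulation: for Gaussian noise sampled at discrete steps, the exceedance intervals are geometric, so log N(t) is linear in t with slope −b = ln P(X < Xr). A small sketch with a hypothetical threshold and step count:

```python
import math
import random

def interval_counts(xr, steps=200000, seed=3):
    """Lengths of the gaps between successive exceedances X >= xr of Gaussian noise."""
    rng = random.Random(seed)
    intervals, gap = [], 0
    for _ in range(steps):
        if rng.gauss(0.0, 1.0) >= xr:
            if gap > 0:
                intervals.append(gap)
            gap = 0
        else:
            gap += 1
    return intervals

def fit_exponential(intervals, tmax=20):
    """Least-squares fit of log N(t) = log A - b*t, where N(t) = #{intervals > t}."""
    ts = list(range(1, tmax + 1))
    logn = [math.log(sum(1 for g in intervals if g > t)) for t in ts]
    tbar = sum(ts) / len(ts)
    ybar = sum(logn) / len(logn)
    slope = sum((t - tbar) * (y - ybar) for t, y in zip(ts, logn)) / \
        sum((t - tbar) ** 2 for t in ts)
    return -slope

b = fit_exponential(interval_counts(xr=1.0))
print(round(b, 2))  # close to -ln(P(X < 1)), about 0.17 for standard normal X
```

A Weibull fit would generalize the exponent from t to t^k, which is the flexibility the paper found necessary for the full spring record.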
Holdener, F.R.; Behne, D.; Hathaway, D. [and others]
1995-04-24
We have built and tested a plate-type pre-production distributed ion pump (DIP) for the PEP-II B-Factory High Energy Ring (HER). The design was modified from an earlier design to use less material and to reduce costs. Penning cell hole sizes of 15, 18, and 21 mm have been tested in a uniform magnetic field of 0.18 T to optimize pumping speed. The resulting final DIP design, consisting of a 7-plate anode with a 15 mm basic cell size, was then tested in the magnetic field of the HER dipole. A description of the final optimized DIP design is presented along with the test results of the pumping speed measurements.
Azman Ismail
2009-01-01
This study was conducted to examine the effect of the adequacy of benefits and distributive justice on personal outcomes (i.e., job satisfaction and organizational commitment) using 583 usable questionnaires gathered from the Malaysian public institutions of higher learning (PLEARNINGINSTITUTE) sector. The outcomes of step-wise regression analysis showed that the inclusion of distributive justice in the analysis increased the effect of adequacy of benefits on both job satisfaction and organizational commitment. Furthermore, the findings of this study confirm that distributive justice does act as a partial mediating variable in the benefits program models of the organizational sector sample. In addition, implications and limitations, as well as directions for future research, are discussed.
Chen, Z.Q.; Wang, S.J.
1999-01-01
A newly developed maximum entropy method, realized by the computer program MELT introduced by Shukla et al., was used to analyze positron lifetime spectra measured in semiconductors. Several simulation studies were done to test the performance of this algorithm. Reliable reconstructions of positron lifetime distributions can be extracted at relatively low counts, which shows the applicability and superiority of this method. Two positron lifetime spectra measured in ion-implanted p-InP(Zn) at 140 and 280 K, respectively, were analyzed by this program. The lifetime distributions differed greatly between the two temperatures, giving direct evidence of the existence of shallow positron traps at low temperature.
This study assessed the pollutant emission offset potential of distributed grid-connected photovoltaic (PV) power systems. Computer-simulated performance results were utilized for 211 PV systems located across the U.S. The PV systems' monthly electrical energy outputs were based ...
Marshall, L.; Carvalheiro, L.G.; Aguirre-Gutierrez, J.; Bos, M.; Groot, de G.A.; Kleijn, D.; Potts, S.G.; Reemer, M.; Roberts, S.P.M.; Scheper, J.A.; Biesmeijer, J.C.
2015-01-01
Species distribution models (SDM) are increasingly used to understand the factors that regulate variation in biodiversity patterns and to help plan conservation strategies. However, these models are rarely validated with independently collected data and it is unclear whether SDM performance is
Vaurio, Jussi K.
2015-01-01
The time-dependent unavailability and the failure and repair intensities of periodically tested aging standby system components are solved with recursive equations under three categories of testing and repair policies. In these policies, tests or repairs or both can be minimal or perfect renewals. Arbitrary distributions are allowed for times to failure as well as for repair and renewal durations. Major preventive maintenance is done periodically or at random times, e.g. when a true demand occurs. In the third option, process renewal is done if a true demand occurs or when a certain mission time has expired since the previous maintenance, whichever occurs first. A practical feature is that even if a repair can renew the unit, it does not generally renew the alternating process. The formalism updates and extends earlier results by using a special backward-renewal equation method, by allowing scheduled tests not limited to equal intervals, and by accepting arbitrary distributions and multiple failure types and causes, including failures caused by tests, human errors and true demands. Explicit solutions are produced to integral equations associated with an age-renewal maintenance policy. - Highlights: • Time-dependent unavailability, failure count and repair count for a standby system. • Free testing schedule and distributions for times to failure, repair and maintenance. • Multiple failure modes; tests or repairs or both can be minimal or perfect renewals. • Process renewals periodically, randomly or based on the process age or an initiator. • Backward renewal equations as explicit solutions to Volterra-type integral equations
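The simplest special case of the unavailability quantities above has a closed form and is easy to sketch. The snippet below assumes a constant failure rate, perfect (renewing) tests and negligible test/repair duration, which is far simpler than the paper's general policies; all numbers are invented for illustration.

```python
import math

# Hedged sketch: a periodically tested standby component with constant
# failure rate lam, perfect renewing tests every T hours, and negligible
# test/repair time. The paper's recursions generalize this to arbitrary
# distributions and imperfect repairs.

def unavailability(t, lam, T):
    """Probability the component is failed (undetected) at time t,
    given the last renewing test at floor(t/T)*T."""
    return 1.0 - math.exp(-lam * (t % T))

def average_unavailability(lam, T):
    """Time-average of unavailability over one test interval (closed form)."""
    return 1.0 - (1.0 - math.exp(-lam * T)) / (lam * T)

lam, T = 1e-4, 720.0   # illustrative per-hour failure rate, monthly test
print(round(average_unavailability(lam, T), 4))
```

For small lam*T this average approaches the familiar lam*T/2 approximation, a useful sanity check on the closed form.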
Neel, John H.; Stallings, William M.
An influential statistics text recommends the Levene test for homogeneity of variance. A recent note suggests that Levene's test is upwardly biased for small samples. Another report shows inflated alpha estimates and low power. Neither study utilized more than two sample sizes. This Monte Carlo study involved sampling from a normal population for…
Johnson, S. M.
1976-01-01
Basic test results are given for a flat plate solar collector whose performance was determined in the NASA-Lewis solar simulator. The collector was tested over ranges of inlet temperatures, fluxes and one coolant flow rate. Collector efficiency is correlated in terms of inlet temperature and flux level.
Thomas Cornelissen
2016-05-01
Parameterization of physically based and distributed hydrological models for mesoscale catchments remains challenging because the commonly available data base is insufficient for calibration. In this paper, we parameterize a mesoscale catchment for the distributed model HydroGeoSphere by transferring evapotranspiration parameters calibrated at a highly-equipped headwater catchment, in addition to literature data. Based on this parameterization, a sensitivity analysis of the mesoscale catchment to spatial variability in land use, potential evapotranspiration and precipitation, and of the headwater catchment to mesoscale soil and land use data, was conducted. Simulations of the mesoscale catchment with transferred parameters reproduced daily discharge dynamics and monthly evapotranspiration of grassland, deciduous and coniferous vegetation in a satisfactory manner. Precipitation was the most sensitive input data with respect to total runoff and peak flow rates, while simulated evapotranspiration components and patterns were most sensitive to spatially distributed land use parameterization. At the headwater catchment, coarse soil data resulted in a change in runoff-generating processes based on the interplay between higher wetness prior to a rainfall event, enhanced groundwater level rise and, accordingly, lower transpiration rates. Our results indicate that the direct transfer of parameters is a promising way to benefit from highly-equipped headwater catchments when simulating mesoscale catchments.
Applin, Zachary T.; Gentry, Garl L., Jr.; Takallu, M. A.
1995-01-01
A wind tunnel investigation was conducted on a generic, high-wing transport model in the Langley 14- by 22-Foot Subsonic Tunnel. This report contains pressure data that document effects of various model configurations and free-stream conditions on wing pressure distributions. The untwisted wing incorporated a full-span, leading-edge Krueger flap and a part-span, double-slotted trailing-edge flap system. The trailing-edge flap was tested at four different deflection angles (20 deg, 30 deg, 40 deg, and 60 deg). Four wing configurations were tested: cruise, flaps only, Krueger flap only, and high lift (Krueger flap and flaps deployed). Tests were conducted at free-stream dynamic pressures of 20 psf to 60 psf with corresponding chord Reynolds numbers of 1.22 x 10(exp 6) to 2.11 x 10(exp 6) and Mach numbers of 0.12 to 0.20. The angles of attack presented range from 0 deg to 20 deg and were determined by wing configuration. The angle of sideslip ranged from minus 20 deg to 20 deg. In general, pressure distributions were relatively insensitive to free-stream speed, with exceptions primarily at high angles of attack or high flap deflections. Deploying the leading-edge Krueger flap significantly reduced peak suction pressures and steep gradients on the wing at high angles of attack. Installation of the empennage had no effect on wing pressure distributions. Unpowered engine nacelles reduced suction pressures on the wing and the flaps.
Mendoza, C. [Permanent address: Centro de Física, Instituto Venezolano de Investigaciones Científicas (IVIC), P.O. Box 20632, Caracas 1020A, Venezuela. (Venezuela, Bolivarian Republic of); Bautista, M. A., E-mail: claudio.mendozaguardia@wmich.edu, E-mail: manuel.bautista@wmich.edu [Department of Physics, Western Michigan University, Kalamazoo, MI 49008 (United States)
2014-04-20
The classic optical nebular diagnostics [N II], [O II], [O III], [S II], [S III], and [Ar III] are employed to search for evidence of non-Maxwellian electron distributions, namely κ distributions, in a sample of well-observed Galactic H II regions. By computing new effective collision strengths for all these systems and A-values when necessary (e.g., S II), and by comparing with previous collisional and radiative data sets, we have been able to obtain realistic estimates of the electron-temperature dispersion caused by the atomic data, which in most cases are not larger than ∼10%. If the uncertainties due to both observation and atomic data are then taken into account, it is plausible to determine for some nebulae a representative average temperature while in others there are at least two plasma excitation regions. For the latter, it is found that the diagnostic temperature differences in the high-excitation region, e.g., T{sub e} (O III), T{sub e} (S III), and T{sub e} (Ar III), cannot be reconciled by invoking κ distributions. For the low-excitation region, it is possible in some, but not all, cases to arrive at a common, lower temperature for [N II], [O II], and [S II] with κ ≈ 10, which would then lead to significant abundance enhancements for these ions. An analytic formula is proposed to generate accurate κ-averaged excitation rate coefficients (better than 10% for κ ≥ 5) from temperature tabulations of the Maxwell-Boltzmann effective collision strengths.
Keilbach, D.; Drews, C.; Berger, L.; Marsch, E.; Wimmer-Schweingruber, R. F.
2017-12-01
Using a test-particle approach we have investigated how an oxygen pickup-ion torus velocity distribution is modified by continuous and intermittent Alfvénic waves on timescales where the gyro trajectory of each particle can be traced. We have therefore exposed the test particles to monofrequent waves, which extended through the whole simulation in time and space. The general behavior of the pitch-angle distribution is found to be stationary and a nonlinear function of the wave frequency, the amplitude and the initial angle between the wave elongation and the field-perpendicular particle velocity vector. The figure shows the time-averaged pitch-angle distributions as a function of the Doppler-shifted wave frequency (where the Doppler shift was calculated with respect to the particles' initial velocity) for three different wave amplitudes (labeled in each panel). The background field is chosen to be 5 nT, and the 500 test particles were initially distributed on a torus with 120° pitch angle at a solar wind velocity of 450 km/s. Each y-slice of the histogram (which has been normalized to its respective maximum) represents an individual run of the simulation. The frequency-dependent behavior of the test particles is found to be classifiable into the regimes of very low/high frequencies and frequencies close to first-order resonance. We have found that only in the latter regime do the particles interact strongly with the wave; in the time-averaged histograms a branch structure is found, which was identified as a trace of particles co-moving with the wave phase. The magnitude of the pitch-angle change of these particles, as well as the frequency margin where the branch structure is found, is an increasing function of the wave amplitude. We have also investigated the interaction with monofrequent intermittent waves. Exposed to such waves, a torus distribution is scattered in pitch-angle space, and the pitch-angle distribution is broadened systematically over time similar to
Fearnbach, S Nicole; Thivel, David; Meyermann, Karol; Keller, Kathleen L
2015-09-01
Previous studies testing the relationship between short-term, ad libitum test-meal intake and body composition in children have shown inconsistent relationships. The objective of this study was to determine whether children's intake at a palatable, buffet meal was associated with body composition, assessed by dual-energy X-ray absorptiometry (DXA). A sample of 71 children (4-6 years) participated in 4 sessions where ad libitum food intake was measured. Children's intake at two of the test-meals was retained for the present analysis: a baseline meal consisting of moderately palatable foods and a highly palatable buffet including sweets, sweet-fats, and savory-fats. On the last visit, anthropometrics and DXA were assessed to determine child body composition. Children consumed significantly more calories at the palatable buffet compared to the baseline test-meal. Children's total fat-free mass was positively associated with intake at both the baseline meal and the palatable buffet meal. Total energy intake at both meals and intake of savory-fats at the palatable buffet were positively associated with children's total fat mass, total percent body fat, and percent android fat. Intake of sweet-fats was associated with child fat-free mass index. Intake of sweets was not correlated with body composition. Children's intake at a palatable test-meal, particularly of savory-fat foods, was associated with measures of total and regional body fat. Copyright © 2015 Elsevier Ltd. All rights reserved.
ByungRae Cha
2018-01-01
The megatrends and Industry 4.0 in ICT (Information and Communication Technology) are concentrated in IoT (Internet of Things), Big Data, CPS (Cyber-Physical Systems), and AI (Artificial Intelligence). These megatrends do not operate independently, and mass storage technology is essential because large computing capacity is needed in the background to support them. In order to evaluate the performance of high-capacity storage based on open-source Ceph, we carry out network performance tests of Abyss storage between domestic and overseas sites using KOREN (Korea Advanced Research Network). Storage media and network bonding are also tested to evaluate the performance of the storage itself. Additionally, a security test using the Cuckoo sandbox and Yara malware detection is demonstrated between the Abyss storage cluster and overseas sites. Lastly, we propose a draft design of a Data Lake framework in order to solve the garbage dump problem.
Verweij, A P
1998-01-01
Electrical measurements on samples of superconducting cables are usually performed in order to determine the critical current $I_c$ and the n-value, assuming that the voltage U at the transition from the superconducting to the normal state follows the power law, U\\sim($I/I_c$)$^n$. An accurate measurement of $I_c$ and n demands, first of all, good control of temperature and field, and precise measurement of current and voltage. The critical current and n-value of a cable are influenced by the self-field of the cable, an effect that has to be known in order to compare the electrical characteristics of the cable with those of the strands from which it is made. The effect of the self-field is dealt with taking into account the orientation and magnitude of the applied field and the n-value of the strands. An important source of inaccuracy is related to the distribution of the currents among the strands. Non-uniform distributions, mainly caused by non-equal resistances of the connections between the strands of the...
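The power law quoted above can be inverted to extract I_c and n from measured voltage-current points. The sketch below does a log-log linear fit on synthetic data; the criterion voltage U_c and all numbers are chosen for illustration, not taken from real cable measurements.

```python
import numpy as np

# Minimal sketch: estimate I_c and n assuming U = U_c * (I / I_c)**n,
# where U_c is the voltage criterion that defines the critical current
# (e.g. an electric-field criterion times the tap length). Synthetic
# data only; real measurements need the self-field and current
# non-uniformity corrections discussed in the abstract.

def fit_power_law(I, U, U_c):
    """Linear fit of log U vs log I: log U = n*log I + b,
    so n is the slope and I_c solves U_c = exp(b) * I_c**n."""
    n, b = np.polyfit(np.log(I), np.log(U), 1)
    I_c = np.exp((np.log(U_c) - b) / n)
    return I_c, n

# Synthetic transition data with I_c = 1000 A, n = 30, U_c = 10 uV.
I = np.linspace(900.0, 1100.0, 20)
U = 10e-6 * (I / 1000.0) ** 30
I_c, n = fit_power_law(I, U, 10e-6)
print(round(float(I_c)), round(float(n)))
```

A log-log fit weights the whole transition equally; in practice a nonlinear fit of the power law directly is often preferred near the noise floor.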
Yuan, Guixiang; Cao, Te; Fu, Hui; Ni, Leyi; Zhang, Xiaolin; Li, Wei; Song, Xin; Xie, Ping; Jeppesen, Erik
2013-12-01
Strategies of carbon (C) and nitrogen (N) utilisation are among the factors determining plant distribution. It has been argued that submersed macrophytes adapted to lower light environments are more efficient in maintaining C metabolic homeostasis due to their conservative C strategy and ability to balance C shortage. We studied how depth distributions of 12 submersed macrophytes in Lake Erhai, China, were linked to their C-N metabolic strategies when facing acute [Formula: see text] dosing. [Formula: see text] dosing changed C-N metabolism significantly by decreasing the soluble carbohydrate (SC) content and increasing the [Formula: see text]-N and free amino acid (FAA) content of plant tissues. The proportional changes in SC contents in the leaves and FAA contents in the stems induced by [Formula: see text] dosing were closely correlated (positive for SC and negative for FAA) with the colonising water depths of the plants in Lake Erhai, the plants adapted to lower light regimes being more efficient in maintaining SC and FAA homeostasis. These results indicate that conservative carbohydrate metabolism of submersed macrophytes allowed the plants to colonise greater water depths in eutrophic lakes, where low light availability in the water column diminishes carbohydrate production by the plants.
Wu, Y. S. [Taiwan Power Company, 242, Roosevelt Road, Sec. 3, Taipei 100, Taiwan (China); Dick, J. W. [Invensys System Inc., 33 Commercial St., Foxboro, MA 02035 (United States); Tetirick, C. W. [GE Energy, 1989 Little Orchard Street, San Jose, CA 95125-1030 (United States)
2006-01-01
The construction permit for Taipower's Lungmen Nuclear Units 1 and 2, two ABWR plants, was issued on March 17, 1999 [1]. The construction of these units is progressing actively at site. The digital I&C system supplied by GE, which is designated as the Distributed Control and Information System (DCIS) in this project, is being implemented primarily at one vendor facility. In order to ensure the reliability, safety and availability of the DCIS, it is required to comprehensively test the whole DCIS in the factory. This article describes the test requirements and acceptance criteria for functional testing of the Non-Safety Distributed Control and Information System (DCIS) for Taiwan Power's Lungmen Units 1 and 2. GE selected Invensys as the equipment supplier for this Non-Safety portion of the DCIS. The DCIS system of the Lungmen Units is a physically distributed control system. Field transmitters are connected to hard I/O terminal inputs on the Invensys I/A system. Once the signal is digitized on FBMs (Field Bus Modules) in Remote Multiplexing Units (RMUs), the signal is passed into an integrated control software environment. Control is based on the concept of compounds and blocks, where each compound is a logical collection of blocks that performs a control function. Each point identified by control compound and block can be individually used throughout the DCIS system by referencing its unique name. In the Lungmen Project, control logic and HSI (Human System Interface) requirements are divided into individual process systems called MPLs (Master Parts List). Higher-level Plant Computer System (PCS) algorithms access control compounds and blocks in these MPLs to develop functions. The test requirements and acceptance criteria for the DCIS system of the Lungmen Project are divided into three general categories (see 1, 2, 3 below) of verification, which in turn are divided into several specific tests: 1. DCIS System Physical Checks a) RMU Test - To confirm that the hard I
2010-01-01
... Measured quantity and test system accuracy: power losses, ± 3.0%; voltage, ± 0.5%; current, ± 0.5%; resistance, ± 0.5 ... and then take simultaneous readings of voltage and current. Determine the winding resistance Rdc by ... resistance measurements: (a) Use separate current and voltage leads when measuring small (< 10 ohms ...
Kyllingsbæk, Søren; Markussen, Bo; Bundesen, Claus
2012-06-01
The authors propose and test a simple model of the time course of visual identification of briefly presented, mutually confusable single stimuli in pure accuracy tasks. The model implies that during stimulus analysis, tentative categorizations that stimulus i belongs to category j are made at a constant Poisson rate, v(i, j). The analysis is continued until the stimulus disappears, and the overt response is based on the categorization made the greatest number of times. The model was evaluated by Monte Carlo tests of goodness of fit against observed probability distributions of responses in two extensive experiments and also by quantifications of the information loss of the model compared with the observed data by use of information theoretic measures. The model provided a close fit to individual data on identification of digits and an apparently perfect fit to data on identification of Landolt rings.
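The race between Poisson categorization processes described above is straightforward to simulate. The sketch below uses invented rates and a two-category stimulus, not the paper's fitted parameters.

```python
import random

# Hedged simulation of the model above: during an exposure of duration t,
# tentative categorizations "stimulus belongs to category j" occur at a
# constant Poisson rate v(i, j); the overt response is the category
# observed most often. Rates and durations are illustrative.

def poisson(lam, rng):
    """Knuth's method for a Poisson draw; fine for small means."""
    L, k, p = pow(2.718281828459045, -lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def respond(rates, t, rng):
    """One trial: Poisson counts per category; answer is the category
    with the greatest count (ties broken at random, i.e. guessing)."""
    counts = {j: poisson(v * t, rng) for j, v in rates.items()}
    m = max(counts.values())
    return rng.choice([j for j, c in counts.items() if c == m])

# Invented rates: the correct categorization accrues evidence twice as
# fast as the confusable alternative; accuracy grows with exposure t.
rng = random.Random(1)
rates = {"correct": 2.0, "confusable": 1.0}
acc = sum(respond(rates, 3.0, rng) == "correct" for _ in range(2000)) / 2000
print(acc > 0.5)
```

Because the counts scale with exposure duration, accuracy rises toward the rate-ratio limit as t grows, which is the qualitative time-course behavior the model is built to capture.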
Dyck, P J; Zimmerman, I; Gillen, D A; Johnson, D; Karnes, J L; O'Brien, P C
1993-08-01
We recently found that vibratory detection threshold is greatly influenced by the algorithm of testing. Here, we study the influence of stimulus characteristics and algorithm of testing and estimating threshold on cool (CDT), warm (WDT), and heat-pain (HPDT) detection thresholds. We show that continuously decreasing (for CDT) or increasing (for WDT) thermode temperature to the point at which cooling or warming is perceived and signaled by depressing a response key ("appearance" threshold) overestimates threshold with rapid rates of thermal change. The mean of the appearance and disappearance thresholds also does not perform well for insensitive sites and patients. Pyramidal (or flat-topped pyramidal) stimuli ranging in magnitude, in 25 steps, from near skin temperature to 9 degrees C for 10 seconds (for CDT), from near skin temperature to 45 degrees C for 10 seconds (for WDT), and from near skin temperature to 49 degrees C for 10 seconds (for HPDT) provide ideal stimuli for use in several algorithms of testing and estimating threshold. Near threshold, only the initial direction of thermal change from skin temperature is perceived, and not its return to baseline. Use of steps of stimulus intensity allows the subject or patient to take the needed time to decide whether the stimulus was felt or not (in 4, 2, and 1 stepping algorithms), or whether it occurred in stimulus interval 1 or 2 (in two-alternative forced-choice testing). Thermal thresholds were generally significantly lower with a large (10 cm2) than with a small (2.7 cm2) thermode.(ABSTRACT TRUNCATED AT 250 WORDS)
Chiara Antinoro
2012-12-01
Application of the Arya and Paris (AP) model to estimate the soil water retention curve requires a detailed description of the particle-size distribution (PSD), but limited experimental PSD data are generally determined by the conventional sieve-hydrometer (SH) method. Detailed PSDs can be obtained by fitting a continuous model to SH data or by performing measurements with the laser diffraction (LD) method. The AP model was applied to 40 Sicilian soils for which the PSD was measured by both the SH and LD methods. The scale factor α was set equal to 1.38 (procedure AP1) or estimated by a logistical model with parameters gathered from the literature (procedure AP2). For both SH and LD data, procedure AP2 allowed a more accurate prediction of the water retention than procedure AP1, confirming that it is not convenient to use a unique value of α for soils that are very different in texture. Despite the differences in PSDs obtained by the SH and LD methods, the water retention predicted by a given procedure (AP1 or AP2) using SH or LD data was characterized by the same level of accuracy. Discrepancies in the estimated water retention from the two PSD measurement methods were attributed to underestimation of the finest diameter frequency obtained by the LD method. Analysis also showed that the soil water retention estimated using the SH method was affected by an estimation bias that could be corrected by an optimization procedure (OPT). Comparison of α-distributions and water retention shape indices obtained by the two methods (SH or LD) indicated that the shape-similarity hypothesis is better verified if the traditional sieve-hydrometer data are used to apply the AP model. The optimization procedure allowed more accurate predictions of the water retention curves than the traditional AP1 and AP2 procedures. Therefore, OPT can be considered a valid alternative to the more complex logistical model for estimating the water retention curve of Sicilian soils.
Naseem Cassim
2017-02-01
Introduction: CD4 testing in South Africa is based on an integrated tiered service delivery model that matches testing demand with capacity. The National Health Laboratory Service has predominantly implemented laboratory-based CD4 testing. Coverage gaps, over-/under-capacitation and optimal placement of point-of-care (POC) testing sites need investigation. Objectives: We assessed the impact of relational algebraic capacitated location (RACL) algorithm outcomes on the allocation of laboratory and POC testing sites. Methods: The RACL algorithm was developed to allocate laboratories and POC sites to ensure coverage using a set-coverage approach for a defined travel time (T). The algorithm was repeated for three scenarios (A: T = 4; B: T = 3; C: T = 2 hours). Drive times for a representative sample of health facility clusters were used to approximate T. Outcomes included allocation of testing sites, Euclidean distances and test volumes. Additional analysis included platform distribution and space requirement assessment. Scenarios were reported as fusion table maps. Results: Scenario A would offer a fully centralised approach with 15 CD4 laboratories without any POC testing. A significant increase in volumes would result in a four-fold increase at busier laboratories. CD4 laboratories would increase to 41 in scenario B and 61 in scenario C. POC testing would be offered at two sites in scenario B and 20 sites in scenario C. Conclusion: The RACL algorithm provides an objective methodology to address coverage gaps through the allocation of CD4 laboratories and POC sites for a given T. The algorithm outcomes need to be assessed in the context of local conditions.
Caricati, Luca
2017-01-01
The status-legitimacy hypothesis was tested by analyzing cross-national data about social inequality. Several indicators were used as indexes of social advantage: social class, personal income, and self-position in the social hierarchy. Moreover, inequality and freedom in nations, as indexed by Gini and by the human freedom index, were considered. Results from 36 nations worldwide showed no support for the status-legitimacy hypothesis. The perception that income distribution was fair tended to increase as social advantage increased. Moreover, national context increased the difference between advantaged and disadvantaged people in the perception of social fairness: Contrary to the status-legitimacy hypothesis, disadvantaged people were more likely than advantaged people to perceive income distribution as too large, and this difference increased in nations with greater freedom and equality. The implications for the status-legitimacy hypothesis are discussed.
Guzmán, Enrique; Aguilar, Cristina; Taguas, Encarnación V.
2014-05-01
Olive groves constitute a traditional Mediterranean crop and thus an important source of income to these regions and a crucial landscape component. Despite its importance, most of the olive groves in the region of Andalusia, Southern Spain, are located in sloping areas, which implies a significant risk of erosion. The combination of data and models allows enhancing the knowledge about processes taking place in these areas as well as the prediction of future scenarios. This aspect might be essential to plan soil protection strategies within a context of climate change, where the IPCC estimates a significant increase of soil aridity and torrential events by the end of the century. The objective of this study is to estimate the rainfall-runoff-sediment dynamics in an olive grove microcatchment with the aid of a physically-based distributed hydrological model in order to evaluate the effect of extreme events on runoff and erosion. This study will make it possible to improve land-use and management planning activities in similar areas. In addition, the scale of the study (microcatchment) will make it possible to contrast the results in larger areas such as catchment and regional spatial scales.
Knob, P.J.
1983-01-01
The impossibility of using internal instrumentation in a high temperature reactor with spherical fuel led to the development of an instrumentation system able to monitor power perturbations using only detectors located in the reflectors. This instrumentation is divided into three parts, one for each reflector: upper, lower and lateral. The development of a system located in the lateral reflector is shown. The system was tested for the KAHTER facility at IRE-KFA, of very small dimensions, and for the PNP-300 power reactor, of very large dimensions. Good results were obtained. (E.G.) [pt]
QCD Tests Using b-bbar-g Events and a new Measurement of the B Hadron Energy Distribution
Muller, David
1998-10-02
We present new studies of 3-jet final states from hadronic Z^{0} decays recorded by the SLD experiment, in which jets are identified as quark, antiquark or gluon. Our gluon energy spectrum, measured over the full kinematic range, is consistent with the predictions of QCD, and we derive a limit on an anomalous chromomagnetic bbg coupling. We measure the parity violation in Z^{0} decays into b-bbar-g to be consistent with the predictions of electroweak theory and QCD, and perform new tests of T- and CP-conservation at the bbg vertex. We also present a new technique for reconstructing the energy of a B hadron using the set of charged tracks attached to a secondary vertex. The B hadron energy spectrum is measured over the full kinematic range, allowing improved tests of predictions for the shape of the spectrum. The average scaled energy is measured to be
Kim, Young Suk; Jain, Mukesh K.; Metzger, Don R.
2005-01-01
From various draw-bend friction tests with sheet metals under lubricated conditions, it has been unanimously reported that the friction coefficient increases as the pin diameter decreases. However, a proper explanation for this phenomenon has not yet been given. In those experiments, tests were performed for different pin diameters while keeping the same average contact pressure by adjusting the applied tension forces. In this paper, pressure profiles at pin/strip contacts and the changes in the pressure profiles depending on pin diameter are investigated using finite element simulations. To study the effect of the pressure profile changes on friction measurements, a non-constant friction model (Stribeck friction model), which is more realistic for lubricated sheet metal contacts, is implemented into the finite element code and applied to the simulations. The study shows that the non-uniformity of the pressure profile increases and the pin/strip contact angle decreases as the pin diameter decreases, and these phenomena increase the friction coefficient, which is calculated from the strip tension forces using a conventional rope-pulley equation.
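The "conventional rope-pulley equation" mentioned at the end is commonly the capstan relation. A minimal sketch under that assumption, with invented tensions and wrap angle, and omitting the bending/unbending correction that draw-bend tests usually subtract:

```python
import math

# Capstan (rope-pulley) relation: the tensions on either side of a pin
# wrapped by angle theta satisfy T_front = T_back * exp(mu * theta),
# so mu = ln(T_front / T_back) / theta. Values below are illustrative;
# real draw-bend analyses also correct for bending/unbending forces.

def friction_coefficient(t_front, t_back, theta_rad):
    return math.log(t_front / t_back) / theta_rad

theta = math.pi / 2  # 90-degree wrap around the pin
print(round(friction_coefficient(1.2e3, 1.0e3, theta), 3))
```

Because mu is inferred from an assumed uniform pressure over the wrap, the non-uniform pressure profiles and reduced contact angles reported above show up as an apparent increase in mu for small pins.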
Distributed authentication for randomly compromised networks
Beals, Travis R; Hynes, Kevin P; Sanders, Barry C
2009-01-01
We introduce a simple, practical approach with probabilistic information-theoretic security to solve one of quantum key distribution's major security weaknesses: the requirement of an authenticated classical channel to prevent man-in-the-middle attacks. Our scheme employs classical secret sharing and partially trusted intermediaries to provide arbitrarily high confidence in the security of the protocol. Although certain failures elude detection, we discuss preemptive strategies to reduce the probability of failure to an arbitrarily small level: the probability of such failures is exponentially suppressed with increases in connectivity (i.e. connections per node).
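The classical-secret-sharing step can be sketched with simple n-of-n XOR sharing, where each share would travel through a different partially trusted intermediary, so an adversary must compromise every path to learn the key. This is an illustrative fragment only, not the paper's full protocol.

```python
import secrets

# Hedged sketch of n-of-n XOR secret sharing: any n-1 shares reveal
# nothing about the secret; all n shares combined recover it exactly.

def xor_all(chunks):
    out = bytes(len(chunks[0]))
    for c in chunks:
        out = bytes(a ^ b for a, b in zip(out, c))
    return out

def split(secret: bytes, n: int):
    """Split into n shares (n >= 2): n-1 random pads plus a final share
    chosen so that the XOR of all shares equals the secret."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = bytes(b ^ x for b, x in zip(secret, xor_all(shares)))
    return shares + [last]

def combine(shares):
    return xor_all(shares)

key = b"authentication-key"
parts = split(key, 3)          # e.g. one share per intermediary path
assert combine(parts) == key   # all three paths together recover the key
print(combine(parts) == key)
```

Any proper subset of shares is uniformly random, which is what lets confidence in the protocol grow with the number of independent intermediary paths.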
Santoro, R.T.; Barnes, J.M.; Alsmiller, R.G. Jr.; Emmett, M.B.; Drischler, J.D.
1985-12-01
A recent paper presented neutron spectral distributions (energy ≥ 0.91 MeV) measured at various locations around the Tokamak Fusion Test Reactor (TFTR) at the Princeton Plasma Physics Laboratory. The neutron source for the series of measurements was a small D-T generator placed at various positions in the TFTR vacuum chamber. In the present paper the results of neutron transport calculations are presented and compared with these experimental data. The calculations were carried out using Monte Carlo methods and a very detailed model of the TFTR and the TFTR test cell. The calculated and experimental fluences per unit energy are compared in absolute units and are found to be in substantial agreement for five different combinations of source and detector positions.
Entropy Based Test Point Evaluation and Selection Method for Analog Circuit Fault Diagnosis
Yuan Gao
2014-01-01
By simplifying the tolerance problem and treating faulty voltages at different test points as independent variables, an integer-coded table technique has been proposed to simplify the test point selection process. However, simplifying the tolerance problem may lead to a wrong solution, while the independence assumption yields an overly conservative result. To address these problems, the tolerance problem is treated thoroughly in this paper, and the dependency relationship between different test points is considered at the same time. A heuristic graph search method is proposed to facilitate the test point selection process. First, the information-theoretic concept of entropy is used to evaluate the optimality of a test point. The entropy is calculated from the ambiguity sets and the faulty voltage distribution determined by component tolerances. Second, the selected optimal test point is used to expand the current graph node by using the dependence relationship between the test point and the graph node. Simulated results indicate that the proposed method finds the optimal set of test points more accurately than other methods; it is therefore a good solution for minimizing the size of the test point set. To keep the presentation simple and clear, only catastrophic and some specific parametric faults are discussed in this paper.
Blomquist, Kevin W. [EG&G Energy Measurements, Gaithersburg, MD (United States); Lindemann, Tim A. [EG&G Energy Measurements, Gaithersburg, MD (United States); Lyon, Glen E. [EG&G Energy Measurements, Gaithersburg, MD (United States); Steen, Dan C. [EG&G Energy Measurements, Gaithersburg, MD (United States); Wills, Cathy A. [EG&G Energy Measurements, Gaithersburg, MD (United States); Flick, Sarah A. [EG&G Energy Measurements, Gaithersburg, MD (United States); Ostler, W. Kent [EG&G Energy Measurements, Gaithersburg, MD (United States)
1995-12-31
Results of surveys conducted between 1991 and 1995 were used to document the distribution and habitat of 11 Category 2 candidate plant species known to occur on or near the Nevada Test Site (NTS). Approximately 200 areas encompassing about 13,000 ha were surveyed. Known distributions of all species except Frasera pahutensis and Phacelia parishii were increased, and the ranges of Camissonia megalantha, Galium hilendiae ssp. kingstonense, Penstemon albomarginatus, and Penstemon pahutensis were expanded. The status of each species was assessed based on current distribution, population trends, and potential threats. Recommendations were made to reclassify the following five species to Category 3C: Arctomecon merriamii, F. pahutensis, P. pahutensis, Phacelia beatleyae, and Phacelia parishii. Two species, C. megalantha and Cymopterus ripleyi var. saniculoides, were recommended for reclassification to Category 3B status. No recommendation was made to reclassify Astragalus funereus, G. hilendiae ssp. kingstonense, P. albomarginatus, or Penstemon fruticiformis var. amargosae from their current Category 2 status. Populations of these four species are not threatened on the NTS, but the NTS populations represent only a small portion of each species' range, and the potential threats of mining or grazing activities off the NTS on these species were not assessed. Recommended conservation measures included the development of an NTS ecosystem conservation plan, continued preactivity and plant surveys on the NTS, and protection of plant type localities on the NTS.
Sun Mi Choi
Despite being a major public health problem, chronic obstructive pulmonary disease (COPD) remains underdiagnosed, and only 2.4% of COPD patients in Korea are aware of their disease. The objective of this study was to estimate the prevalence of COPD detected by spirometry performed as a preoperative screening test and to determine the Global Initiative for Chronic Obstructive Lung Disease (GOLD) group distribution and self-awareness of COPD. We reviewed the medical records of adults (age ≥ 40 years) who had undergone spirometry during preoperative screening between April and August 2013 at a tertiary hospital in Korea. COPD was defined as a post-bronchodilator forced expiratory volume in 1 s/forced vital capacity ratio of less than 0.7. Among the patients aged over 40 years who had undergone spirometry as a preoperative screening test, 474 (15.6%; 404 men; median age, 70 years; range, 44-93 years) were diagnosed with COPD. Only 26 (5.5%) patients reported a previous diagnosis of COPD (2.1%), emphysema (0.8%), or chronic bronchitis (2.5%). The GOLD group distribution was as follows: 63.3% in group A, 31.2% in group B, 1.7% in group C, and 3.8% in group D. The prevalence of COPD diagnosed by preoperative spirometry was 15.6%, and only 5.5% of patients were aware of their disease. Approximately one-third of the COPD patients belonged to GOLD groups B, C, and D, which require regular treatment.
High-Rate Field Demonstration of Large-Alphabet Quantum Key Distribution
Lee, Catherine; Bunandar, Darius; Zhang, Zheshen; Steinbrecher, Gregory R.
2016-10-12
Quantum key distribution (QKD) enables secure symmetric key exchange for information-theoretically secure communication via one-time pad. When throughput is limited by the count rate of Bob's detectors (the detector-limited regime), it is advantageous to increase the alphabet size M so as to encode as much information as possible in each photon.
Yamamoto, M; Tomita, J; Sakaguchi, A; Imanaka, T; Fukutani, S; Endo, S; Tanaka, K; Hoshi, M; Gusev, B I; Apsalikov, A N
2008-04-01
The village of Dolon located about 60 km northeast from the border of the Semipalatinsk Nuclear Test Site in Kazakhstan is one of the most affected inhabited settlements as a result of nuclear tests by the former USSR. Radioactive contamination in Dolon was mainly caused by the first USSR nuclear test on 29 August 1949. As part of the efforts to reconstruct the radiation dose in Dolon, Cs and Pu in soil samples collected from 26 locations in the vicinity of and within the village were measured to determine the width and position of the center-axis of the radioactive plume that passed over the village from the 29 August 1949 nuclear test. Measured soil inventories of Cs and Pu were plotted as a function of the distance from the supposed center-axis of the plume. A clear shape similar to a Gaussian function was observed in their spatial distributions with each maximum around a center-axis. It was suggested that the plume width that contaminated Dolon was at most 10 km and the real center-axis of the radioactive plume passed 0.7-0.9 km north of the supposed centerline. A peak-like shape with the maximum near the center-axis was also observed in the spatial distribution of the Pu/Cs activity ratio, which may reflect the fractionation effect between Pu and Cs during the deposition process. These results support the recently reported results. The data obtained here will provide useful information on the efforts to estimate radiation dose in Dolon as reliably as possible. Health Phys. 94(4):328-337; 2008.
Data-driven approach for assessing utility of medical tests using electronic medical records.
Skrøvseth, Stein Olav; Augestad, Knut Magne; Ebadollahi, Shahram
2015-02-01
To precisely define the utility of tests in a clinical pathway through data-driven analysis of the electronic medical record (EMR). The information content was defined in terms of the entropy of the expected value of the test with respect to a given outcome. A kernel density classifier was used to estimate the necessary distributions. To validate the method, we used data from the EMR of the gastrointestinal department at a university hospital. Blood tests from patients undergoing gastrointestinal surgery were analyzed with respect to a second surgery within 30 days of the index surgery. The information content is clearly reflected in the patient pathway for certain combinations of tests and outcomes. C-reactive protein tests coupled to anastomosis leakage, a severe complication, show a clear pattern of information gain through the patient trajectory, with the greatest gain from the test 3-4 days after the index surgery. We have defined the information content in a data-driven and information-theoretic way such that the utility of a test can be precisely defined. The results reflect clinical knowledge. In the cases studied here, the tests carry little negative impact. The general approach can be expanded to cases that carry a substantial negative impact, such as certain radiological techniques. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
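One standard information-theoretic reading of "information content of a test" is the mutual information between the test result and the outcome; a minimal sketch on a discretized joint table follows (the entropy-of-expected-value definition used in the paper differs in detail, so treat this only as an analogue):

```python
import math

def mutual_information(joint):
    """I(T; O) in bits from a joint probability table joint[t][o]."""
    p_t = [sum(row) for row in joint]            # marginal over test results
    p_o = [sum(col) for col in zip(*joint)]      # marginal over outcomes
    mi = 0.0
    for t, row in enumerate(joint):
        for o, p in enumerate(row):
            if p > 0:
                mi += p * math.log2(p / (p_t[t] * p_o[o]))
    return mi

# A test perfectly predictive of a binary outcome carries 1 bit;
# a test independent of the outcome carries none.
predictive = mutual_information([[0.5, 0.0], [0.0, 0.5]])
useless = mutual_information([[0.25, 0.25], [0.25, 0.25]])
```

Tracking such a quantity day by day along the patient trajectory gives the kind of "information gain through the pathway" curve the abstract describes.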
Modelling the distribution of chickens, ducks, and geese in China
Prosser, Diann J.; Wu, Junxi; Ellis, Erie C.; Gale, Fred; Van Boeckel, Thomas P.; Wint, William; Robinson, Tim; Xiao, Xiangming; Gilbert, Marius
2011-01-01
Global concerns over the emergence of zoonotic pandemics emphasize the need for high-resolution population distribution mapping and spatial modelling. Ongoing efforts to model disease risk in China have been hindered by a lack of available species level distribution maps for poultry. The goal of this study was to develop 1 km resolution population density models for China's chickens, ducks, and geese. We used an information theoretic approach to predict poultry densities based on statistical relationships between poultry census data and high-resolution agro-ecological predictor variables. Model predictions were validated by comparing goodness of fit measures (root mean square error and correlation coefficient) for observed and predicted values for 1/4 of the sample data which were not used for model training. Final output included mean and coefficient of variation maps for each species. We tested the quality of models produced using three predictor datasets and four regional stratification methods. For predictor variables, a combination of traditional predictors for livestock mapping and land use predictors produced the best goodness of fit scores. Comparison of regional stratifications indicated that for chickens and ducks, a stratification based on livestock production systems produced the best results; for geese, an agro-ecological stratification produced the best results. However, for all species, each method of regional stratification produced significantly better goodness of fit scores than the global model. Here we provide descriptive methods, analytical comparisons, and model output for China's first high resolution, species level poultry distribution maps. Output will be made available to the scientific and public community for use in a wide range of applications from epidemiological studies to livestock policy and management initiatives.
Hwang, James Ho-Jin; Duran, Adam
2016-08-01
Most of the time, pyrotechnic shock design and test requirements for space systems are provided as a Shock Response Spectrum (SRS) without the input time history. Since the SRS does not describe the input or the environment, a decomposition method is used to obtain the source time history. The main objective of this paper is to develop a decomposition method producing input time histories that can satisfy the SRS requirement based on the pyrotechnic shock test data measured from a mechanical impact test apparatus. At the heart of this decomposition method is the statistical representation of the pyrotechnic shock test data measured from the MIT Lincoln Laboratory (LL) designed Universal Pyrotechnic Shock Simulator (UPSS). Each pyrotechnic shock test record measured at the interface of a test unit has been analyzed to produce the temporal peak acceleration, Root Mean Square (RMS) acceleration, and the phase lag at each band center frequency. The maximum SRS of each filtered time history has been calculated to produce a relationship between the input and the response. Two new definitions are proposed as a result. The Peak Ratio (PR) is defined as the ratio between the maximum SRS and the temporal peak acceleration at each band center frequency. The ratio between the maximum SRS and the RMS acceleration is defined as the Energy Ratio (ER) at each band center frequency. Phase lag is estimated based on the time delay between the temporal peak acceleration at each band center frequency and the peak acceleration at the lowest band center frequency. This stochastic process has been applied to more than one hundred pyrotechnic shock test records to produce probabilistic definitions of the PR, ER, and the phase lag. The SRS is decomposed at each band center frequency using damped sinusoids with the PR and the decays obtained by matching the ER of the damped sinusoids to the ER of the test data. The final step in this stochastic SRS decomposition process is the Monte Carlo (MC
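The temporal peak and RMS quantities behind the proposed Peak Ratio and Energy Ratio can be sketched on a synthetic damped sinusoid. This is illustrative only; computing the SRS itself requires a single-degree-of-freedom response filter that is omitted here, and the signal parameters are invented.

```python
import math

def damped_sinusoid(amp, freq_hz, zeta, dt, n):
    """x(t) = amp * exp(-zeta * 2*pi*f * t) * sin(2*pi*f * t), sampled at dt."""
    w = 2 * math.pi * freq_hz
    return [amp * math.exp(-zeta * w * i * dt) * math.sin(w * i * dt)
            for i in range(n)]

def temporal_peak(x):
    return max(abs(v) for v in x)

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

# Given a max-SRS value, the paper's ratios would then be
#   PR = max_srs / temporal_peak(x)   and   ER = max_srs / rms(x)
x = damped_sinusoid(1.0, 10.0, 0.0, 0.001, 1000)  # undamped reference, 1 s
```

For the undamped reference the peak is the amplitude and the RMS is amplitude/sqrt(2); adding decay lowers both, which is what makes the ER a useful handle on the decay constant.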
Rubin, Adam; Avramova, Maria; Velazquez-Lozada, Alexander
2016-03-01
This report summarises the first phase of the Nuclear Energy Agency (NEA) and US Nuclear Regulatory Commission Benchmark based on NUPEC PWR Sub-channel and Bundle Tests (PSBT), which was intended to provide data for the verification of void distribution models in participants' codes. This phase was composed of four exercises: Exercise 1, a steady-state single sub-channel benchmark; Exercise 2, a steady-state rod bundle benchmark; Exercise 3, a transient rod bundle benchmark; and Exercise 4, a pressure drop benchmark. The experimental data provided to the participants of this benchmark come from a series of void measurement tests using full-size mock-ups for both Boiling Water Reactors (BWRs) and Pressurised Water Reactors (PWRs). These tests were performed from 1987 to 1995 by the Nuclear Power Engineering Corporation (NUPEC) in Japan and made available by the Japan Nuclear Energy Safety Organisation (JNES) for the purposes of this benchmark, which was organised by Pennsylvania State University. Twenty-one institutions from nine countries participated in this benchmark. Seventeen different computer codes were used in Exercises 1, 2, 3 and 4, among them porous-media, sub-channel, systems thermal-hydraulic, and Computational Fluid Dynamics (CFD) codes. It was observed that the codes tended to overpredict the thermal equilibrium quality at lower elevations and underpredict it at higher elevations. There was also a tendency to overpredict void fraction at lower elevations and underpredict it at higher elevations for the bundle test cases. The overprediction of void fraction at low elevations is likely caused by the x-ray densitometer measurement method used. Under sub-cooled boiling conditions, the voids accumulate at heated surfaces (and are therefore not seen in the centre of the sub-channel, where the measurements are taken), so the experimentally determined void fractions will be lower than the actual void fractions. Some of the best
Conover, W.J.; Cox, D.D.; Martz, H.F.
1997-12-01
When using parametric empirical Bayes estimation methods for estimating the binomial or Poisson parameter, the validity of the assumed beta or gamma conjugate prior distribution is an important diagnostic consideration. Chi-square goodness-of-fit tests of the beta or gamma prior hypothesis are developed for use when the binomial sample sizes or Poisson exposure times vary. Nine examples illustrate the application of the methods, using real data from such diverse applications as the loss of feedwater flow rates in nuclear power plants, the probability of failure to run on demand and the failure rates of the high pressure coolant injection systems at US commercial boiling water reactors, the probability of failure to run on demand of emergency diesel generators in US commercial nuclear power plants, the rate of failure of aircraft air conditioners, baseball batting averages, the probability of testing positive for toxoplasmosis, and the probability of tumors in rats. The tests are easily applied in practice by means of corresponding Mathematica® computer programs which are provided.
Soinski, Arthur; Hanson, Mark
2006-06-28
A current barrier to public acceptance of distributed generation (DG) and combined heat and power (CHP) technologies is the lack of credible and uniform information regarding system performance. Under a cooperative agreement, the Association of State Energy Research and Technology Transfer Institutions (ASERTTI) and the U.S. Department of Energy have developed four performance testing protocols to provide a uniform basis for comparison of systems. The protocols are for laboratory testing, field testing, long-term monitoring and case studies. They have been reviewed by a Stakeholder Advisory Committee made up of industry, public interest, end-user, and research community representatives. The types of systems covered include small turbines, reciprocating engines (including Stirling Cycle), and microturbines. The protocols are available for public use and the resulting data is publicly available in an online national database and two linked databases with further data from New York State. The protocols are interim pending comments and other feedback from users. Final protocols will be available in 2007. The interim protocols and the national database of operating systems can be accessed at www.dgdata.org. The project has entered Phase 2 in which protocols for fuel cell applications will be developed and the national and New York databases will continue to be maintained and populated.
Li, Li; Xiong, De-fu; Liu, Jia-wen; Li, Zi-xin; Zeng, Guang-cheng; Li, Hua-liang
2014-03-01
We aimed to evaluate the interference of 50 Hz extremely low frequency electromagnetic field (ELF-EMF) occupational exposure on the neurobehavior tests of workers performing tour-inspection close to transformers and distribution power lines. Occupational short-term "spot" measurements were carried out. 310 inspection workers and 300 logistics staff were selected as the exposure and control groups. The neurobehavior tests were performed through a computer-based neurobehavior evaluation system, including mental arithmetic, curve coincidence, simple visual reaction time, visual retention, auditory digit span and pursuit aiming. In 500 kV areas, the electric field intensity at 71.98% of the 590 measured spots was above 5 kV/m (the national occupational standard), while in 220 kV areas the electric field intensity at 15.69% of the 701 spots was above 5 kV/m. The magnetic flux density at all spots was below 1,000 μT (the ICNIRP occupational standard). The changes in neurobehavior scores showed no statistical significance, and results of the neurobehavior tests among different age and seniority groups showed no significant changes. Neurobehavior changes caused by daily repeated ELF-EMF exposure were not observed in the current study.
Lee, L.; Helsel, D.
2007-01-01
Analysis of low concentrations of trace contaminants in environmental media often results in left-censored data that are below some limit of analytical precision. Interpretation of values becomes complicated when there are multiple detection limits in the data, perhaps as a result of changing analytical precision over time. Parametric and semi-parametric methods, such as maximum likelihood estimation and robust regression on order statistics, can be employed to model distributions of multiply censored data and provide estimates of summary statistics. However, these methods are based on assumptions about the underlying distribution of the data. Nonparametric methods provide an alternative that does not require such assumptions. A standard nonparametric method for estimating summary statistics of multiply censored data is the Kaplan-Meier (K-M) method. This method has seen widespread usage in the medical sciences within a general framework termed "survival analysis," where it is employed with right-censored time-to-failure data. However, K-M methods are equally valid for the left-censored data common in the geosciences. Our S-language software provides an analytical framework based on K-M methods that is tailored to the needs of the earth and environmental sciences community. This includes routines for the generation of empirical cumulative distribution functions, prediction or exceedance probabilities, and related confidence limits computation. Additionally, our software contains K-M-based routines for nonparametric hypothesis testing among an unlimited number of grouping variables. A primary characteristic of K-M methods is that they do not perform extrapolation or interpolation. Thus, these routines cannot be used to model statistics beyond the observed data range or when linear interpolation is desired. For such applications, the aforementioned parametric and semi-parametric methods must be used.
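The product-limit (Kaplan-Meier) step such routines build on can be sketched for right-censored data; left-censored environmental data are handled by the standard flipping trick (subtract every value from a constant above the maximum, estimate, then flip back). Function names here are illustrative, not the package's API.

```python
def kaplan_meier(times, observed):
    """Product-limit estimator: return [(t, S(t))] at each event time.
    observed[i] is True for a detected value, False for a censored one."""
    pairs = sorted(zip(times, observed))
    n = len(pairs)
    s, at_risk, i, out = 1.0, n, 0, []
    while i < n:
        t = pairs[i][0]
        ties = [e for tt, e in pairs if tt == t]
        d = sum(ties)                 # events at t
        if d:
            s *= 1 - d / at_risk      # product-limit update
            out.append((t, s))
        at_risk -= len(ties)          # events and censorings leave the risk set
        i += len(ties)
    return out

# With no censoring, S steps down by 1/n at each observation
curve = kaplan_meier([2, 1, 3, 4], [True, True, True, True])
```

Note how a censored observation (observed=False) reduces the risk set without forcing a step in S, which is exactly why K-M handles multiple detection limits gracefully.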
Adaptive Metropolis Sampling with Product Distributions
Wolpert, David H.; Lee, Chiu Fan
2005-01-01
The Metropolis-Hastings (MH) algorithm is a way to sample a provided target distribution pi(z). It works by repeatedly sampling a separate proposal distribution T(x,x') to generate a random walk {x(t)}. We consider a modification of the MH algorithm in which T is dynamically updated during the walk. The update at time t uses the samples {x(t'): t' < t} to estimate the product distribution that has the least Kullback-Leibler distance to pi. That estimate is the information-theoretically optimal mean-field approximation to pi. We demonstrate through computer experiments that our algorithm produces samples that are superior to those of the conventional MH algorithm.
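The baseline the modification builds on is the standard random-walk Metropolis sampler; a minimal 1-D sketch follows. The paper's contribution, adapting the proposal toward the KL-optimal product distribution, is not reproduced here, and the parameters are illustrative.

```python
import math
import random

def metropolis_hastings(log_pi, x0, steps, proposal_sd=1.0, seed=0):
    """Random-walk MH: propose x' ~ N(x, sd) and accept with
    probability min(1, pi(x') / pi(x)), working in log space."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(steps):
        y = x + rng.gauss(0.0, proposal_sd)
        if math.log(rng.random()) < log_pi(y) - log_pi(x):
            x = y                      # accept the proposal
        samples.append(x)              # otherwise keep the current state
    return samples

# Target: standard normal, pi(z) proportional to exp(-z^2 / 2)
chain = metropolis_hastings(lambda z: -z * z / 2, 0.0, 5000)
```

In the adaptive variant, `proposal_sd` (or more generally the whole proposal) would be re-estimated from the accumulated `samples` during the walk.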
Das, Shantanu; Yadav, Ramnayan
2016-01-01
When electronic devices are designed to meet specific requirements, the designers do not generally envisage the amount of electromagnetic interference the device may produce as power-line conducted noise and radiated noise. After the product is developed, these quantities are measured in a certified EMI-EMC set-up to obtain figures for conducted emissions (CE) and radiated emissions (RE), which are then mitigated to within the limits of the chosen standard. In the latest embodiment of the Fault Tolerant Power Distribution System ECPS™ (Electronics Corporation Power Supply) developed for NPCIL (PHWR 700 MW plant), we carried out CE and RE tests, quantified the spectra obtained for CE and RE, and mitigated them as per the CISPR 22 standard. In this short article we present the CE and RE results for the latest ECPS product, obtained at the EMI-EMC Centre of ECIL Hyderabad. (author)
Allehyani, Ahmed [University of Southern California, Department of Electrical Engineering; Beshir, Mohammed [University of Southern California, Department of Electrical Engineering
2015-02-01
Voltage regulators help maintain an acceptable voltage profile for the system. This paper discusses the effect of installing voltage regulators to correct the voltage drop caused by the additional load of electric vehicles being charged. The effect is studied in the afternoon, when the peak load occurs, using the IEEE 34-bus test feeder. First, only one spot node is used to charge the electric vehicles while a voltage regulator is present. Second, five spot nodes are loaded at the same time to charge the electric vehicles while voltage regulators are installed at each node. Finally, the impact of electric vehicles on distribution feeders that do not have voltage regulators is examined.
Gledenov, Yu. M.; Sedysheva, M.; Khuukhenkhuu, G.
1997-01-01
On the basis of measurements of double differential cross sections for (n,α) reactions in the 5-7 MeV neutron energy region using a gridded ionization chamber (GIC), we constructed a new GIC which, compared with the old ones, can bear higher pressure and makes it possible to measure (n,p) reactions up to 6 MeV and (n,xα) reactions up to 20 MeV. To test the new chamber, the saturation property of argon and krypton mixed with a few percent CO2 was studied using 241Am and compound Pu α sources and tritium from 6Li(nth,t)4He, and the two-dimensional spectra for the 241Am and Pu α sources, 6Li(nth,t)4He and H(n,p) reactions were measured. The measured energy spectra and angular distributions for α and tritium are reasonable, and the data derived for α, proton and tritium in argon and krypton from the measured spectra were compared with the calculated ones. They are in good agreement. The angular distributions and energy spectra for the 58Ni(n,p)58Co reaction at 4.1 MeV neutron energy were measured.
Oualkacha, Karim; Lakhal-Chaieb, Lajmi; Greenwood, Celia Mt
2016-04-01
RVPedigree (Rare Variant association tests in Pedigrees) implements a suite of programs facilitating genome-wide analysis of association between a quantitative trait and autosomal region-based genetic variation. The main features here are the ability to appropriately test for association of rare variants with non-normally distributed quantitative traits, and also to appropriately adjust for related individuals, either from families or from population structure and cryptic relatedness. RVPedigree is available as an R package. The package includes calculation of kinship matrices, various options for coping with non-normality, three different ways of estimating statistical significance incorporating triaging to enable efficient use of the most computationally-intensive calculations, and a parallelization option for genome-wide analysis. The software is available from the Comprehensive R Archive Network [CRAN.R-project.org] under the name 'RVPedigree' and at [https://github.com/GreenwoodLab]. It has been published under General Public License (GPL) version 3 or newer. © The Author 2016; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.
Wu, Gang; Duan, Yu-Han
2018-02-01
To study the positive rate of the Coombs test in patients with clinical anemia or blood transfusion, and its effect on clinical blood transfusion. Seventy patients with hemoglobin levels in the normal range were enrolled into the control group, while 130 internal medicine patients and 70 surgical patients with anemia or blood transfusion, whose lower hemoglobin levels were confirmed by micro-column gel anti-human globin detection card, were enrolled into the anemia or blood transfusion (A or BT) group. The Coombs test was performed for all patients, and the positive patients in the Department of Internal Medicine were further typed. Among the 70 surgical patients with anemia or blood transfusion, 14 cases were directly detected as anti-human globin positive, a detection rate of 20%; among the 130 internal medicine patients with anemia or blood transfusion, 54 cases were directly detected as anti-human globin positive, a detection rate of 41.4%. Among the 270 cases, the highest positive rate (66.7%) was observed in patients with 50-59 g/L of hemoglobin. According to the typing test, the samples of the 54 internal medicine patients with anemia who were directly detected as anti-human globin positive could be divided into anti-C3d (7 cases, accounting for 13.0%), anti-IgG (12 cases, accounting for 22.2%) and anti-C3d + anti-IgG (35 cases, accounting for 64.8%). According to disease, the anti-human globin positive ratio was high in patients with cancer, nephropathy and gastroenteropathy, and in patients in the intensive care unit; moreover, the blood transfusion frequency of these patients was higher than that of anti-human globin negative patients (P… blood transfusion, so as to ensure the effectiveness of blood transfusion.
MASM: a market architecture for sensor management in distributed sensor networks
Viswanath, Avasarala; Mullen, Tracy; Hall, David; Garga, Amulya
2005-03-01
Rapid developments in sensor technology and its applications have energized research efforts towards devising a firm theoretical foundation for sensor management. Ubiquitous sensing, wide bandwidth communications and distributed processing provide both opportunities and challenges for sensor and process control and optimization. Traditional optimization techniques do not have the ability to simultaneously consider the wildly non-commensurate measures involved in sensor management in a single optimization routine. Market-oriented programming provides a valuable and principled paradigm to designing systems to solve this dynamic and distributed resource allocation problem. We have modeled the sensor management scenario as a competitive market, wherein the sensor manager holds a combinatorial auction to sell the various items produced by the sensors and the communication channels. However, standard auction mechanisms have been found not to be directly applicable to the sensor management domain. For this purpose, we have developed a specialized market architecture MASM (Market architecture for Sensor Management). In MASM, the mission manager is responsible for deciding task allocations to the consumers and their corresponding budgets and the sensor manager is responsible for resource allocation to the various consumers. In addition to having a modified combinatorial winner determination algorithm, MASM has specialized sensor network modules that address commensurability issues between consumers and producers in the sensor network domain. A preliminary multi-sensor, multi-target simulation environment has been implemented to test the performance of the proposed system. MASM outperformed the information theoretic sensor manager in meeting the mission objectives in the simulation experiments.
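The winner-determination core of a combinatorial auction can be sketched by brute force for small bid sets. This is illustrative only; MASM's modified winner-determination algorithm and its sensor-specific constraints are not shown, and the item names are invented.

```python
from itertools import combinations

def winner_determination(bids):
    """bids: list of (bundle, price) with bundle a frozenset of item ids.
    Return (revenue, indices) of the best set of pairwise-disjoint bids.
    Exhaustive search: fine for toy sizes, exponential in general."""
    best_value, best_combo = 0, ()
    for r in range(1, len(bids) + 1):
        for combo in combinations(range(len(bids)), r):
            taken, feasible = set(), True
            for i in combo:
                bundle = bids[i][0]
                if taken & bundle:     # two bids want the same item
                    feasible = False
                    break
                taken |= bundle
            if feasible:
                value = sum(bids[i][1] for i in combo)
                if value > best_value:
                    best_value, best_combo = value, combo
    return best_value, best_combo

# Three consumers bidding for sensor/channel items
bids = [(frozenset({"radar1", "radar2"}), 10),
        (frozenset({"radar2", "comm"}), 8),
        (frozenset({"comm"}), 5)]
```

The auctioneer (here, the sensor manager) sells each item at most once, so the second bid loses to the combination of the first and third even though it outbids the third alone.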
Deville, Craig; O'Neill, Thomas; Wright, Benjamin D.; Woodcock, Richard W.; Munoz-Sandoval, Ana; Gershon, Richard C.; Bergstrom, Betty
1998-01-01
Articles in this special section consider (1) flow in test taking (Craig Deville); (2) testwiseness (Thomas O'Neill); (3) test length (Benjamin Wright); (4) cross-language test equating (Richard W. Woodcock and Ana Munoz-Sandoval); (5) computer-assisted testing and testwiseness (Richard Gershon and Betty Bergstrom); and (6) Web-enhanced testing…
Biard, B.
2013-01-01
Highlights: • The FP quantitative distribution in the fuel bundle is measured by gamma-spectrometry. • The FP location is obtained with emission tomograms and other experiment results. • X-ray tomograms provide the material and density mapping of the degraded bundle. • The self-attenuation may then be computed for each isotope at its key line energy. • Results are consistent with other FPT3 measurements, with acceptable uncertainties. -- Abstract: The international Phébus FP programme, initiated in 1988 by the French “Institut de Radioprotection et de Sûreté Nucléaire” (IRSN), in cooperation with the European Commission (EC) and with financial support from USNRC, Canada, PSI/HSK (Switzerland), Japan and Korea, was aimed at studying severe accident phenomena: the fuel degradation, the release of fission products (FPs) and their transport through the reactor coolant system to the containment building. The FPT3 test, conducted in 2004, was the last of the five light water reactor core meltdown accident tests performed on irradiated fuel rods. After the experiment, the test device was recovered and analysed through a full set of non-destructive examinations performed over the fuel bundle zone, including gamma-scanning, gamma emission tomography, X-ray radiography and X-ray transmission tomography. The gamma-scanning was specifically devoted to the location, identification and amount quantification of the FPs remaining in the bundle. Since the fuel bundle became highly degraded during the experiment, the geometry was different at each level examined, and did not correspond to the well-known initial state. The self-attenuation of the test device and consequently the efficiency correction could then not be estimated by classical means that need to know the geometry of the object. Using the results of the other non-destructive examinations, specific computational tools and methods have therefore been developed to compute the self-attenuation of the bundle
Nielsen, Katrine; Kalmykova, Yuliya; Strömvall, Ann-Margret
2015-01-01
The distribution of polycyclic aromatic hydrocarbons (PAHs) in different particulate fractions in stormwater: Total, Particulate, Filtrated, Colloidal and Dissolved fractions, were examined and compared to synthetic suspensions of humic acid colloids and iron nano-sized particles. The distribution...
Nigam, R.; Khare, N.
in this region, quantitative spatial distribution data was generated for morpho-groups (angular-asymmetrical and rounded-symmetrical). The distribution revealed less abundance of angular-asymmetrical forms at the river mouth thus indicating an inverse...
Edgington, Eugene S
1980-01-01
.... This book provides all the necessary theory and practical guidelines, such as instructions for writing computer programs, to permit experimenters to transform any statistical test into a distribution-free test...
Rose-Hansen, J.; Soerensen, H.
1983-01-01
The Ilimaussaq intrusion may be characterized as a geochemically abnormal region, since its rocks are strongly enriched in a number of rare elements, including elements which accompany uranium in deposits in other parts of the world. Examples are the rare earth metals, Nb, Ta, Be and Li, and metals such as Cu, Pb, Zn, Mo and Sn. It was proposed to develop and test a model for the supergene distribution of uranium and accompanying elements around a known uranium deposit associated with an alkaline intrusion. The most promising results are those obtained by the PCA technique. For a more preliminary study of a region, fjord and river sediments might be the sampling target. These sediments were found to be mixtures in which the proportion of material from the Ilimaussaq U-deposit could be evaluated by the PCA technique, using a distance function related to the loadings, in the first principal dimension, of the elements characterizing the Ilimaussaq intrusion. One of the major features of the material sampled in this study is the generally high degree of preservation, in the sub-arctic environment, of the primary igneous mineralogy in the sediments; in other areas, the structure of the data should be investigated in order to test them in this respect, one obvious way being X-ray diffraction analysis. It was indicated that uranium is selectively adsorbed on the organic material in lakes and is able to reflect the concentration of U in the lake waters, indicating the ultimate potential of the drainage areas in question. It remains to be established, however, whether the correlation of uranium with the organic material of the lake sediments actually reflects the long-term U concentrations of the lake water. The cluster analysis and discriminant analysis techniques proved to be of lesser value in this project. (author)
Kadionik, P.
1992-01-01
The Eurogam gamma-ray multidetector comprises, in a first phase, 45 hyper-pure Ge detectors, each surrounded by an anti-Compton shield of 10 BGO detectors. In order to ensure the highest reliability and an easy upgrade of the array, the electronic cards have been designed in the new VXI (VME Bus Extension to Instrumentation) standard; this makes it possible to drive the 495 detectors, with 4300 parameters adjustable by software. The data acquisition architecture is distributed over an Ethernet network. The software for set-up and testing of the VXI cards has been written in C; it uses a real-time kernel (VxWorks from Wind River Systems) interfaced to the Sun Unix environment. Inter-task communications use the Remote Procedure Call protocol. The inner shell of the software is connected to a database and to a graphic interface which allows engineers or physicists to set up the many parameters to be adjusted very easily
V. A. Mubassarova
2014-01-01
Results of uniaxial compression tests of rock samples in electromagnetic fields are presented. The experiments were performed in the Laboratory of Basic Physics of Strength, Institute of Continuous Media Mechanics, Ural Branch of RAS (ICMM). Deformation of samples was studied, and acoustic emission (AE) signals were recorded. During the tests, loads varied by stages. Specimens of granite from the Kainda deposit in Kyrgyzstan (similar to samples tested at the Research Station of RAS, hereafter RS RAS) were subjected to electric pulses at specified levels of compression load. The electric pulse supply was galvanic; two graphite electrodes were fixed at opposite sides of each specimen. The multichannel Amsy-5 Vallen System was used to record AE signals in the six-channel mode, which provided for determination of the spatial locations of AE sources. Strain of the specimens was studied with the application of original methods of strain computation based on analyses of optical images of deformed specimen surfaces in the LaVision StrainMaster system. Acoustic emission experiment data were interpreted on the basis of analyses of the AE activity in time, i.e. the number of AE events per second, and analyses of the signals' energy and the AE sources' locations, i.e. defects. The experiment was conducted at ICMM with a set of equipment offering advanced diagnostic capabilities compared to earlier experiments described in [Zakupin et al., 2006a, 2006b; Bogomolov et al., 2004]; it can provide new information on the acoustic emission and deformation responses of loaded rock specimens to external electric pulses. The research task also included verification of the reproducibility of the effect (changes of AE activity when fracturing rates respond to electrical pulses) revealed earlier in studies conducted at RS RAS. In terms of the principle of randomization, such verification is methodologically significant, as new effects, i.e. physical laws, can be considered
Distributed Energy Technology Laboratory
Federal Laboratory Consortium — The Distributed Energy Technologies Laboratory (DETL) is an extension of the power electronics testing capabilities of the Photovoltaic System Evaluation Laboratory...
Ana Lúcia A. Sampaio
Considering the morphology, diet and spatial distribution of Satanoperca pappaterra and Crenicichla britskii (Perciformes: Cichlidae) in the Upper Paraná River floodplain (Brazil), the following questions were investigated: (1) Could the body shape predict the use of trophic resources and habitat by C. britskii and S. pappaterra? (2) Could the relationship between morphology and use of trophic resources and habitat also be extended to the intraspecific scale? (3) What are the most important morphological traits used to predict the variation in diet and habitat occupation within and between species? We hypothesized that intra- and interspecific differences in morphological patterns imply different forms of resource exploitation, and that ecomorphological analysis enables the identification of trophic and spatial niche segregation. Fish samplings were performed in different types of habitats (rivers, secondary channels, connected and disconnected lagoons) in the Upper Paraná River floodplain. Analysis of the stomach contents was conducted to characterize the feeding patterns, and twenty-two ecomorphological indices were calculated from linear morphological measurements and areas. A principal component analysis (PCA) run with these indices evidenced the formation of two significant axes, revealing in axis 1 an ecomorphological ordination according to the type of habitat, regardless of the species. The individuals of both species exploiting lotic habitats tended to have morphological traits that enable rapid progressive and retrograde movements, braking and continuous swimming, whereas individuals found in lentic and semi-lotic habitats presented morphology adapted to greater maneuverability and stabilization in deflections. Axis 2, on the other hand, evidenced a segregation related to feeding ecology between S. pappaterra and C. britskii. The relationship between morphology and use of spatial and feeding resources was corroborated by the
Liu, Tao; Liu, Xuewei
2018-06-01
Pore-filling and fracture-filling are two basic distribution morphologies of gas hydrates in nature. A clear knowledge of gas hydrate morphology is important for better resource evaluation and exploitation; improper exploitation may cause seafloor instability and exacerbate the greenhouse effect. To identify the gas hydrate morphologies in sediments, we made a thorough analysis of the characteristics of gas hydrate bearing sediments (GHBS) based on rock physics modeling. With the accumulation of gas hydrate in sediments, the velocities of both types of GHBS increase, and their densities decrease. Therefore, these two morphologies cannot be differentiated by velocity or density alone. After a series of tests, we found that the attribute ρV_P^0.5, as a function of hydrate concentration, shows opposite trends for these two morphologies due to their different formation mechanisms. The morphology of gas hydrate can thus be identified by comparing the measured ρV_P^0.5 with its background value, i.e. the ρV_P^0.5 of the hydrate-free sediments. In 2013, China's second gas hydrate expedition was conducted by the Guangzhou Marine Geologic Survey to explore gas hydrate resources in the northern South China Sea, and both hydrate morphologies were recovered. We applied this method to three sites, which include two pore-filling and three fracture-filling hydrate layers. The data points that agree with the actual situations account for 72% and 82% of the total for the two pore-filling hydrate layers, respectively, and 86%, 74%, and 69% for the three fracture-filling hydrate layers, respectively.
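As an illustrative sketch of the attribute comparison described in this abstract, assuming a measured bulk density ρ, P-wave velocity V_P, and a hydrate-free background pair (ρ_bg, V_P,bg); the sign convention used below to separate the two morphologies is a hypothetical placeholder, not the paper's calibrated rule:

```python
import numpy as np

def rho_vp_attribute(rho, vp):
    """Compute the rho * V_P^0.5 attribute (units follow the inputs)."""
    return rho * np.sqrt(vp)

def classify_morphology(rho, vp, rho_bg, vp_bg):
    """Illustrative rule: compare the attribute with its hydrate-free
    background value.  Which side of the background corresponds to which
    morphology is an assumption here, not taken from the paper."""
    attr = rho_vp_attribute(rho, vp)
    attr_bg = rho_vp_attribute(rho_bg, vp_bg)
    return "pore-filling" if attr > attr_bg else "fracture-filling"

# Hypothetical numbers: density in g/cm^3, velocity in m/s
print(classify_morphology(rho=1.9, vp=2100.0, rho_bg=2.0, vp_bg=1800.0))
```

The comparison itself is the whole method: only the deviation of ρV_P^0.5 from its local background trend carries the morphology information.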
Christenson, M.; Stemmley, S.; Jung, S.; Mettler, J.; Sang, X.; Martin, D.; Kalathiparambil, K.; Ruzic, D. N.
2017-08-01
The ThermoElectric-driven Liquid-metal plasma-facing Structures (TELS) experiment at the University of Illinois is a gas-puff driven, theta-pinch plasma source that is used as a test stand for off-normal plasma events incident on materials in the edge and divertor regions of a tokamak. The ion temperatures and resulting energy distributions are crucial for understanding how well a TELS pulse can simulate an extreme event in a larger, magnetic confinement device. A retarding field energy analyzer (RFEA) has been constructed for use with such a transient plasma due to its inexpensive and robust nature. The innovation surrounding the use of a control analyzer in conjunction with an actively sampling analyzer is presented and the conditions of RFEA operation are discussed, with results presented demonstrating successful performance under extreme conditions. Such extreme conditions are defined by heat fluxes on the order of 0.8 GW m⁻² on time scales of nearly 200 μs. Measurements from the RFEA indicate two primary features for a typical TELS discharge, following closely the pre-ionizing coaxial gun discharge characteristics. For the case using the pre-ionization pulse (PiP) and the theta pinch, the measured ion signal showed an ion temperature of 23.3 ± 6.6 eV for the first peak and 17.6 ± 1.9 eV for the second peak. For the case using only the PiP, the measured signal showed an ion temperature of 7.9 ± 1.1 eV for the first peak and 6.6 ± 0.8 eV for the second peak. These differences illustrate the effectiveness of the theta pinch for imparting energy to the ions. This information also highlights the importance of TELS as one of the few linear pulsed plasma sources whereby moderately energetic ions strike targets without the need for sample biasing.
Dionne, B.; Tzanos, C.P.
2011-01-01
To support the safety analyses required for the conversion of the Belgian Reactor 2 (BR2) from highly-enriched uranium (HEU) to low-enriched uranium (LEU) fuel, the simulation of a number of loss-of-flow tests, with or without loss of pressure, has been undertaken. These tests were performed at BR2 in 1963 and used instrumented fuel assemblies (FAs) with thermocouples (TCs) embedded in the cladding, as well as probes to measure the FAs' power on the basis of their coolant temperature rise. The availability of experimental data for these tests offers an opportunity to better establish the credibility of the RELAP5-3D model and methodology used in the conversion analysis. In order to support the HEU to LEU conversion safety analyses of the BR2 reactor, RELAP simulations of a number of loss-of-flow/loss-of-pressure tests have been undertaken. Preliminary analyses showed that the conservative power distributions used historically in the BR2 RELAP model resulted in a significant overestimation of the peak cladding temperature during the transient. It was therefore concluded that better estimates of the steady-state and decay power distributions were needed to accurately predict the cladding temperatures measured during the tests and to establish the credibility of the RELAP model and methodology. The new approach ('best estimate' methodology) uses the MCNP5, ORIGEN-2 and BERYL codes to obtain steady-state and decay power distributions for the BR2 core during the tests A/400/1, C/600/3 and F/400/1. This methodology can easily be extended to simulate any BR2 core configuration. Comparisons with measured peak cladding temperatures showed much better agreement when power distributions obtained with the new methodology are used.
Signal correlations in biomass combustion. An information theoretic analysis
Ruusunen, M.
2013-09-01
Increasing environmental and economic awareness are driving the development of combustion technologies to efficient biomass use and clean burning. To accomplish these goals, quantitative information about combustion variables is needed. However, for small-scale combustion units the existing monitoring methods are often expensive or complex. This study aimed to quantify correlations between flue gas temperatures and combustion variables, namely typical emission components, heat output, and efficiency. For this, data acquired from four small-scale combustion units and a large circulating fluidised bed boiler was studied. The fuel range varied from wood logs, wood chips, and wood pellets to biomass residue. Original signals and a defined set of their mathematical transformations were applied to data analysis. In order to evaluate the strength of the correlations, a multivariate distance measure based on information theory was derived. The analysis further assessed time-varying signal correlations and relative time delays. Ranking of the analysis results was based on the distance measure. The uniformity of the correlations in the different data sets was studied by comparing the 10-quantiles of the measured signal. The method was validated with two benchmark data sets. The flue gas temperatures and the combustion variables measured carried similar information. The strongest correlations were mainly linear with the transformed signal combinations and explicable by the combustion theory. Remarkably, the results showed uniformity of the correlations across the data sets with several signal transformations. This was also indicated by simulations using a linear model with constant structure to monitor carbon dioxide in flue gas. Acceptable performance was observed according to three validation criteria used to quantify modelling error in each data set. 
In general, the findings demonstrate that the presented signal transformations enable real-time approximation of the studied combustion variables. The potential of flue gas temperatures for monitoring the quality and efficiency of combustion allows development toward cost-effective control systems. Moreover, the uniformity of the presented signal correlations could enable straightforward replication of such systems. This would cumulatively impact the reduction of emissions and fuel consumption in small-scale biomass combustion. (orig.)
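The abstract does not specify the multivariate information-theoretic distance that was derived, but a common measure of this kind is the normalized variation of information, d = 1 - I(X;Y)/H(X,Y), which is 0 for identical signals and approaches 1 for independent ones. A minimal sketch on synthetic stand-ins for a flue gas temperature and a correlated emission signal (all data and parameters below are illustrative assumptions):

```python
import numpy as np

def mi_distance(x, y, bins=16):
    """Information-theoretic distance d = 1 - I(X;Y)/H(X,Y), estimated
    from a 2-D histogram (plug-in entropy estimates, natural log)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    nz = pxy > 0
    h_xy = -np.sum(pxy[nz] * np.log(pxy[nz]))
    h_x = -np.sum(px[px > 0] * np.log(px[px > 0]))
    h_y = -np.sum(py[py > 0] * np.log(py[py > 0]))
    mi = h_x + h_y - h_xy
    return 1.0 - mi / h_xy if h_xy > 0 else 0.0

rng = np.random.default_rng(0)
t = rng.normal(size=5000)                      # stand-in: flue gas temperature
co2 = 0.8 * t + 0.1 * rng.normal(size=5000)    # strongly correlated variable
noise = rng.normal(size=5000)                  # unrelated variable
print(mi_distance(t, co2) < mi_distance(t, noise))  # True: correlated pair is closer
```

Ranking candidate signal transformations by such a distance is one plausible way to reproduce the kind of correlation ranking the study describes.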
Sentence Comprehension as Mental Simulation: An Information-Theoretic Perspective
Gabriella Vigliocco
2011-11-01
It has been argued that the mental representation resulting from sentence comprehension is not (just) an abstract symbolic structure but a "mental simulation" of the state of affairs described by the sentence. We present a particular formalization of this theory and show how it gives rise to quantifications of the amount of syntactic and semantic information conveyed by each word in a sentence. These information measures predict simulated word-processing times in a dynamic connectionist model of sentence comprehension as mental simulation. A quantitatively similar relation between information content and reading time is known to be present in human reading-time data.
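A minimal sketch of per-word information content in the spirit described above, using generic bigram surprisal -log2 P(w_i | w_{i-1}) on a toy corpus; the paper's own measures come from a connectionist simulation model, which this does not reproduce:

```python
import math
from collections import Counter

def surprisal(sentence, bigram_counts, unigram_counts):
    """Per-word information content -log2 P(w_i | w_{i-1}) from bigram
    counts (maximum-likelihood estimate, no smoothing)."""
    words = sentence.split()
    out = []
    for prev, w in zip(words, words[1:]):
        p = bigram_counts[(prev, w)] / unigram_counts[prev]
        out.append(-math.log2(p))
    return out

# Toy corpus; "." acts as a sentence separator.
corpus = "the dog barks . the dog sleeps . the cat sleeps .".split()
uni = Counter(corpus[:-1])                 # counts of left contexts
bi = Counter(zip(corpus, corpus[1:]))      # bigram counts
print(surprisal("the dog barks", bi, uni))
```

Here "dog" after "the" is less surprising (P = 2/3) than "barks" after "dog" (P = 1/2), so the second word carries less information than the third, mirroring the information/reading-time link the abstract describes.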
Towards an Information Theoretic Analysis of Searchable Encryption (Extended Version)
Sedghi, S.; Doumen, J.M.; Hartel, Pieter H.; Jonker, Willem
2008-01-01
Searchable encryption is a technique that allows a client to store data in encrypted form on a curious server, such that data can be retrieved while leaking a minimal amount of information to the server. Many searchable encryption schemes have been proposed and proved secure in their own
Towards an Information Theoretic Analysis of Searchable Encryption
Sedghi, S.; Doumen, J.M.; Hartel, Pieter H.; Jonker, Willem
2008-01-01
Searchable encryption is a technique that allows a client to store data in encrypted form on a curious server, such that data can be retrieved while leaking a minimal amount of information to the server. Many searchable encryption schemes have been proposed and proved secure in their own
Multi-way Communications: An Information Theoretic Perspective
Chaaban, Anas; Sezgin, Aydin
2015-01-01
Multi-way communication is a means to significantly improve the spectral efficiency of wireless networks. For instance, in a bi-directional (or two-way) communication channel, two users can simultaneously use the transmission medium to exchange
Multi-way Communications: An Information Theoretic Perspective
Chaaban, Anas
2015-09-15
Multi-way communication is a means to significantly improve the spectral efficiency of wireless networks. For instance, in a bi-directional (or two-way) communication channel, two users can simultaneously use the transmission medium to exchange information, thus achieving up to twice the rate that would be achieved had each user transmitted separately. Multi-way communications provides an overview on the developments in this research area since it has been initiated by Shannon. The basic two-way communication channel is considered first, followed by the two-way relay channel obtained by the deployment of an additional cooperative relay node to improve the overall communication performance. This basic setup is then extended to multi-user systems. For all these setups, fundamental limits on the achievable rates are reviewed, thereby making use of a linear high-SNR deterministic channel model to provide valuable insights which are helpful when discussing the coding schemes for Gaussian channel models in detail. Several tools and communication strategies are used in the process, including (but not limited to) computation, signal-space alignment, and nested-lattice codes. Finally, extensions of multi-way communication channels to multiple antenna settings are discussed. © 2015 A. Chaaban and A. Sezgin.
Information Theoretic Secret Key Generation: Structured Codes and Tree Packing
Nitinawarat, Sirin
2010-01-01
This dissertation deals with a multiterminal source model for secret key generation by multiple network terminals with prior and privileged access to a set of correlated signals complemented by public discussion among themselves. Emphasis is placed on a characterization of secret key capacity, i.e., the largest rate of an achievable secret key,…
Information-theoretical aspects of quantum-mechanical entropy
Wehrl, A.
1990-01-01
Properties of the quantum (= von Neumann) entropy S(ρ) = -k Tr ρ ln ρ, ρ being a compact operator, are proved first, and differences from the classical case, e.g. the Shannon entropy, are worked out. The main result is on the strong subadditivity of this quantum entropy. Then another entropy, a function not of the state but of the dynamics of the system, is considered as a quantum analogue of the classical Kolmogorov-Sinai entropy. An attempt at defining such a quantity succeeded only recently, in a paper by Connes, Narnhofer and Thirring. A definition of this entropy is given. 34 refs
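The von Neumann entropy defined above can be computed directly from the eigenvalues of the density matrix; a minimal sketch with the constant k set to 1:

```python
import numpy as np

def von_neumann_entropy(rho, k=1.0):
    """S(rho) = -k Tr(rho ln rho), via the eigenvalues of the (Hermitian)
    density matrix; 0 ln 0 is taken as 0 by dropping zero eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return -k * np.sum(evals * np.log(evals))

# A maximally mixed qubit has entropy ln 2; a pure state has entropy 0.
mixed = np.eye(2) / 2
pure = np.array([[1.0, 0.0], [0.0, 0.0]])
print(np.isclose(von_neumann_entropy(mixed), np.log(2)))  # True
print(np.isclose(von_neumann_entropy(pure), 0.0))         # True
```

Unlike the Shannon entropy of a fixed distribution, S(ρ) is basis-independent: it depends only on the spectrum of ρ.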
Information theoretical assessment of visual communication with subband coding
Rahman, Zia-ur; Fales, Carl L.; Huck, Friedrich O.
1994-09-01
A well-designed visual communication channel is one which transmits the most information about a radiance field with the fewest artifacts. The role of image processing, encoding and restoration is to improve the quality of visual communication channels by minimizing the error in the transmitted data. Conventionally this role has been analyzed strictly in the digital domain, neglecting the effects of image-gathering and image-display devices on the quality of the image. This results in the design of a visual communication channel which is 'suboptimal'. We propose an end-to-end assessment of the imaging process which incorporates the influences of these devices in the design of the encoder and the restoration process. This assessment combines Shannon's communication theory with Wiener's restoration filter and with the critical design factors of the image-gathering and display devices, thus providing the metrics needed to quantify and optimize the end-to-end performance of the visual communication channel. Results show that the design of the image-gathering device plays a significant role in determining the quality of the visual communication channel and in designing the analysis filters for subband encoding.
Information-theoretical analysis of private content identification
Voloshynovskiy, S.; Koval, O.; Beekhof, F.; Farhadzadeh, F.; Holotyak, T.
2010-01-01
In recent years, content identification based on digital fingerprinting has attracted a lot of attention in different emerging applications. At the same time, the theoretical analysis of digital fingerprinting systems for the finite-length case remains an open issue. Additionally, privacy leaks caused by
Object Recognition via Information-Theoretic Measures/Metrics
Repperger, Daniel W; Pinkus, Alan R; Skipper, Julie A; Schrider, Christian D
2006-01-01
.... In aerial military images, objects with different orientation can be reasonably approximated by a single identification signature consisting of the average histogram of the object under rotations...
Information Theoretical Limits of Free-Space Optical Links
Ansari, Imran Shafique
2016-08-25
Generalized fading has been an integral part of wireless communications. It not only characterizes the wireless channel appropriately but also allows its utilization for further performance analysis of various types of wireless communication systems. Under the umbrella of generalized fading channels, a unified ergodic capacity analysis of a free-space optical (FSO) link under both types of detection techniques (i.e., intensity modulation/direct detection (IM/DD) as well as heterodyne detection) over generalized atmospheric turbulence channels that account for generalized pointing errors is presented. Specifically, unified exact closed-form expressions for the moments of the end-to-end signal-to-noise ratio (SNR) of a single-link FSO transmission system are presented. Subsequently, capitalizing on these unified statistics, unified exact closed-form expressions for the ergodic capacity performance metric of FSO link transmission systems are offered. Additionally, for scenarios wherein an exact closed-form solution is not possible to obtain, some asymptotic results are derived in the high-SNR regime. All the presented results are verified via computer-based Monte-Carlo simulations.
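As a hedged illustration of the Monte-Carlo verification mentioned above, ergodic capacity E[log2(1 + SNR)] can be estimated by averaging over fading realizations. The simple lognormal scintillation model and all parameter values below are assumptions for illustration, not the paper's generalized turbulence-plus-pointing-error channel:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy channel: lognormal irradiance fluctuation (weak turbulence),
# 10 dB average electrical SNR; for IM/DD the received SNR scales with h^2.
snr_avg = 10 ** (10 / 10)
h = rng.lognormal(mean=-0.1, sigma=0.45, size=200_000)  # fading gain samples

# Monte-Carlo estimate of the ergodic capacity in bits/s/Hz.
ergodic_capacity = np.mean(np.log2(1 + snr_avg * h**2))
print(round(ergodic_capacity, 2))
```

The closed-form expressions in the paper replace exactly this kind of averaging; the simulation serves only as an independent cross-check of such results.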
Information theoretic methods for image processing algorithm optimization
Prokushkin, Sergey F.; Galil, Erez
2015-01-01
Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and the optimal results barely achievable in the manual calibration; thus an automated approach is a must. We will discuss an information theory based metric for evaluation of algorithm adaptive characteristics ("adaptivity criterion") using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into the "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it is a measure of a physical "information restoration" rather than perceived image quality, it helps to reduce the set of the filter parameters to a smaller subset that is easier for a human operator to tune and achieve a better subjective image quality. With appropriate adjustments, the criterion can be used for assessment of the whole imaging system (sensor plus post-processing).
Information-theoretic outlook of the quantum dissipation problem
Kowalski, A.M.; Plastino, A.; Proto, A.N.
1992-08-01
The interaction between two harmonic oscillators, a classical and a quantum one, coupled through a linear term, is analyzed by recourse to the generalized Ehrenfest theorem. The model is able to mimic dissipative behaviour for the quantum oscillator without violating any quantum rule. (author). 13 refs, 5 figs
Representation and management of narrative information theoretical principles and implementation
Zarri, Gian Piero
2009-01-01
Written from a multidisciplinary perspective, this book supplies an exhaustive description of NKRL and of the associated knowledge representation principles. It also constitutes an invaluable source of reference for practitioners, researchers and graduates.
Information-theoretic treatment of tripartite systems and quantum channels
Coles, Patrick J.; Yu Li; Gheorghiu, Vlad; Griffiths, Robert B.
2011-01-01
A Holevo measure is used to discuss how much information about a given positive operator valued measure (POVM) on system a is present in another system b, and how this influences the presence or absence of information about a different POVM on a in a third system c. The main goal is to extend information theorems for mutually unbiased bases or general bases to arbitrary POVMs, and especially to generalize "all-or-nothing" theorems about information located in tripartite systems to the case of partial information, in the form of quantitative inequalities. Some of the inequalities can be viewed as entropic uncertainty relations that apply in the presence of quantum side information, as in recent work by Berta et al. [Nature Physics 6, 659 (2010)]. All of the results also apply to quantum channels: for example, if E accurately transmits certain POVMs, the complementary channel F will necessarily be noisy for certain other POVMs. While the inequalities are valid for mixed states of tripartite systems, restricting to pure states leads to the basis invariance of the difference between the information about a contained in b and c.
Information theoretic approach to tactile encoding and discrimination
Saal, Hannes
2011-01-01
The human sense of touch integrates feedback from a multitude of touch receptors, but how this information is represented in the neural responses such that it can be extracted quickly and reliably is still largely an open question. At the same time, dexterous robots equipped with touch sensors are becoming more common, necessitating better methods for representing sequentially updated information and new control strategies that aid in extracting relevant features for object man...
An Information-Theoretic Justification for Covariance Intersection and Its Generalization
Hurley, Michael
2001-01-01
.... that addresses the problems that arise from fusing correlated measurements. The researchers have named this technique 'covariance intersection' and have presented papers on it at several robotics and control theory conferences...
Information Theoretical Limits of Free-Space Optical Links
Ansari, Imran Shafique; Al-Quwaiee, Hessa; Zedini, Emna; Alouini, Mohamed-Slim
2016-01-01
detection) over generalized atmospheric turbulence channels that account for generalized pointing errors is presented. Specifically, unified exact closed-form expressions for the moments of the end-to-end signal-to-noise ratio (SNR) of a single link FSO
Visual words assignment via information-theoretic manifold embedding.
Deng, Yue; Li, Yipeng; Qian, Yanjun; Ji, Xiangyang; Dai, Qionghai
2014-10-01
Codebook-based learning provides a flexible way to extract the contents of an image in a data-driven manner for visual recognition. One central task in such frameworks is codeword assignment, which allocates local image descriptors to the most similar codewords in the dictionary to generate a histogram for categorization. Nevertheless, existing assignment approaches, e.g., the nearest-neighbors strategy (hard assignment) and Gaussian similarity (soft assignment), suffer from two problems: 1) too strong a Euclidean assumption and 2) neglecting the label information of the local descriptors. To address these two challenges, we propose a graph assignment method with maximal mutual information (GAMI) regularization. GAMI uses the power of manifold structure to better reveal the relationships among a massive number of local features through a nonlinear graph metric. Meanwhile, the mutual information of descriptor-label pairs is optimized in the embedding space for the sake of enhancing the discriminant property of the selected codewords. Toward this objective, two optimization models, i.e., inexact-GAMI and exact-GAMI, are proposed in this paper. The inexact model can be efficiently solved with a closed-form solution. The stricter exact-GAMI nonparametrically estimates the entropy of descriptor-label pairs in the embedding space and thus leads to a relatively complicated but still tractable optimization. The effectiveness of the GAMI models is verified on both public and our own datasets.
Ana Lúcia A. Sampaio
Considering the morphology, diet and spatial distribution of Satanoperca pappaterra and Crenicichla britskii (Perciformes: Cichlidae) in the Upper Paraná River floodplain (Brazil), the following questions were investigated: (1) Could body shape predict the use of trophic resources and habitat by C. britskii and S. pappaterra? (2) Could the relationship between morphology and the use of trophic resources and habitat also extend to the intraspecific scale? (3) What are the most important morphological traits for predicting variation in diet and habitat occupation within and between species? We hypothesized that intra- and interspecific differences in morphological patterns imply different forms of resource exploitation and that ecomorphological analysis enables the identification of trophic and spatial niche segregation. Fish samplings were performed in different types of habitats (rivers, secondary channels, connected and disconnected lagoons) in the Upper Paraná River floodplain. Analyses of stomach contents were conducted to characterize the feeding patterns, and twenty-two ecomorphological indices were calculated from linear morphological measurements and areas. A principal component analysis (PCA) run with these indices evidenced the formation of two significant axes, revealing on axis 1 an ecomorphological ordination according to the type of habitat, regardless of the species. The individuals of both species exploiting lotic habitats tended to have morphological traits that enable rapid progressive and retrograde movements, braking and continuous swimming, whereas individuals found in lentic and semi-lotic habitats presented morphology adapted to greater maneuverability and stabilization in deflections. On the other hand, axis 2 evidenced a segregation related to feeding ecology between S. pappaterra and C. britskii. The relationship between morphology and the use of spatial and feeding resources was corroborated by the
Johnson, S.
1976-01-01
This preliminary data report gives basic test results of a flat-plate solar collector whose performance was determined in the NASA-Lewis solar simulator. The collector was tested over ranges of inlet temperatures, fluxes and coolant flow rates. Collector efficiency is correlated in terms of inlet temperature and flux level.
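A correlation of flat-plate efficiency with inlet temperature and flux typically takes the standard linear (Hottel-Whillier) form. The sketch below assumes that form with illustrative coefficients, not the fitted NASA-Lewis values:

```python
def collector_efficiency(t_inlet, t_ambient, flux, fr_ta=0.75, fr_ul=5.0):
    """Linear flat-plate efficiency correlation (Hottel-Whillier form):
        eta = FR*(tau*alpha) - FR*UL * (T_in - T_amb) / G
    t_inlet, t_ambient in degrees C; flux G in W/m^2.
    fr_ta (optical gain) and fr_ul (loss coefficient, W/m^2/K) are
    illustrative placeholder values, not the report's fitted data."""
    return fr_ta - fr_ul * (t_inlet - t_ambient) / flux
```

Efficiency is highest when the inlet runs at ambient temperature and falls off linearly as the reduced temperature difference (T_in - T_amb)/G grows, which is why correlating against inlet temperature and flux level suffices for a first-order characterization.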
Kjelgaard, S. O.; Morgan, H. L., Jr.
1983-01-01
A high-lift transport aircraft model equipped with full-span leading-edge slat and part-span double-slotted trailing-edge flap was tested in the Ames 12-ft pressure tunnel to determine the low-speed performance characteristics of a representative high-aspect-ratio supercritical wing. These tests were performed in support of the Energy Efficient Transport (EET) program which is one element of the Aircraft Energy Efficiency (ACEE) project. Static longitudinal forces and moments and chordwise pressure distributions at three spanwise stations were measured for cruise, climb, two take-off flap, and two landing flap wing configurations. The tabulated and plotted pressure distribution data is presented without analysis or discussion.
Federal Laboratory Consortium — The Test Control Center (TCC) provides a consolidated facility for planning, coordinating, controlling, monitoring, and analyzing distributed test events. The TCC...
2016-01-01
challenge for users. The system required reboots about every 20 hours for users who had heavy workloads, such as the fire support analysts and data... cybersecurity test in two phases. The first phase was performed during NIE 15.2. The Army Research Laboratory Survivability and Lethality Analysis... positions in village. The test ended as the unit took action on the third IED factory. Vignette 5: Disrupt Suicide Vehicle-Borne IED Attack DCGS-A
Automated Search Method for Statistical Test Probability Distribution Generation
周晓莹; 高建华
2013-01-01
A strategy based on automated search for probability distribution construction is proposed, comprising the design of a representation format and an evaluation function for the probability distribution. Combined with a simulated annealing algorithm, an indicator is defined to formalize the automated search process based on a Markov model. Experimental results show that the method effectively improves the accuracy of the automated search: it finds a near-optimal probability distribution within a given time, providing the statistical test with efficient test data and thereby reducing the cost of statistical testing.
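A minimal sketch of such a simulated-annealing search over probability distributions, assuming a simple perturb-and-renormalize representation and a user-supplied evaluation function (the paper's exact representation format and indicator are not reproduced here):

```python
import math
import random

def anneal_distribution(score, n_bins, iters=5000, t0=1.0, cooling=0.999, seed=0):
    """Simulated-annealing search for a probability distribution over
    n_bins outcomes that maximizes a user-supplied score function.
    Representation: a non-negative weight vector normalized to sum to 1."""
    rng = random.Random(seed)
    p = [1.0 / n_bins] * n_bins          # start from the uniform distribution
    best, best_score = p[:], score(p)
    cur_score, t = best_score, t0
    for _ in range(iters):
        q = p[:]
        i = rng.randrange(n_bins)        # perturb one component
        q[i] = max(1e-9, q[i] + rng.uniform(-0.05, 0.05))
        s = sum(q)
        q = [x / s for x in q]           # renormalize onto the simplex
        sq = score(q)
        # Metropolis acceptance: always take improvements, sometimes worse moves
        if sq > cur_score or rng.random() < math.exp((sq - cur_score) / t):
            p, cur_score = q, sq
            if sq > best_score:
                best, best_score = q[:], sq
        t *= cooling                     # geometric cooling schedule
    return best, best_score
```

The evaluation function plays the role of the paper's indicator: a higher score means the candidate distribution is expected to drive the statistical test toward cheaper, more effective test data.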
Stihi Nadjet
2012-01-01
For M/G/1 retrial queues with impatient customers, we review the results concerning the steady-state distribution of the system state presented in the literature. Since the existing formulas are cumbersome (so their use in practice becomes delicate) or impossible to obtain, we apply information-theoretic techniques to estimate the above-mentioned distribution. More concretely, we use the principle of maximum entropy, which provides an adequate methodology for computing a unique estimate of an unknown probability distribution based on information expressed in terms of some given mean-value constraints.
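The maximum-entropy estimate under a mean-value constraint has the familiar Gibbs form p_i ∝ exp(-λ·x_i). Here is a small sketch for a finite support and a single mean constraint, with the Lagrange multiplier found by bisection; this is a generic illustration of the principle, not the paper's M/G/1-specific derivation:

```python
import math

def max_entropy_pmf(values, target_mean, lo=-50.0, hi=50.0, tol=1e-10):
    """Maximum-entropy pmf on a finite support subject to a mean constraint.
    The solution has the Gibbs form p_i ∝ exp(-lam * x_i); lam is found by
    bisection so that the resulting mean equals target_mean."""
    def mean_for(lam):
        w = [math.exp(-lam * x) for x in values]
        z = sum(w)
        return sum(x * wi for x, wi in zip(values, w)) / z

    # mean_for is strictly decreasing in lam, so plain bisection works
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean_for(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(-lam * x) for x in values]
    z = sum(w)
    return [wi / z for wi in w]
```

With the target mean equal to the arithmetic mean of the support, λ = 0 and the estimate reduces to the uniform distribution, which is the expected maximum-entropy answer when the constraint adds no information.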
Sarabeev, Volodimir; Balbuena, Juan Antonio; Morand, Serge
2017-09-01
The abundance and aggregation patterns of helminth communities of two grey mullet hosts, Liza haematocheilus and Mugil cephalus, were studied across 14 localities in Atlantic and Pacific marine areas. The analysis matched parasite communities of (i) L. haematocheilus across its native and introduced populations (Sea of Japan and Sea of Azov, respectively) and (ii) the introduced population of L. haematocheilus with native populations of M. cephalus (Mediterranean, Azov-Black and Japan Seas). The total mean abundance (TMA), as a feature of the infection level in helminth communities, and slope b of Taylor's power law, as a measure of parasite aggregation at the infra- and component-community levels, were estimated and compared between host species and localities using ANOVA. The TMA of the whole helminth community in the introduced population of L. haematocheilus was over 15 times lower than that of the native population, but the difference was less pronounced for carried (monogeneans) than for acquired (adult and larval digeneans) parasite communities. Similar to the abundance pattern, the species distribution in communities from the invasive population of L. haematocheilus was less aggregated than that from its native population for endoparasitic helminths, including adult and larval digeneans, while monogeneans showed a similar pattern of distribution in the compared populations of L. haematocheilus. The aggregation level of the whole helminth community, endoparasitic helminths, adult and larval digeneans was lower in the invasive host species in comparison with native ones, as shown by differences in the slope b. An important theoretical implication from this study is that the pattern of parasite aggregation may explain the success of invasive species in ecosystems. Because the effects of parasites on host mortality are likely dose-dependent, the proportion of susceptible host individuals in invasive species is expected to be lower, as the helminth distribution in
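The slope b of Taylor's power law, var = a·mean^b, is conventionally estimated by log-log regression of sample variance on sample mean across localities. A minimal sketch under that convention (function name and data layout are our own, not the authors' workflow):

```python
import math

def taylor_slope(samples):
    """Estimate the slope b of Taylor's power law, var = a * mean**b,
    by ordinary least squares on log(var) vs log(mean).
    `samples` is a list of per-locality abundance counts (one list per
    locality); localities with zero mean or zero variance are skipped."""
    xs, ys = [], []
    for counts in samples:
        n = len(counts)
        m = sum(counts) / n
        v = sum((c - m) ** 2 for c in counts) / (n - 1)  # sample variance
        if m > 0 and v > 0:
            xs.append(math.log(m))
            ys.append(math.log(v))
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

A slope near 1 indicates a random (Poisson-like) distribution of parasites over hosts, while larger slopes indicate stronger aggregation, which is the quantity compared between invasive and native host populations above.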
Purcell, Maureen; Thompson, Rachel L.; Evered, Joy; Kerwin, John; Meyers, Ted R.; Stewart, Bruce; Winton, James
2018-01-01
This research was initiated in conjunction with a systematic, multiagency surveillance effort in the United States (U.S.) in response to reported findings of infectious salmon anaemia virus (ISAV) RNA in British Columbia, Canada. In the systematic surveillance study reported in a companion paper, tissues from various salmonids taken from Washington and Alaska were surveyed for ISAV RNA using the U.S.-approved diagnostic method, and samples were released for use in this present study only after testing negative. Here, we tested a subset of these samples for ISAV RNA with three additional published molecular assays, as well as for RNA from salmonid alphavirus (SAV), piscine myocarditis virus (PMCV) and piscine orthoreovirus (PRV). All samples (n = 2,252; 121 stock cohorts) tested negative for RNA from ISAV, PMCV, and SAV. In contrast, there were 25 stock cohorts from Washington and Alaska that had one or more individuals test positive for PRV RNA; prevalence within stocks varied, ranging from 2% to 73%. The overall prevalence of PRV RNA-positive individuals across the study was 3.4% (77 of 2,252 fish tested). Findings of PRV RNA were most common in coho (Oncorhynchus kisutch Walbaum) and Chinook (O. tshawytscha Walbaum) salmon.
Kroese, A.H.; van der Meulen, E.A.; Poortema, Klaas; Schaafsma, W.
1995-01-01
The making of statistical inferences in distributional form is conceptually complicated because the epistemic 'probabilities' assigned are mixtures of fact and fiction. In this respect they are essentially different from 'physical' or 'frequency-theoretic' probabilities. The distributional form is
Aydogan, F.; Hochreiter, L.; Ivanov, K.; Martin, M.; Utsuno, H.; Sartori, E.
2010-01-01
This report provides the specification for the uncertainty exercises of the international OECD/NEA, NRC and NUPEC BFBT benchmark problem, including the elemental task. The specification was prepared jointly by Pennsylvania State University (PSU), USA and the Japan Nuclear Energy Safety (JNES) Organisation, in cooperation with the OECD/NEA and the Commissariat a l'energie atomique (CEA Saclay, France). The work is sponsored by the US NRC, METI-Japan, the OECD/NEA and the Nuclear Engineering Program (NEP) of Pennsylvania State University. This uncertainty specification covers the fourth exercise of Phase I (Exercise I-4) and the third exercise of Phase II (Exercise II-3), as well as the elemental task. The OECD/NRC BFBT benchmark provides a very good opportunity to apply uncertainty analysis (UA) and sensitivity analysis (SA) techniques and to assess the accuracy of thermal-hydraulic models for two-phase flows in rod bundles. During the previous OECD benchmarks, participants usually carried out sensitivity analysis on their models for the specification (initial conditions, boundary conditions, etc.) to identify the most sensitive models and/or to improve the computed results. The comprehensive BFBT experimental database (NEA, 2006) leads us one step further in investigating modelling capabilities by taking the uncertainty analysis into account in the benchmark. The uncertainties in input data (boundary conditions) and geometry (provided in the benchmark specification) as well as the uncertainties in code models can be accounted for to produce results with calculational uncertainties and compare them with the measurement uncertainties. Therefore, uncertainty analysis exercises were defined for the void distribution and critical power phases of the BFBT benchmark. This specification is intended to provide definitions related to UA/SA methods, sensitivity/uncertainty parameters, suggested probability distribution functions (PDF) of sensitivity parameters, and selected
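Propagating input uncertainties described by PDFs through a code model is commonly implemented as Monte Carlo sampling. A generic sketch of that workflow (parameter names, samplers and summary statistics are illustrative, not the benchmark's prescribed method):

```python
import random
import statistics

def propagate_uncertainty(model, input_pdfs, n_samples=10000, seed=0):
    """Monte Carlo propagation of input uncertainties through a model.
    input_pdfs maps a parameter name to a sampler taking a Random instance;
    the output distribution is summarized by its mean and standard deviation."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_samples):
        # draw one realization of every uncertain input, then run the model
        params = {name: pdf(rng) for name, pdf in input_pdfs.items()}
        outputs.append(model(**params))
    return statistics.mean(outputs), statistics.stdev(outputs)
```

The resulting output spread is what gets compared against the measurement uncertainties; sensitivity analysis then asks how much each input PDF contributes to that spread.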
Tuleubaev, B.A.; Artem'ev, O.I.; Luk'yanova, Yu.A.; Sidorovich, T.V.; Silkina, G.P.; Kurmanbaeva, D.S.
2001-01-01
This paper presents results of field and laboratory studies of soil-vegetative cover contamination by 90Sr and 239/240Pu. Certain parameters of radionuclide migration in the environment of some former Semipalatinsk Test Site areas were determined. (author)
Bruinsma, G.J.N.; Pauwels, L.J.R; Weerman, F.M.; Bernasco, W.
2013-01-01
Six different social disorganization models of neighbourhood crime and offender rates were tested using data from multiple sources in the city of The Hague, in the Netherlands. The sources included a community survey among 3,575 residents in 86 neighbourhoods measuring the central concepts of the
Ani Georgieva
2016-01-01
Two polyphosphoesters containing anthracene-derived aminophosphonate and hydrophilic H-phosphonate repeating units, poly[oxyethylene(aminophosphonate-co-H-phosphonate)]s (1 and 2), were tested for in vitro antitumour activity on cell cultures derived from the ascitic form of Ehrlich mammary adenocarcinoma by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) dye-reduction assay. The in vitro safety testing of the copolymers was performed by the BALB/c 3T3 neutral red uptake assay. A study of their uptake and subcellular distribution in non-tumourigenic and tumour cells was performed by means of fluorescence microscopy. Both copolymers showed significant antitumour activity towards Ehrlich ascites carcinoma (EAC) cells. However, the in vitro safety testing revealed significant toxicity of polymer 2 to BALB/c 3T3 mouse embryo cells. In contrast, polymer 1 showed a complete absence of cytotoxicity to BALB/c 3T3 cells. The fluorescence studies showed that the substances were diffusely distributed in the cytoplasm in both cell culture systems. As opposed to BALB/c 3T3 cells, in EAC cells an intense fluorescent signal was observed in the nuclei and the perinuclear region. The tested polyphosphoesters are expected to act under physiological conditions as prodrugs of aminophosphonates.